Vox


President Donald Trump speaks to journalists aboard Air Force One on July 4, 2025. | Brendan Smialowski/AFP via Getty Images

This story appeared in The Logoff, a daily newsletter that helps you stay informed about the Trump administration without letting political news take over your life. Subscribe here.

Welcome to The Logoff: President Donald Trump is preparing to rev up his trade war again, even as he extends a pause on some tariffs until next month.

What just happened? Trump announced new tariff rates on multiple countries Monday in a slew of form letters, citing their “unsustainable Trade Deficits” with the US. The tariffs — including on Japan, South Korea, Malaysia, and numerous others — would take effect August 1.

The White House also said that previously announced “reciprocal” tariffs, which had been set to take effect this week, would be suspended until August 1.

What’s the context? Trump announced draconian tariffs on many countries in April, only to pause them a week later after financial markets cratered. He left in place a 10 percent “global tariff,” as well as tariffs on China (since reduced from their triple-digit highs).

Why is Trump doing this now? When Trump paused his tariffs in April, he said he would use the next 90 days to strike trade deals. But almost 90 days later, relatively few deals have been reached. Trump may now hope to force the issue with renewed threats of economic pain.

Will any of these tariffs actually take effect? Trump’s trade policy has been mercurial, to describe it generously, and it’s even unclear how he’s deciding on new tariff rates. Previously, a drastic dip in the markets managed to spook him into backing down, and the markets reacted negatively to his Monday announcement — but it’s unclear what we should expect at this point.

There’s also a pending court case over the legality of Trump’s tariffs, which he has imposed using emergency authority, but don’t expect a quick resolution there, as a ruling by the US Court of International Trade blocking the tariffs is currently on hold.

And with that, it’s time to log off…

The Toledo Mud Hens, a minor league baseball team affiliated with the Detroit Tigers, have a new mascot: Webley the kitten, who was rescued from the team’s stadium last week and has been adopted by a team employee. You can — and should! — read about his adventures here.


From Vox via this RSS feed


As New York City’s mayoral election unfolds, it’s clear that public figures are feeling more and more emboldened to make openly racist statements against candidate Zohran Mamdani. | Yuki Iwamura/Getty Images

Throughout Zohran Mamdani’s campaign for New York City mayor, he’s faced a barrage of attacks that have only gotten worse since he handily won the Democratic primary two weeks ago. And this isn’t just happening at the local level; New York City’s mayoral race has drawn attention from across the country, and politicians and pundits have been fearmongering about Mamdani from afar.

Here’s just a sampling:

On X, US Rep. Marjorie Taylor Greene (R-GA) shared a photo of the Statue of Liberty dressed in a burqa shortly after Mamdani’s victory, saying, “This hits hard.” US Rep. Brandon Gill (R-TX) criticized Mamdani for eating with his hands, saying, “Civilized people in America don’t eat like this.” US Rep. Andy Ogles (R-TN), who referred to the Democratic nominee for New York City mayor as “Zohran ‘little Muhammad’ Mamdani,” called for Mamdani to be denaturalized and deported. And shortly after Mamdani’s primary win, David Frum, an Atlantic staff writer, posted on X, “Well, at least we can retire that faded and false line, ‘antisemitism has no place in New York City.’”

It’s also not just conservatives. In an interview on CNN, US Rep. Eric Swalwell (D-CA) said, “I don’t associate myself with what [Mamdani] has said about the Jewish people,” without expanding on what, exactly, Mamdani has said. (While Mamdani has criticized Zionism and the Israeli government, he has not said anything negative about Jewish people.) Kirsten Gillibrand, New York’s Democratic senator, falsely claimed that Mamdani had made “references to global jihad” in a radio interview. She later apologized to Mamdani, according to her team, “for mischaracterizing Mamdani’s record and for her tone.”

Anti-Muslim and anti-immigrant bigotry is, of course, not a new feature of American politics. But public figures now clearly feel emboldened to make brazen, openly racist statements, and that rhetoric seems to have reached a fever pitch. Since when, for example, is it acceptable to call for deporting American citizens? Here are three reasons why the racism against Mamdani in particular has been so extreme:

1) Trumpism has ushered in a new age of bullying

The kind of rhetoric directed at Mamdani is a product of an era of politics where hate speech and cruelty have become normalized. That has made public figures far more comfortable saying things in public that they might have thought twice about before. Stephen Miller, a senior Trump administration official, said that New York City is “the clearest warning yet of what happens to a society when it fails to control migration.” Even the president’s son retweeted a post that said, “I’m old enough to remember when New Yorkers endured 9/11 instead of voting for it,” adding, “New York City has fallen.” It’s no longer shocking to see members of Congress, pundits, and business leaders criticizing entire peoples and cultures within the US as un-American.

In addition, President Donald Trump has pushed anti-immigrant rhetoric since he launched his campaign for president in 2015, and he has only become more extreme since returning to office. In this new era, meanness is not only politically rewarded but openly embraced and promoted by the White House and a variety of official online accounts. Trump’s White House, for example, has turned videos of deportations into memes, staged dehumanizing photo ops, and used AI-generated images to make light of genuinely cruel policies. This kind of politics has made hate speech all the more acceptable.

Part of the reason the attacks on Mamdani — who was born in Uganda and is of Indian descent — have been so widespread is precisely because this type of rhetoric gets spewed from the very top of American politics and government on a regular basis.

Since Mamdani’s recent rise in New York City politics, there have been calls to deport him despite the fact that he moved to New York with his family when he was 7 and became a naturalized citizen of the United States in 2018. Trump himself has threatened to arrest Mamdani, saying, “Look, we don’t need a communist in this country.” That we are now at a point where we’re talking about deporting citizens is a new low, but it is a direct product of Trump’s style of politics, which has ushered in a new era of online bullying, extreme xenophobia, and open racism.

2) Mamdani is a victim of anti-Palestinian racism

As I wrote last year, anti-Palestinian racism specifically targets people because they support the cause of Palestinian liberation — even if they aren’t Palestinian themselves. This is why crackdowns on college campus protests were so extreme, and why the Trump administration has detained and attempted to deport international students, including non-Palestinians, simply because of what they have said about Israel.

Mamdani isn’t Palestinian, but has been a vocal critic of Israel and has a history of organizing and advocating for Palestinian rights. Like many activists in pro-Palestinian spaces, Mamdani has been baselessly smeared as antisemitic. That’s why comments like Frum’s have cropped up since Mamdani won his primary: By supporting Palestinians, Mamdani is inherently viewed as a threat — not just to Israel but to Jewish people as a whole.

Frum, for example, later posted on X about the NYC primary, “[…]people with zero (or worse) regard for Jewish life and Jewish safety scolding actual Jews about how wrong and stupid we are about Jewish life and Jewish safety.” But this has nothing to do with what Mamdani has said about Jewish people. In fact, Mamdani’s platform also includes addressing antisemitism in the city by dramatically increasing funding for hate-crime prevention.

Anti-Palestinian racism is still an acceptable form of bigotry that we often see displayed in American politics and media without receiving the kind of pushback that other forms of racism do. It also results in amplifying other forms of racism when its victims come from other marginalized groups. “Like many other forms of hate, there can be intersectionalities, and that’s also true when it’s allies of ours who are speaking for Palestinian human rights,” one expert told me last year. “If it’s a Black ally, we will see anti-Black racism. If it’s an Indigenous ally, we will see anti-Indigenous racism. [If it’s] queer allies, trans allies, we will see homophobic and anti-queer rhetoric.”

3) Islamophobia is broadly acceptable

Anti-Muslim bigotry has long been a constant in American politics, and it has been especially potent since the War on Terror. Former President Barack Obama, for example, was accused of being Muslim — as though that would disqualify him from public office — even though he is a Christian. Since Rashida Tlaib and Ilhan Omar were elected to Congress in 2018, they have been routinely victimized by smear campaigns and hate speech that have specifically targeted their identities. Tlaib, for example, has been accused by her colleagues in Congress of engaging in “antisemitic activity” and “sympathizing with terrorist organizations.” And at a fundraiser, Republican Rep. Lauren Boebert of Colorado called Omar the “jihad squad.”

Now, Mamdani is seeing his own Muslim background weaponized against him. It has been said that he comes “from a culture that lies about everything,” that he is uncivilized, and that he is a threat to people’s safety simply because New York might have a “Muslim mayor.”

There are still several months until New Yorkers head to the polls to vote for a new mayor in the general election. And unfortunately, this kind of open bigotry against Mamdani is likely to only get worse as Election Day nears. But while the attacks on Mamdani might seem like just one attempt at bringing down a candidate in a local race, their ultimate effect is much more damaging: They will make US politics all the more toxic and will only further normalize this kind of extreme bigotry against Muslims and immigrants in America.


From Vox via this RSS feed


Kerrville resident Leighton Sterling watches flood waters along the Guadalupe River on July 4, 2025. | Eric Vryn/Getty Images

At least 90 people have died in central Texas in extraordinary floods, the deadliest in the Lone Star State since Hurricane Harvey killed 89 people.

A torrential downpour started off the July 4 weekend with several months’ worth of rain falling in a few hours, lifting water levels in the Guadalupe River as high as 22 feet. Among the dead are 27 children and counselors at a summer camp near Kerrville in Kerr County. One adult at the camp may have died trying to rescue children. More people are still missing, and more rain is in the forecast.

The storm arose from the fading remnants of Tropical Storm Barry, which formed on June 28, well ahead of schedule: the second named storm of the Atlantic hurricane season typically doesn’t form until mid-July. The weather system parked over Texas, where it converged with a band of moisture moving north, forming thunderstorms that squeezed out a torrential downpour.

With its topography of hills and rivers as well as a history of sudden downpours, this region in Texas has been dubbed “flash flood alley.” Kerrville itself experienced a deadly flood in 1987 when the Guadalupe River received 11 inches of rain in less than five hours, raising water in some portions by 29 feet. The flood killed 10 people.

But there were several factors that converged to make this storm so deadly — and not all of them had to do with the sheer amount of rain. Here are some things to know about disasters like this:

Texas isn’t in the tropics. How did it get hit so hard by a tropical storm?

Kerr County, population 54,000, is a couple hundred miles inland from the Gulf of Mexico, but it has a history of tropical storms and hurricanes passing through the region on occasion. So the leftovers from Tropical Storm Barry reaching the area isn’t too surprising. Scientists, however, are still trying to find out how storms that are powered by warm ocean water continue to get energy over land.

The recent flooding is occurring in an era where even “ordinary” storms are becoming more dangerous. Strong thunderstorms and tornadoes are a common sight in Texas summer skies and the state has a history of deadly floods. Over the years, the amount of rain falling from major storms has been increasing.

As average temperatures rise due to climate change, air can retain more moisture, which means when storms occur, there’s more water falling out of the sky, turning roads into rivers and submerging the landscape.
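The physics behind that claim is the Clausius-Clapeyron relation: the atmosphere’s capacity to hold water vapor grows by roughly 7 percent for every degree Celsius of warming. Here’s a minimal back-of-the-envelope sketch of how that compounds; the warming levels below are illustrative assumptions, not measurements tied to this storm:

```python
# Rough Clausius-Clapeyron scaling: the atmosphere's moisture-holding
# capacity rises about 7 percent per degree Celsius of warming.
def moisture_capacity_increase(warming_c: float, rate_per_c: float = 0.07) -> float:
    """Fractional increase in the water vapor air can hold after
    `warming_c` degrees Celsius of warming, compounding at ~7%/degree."""
    return (1 + rate_per_c) ** warming_c - 1

# Illustrative warming levels, not measurements from this storm.
for warming in (1.0, 1.5, 2.0):
    extra = moisture_capacity_increase(warming)
    print(f"+{warming} C warming: ~{extra:.0%} more moisture capacity")
```

A higher moisture ceiling doesn’t guarantee more rain in any single storm, but it raises the upper bound on how much water a storm like this one can wring out.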

Did something go wrong here with the forecast or disaster warnings?

Ahead of the Texas floods, the Texas Division of Emergency Management activated its emergency response system on July 2 in anticipation of major floods, including mobilizing water rescue squads, helicopters, and road-clearing equipment. On July 3, the National Weather Service issued a flood watch. (NPR has a very useful timeline of the planning and response to the floods.)

But as the watches turned to warnings, they revealed gaps in the communication system. There are spots along the Guadalupe River that don’t have flood warning sirens, including in Kerr County. Officials there had contemplated installing a flood warning system, but it was rejected as too expensive.

Text message alerts did go out, but they were sent in the middle of the night after the July Fourth holiday, when many people were camping or traveling in unfamiliar places. Parts of the county also have spotty cell service. And residents who did get the alerts weren’t sure what to do about them, whether to stay or evacuate, until the water levels were perilously high.

The National Weather Service this year has lost 600 employees between layoffs, buyouts, and retirements spurred by the Trump administration’s “Department of Government Efficiency.” That included Paul Yura, the warning coordination meteorologist at the National Weather Service Austin/San Antonio office, which is responsible for Kerr County. However, National Weather Service staff said the office was operating normally during the floods and wasn’t dealing with a staff shortage.

In general, natural disasters are killing fewer people over time. There are a lot of reasons why, like stronger building codes that can better resist fires, floods, and earthquakes.

One of the most important lifesaving trends is better warning systems ahead of huge storms. Improvements in observations, a growing understanding of the underlying physics, and advances in computer modeling have led forecasters to build up their lead time ahead of severe weather. Researchers are even starting to get more forewarnings about volcanic eruptions and earthquakes.

But warnings are only effective if people have the knowledge and the tools to react to them. During floods, people often underestimate currents and try to cross dangerous submerged areas. “Purposely driving or walking into floodwaters accounts for more than 86% of total flood fatalities,” according to a study of flood deaths in the US between 1959 and 2019.

It is possible to protect lives against the forces of nature, but it requires a lot of parts working together — planning, infrastructure, forecasting, alerts, and evacuations.

Are floods getting more difficult to predict?

Not necessarily, but the baselines are changing.

Most assessments of flood risk are based on historical data. Local, state, and federal agencies can map out high watermarks of the past and show which properties might be at the greatest risk. But at best, these maps are conservative estimates; they don’t show the full potential of where water can reach. Often, flood maps aren’t revised regularly and don’t take into account how the risk landscape is changing.

For instance, more construction in an area can mean more impervious surfaces that keep rain from soaking into the ground and shunt runoff toward certain neighborhoods. Losing natural watersheds that normally soak up rain can increase the probability of floods. Overdrawing groundwater can cause land to sink.

In coastal areas, rising sea levels are increasing the reach of coastal flooding, while rainstorms inland are pouring out more water. Disasters can also compound each other. A major wildfire can wipe out trees and grasses anchoring soil, leading to floods and landslides when the rain comes, for example.

Inflation, growing populations, and rising property values mean that when floods do occur, they extract a much bigger price from the economy. Kerr County’s population has grown about 25 percent since 2000.

As a result, when it comes to floods, many people don’t even realize that they’re at risk. And even in the wake of a major inundation, the lessons are quickly forgotten.

One analysis showed that people buy more flood insurance after a major flood recedes, but gradually, they let their policies lapse, returning to the baseline insurance rate in three years in some cases. That’s why one of the biggest challenges in disaster risk reduction is simply trying to get people to understand that bad things can happen to them and they should prepare.


From Vox via this RSS feed


Measles outbreak in Texas leads to record US cases

The US has seen more measles cases in 2025 than in any year since 1992. | Jan Sonnenmair/Getty Images

The US has now recorded 1,277 measles cases this year, according to case data collected by Johns Hopkins University, making the current outbreak the most severe since 1992. The disease continues to spread, and by now most schools are out for the summer. July summer camps have opened and family vacations are picking up — all creating new opportunities for the virus to transmit.

The next few months will be crucial for getting measles under control in the US.

Three people have already died this year, the first measles deaths in the country in a decade. Most of the cases have been concentrated in a major outbreak in the Southwest — in New Mexico, Oklahoma, and Texas — but the virus is now also spreading in Arkansas, Illinois, Indiana, Kansas, Michigan, and Ohio. New outbreaks have recently flared up in Colorado, Montana, and North Dakota. Though cases are still rising, they’re doing so more modestly than they were in March, when the country was seeing 100 or more new cases in a week. The measles outbreak does appear to be slowing down, experts say.

But as cases continue to spread, the Centers for Disease Control and Prevention in mid-June urged kids’ summer camps to check their participants for measles immunity status. Measles is one of the world’s most contagious viruses and, with vaccination rates declining among children across the country, a lot of kids tightly cloistered for several days creates a prime environment for measles to spread quickly once introduced. The CDC’s new checklist for summer camps advises camp organizers to collect vaccine records from campers and keep the documentation on hand, to check campers and staff for any signs of fever or rash upon arrival, and to set up an isolation area if anyone begins to feel sick once camp has started.

We are living in a new reality: Measles is spreading widely, vaccination rates are down, and the country’s top health official, Robert F. Kennedy Jr., has backed away from urging vaccination, which is 97 percent effective after two doses, as the best way to tamp down on measles’ spread. Kennedy has installed a new expert vaccine committee, opening up a review of the childhood vaccination schedule that includes the MMR shot.

Given this more lax approach from the Trump administration, now is a good time to look out for ourselves and our loved ones. Here’s what you need to know about measles as the season with the biggest potential to spread heats up:

Measles is an extremely contagious and dangerous virus

The US has been largely free of measles — a disease that still kills over 100,000 people worldwide every year, most of them young children — since the 1990s. Its risks have for most people become largely hypothetical.

2025 is the worst year for measles in the United States in decades

But for unvaccinated Americans, those risks remain very real. Measles is an extremely contagious virus that can lead to high fever and rash. Some patients develop pneumonia or encephalitis, a brain inflammation, both of which can be deadly.

Measles has a fatality rate of 0.1 percent, but about 20 percent of cases can put patients in the hospital. The virus can be particularly dangerous for kids, especially young infants, as well as pregnant women and people who are immunocompromised.

Some vaccine skeptics, including Kennedy, downplay the risks of measles. “It used to be, when I was a kid, that everybody got measles. And the measles gave you lifetime protection against measles infection,” he said on Fox News in March.

But measles has never been some harmless disease: In the decade before a vaccine was introduced in 1963, between 400 and 500 children died annually from measles in the US. From 1968 to 2016, there were about 550 measles deaths total in the US, according to the CDC. But before this year, it had been 10 years since any measles deaths had been recorded in the US.

Even people who survive a measles infection can endure long-term health consequences, the risks of which are greater for vulnerable groups. Some of those infected in the current outbreak may have their health affected for years. Measles can cause what’s called immune amnesia: The virus can wipe out more than half of a person’s preexisting antibodies that provide protection against other pathogens. That can leave the patient at higher risk from other diseases for years after their measles infection.

And in very rare cases, measles can lead to fatal brain swelling years after the initial infection. Patients can also experience hearing loss from ear infections that started their illness, while the people who develop acute encephalitis can suffer permanent neurological damage.

You can protect your kids from measles. Here’s how.

Vaccination is without a doubt the best defense against measles: two doses of the MMR vaccine, which protects against measles, mumps, and rubella. It is one of the most effective vaccines we have for any disease, and any risks from the vaccine are extremely low compared to the dangers of measles itself.

But in this new world, you’re more likely to see a measles outbreak in your community. People may want to be more proactive about protecting themselves and their loved ones. Here’s what you can do:

Parents of young children — the group most at risk from measles — should talk to their pediatrician about measles vaccination.

Children usually get their first shot around the time they turn 1 and receive another shot around age 5, but there is some flexibility. The CDC recommends that infants as young as 6 months receive one dose if they are traveling internationally, and the recommended age for the second dose ranges from 4 to 6 years old. Several leading public health experts, including former CDC director Rochelle Walensky, wrote in a JAMA op-ed this spring that the recommendations should be updated to advise one shot for infants traveling in the US to areas with higher risk of exposure.

There have been reports of vaccinated individuals getting infected during the current outbreaks, which has raised questions about how protected vaccinated people actually are. As good as it is, 97 percent effectiveness is not 100 percent, and it is possible to be vaccinated against measles and still get sick. For a very small percentage of people, the vaccine simply doesn’t produce immunity. It is also possible that immunity could wane over time, but that was previously not an issue because high vaccination rates had snuffed out the virus’s spread. Per the CDC’s estimates, about 3 percent of measles cases this year have been vaccinated people — consistent with the 97 percent efficacy rate.

The priority should be vaccinating those people who do not have any protection at all: very young children and the people who weren’t vaccinated as kids. Pregnant women should not receive the vaccine, but women planning to become pregnant could consult with their doctor about a booster shot; likewise, people with immune conditions should talk to their doctor before getting any additional doses, because the vaccine’s live virus could present a risk depending on how compromised their immune system is.

People who are at a higher risk may want to take extra precautions, such as wearing a face covering, if there are any reports of measles infections in their immediate area.

For other people who have already been vaccinated but are still worried about transmission, it may be reasonable to consider a booster shot. But there are some important steps you should take first.

First, check your vaccination records if you can find them. Anyone who received two doses as a child very likely had a successful immune response; only three in 100 people don’t. And if you received one dose — which was generally the norm before 1989 — you’re probably still protected, but it is slightly more likely that you never developed immunity, Aaron Milstone, an infectious disease pediatrician at the Johns Hopkins University School of Medicine, told me earlier this year.

The next step would be to talk to your doctor, who can give you a “titer test” that measures the measles antibodies in your body. If they’re still present, you should be good. But if they’re not, you may want to ask your doctor about getting an additional measles shot.

The risks from measles should be kept in context: If you’re not near any confirmed measles cases, your personal risk probably remains low. If you’re vaccinated and have antibodies, you are very likely protected from the virus even if there is local transmission. But summer travel introduces some new risks: Several smaller outbreaks this year have been traced to infectious travelers passing through US airports.

Measles cannot be ignored. Milstone said he and his fellow infectious disease doctors could not believe it when they heard the news in February of a child’s death from measles in the United States of America.

“You hope people don’t have to die for others to take this seriously,” Milstone said.


From Vox via this RSS feed


Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. To submit a question, fill out this anonymous form or email [email protected]. Here’s this week’s question from a reader, condensed and edited for clarity:

I graduate college soon, and like everyone around me, I’m working hard to find a job. But unlike those around me, I have a sense for how inactivity enlivens me — I get lots of joy from silence, reflection, and complete agency over my mind. I’ve quit most social media, and I got into meditation a while ago and never looked back. This awareness makes me tilt towards a life that optimizes for this. But I also have very altruistic leanings, which could become serious scruples if I don’t do good in the world.

Should I be trying to balance the pursuit of two seemingly opposed life goals — pursuing true happiness through inactivity and contemplation (as hypothesized by thinkers like Aristotle and Byung-Chul Han) and striving to do good in the world through robust goal-oriented action? The first is indifferent to which ends (if any) one’s life contributes to, as long as it is blanketed in leisurely contemplation and true inactivity. The second invites and rewards behaviors that are constantly opposed to prolonged inactivity (working efficiently, constantly learning, etc). So I really don’t know how to handle this.

Dear Contemplative and Caring,

Matthieu Ricard is known as the “world’s happiest man.” When he lay down in an MRI scanner so scientists could look at his brain, they saw that the regions associated with happiness were exploding with activity, while those associated with negative emotions were nearly silent. The scientists were stunned. How did his brain get that way?

The answer: 60,000 hours of meditation. See, Ricard grew up in France, earned a PhD in genetics, and then, at age 26, abandoned a bright scientific career in favor of going to Tibet. He became a Buddhist monk and spent nearly three decades training his mind in love and compassion. The result: one stupendously joyous brain.

But what if he’d instead spent 60,000 hours bringing joy to other people?

Philosopher Peter Singer once put this question to Ricard, basically asking if it was self-indulgent to spend so much time in a hermitage when there are problems in the world that urgently need fixing. Ricard gave a complex answer, and I think looking at all three components of it will be helpful to you.

Have a question you want me to answer in the next Your Mileage May Vary column?

Feel free to email me at [email protected] or fill out this anonymous form! Newsletter subscribers will get my column before anyone else does and their questions will be prioritized for future editions. Sign up here!

For one thing, Ricard pointed out that there are many different values in life. Helping other people is absolutely a wonderful value. But there are others, too: art, for instance. He noted that we don’t go around scolding Yo-Yo Ma for the thousands of hours he spent perfecting the cello; instead, we appreciate the beauty of his music. Spiritual growth through contemplation or meditation is like that, Ricard suggested. It’s another value intrinsically worth pursuing.

Ricard also emphasized, though, that helping others is something he values very deeply. Just like you, he prizes both contemplation and altruism. But he doesn’t necessarily see a conflict between them. Instead, he’s convinced that contemplative training actually helps you act altruistically in the world. If you don’t have a calm and steady mind, it’s hard to be present at someone’s bedside and comfort them while they’re dying. If you haven’t learned to relinquish your grip on the self, it’s hard to lead a nonprofit without falling prey to a clash of egos.

Still, Ricard admitted that he is not without regret about his lifestyle. His regret, he said, was “not to have put compassion into action” for so many years. In his 50s, he decided to address this by setting up a foundation doing humanitarian work in Tibet, Nepal, and India. But the fact that he’d neglected to concretely help humanity for half a century seemed to weigh on him.

What can we learn from Ricard’s example?

For someone like you, who values both contemplation and altruism, it’s important to realize that each one can actually bolster the other. We’ve already seen Ricard make the point that contemplation can improve altruistic action. But another famous Buddhist talked about how action in the wider world can improve contemplation, too.

That Buddhist was Thich Nhat Hanh, the Zen teacher and peace activist who in the 1950s developed Engaged Buddhism, which urges followers to actively work on the social, political, and environmental issues of the day. Asked about the idea that people need to choose between engaging in social change or working on spiritual growth, the teacher said:

I think that view is rather dualistic. The [meditation] practice should address suffering: the suffering within yourself and the suffering around you. They are linked to each other. When you go to the mountain and practice alone, you don’t have the chance to recognize the anger, jealousy, and despair that’s in you. That’s why it’s good that you encounter people — so you know these emotions. So that you can recognize them and try to look into their nature. If you don’t know the roots of these afflictions, you cannot see the path leading to their cessation. That’s why suffering is very important for our practice.

I would add that contact with the world improves contemplation not only because it teaches us about suffering, but also because it gives us access to joyful insights. For example, Thich Nhat Hanh taught that one of the most important spiritual insights is “interbeing” — the notion that all things are mutually dependent on all other things. A great way to access that would be through a moment of wonder in a complex natural ecosystem, or through the experience of pregnancy, when cells from one individual integrate into the body of another seemingly separate self!

At this point, you might have a question for these Buddhists: Okay, it’s all well and good for you guys to talk about spiritual growth and social engagement going hand-in-hand, but you had the luxury of doing years of spiritual growth uninterrupted first! How am I supposed to train my mind while staying constantly engaged with a modern world that’s designed to fragment my attention?

Part of the answer, Buddhist teachers say, is to practice both “on and off the cushion.” When we think about meditation, we often picture ourselves sitting on a cushion with our eyes closed. But it doesn’t have to look that way. It can also be a state of mind with which we do whatever else it is we’re doing: volunteering, commuting to work, drinking a cup of tea, washing the dishes. Thich Nhat Hanh was fond of saying, “Washing the dishes is like bathing a baby Buddha. The profane is the sacred. Everyday mind is Buddha’s mind.”

But I think it’s really hard to do that in any kind of consistent way unless you’ve already had concerted periods of practice. And that’s the reason why retreats exist.

Buddhist monks commonly do this — sometimes for three years, or for three months, depending on their tradition — but you don’t have to be a monk or even a Buddhist to do it. Anyone can go on a retreat. I’ve found that even short, weekend-long retreats, where you’re supported by the silent company of other practitioners and the guidance of teachers, can provide a helpful container for intensive meditation and catalyze your growth. It’s a lot like language immersion: Sure, you can learn Italian by studying a few words on Duolingo alone each night, but you’ll probably learn a whole lot faster if you spend a chunk of time living in a Tuscan villa.

So here’s what I’d suggest to you: Pursue a career that includes actively doing good in the world — but be intentional about building in substantial blocks of time for contemplation, too. That could mean a year (or two or three) of meditative training before you go on the job market, to give you a stable base to launch off from. But it could also mean scheduling regular retreats for yourself — anywhere from three days to three months — in between your work commitments.

More broadly, though, I want you to remember that the ideas about the good life that you’re thinking through didn’t emerge in a vacuum. They’re conditioned by history.

As the 20th-century thinker Hannah Arendt points out, vita contemplativa (the contemplative life) has been deemed superior to vita activa (the life of activity) by most pre-modern Western thinkers, from the Ancient Greeks to the medieval Christians. But why? Aristotle, whom you mentioned, put contemplation on a pedestal because he believed it was what free men did, whereas men who labored were coerced by the necessity to stay alive, and were thus living as if they were enslaved whether they were literally enslaved or not.

In our modern world, Arendt notes, the hierarchy has been flipped upside down. Capitalist society valorizes the vita activa and downgrades the vita contemplativa. But this reversal still keeps the relationship between the two modes stable: It keeps them positioned in a hierarchical order. Arendt thinks that’s silly. Rather than placing one above the other, she encourages us to consider the distinct values of both.

I think she’s right. Not only does contemplation need action to survive (even philosophers have to eat), but contemplation without action is impoverished. If Aristotle had had an open-minded encounter with enslaved people, maybe he would have been a better philosopher, one who challenged hierarchies rather than reinforcing them.

It can be perfectly okay, and potentially very beneficial, to spend some stretch of time in pure contemplation like Aristotle — or like the Buddhist monk Ricard. But if you do it forever, chances are you’ll end up with the same regret as the monk: the regret of not putting compassion into action.

Bonus: What I’m reading

Not only does modern life make it hard to think deeply and contemplatively — with the advent of AI, it also risks homogenizing our thoughts. The New Yorker’s Kyle Chayka examines the growing body of evidence suggesting that chatbots are degrading our capacity for creative thought.

This week, I learned that rich Europeans in the 18th century actually paid men to live in their gardens as…“ornamental hermits”? Apparently it was trendy to have an isolated man in a goat’s hair robe wandering around in contemplative silence! Some scholars think the trend took off because philosopher Jean-Jacques Rousseau had just argued that people living in a “state of nature” are morally superior to those corrupted by modern society.

Twentieth-century Trappist monk Thomas Merton was a great lover of stillness. His poem “In Silence” is mainly an ode to the contemplative life. But he ends the poem with these cryptic lines: “How can a man be still or listen to all things burning? How can he dare to sit with them when all their silence is on fire?”


From Vox via this RSS feed

6
 
 

For the last half-century, America’s population growth has been concentrated in the sweltering, equal parts bone-dry and waterlogged, yet ever-sprawling Sunbelt. Undeterred by the limits of hydrology or climate, metro areas from Las Vegas to Miami have gotten one thing undeniably right. They have long led the country in housing construction, resulting in a relative plenitude and affordability that shames coastal cities in California and the Northeast, as well as a booming industry of takes imploring blue cities to learn from red states on housing.

But that abundance is already becoming a thing of the past. Across Sunbelt metros like Phoenix, Dallas, and Atlanta, housing supply growth has actually plummeted since the early 2000s, to rates almost as low as in hyper-expensive coastal cities, according to a new working paper by the leading urban economists Edward Glaeser and Joe Gyourko. Housing costs in these metros, while still lower than major coastal cities, have surged as a result.

Gyourko, a professor at the University of Pennsylvania’s Wharton School who decades ago documented slowing housing growth in superstar cities like New York and San Francisco, told me he was surprised to find the same pattern again unfolding, as if on a 20-year lag, in a region known for its lax regulations and enthusiasm for building things. Looking at the data, he thought, “Wow, Phoenix and Miami look like LA did as it was gentrifying in the ’80s.”

Although metro Phoenix, to unpack one example, is building a similar absolute number of homes as it did in the early 2000s, its population has grown by more than 58 percent since the turn of the century, so as a share of its current housing stock — the number that most matters, Gyourko says — it’s now building far less. If that trend continues even as demand to live in the Sunbelt remains undimmed, he said, “you would expect them to start to look more and more like Los Angeles.” By 2045, Arizona might be facing unaffordability and population loss crises much like those choking California today.

Chart showing rates of housing supply growth decreasing steadily each decade from 1950 to today, in Atlanta, Dallas, Detroit, LA, Miami, and Phoenix.

For many years, suburbs and exurbs have been the leading drivers of housing growth in Sunbelt cities, capturing most of the new population moving to the region. “The concepts ‘Sunbelt city’ and ‘suburb’ are nearly synonymous,” as historian Becky Nicolaides put it. But the slowdown in new housing builds across the region, Glaeser and Gyourko found, has been especially pronounced in well-off, low-density suburbs with desirable amenities like good schools. These suburbs have plenty of room to densify and welcome more neighbors — they just aren’t doing it.

“America’s suburban frontier,” the authors warn, “appears to be closing.”

The findings suggest that the fundamentals of housing in Raleigh, Orlando, or Miami are not so different from every other hot real estate market in the country. In most parts of the US with a growing economy and good jobs, the housing market has become badly broken to a degree that transcends the usual explanations, like regional differences in construction licensure rules or environmental review requirements — although those factors, without a doubt, matter.

So what’s really going on? Housing markets are complicated, and economic shocks like the Great Recession and the recent spike in interest rates have surely played a role. But the downturn in housing builds predates both those things, Glaeser and Gyourko found, suggesting a deeper cause. The Sunbelt may be confronting the same obstacle that has paralyzed growth elsewhere. It’s one of the most taken-for-granted facts of modern American life: the suburban model itself, and all its attendant political, regulatory, and financial problems.

Since the end of World War II, housing supply growth in the United States has overwhelmingly been driven by suburban sprawl radiating ever outward from city centers. Instead of building up, with density, we largely built out. But that engine may be running out of steam — and as a strategy for filling our national housing shortage, it’s failing spectacularly.

“It hasn’t been working in the supply-constrained coastal markets for four decades. What’s new is it looks like it’s starting not to work in the Sunbelt,” the country’s fastest-growing, most economically dynamic region, Gyourko said. “That changes the nature of America.”

The strangeness of housing policy in the US can be summed up like this: On a national level, we long for growth. On a local level, we do everything possible to smother it. That contradiction stems, in part, from our dependence on sprawl.

America is a nation of suburbs — that’s certainly not changing any time soon. And there’s nothing inherently wrong with suburbs, a housing arrangement as old and varied as human civilization. But to solve the housing crisis that is at the root of so many national problems, Americans will have to fundamentally rethink what the suburb is, and what it could become.

American suburbia, briefly explained

If you, like me, are too online for your own good, perhaps you’ve seen some version of this meme.

![A screenshot of a tweet by "bob's burgers urbanist" (@yhdistyminen). The tweet reads: "This kind of smart, walkable, mixed-use urbanism is illegal to build in many American cities." The attached image is a still from the animated show "Bob's Burgers," showing the street view of the restaurant and its neighboring buildings: colorful, multi-story buildings built close together, with businesses like Bob's Burgers and Jimmy Pesto's Pizzeria on the ground floor and apartments above, depicting a dense, traditional city street with power lines overhead.](https://platform.vox.com/wp-content/uploads/sites/2/2025/06/dwvm9h70bm591.jpg?quality=90&strip=all&crop=0%2C26.653797432782%2C100%2C46.692405134435)

That image is a pretty accurate reflection of what American cities used to look like by default. Our suburbs, too, once looked much like this — remnants of the pattern can still be seen in places like Oak Park, Illinois (a suburb of Chicago), University City, Missouri (outside St. Louis), or Brookline, Massachusetts (neighboring Boston). Derived from the Latin word “suburbium,” meaning the area under or near a city, suburbs are so old that if you’ve ever thought about them, congratulations, you’ve been thinking about the Roman Empire.

Of course, what dense, older suburbs like Brookline or Oak Park have in common is that, like the cities they neighbor, they were largely laid out before mass car ownership. It was only relatively recently that suburbs became synonymous with a specific, car- and sprawl-oriented development style.

If the Western frontier defined American optimism in the 19th century, the suburban frontier defined it in the 20th. It’s a story you may already know in broad strokes: Before World War II, only a small share of Americans lived in suburbs, with the bulk living in rural areas and central cities. After the war, a complex alchemy of factors — including a national economic and population boom, federally backed mortgages that favored suburban homes, a Great Depression- and war-era housing shortage, and white flight — produced one of the greatest social and spatial transformations in the country’s history.

It would be easy, from our 21st-century perspective, to simply be bewildered by the urban planning decisions that fueled this wave of suburbanization. But those choices make a lot more sense when framed by the daily realities of mid-century urban life. Much of the prewar urban housing stock was genuinely terrible — many people lacked access to a full bathroom or even a flush toilet. Cities were still manufacturing centers and had the pollution to go with it. Americans who could afford to move were understandably pulled toward modern, spacious houses being built on an unprecedented scale in new tracts outside the city.

A high-angle, black and white aerial photograph of a sprawling, post-World War II American suburban development. Hundreds of nearly identical, two-story single-family homes are arranged in uniform rows along curving streets. The community is carved out of a dense forest, which borders the neighborhood and stretches into the background where a body of water is visible.

As this shift took place, the nature of the suburbs changed, from an organic extension of the city to what must have looked, to some at the time, like an alien planet. By 1970, most Americans dwelling in metropolitan areas — meaning a core city and its adjacent areas — were living in suburbs, and by 2010, most Americans were, full stop. Sunbelt cities like Phoenix and Las Vegas, which in the decades after World War II grew from little more than desert towns to megacities, developed in a particularly suburban, car-dependent form.

For a long time, that model worked well for a lot of people. But there was a problem that slowly made itself felt: Though it was itself the product of a major transformation, postwar American suburbia relied on a restrictive set of rules that made suburban neighborhoods, once built, very difficult to change. Irrationally rigid regulations on housing remain in place across the country today. If you live in a single-family home, there’s a very good chance you’re banned from dividing your house into a duplex, redeveloping it into a triplex or apartment building, renting out a floor to a tenant, or opening a corner store.

These rules are set out by a system known as zoning: local regulations on what kind of things can be built where. Zoning, including single-family-exclusive zoning, first spread across the US in the early 20th century (before that, development was far more freewheeling and improvised). It reached its full expression after World War II, when it became a near-universal toolkit for shaping suburban America.

At first glance, the idea of zoning seems reasonable enough: Factories that emit toxic pollutants should probably be kept away from residential areas, for example. In practice, it has amounted to an extraordinarily heavy-handed, totalizing form of central planning controlling the fabric of daily life.

The overwhelming majority of residential land nationwide, as housing advocates are fond of saying, bans the construction of anything other than detached single-family houses — and that’s just the beginning. Zoning codes include legions of other rules, often including minimum size requirements (effectively banning starter homes) and mandatory parking spots for at least two cars. Apartments, in many areas, are zoned out of existence.

Suburbs exist all over the world. But the US, despite our national reputation for freedom and individualism, is nearly unique in having such a prescriptive segregation of land uses governing what people are allowed to do with what is, don’t forget, their own property, as Sonia Hirt, an architecture and planning professor at the University of Georgia, explains in her book Zoned in the USA.

We’re also unusual in granting privileged status to one specific, costly, and resource-intensive type of housing. “I could find no evidence in other countries that this particular form — the detached single-family home — is routinely, as in the United States, considered to be so incompatible with all other types of urbanization as to warrant a legally defined district all its own, a district where all other major land uses and building types are outlawed,” Hirt writes.

Suburban-style zoning has become widespread not just in suburbs proper, but also in core cities, many of which have adopted similar zoning codes that would have made their original growth and housing diversity impossible.

In that sense, suburbia isn’t just a specific place — it’s a mindset that’s become the default American settlement pattern. For mid-century home buyers, the costs of our suburban revolution were distant. But it didn’t take long for those costs to be felt nationally.

The suburban wall

By rigidly defining what a community is allowed to look like, suburban zoning has done more than simply shape the physical form of our cities. It has also made it all but impossible for many communities to adapt and grow, as human societies always have, which has created severe distortions in housing markets.

“The suburban development model is built on the premise of stasis,” as Charles Marohn, a civil engineer and founder of the advocacy group Strong Towns, has put it. “These neighborhoods are frozen at their current number of households, no matter how much the surrounding city transforms. No matter how many jobs are created. No matter how desirable the area is or how high rents get,” he wrote in his recent book Escaping the Housing Trap.

That stasis quickly froze America’s most desirable metro areas, leaving them unable to build enough housing to meet demand. And when housing becomes scarce relative to the number of people who want to live in the community, it simply becomes more expensive.

Starting in the 1970s, home construction plummeted and prices soared in high-opportunity coastal cities because of restrictions on supply. Los Angeles, incredibly, downzoned its residential areas to such an extent between 1960 and 1990 that its total population capacity, as measured by the number of households it’s zoned to accommodate, declined from 10 million people to about 4 million, which is the level the city’s population has hovered around for the last decade.

The upshot is that many of America’s metropolitan areas have become dominated by what economist Issi Romem identified in 2018 as a “dormant suburban interior.” After World War II, cities and suburbs built out and out, mostly low-density single-family homes, before they largely stopped building altogether because zoning laws forced them to maintain an inflexible suburban form. Despite a few pockets of dense growth, most residential areas have been locked out of building incrementally and thickening up, even as demand to live there increases.

A gif of a map showing the Los Angeles area building progressively less housing every decade since 1940.

When a high-demand city refuses to allow greater housing density, the dynamic becomes progressively more toxic, not just because homes become more scarce, but also because market incentives can push developers to replace cheaper, smaller single-family houses with more costly McMansions (as opposed to, in a healthier market, building apartments or a set of townhomes that could house more people in the same amount of space, for less money per household).

In expensive cities, proposals to build more housing have, famously, often been blocked by angry neighbors (derisively called NIMBYs) who rely on a labyrinthine tangle of zoning laws to foil change that they don’t like. Now, that vicious cycle is poised to catch up with the South and Southwest, where, Glaeser and Gyourko believe, the decline in housing starts is likely a function of incumbent residents using regulation to make it harder to build.

“People in the Sunbelt, now that things have gotten big enough, they’ve figured out what the Bostonians figured out a long time ago, and the Angelenos,” Gyourko said. (And plenty of anecdotal evidence from local housing fights in the Sunbelt, Slate reporter Henry Grabar has noted, points to the same thing.)

Suburbia offered Americans an implicit bargain: Neighborhoods would never have to change, and we could instead accommodate more people by sprawling outward forever. To a great extent, that’s what’s happened, and it’s given us lots of single-family homes, but also a mind-boggling expanse of costly, deteriorating infrastructure, nightmare commutes, unrestrained greenhouse gas emissions, sedentary, disease-promoting lifestyles, and one of the highest car crash death rates in the developed world.

And we’re still in a housing crisis, because even in the sprawl-hardened, car-loving Southwest, sprawl has its limits. I put this question to Gyourko: Once the most distant, low-density exurbs of, say, Dallas declare themselves full, why don’t developers simply keep building the next ring of sprawl 50-plus miles away? “People don’t want to live that far,” he said (he later clarified that we don’t know the precise outer limit beyond which housing demand dwindles). Human prosperity has always depended on proximity to one another and to opportunity — and even in 2025, it still does.

Let people do things

The US has gotten steadily more suburban over the last century, but not uniformly so. In the early 2010s, many core cities, including Denver, Atlanta, and Washington, DC, grew faster than suburbs, due to a combination of younger generations’ increasing interest in urban lifestyles and a collapse in suburban home construction after the Great Recession.

Some of the most expensive homes in the country are consistently those located in dense, vibrant prewar cities, a clear signal that there’s high demand for those amenities. The revival of cities in the last few decades and the ongoing suburbanization of the US, Gyourko said, have both been happening at the same time.

Nevertheless, many Americans today, particularly post-Covid, still demonstrate a preference for the suburbs, for all sorts of reasons, including cheaper, larger homes for families of the kind that can be hard to find in cities. Americans are also spending more time alone and at home, and working remotely, which might increase their preference for spacious living quarters and diminish interest in urban life.

“There’s a pendulum that swings between loving the city and loving the suburbs, and it was absolutely shifting towards loving the city” in the 2010s, Romem told me. “And then the pandemic came and undid all of that.”

The disruptions of Covid also revealed the fragility created by American-style urban planning. Because of our preexisting shortage of about 3.8 million homes, a small share of Americans moving residences upended housing markets across the country.

We’re starting to see big shifts in housing policy

Plenty of cities and states, especially since the post-2020 run-up in home prices, have finally begun to take their largely self-inflicted housing shortages seriously. “A bunch of broken policies that seemed unfixable a year ago are actively being fixed,” said M. Nolan Gray of California YIMBY.

The sheer volume of new laws meant to make it easier to build homes has been overwhelming, reflecting the morass of local obstacles. Here are just a few:

- 2016: California made it much easier to build accessory dwelling units (ADUs), also known as mother-in-law suites or granny flats, alongside houses on single-family lots. The state has since passed several additional laws to close loopholes that localities were using to block ADU construction.
- 2018: Minneapolis became the first major US city to end all single-family-exclusive zoning, prompting national discussion about why we ban apartments in residential areas at all.
- 2019: Oregon required municipalities larger than 10,000 people to allow duplexes, and those over 25,000 to allow duplexes, triplexes, and other multi-family housing, on single-family lots.
- 2023: Montana and Washington state required many cities and suburbs to allow multi-family housing and ADUs.
- 2025: California exempted apartment construction in its cities and suburbs from onerous environmental review requirements that in practice have often been weaponized to block density. North Carolina’s House unanimously passed a bill to prevent local governments from requiring parking spots — which are expensive and take up lots of space — in new housing.

If it sounds draconian for states to interfere in cities’ and suburbs’ policies, consider that the US is unusual in its hyperlocal control over housing. Although huge barriers remain, we’re just beginning to see the contours of a major shift in how housing in America gets regulated and built.

Skyrocketing housing prices since the pandemic have given new fuel to the YIMBY (or “Yes in my backyard”) movement, which for more than a decade has sought to legalize the full diversity of housing options across the US. At bottom, YIMBYism is about freeing cities and suburbs from “the zoning straitjacket,” as Gray, an urban planner and senior director of legislation and research at California YIMBY, put it. In other words, he said: “Let people do things.”

“A city is the ultimate form of emergent order. A city represents the plans of the millions of people who live there and work there and play there and study there,” he said. “The basic instinct of zoning is that we can sit down and write out the exact appropriate types of uses, scale of those uses, and exactly where those uses can go — and it’s just such a presumptuous way to govern a city.”

The deeper implication is not just that we need more housing, but also that suburbs must be allowed to function like the miniature cities they are. They should be flexible enough to support a range of human aspirations — not just the hallmarks of stereotypical suburban life, but also the amenities of urban life. “No neighborhood can be exempt from change,” as Marohn put it.

Zoning exclusively for detached single-family homes, for example, has never made much sense, but it especially doesn’t make sense in 2025, when most American households contain just one or two people. Recognizing this, along with the severity of their housing crises, a number of cities and states have gotten rid of single-family-exclusive zoning in the last decade, along with other barriers to building housing. But because zoning codes are enormously complicated, repealing one barrier often isn’t enough to actually allow multifamily housing to get built — things like height limits or excessive parking minimums can still make it infeasible.

“Housing is like a door with a bunch of deadbolts on it,” Alli Thurmond Quinlan, an architect and small-scale developer based in Fayetteville, Arkansas, told me. “You have to unlock all the deadbolts, but as soon as you do, there’s an enormous amount of human creativity” that rushes in. She stresses that communities shouldn’t worry about repealing too many zoning rules; if anything, they should err on the side of going further.

Repeal minimum lot sizes, and a developer might find a way to build a cute narrow house in a gap between existing houses. Removing parking requirements made it possible to build this lovely set of densely clustered cottages — a development style that can blend harmoniously into suburban neighborhoods — in Atlanta at a significantly lower cost:

A cluster of four modern cottage-style homes arranged around a shared green lawn on a sunny day. The houses vary in size and color, including a two-story dark blue house on the left, and a two-story light teal and a one-story darker teal house on the right. The homes are nestled among large, mature trees under a clear blue sky. This image showcases a pocket neighborhood or cottage court development.

A single-family house, meanwhile, can be turned into a duplex:

An exterior photo of a modern, two-story red brick duplex situated on a residential street between two older, traditional houses. The new building has a boxy shape with several gabled rooflines, dark metal balconies, and a central section that rises higher than the rest. To its right is a classic Victorian home in shades of yellow and cream, creating a stark architectural contrast between the new infill construction and the historic style of the neighborhood. The photo is taken on a sunny day with a clear blue sky.

Right now, what little density is being added to cities and suburbs often comes in the form of large apartment buildings (you may know them as “gentrification buildings”). There’s nothing wrong with those, and they have an important role to play in mitigating the housing shortage. Yet many people don’t want them built in single-family neighborhoods. Making it legal to incrementally densify single-family neighborhoods would allow suburbs to still look like suburbs, while greatly increasing their population capacity and their ability to support essential services like public transit.

“The dormant suburban sea is so vast that if the taboo on densification there were broken, even modest gradual redevelopment — tearing down one single-family home at a time and replacing it with a duplex or a small apartment building — could grow the housing stock immensely,” Romem wrote in 2018.

That style of neighborhood development — gradually over time, rather than building to completion all at once — also happens to be the secret to creating places with a visually appealing vernacular character, Romem said. “True character comes from layer upon layer over a span of many years, from many people’s different, disparate decisions. And that requires change.”

What should suburbs be for?

At the dawn of mass suburbanization, Americans had legitimate reasons for wanting to move out of cities, where substandard housing and overcrowding were still commonplace. But “one generation’s solutions can become the next generation’s problems,” as journalists Ezra Klein (a Vox co-founder) and Derek Thompson wrote in their book Abundance. The same forces that built the American dream 80 years ago are now suffocating it, inflicting profound pain on families across the country.

For me, this subject is personal: I’ve lived in apartments literally my entire life, a form of housing often treated as second-class, if it’s even permitted to exist. Some of that has been in cities, and some in a suburb. My immigrant mother worked incredibly hard to find homes that were safe, stable, and affordable to raise a child in. America gave me everything, but our national housing reality made things far more difficult for her than they needed to be.

There’s no shortage of wonky policy ideas about how to fix housing in the US — and they go far beyond just zoning codes (you don’t want to hear me get started on building codes or impact fees). We will also need a society-wide paradigm shift beyond policy: The financial and real estate industries will need to relearn models for supporting incremental densification, which, experts consistently told me, have fallen by the wayside since the entrenchment of sprawl and restrictive zoning.

More than that, our minds will have to open up to the inevitability of constant change, and abandon the idea that any of us has a right to veto our community’s evolution. As Marohn points out in Escaping the Housing Trap, “a community that has lost all affordable starter housing already has changed irreversibly. It is only the buildings that have not.”

The suburbs, above all, must be allowed to be plural. Across cultures and centuries, people of all sorts of circumstances have lived on the outskirts of urban life. Today, Americans of every social class seek homes in the suburbs. Some are affluent; many are not. Others want to be near a good school, a job, a support system, or simply a hard-won foothold of affordability. It’s not the role of a planning board or a legacy zoning map to decide. We don’t know what the future of the suburbs will be — but we can free them to become what we need of them.


From Vox via this RSS feed

7
 
 

People react after getting their certificate of naturalization during a naturalization ceremony.

People react after getting their certificate of naturalization during a naturalization ceremony at the JFK Library in Boston, Massachusetts, on May 22.

President Donald Trump is reviving a familiar playbook to target naturalized US citizens.

The Justice Department recently announced a new push to strip certain people of their citizenship through denaturalization proceedings. Individuals who pose a danger to national security, have committed violent crimes, or fail to disclose a felony history (or make other misrepresentations) on their citizenship application are among those now being prioritized for denaturalization and deportation. In doing so, the administration is likely seeking to expand an authority that the Supreme Court drastically limited decades ago.

The president and White House officials have suggested that some prominent denaturalization targets could include one-time Trump megadonor Elon Musk, with whom the president had a public falling out, and Zohran Mamdani, a progressive who recently won the Democratic nomination for mayor of New York City. It’s not clear, however, what legitimate grounds the administration might have to denaturalize either of them.

The news may rattle any of the estimated 24.5 million naturalized citizens currently living in the US. That might especially be the case for those who have voiced opposition to Trump, given that his administration has already weaponized immigration policy against dissidents.

Ostensibly, denaturalization is about protecting the integrity of the citizenship process. In practice, the new push “is about targeting speech the government doesn’t like, and it is chilling all naturalized citizens,” said Amanda Frost, a professor at the University of Virginia School of Law and author of You Are Not American: Citizenship Stripping From Dred Scott to the Dreamers.

This wouldn’t be the first time denaturalization has been used as a tool of political repression. During the Red Scare following World War II, the US pursued denaturalization cases with an eye toward rooting out un-American behavior, both real and perceived.

Scholars now see echoes of that era in Trump’s strategy.

“There’s increasing rhetoric of trying to take people’s citizenship away for political reasons,” said Cassandra Burke Robinson, a professor at Case Western Reserve University School of Law who has studied denaturalization. “I think any time you treat that as even a possibility to be considered, you’re going down a really dangerous slope.”

What denaturalization looked like during the Red Scare

In the 1950s and 1960s, fears about the spread of communism took hold of the US. A political movement known as McCarthyism — named after then-Senator Joseph McCarthy — sought to purge anyone in government with connections to the Communist Party. Denaturalization was one of the tools McCarthyites relied on, and, at the height of the movement, the US was denaturalizing more than 20,000 people per year, Burke Robinson said.

In these cases, the government argued that if an individual became a member of the Communist Party at any time, that person had been lying when taking an oath of allegiance to the US as part of their citizenship test and, therefore, could be denaturalized. Later, that argument evolved to target Americans with disfavored political views or who were perceived as disloyal to the US more broadly, not just Communist Party members.

Among the primary targets of denaturalization were members of the German American Bund, the American Nazi organization. However, targets also included political gadflies, such as labor leaders, journalists, and anarchists.

“Those whose speech the government didn’t like could get removed, and everyone else could stay. They used their discretion in this area to accomplish that goal,” Frost said.

Among those targeted for denaturalization was the Australian-born labor leader Harry Bridges, who led longshoremen strikes in California. He accepted support from the Communist Party as part of his union activities, but the government never found evidence that he was a member himself. The notorious House Un-American Activities Committee investigated Bridges, and the government sought his deportation and, once he became a citizen, denaturalization, but never succeeded.

Denaturalizations decreased significantly, from tens of thousands to fewer than 10 annually, after the Supreme Court’s 1967 decision in Afroyim v. Rusk. In that case, the justices found that the US government does not have the power to denaturalize people without their consent because citizenship is guaranteed by the Constitution’s 14th Amendment.

“They said you could only lose your citizenship if you very explicitly renounce,” Frost said. “The United States government governs with the consent of the citizens. It’s not allowed to choose its citizens.”

For decades, the ruling meant that denaturalization was a rare phenomenon. However, the court included an exception for cases in which citizenship was “unlawfully procured” — meaning the person was not eligible for citizenship in the first place due to acts like committing war crimes. That’s what Trump is now relying on to revive the tactic.

What Trump’s denaturalization plans could look like

Denaturalizations have been increasing since the Obama administration, when the digitization of naturalization records made it easier to identify individuals whose citizenship applications showed discrepancies with other government records. Most denaturalization cases during this period involved people who had committed acts of terrorism or war crimes.

But Trump made denaturalization a priority during his first administration, including targeting anyone who merely had errors on their naturalization papers. The DOJ launched a new section focused on denaturalization and investigated some 700,000 naturalized citizens, resulting in 168 active denaturalization cases — more than under any other modern president. It’s not clear how many of them were ultimately denaturalized and deported.

Trump is now picking up where he left off. The administration has said that it will pursue these denaturalization cases in civil rather than criminal court proceedings. In such proceedings, individuals are not entitled to an attorney, and the legal bar for the administration to prove that a citizen did something to warrant denaturalization is lower than it would be in criminal court. There is also no limit on how long after naturalization the government can seek to revoke someone’s citizenship.

All of that raises due process concerns.

“Somebody might not know about the proceedings against them. There might be a good defense that they’re not able to offer. There’s no right to an attorney,” Burke Robinson said. “It seems to me to be really problematic.”

There’s also the question of how far this Supreme Court will be willing to go to rein in Trump’s denaturalization efforts. Its 2017 decision in Maslenjak v. United States maintained a high bar for denaturalization: The court found that an alleged misstatement in a Bosnian refugee’s citizenship paperwork could not have kept them from becoming a citizen, even if it had been discovered before their naturalization, and could not be used as grounds to denaturalize them in criminal proceedings.

That makes Burke Robinson “somewhat hopeful that the court does take the issue very seriously.”

“But that was 2017,” she added. “It is a different court now, so it’s very hard to predict.”


From Vox via this RSS feed

8
 
 

For all the talk of the glamour and ritz of big-city living, the United States is (and will remain) a nation of suburbs. Yet suburbs, too, are increasingly unable to build enough housing, affected by the same slowdown as their denser counterparts. The old suburban frontier is closing — but that means we have an opportunity to take another look at this often-maligned model of American life and develop a new and better vision.

Also in this issue, you’ll find the “intellectual vibe shift” that could lead to an era of liberal flourishing and a feature on the path to a less painful IUD procedure. Plus: Can AI stop bioweapons? And are microwaves actually bad for your health?

The old suburban frontier is closing. Here’s what the new one could look like.

By Marina Bolotnikova

The spiritual life calls out to me. But is it self-indulgent?

By Sigal Samuel

Will we know if the next plague is human-made?

By Joshua Keating

Coming July 8

The end of the anti-liberal moment

By Zack Beauchamp

Coming July 9

What if IUD insertion didn’t have to be so painful?

By Allie Volpe

Coming July 10

Microwaves produce radiation. Is that bad for me?

By Dylan Scott

Coming July 11


From Vox via this RSS feed

9
 
 

Zohran Mamdani, a bearded, dark-haired man wearing a suit with a white shirt and a patterned tie, stands with a crowd of supporters behind him.

Zohran Mamdani, the Democratic candidate for New York City mayor, speaks during a press conference celebrating his primary victory on July 2. | Angela Weiss/AFP via Getty Images

Last weekend, my colleague Christian Paz wrote about how the Democratic Party could be on the brink of a grassroots takeover, similar to what the GOP experienced with the Tea Party movement. It’s a fascinating piece that could have huge ramifications for Democratic politics, so I sat down with him to chat about his reporting for Vox’s daily newsletter, Today, Explained.

Our conversation is below, and you can sign up for the newsletter here for more conversations like this.

Hey, Christian, how are you? Remind us what the original Tea Party was. What is this movement we’re talking about?

The movement that I’m talking about started before Obama was elected. It was a mostly libertarian, grassroots, localized, not-that-big movement — a reaction to the bailouts at the end of the Bush administration. The idea being there’s too much deficit spending and government is becoming way too big and becoming unmoored from constitutional limited-government principles.

It evolved when Obama was elected into a broader anti-Obama backlash and then a major explosion because of the Affordable Care Act fights. It basically turned into an effort to primary incumbent Republicans, an effort to move the party more toward this wing and eventually try to win back control of Congress.

After it took off, what happened to the GOP?

They were able to win, I believe, five out of the 10 Senate seats that they were challenging. Something like 40 members of Congress were Tea Party-affiliated.

The primary thing was that they were successful in massively mobilizing Republican voters and getting people to turn out in the 2010 midterms, which turned out to be one of the biggest “shellackings,” as Obama called it, that Democrats or that any incumbent president and their party had sustained. Democrats lost control of the House and lost seats in the Senate, and that was a massive setback.

From then on, what happened was a successful move by more conservative primary challengers in future elections, the most iconic one being in 2014 — the primary that ousted Eric Cantor, the House majority leader, in favor of a Tea Party activist. It also forced the party as a whole to move to the right, making it more combative, more extreme, and more captive to a more ideological part of the Republican base.

Why are we hearing about this now with the Democratic Party?

The underlying idea is that there’s a divide between the establishment Democrats and populist-minded progressive Democratic candidates. And that’s part of the reason why we’re hearing this now, because there was a victory in New York City’s mayoral primary by Zohran Mamdani, a candidate who is fully in that latter category — a self-described democratic socialist appealing to this idea of bringing out new parts of the electorate, mobilizing people with populist appeal, with targeted, non-polished messaging, and taking more left-leaning positions on policy.

The big thing fueling talk about this Tea Party moment for Democrats is that the base has never really been as angry as it is right now. What we’re seeing is a combination of anti-Trump anger, wanting a change in direction, wanting a change in leadership, and also some folks who are like, Maybe we should become more progressive as a party.

So tell me about that. A change in leadership, a change in the establishment — what does this movement actually want?

It’s interesting. Because at least back with the original Tea Party movement, you could point to a core list of priorities: repealing Obamacare, never repeating a bailout, limiting the federal government’s ability to spend.

Something like that doesn’t exist right now, because it is a pretty disparate energy. The core thing is Democratic voters do not want the current leadership in Congress. They don’t like Hakeem Jeffries’s style of leadership in the House. They don’t like Chuck Schumer’s style of leadership in the Senate. There’s frustration at older members of Congress being in Congress and serving in leadership capacity right now.

In the polling, over and over again, we see: Democrats should be focused on providing a working-class vision for Americans. They should be more focused on kitchen-table affordability issues. That is the thing most Democratic voters can actually agree on, and they’re saying that’s not what their current leadership is focused on.

What would it look like for the Democratic Party if this actually happens?

There are some strategists and activists who are drawing up lists of potential candidates to primary. There are already some challenges underway. I’m thinking of some House seats in Arizona, House seats in Illinois. There’s talk, especially after this New York City mayoral contest, about primarying Kirsten Gillibrand or Chuck Schumer and finding challengers to some more moderate House members in the New York area.

I’d be looking to see if there actually are younger people launching primary campaigns targeting older or centrist Democratic members of Congress. Once we get to primary season next year, how successful in fundraising are these candidates? Is there an actual effort by some established progressive members of the House to try to support some of these younger candidates?

Basically, just seeing if there’s money there, if there’s actual interest in supporting these candidates, and whether primary challenges in New York and Massachusetts actually succeed.


From Vox via this RSS feed

10
 
 

Kendrick Lamar, wearing a red, white, and blue jacket, stands amid dancers wearing red, white, and blue and choreographed to resemble the American flag.

Kendrick Lamar performs onstage during Super Bowl LIX Halftime Show on February 9, 2025, in New Orleans, Louisiana. | Patrick Smith/Getty Images

Imagine your average Fourth of July party. There are probably hot dogs on the grill, everyone is clad in red, white, and blue, and it culminates in a fireworks show. It may sound like a lovely way to spend a day off. But for a lot of Americans, the celebration, and the flag itself, are more complicated than that.

That’s the question that Explain It to Me, Vox’s weekly call-in show, is setting out to tackle this holiday weekend: What’s the relationship like between Black people and the American flag?

Specifically, one listener wanted to know, in the wake of the red-white-and-blue spectacle of Beyoncé’s Cowboy Carter and Kendrick Lamar’s Super Bowl halftime show, how that conversation has evolved over time.

This is something Ted Johnson thinks a lot about. Johnson, who is Black, is an adviser at the liberal think tank New America, a columnist at the Washington Post, and a retired US Navy commander. “The flag has sort of been hijacked by nationalists — folks who believe either America is perfect and exceptional, or at the very least, anything that it’s done wrong in the past should be excused by all the things that it’s done well,” Johnson told Vox. “And that is not my relationship with the flag. It’s much more complicated because there has been tons of harm done under that flag.”

How do Black Americans square that harm and that pride? And how has that relationship changed through the years? Below is an excerpt of the conversation with Johnson, edited for length and clarity.

You can listen to Explain It to Me on Apple Podcasts, Spotify, or wherever you get podcasts. If you’d like to submit a question, send an email to [email protected] or call 1-800-618-8545.

One way to tease out this relationship between Black Americans and the flag is to talk about the experience of Black service members. What’s that history?

One of the earliest instances is the story of an enslaved man named Jehu Grant in Rhode Island during the Revolutionary War. The man that owned him was a loyalist to the Brits. Grant was afraid that he was going to be shipped off and sold to the Brits to fight for them. So he runs away, joins Washington’s army and fights in the Continental Army, and then his master shows up and says, “You’ve got my property, and I want it back.” And the Army turns him back over to the guy that owns him, where he serves for many years and eventually buys his freedom.

When Andrew Jackson becomes president in the 1820s, he makes it policy to provide pensions for those Revolutionary War folks still alive. And so Grant applies for his pension and is denied. The government says that services rendered while a fugitive from your master are not recognized.

That is the relationship of Black service members to the flag. It represents a set of principles that many would be willing to die for and also a way of life that intentionally excluded Black folks for no other reason than race and status of their servitude. And so if you look at any war, you will find Black folks in uniform who have both been oppressed in the country they represent, and are willing to die for that country because of the values it stands for and for their right to be able to serve and benefit from the programs that the military has made available to folks.

My grandfather served in the military and I never got the chance to really talk with him about that experience. But I’m curious if you can speak to the motivations of Black Americans who continue serving, especially during the Jim Crow era.

Pre-Civil War, a lot of enslaved Black folks that decided to fight did so because they believed their chances at liberty, emancipation, and freedom were connected to their willingness to serve the country. Then we get the draft and a lot of the Black folks that served in the early part of the 20th century were drafted into service. They weren’t eager volunteers lining up as a way of earning their citizenship, but the fact that the vast majority of them honored that draft notice even though they were treated as second-class citizens was a sort of implicit demand for access to the full rights of the Constitution.

“There is a belief that the United States is ours as well. We have a claim of ownership. And to claim ownership also means you must sort of participate in the sacrifice.”

I’d be remiss if I said that folks joining today, for example, are doing so because they love the flag. The military has a great pension program. The military offers great programs if you want to buy a home or if you want to get an education. So there’s a sort of socioeconomic attractiveness to the military that I think explains why Black folks continue to join the military post-draft.

But it is also because there is a belief that the United States is ours as well. We have a claim of ownership. And to claim ownership also means you must sort of participate in the sacrifice.

When a lot of those service members came back from war, they were met with systemic institutionalized racism. How were people continuing to foster that sense of patriotism despite all that?

When Black folks were coming home from World War I and II, many were lynched in uniform. They weren’t even excused from the racial dynamics by being willing to die for the country.

One of the most famous genres of music in this period was called coon music. One of the songs was about Black people not having a flag. They talked about how white folks in the Northeast could fly flags from Italy, Ireland, wherever they’re from. And white people in the States could just fly the American flag. Black people could fly none of those because we didn’t know where we were from and the United States is not ours. And so in this song, they say the Black flag is basically two possums shooting dice and that would be an accurate representation.

Wow. That is some classic old-school racism.

Yeah, the song is called “Every Race Has a Flag, but the Coon.” And so we are very familiar with the red, black, and green pan-African flag. This was Marcus Garvey’s response to this coon genre of music.

There’s this idea among Black Americans of, We built this. Of course I’m going to reclaim this. Of course I’m going to have pride in it because I built it. I think that’s what we’re seeing with a lot of the imagery now.

But what about Black artists and also Black people in general who say, Our ancestors may have done all this work, but there really is no way to be a part of this and maybe we should not be trying to be a part of this?

If you take pride in the flag because you believe America is exceptional, you’re going to find a lot fewer subscribers to that belief system than one where your pride in the country means being proud of the people you come from and proud of the arc of your people’s story in this country.

On the latter, you will find people who are very proud of what Black people have accomplished in this country. For me, patriotism means honoring those sacrifices, those people that came before us. It does not mean excusing the United States from its racism, from its perpetuated inequality, or for putting its national interests ahead of the people that it’s supposed to serve. So it is very complicated, and there’s no easy way through it.

I will say that I think part of the reason we’re seeing more folks willing to sort of reclaim the flag for their own is because of Gen X. My generation was the first one born post-Civil Rights Act of 1964, so Jim Crow was the experience of our parents. The hijacking of the flag to connect it to explicit statutory racism feels generations removed from folks who have grown up in an America where opportunity is more available, where the Jim Crow kind of racism is not as permitted. And while the country is not even close to being the kind of equal nation it says it was founded to be, it’s made progress.

I think a reclamation of that flag by Beyoncé and others is a sort of signal that yes, we built it. Yes, we’ve progressed here. And no, we’re not leaving. There’s no “go back to Africa.” This is home. And if this is home, I’m going to fly the flag of my country. There’s lots to be proud of about what the country has achieved and by Black Americans in particular. And for me, that is all the things that patriotism represents, not the more narrow exclusive version that tends to get more daylight.

I think one thing we need to discuss is the definition of Black we’re using here. I am what they would call Black American. My ancestors are from Alabama and Arkansas. They were formerly enslaved.

But Blackness in America now has a much wider net. I have so many friends whose parents are immigrants from the Caribbean or Africa. And it’s interesting in this moment where there are lots of conversations about what it means to be Black, and who gets to claim it, we’re also seeing this flag resurgence.

I think it’s probably true that there are more Black people who are first-generation Americans today than there have been since they started erasing our nations of origin during slavery. That means Black American doesn’t just mean people who descended from slaves. It means Black people of all kinds.

When we talk about Black politics, we don’t consider the Black immigrant experience. When we talk about Black Americanism or Black patriotism, we often don’t account for the Black immigrant experience, except to the extent that that experience is shed and the American one is adopted. Those views sort of get thrown into this pot of Blackness instead of disaggregated to show how Black folks from other places who become Americans have a distinct relationship with the country that also affects their relationship with the iconography of the country like the flag, the national anthem, and this reclamation of red, white, and blue.

There may be some Black artists — I think of Beyoncé — who are reclaiming this imagery, but we also can’t ignore who has a majority stake in it. When people think of the flag, they think of white people. Is that changing?

It is, but slowly. If you ask people from around the world to picture a stereotypical American, they’re not picturing LeBron James, despite the medals he’s won at the Olympics. They’re probably picturing a white man from the Midwest.

The fact that so much of our nation’s history is racialized means that many of the nation’s symbols are also racialized. And to deracialize the things that were created in its origin is a long-term process. I do think it’s beginning to happen. I think it’s going to be some time before we get to a de-racialized conception of the United States.


From Vox via this RSS feed

11
 
 

Cabinet containing an automatic external defibrillator in Austin, Texas, on March 9, 2023. | Smith Collection/Gado/Getty Images

A day before my 47th birthday last month, I took the subway to Manhattan’s Upper East Side for a coronary artery calcium scan (CAC).

For those who haven’t entered the valley of middle age, a CAC is a specialized CT scan that looks for calcium deposits in the heart and its arteries. Unlike in your bones, having calcium in your coronary arteries is a bad thing, because it indicates the buildup of plaque composed of cholesterol, fat, and other lovely things. The higher the calcium score, the more plaque has built up — and with it, the higher the risk of heart disease and even heart attacks.

A couple of hours after the test, I received a ping on my phone. My CAC score was 7, indicating a small amount of calcified plaque, which translates to a “low but non-zero cardiovascular risk.” Put another way, according to one calculator, it means an approximately 2.1 percent chance of a major adverse cardiovascular event over the next 10 years.

2.1 percent doesn’t sound high — it’s a little higher than the chance of pulling an ace of spades from a card deck — but when it comes to major adverse cardiovascular events, 2.1 percent is approximately 100 percent higher than I’d like. That’s how I found myself joining the tens of millions of Americans who are currently on statin drugs, which lower levels of LDL cholesterol (aka the “bad” cholesterol).

I didn’t really want to celebrate my birthday with a numerical reminder of my creeping mortality. But everything about my experience — from the high-tech calcium scan to my doctor’s aggressive statin prescription — explains how the US has made amazing progress against one of our biggest health risks: heart disease, and especially, heart attacks.

A dramatic drop in heart attack deaths

A heart attack — which usually occurs when atherosclerotic plaque partially or fully blocks the flow of blood to the heart — used to be close to a death sentence. In 1963, the death rate from coronary heart disease, which includes heart attacks, peaked in the US, with 290 deaths per 100,000 population. As late as 1970, a man over 65 who was hospitalized with a heart attack had only a 60 percent chance of ever leaving that hospital alive.

A sudden cardiac death is the disease equivalent of homicide or a car crash death. It meant someone’s father or husband, wife or mother, was suddenly ripped away without warning. Heart attacks were terrifying.

Yet today, that risk is much less. According to a recent study in the Journal of the American Heart Association, the proportion of all deaths attributable to heart attacks plummeted by nearly 90 percent between 1970 and 2022. Over the same period, heart disease as a cause of all adult deaths in the US fell from 41 percent to 24 percent. Today, if a man over 65 is hospitalized with a heart attack, he has a 90 percent chance of leaving the hospital alive.

By my calculations, the improvements in preventing and treating heart attacks between 1970 and 2022 have likely saved tens of millions of lives. So how did we get here?

How to save a life

In 1964, the year after the coronary heart disease death rate peaked, the US surgeon general released a landmark report on the risks of smoking. It marked the start of a decades-long public health campaign against one of the biggest contributing factors to cardiovascular disease.

That campaign has been incredibly successful. In 1970, an estimated 40 percent of Americans smoked. By 2019, that percentage had fallen to 14 percent, and it keeps declining.

The reduction in smoking has helped lower the number of Americans at risk of a heart attack. So did the development and spread in the 1980s of statins like I’m on now, which make it far easier to manage cholesterol and prevent heart disease. By one estimate, statins save nearly 2 million lives globally each year.

When heart attacks do occur, the widespread adoption of CPR and the development of portable defibrillators — which only began to become common in the late 1960s — ensured that more people survived long enough to make it to the hospital. Once there, the development of specialized coronary care units, balloon angioplasty, and artery-opening stents made it easier for doctors to rescue a patient suffering an acute cardiac event.

Our changing heart health deaths

Despite this progress in stopping heart attacks, around 700,000 Americans still die of all forms of heart disease every year, equivalent to 1 in 5 deaths overall.

Some of this is the unintended result of our medical success. As more patients survive acute heart attacks and life expectancy has risen as a whole, it means more people are living long enough to become vulnerable to other, more chronic forms of heart disease, like heart failure and pulmonary-related heart conditions. While the decline in smoking has reduced a major risk factor for heart disease, Americans are in many other ways much less healthy than they were 50 years ago. The increasing prevalence of obesity, diabetes, hypertension, and sedentary behavior all raise the risk that more Americans will develop some form of potentially fatal heart disease down the line.

Here, GLP-1 drugs like Ozempic hold amazing potential to reduce heart disease’s toll. One study found that obese or overweight patients who took a GLP-1 receptor agonist for more than three years had a 20 percent lower risk of heart attack, stroke, or death due to cardiovascular disease. Statins have saved millions of lives, yet tens of millions more Americans could likely benefit from taking the cholesterol-lowering drugs, especially women, minorities, and people in rural areas.

Lastly, far more Americans could benefit from the kind of advanced screening I received. Only about 1.5 million Americans received a CAC test in 2017, but clinical guidelines indicate that more than 30 million people could benefit from such scans.

Just as it is with cancer, getting ahead of heart disease is the best way to stay healthy. It’s an astounding accomplishment to have reduced deaths from heart attacks by 90 percent over the past 50-plus years. But even better would be preventing more of us from ever getting to the cardiac brink at all.

A version of this story originally appeared in the Good News newsletter. Sign up here!


From Vox via this RSS feed

12
 
 

The AI chatbot ChatGPT is seen on a laptop screen.

What’s the point of college if no one’s actually doing the work?

It’s not a rhetorical question. More and more students are not doing the work. They’re offloading their essays, their homework, even their exams, to AI tools like ChatGPT or Claude. These are not just study aids. They’re doing everything.

We’re living in a cheating utopia — and professors know it. It’s becoming increasingly common, and faculty are either too burned out or unsupported to do anything about it. And even if they wanted to do something, it’s not clear that there’s anything to be done at this point.

So what are we doing here?

James Walsh is a features writer for New York magazine’s Intelligencer and the author of the most unsettling piece I’ve read about the impact of AI on higher education.

Walsh spent months talking to students and professors who are living through this moment, and what he found isn’t just a story about cheating. It’s a story about ambivalence and disillusionment and despair. A story about what happens when technology moves faster than our institutions can adapt.

I invited Walsh onto The Gray Area to talk about what all of this means, not just for the future of college but the future of writing and thinking. As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.

This interview has been edited for length and clarity.

Let’s talk about how students are cheating today. How are they using these tools? What’s the process look like?

It depends on the type of student, the type of class, the type of school you’re going to. Whether or not a student can get away with that is a different question, but there are plenty of students who are taking their prompt from their professor, copying and pasting it into ChatGPT and saying, “I need a four to five-page essay,” and copying and pasting that essay without ever reading it.

One of the funniest examples I came across is that a number of professors are using this so-called Trojan horse method, where they’re dropping non-sequiturs into their prompts. They mention broccoli or Dua Lipa, or they say something about Finland in the essay prompts, just to see if people are copying and pasting the prompts into ChatGPT. If they are, ChatGPT or whatever LLM they’re using will say something random about broccoli or Dua Lipa.

Unless you’re incredibly lazy, it takes just a little effort to cover that up.

Every professor I spoke to said, “So many of my students are using AI and I know that so many more students are using it and I have no idea,” because it can essentially write 70 percent of your essay for you, and if you do that other 30 percent to cover all your tracks and make it your own, it can write you a pretty good essay.

And there are these platforms, these AI detectors, and there’s a big debate about how effective they are. They will scan an essay and assign some grade, say a 70 percent chance that this is AI-generated. And that’s really just looking at the language and deciding whether or not that language is created by an LLM.

But it doesn’t account for big ideas. It doesn’t catch the students who are using AI and saying, “What should I write this essay about?” And not doing the actual thinking themselves and then just writing. It’s like paint by numbers at that point.

Did you find that students are relating very differently to all of this? What was the general vibe you got?

It was a pretty wide perspective on AI. I spoke to a student at the University of Wisconsin who said, “I realized AI was a problem last fall, walking into the library and at least half of the students were using ChatGPT.” And it was at that moment that she started thinking about her classroom discussions and some of the essays she was reading.

The one example she gave that really stuck with me was that she was taking some psych class, and they were talking about attachment theories. She was like, “Attachment theory is something that we should all be able to talk about [from] our own personal experiences. We all have our own attachment theory. We can talk about our relationships with our parents. That should be a great class discussion. And yet I’m sitting here in class and people are referencing studies that we haven’t even covered in class, and it just makes for a really boring and unfulfilling class.” That was the realization for her that something is really wrong. So there are students like that.

And then there are students who feel like they have to use AI because if they’re not using AI, they’re at a disadvantage. Not only that, AI is going to be around no matter what for the rest of their lives. So they feel as if college, to some extent now, is about training them to use AI.

What’s the general professor’s perspective on this? They seem to all share something pretty close to despair.

Yes. Those are primarily the professors in writing-heavy classes or computer science classes. There were professors I spoke to who actually were really bullish on AI. I spoke to one professor who doesn’t appear in the piece, but she is at UCLA, teaches comparative literature, and used AI to create her entire textbook for her class this semester. And she says it’s the best class she’s ever had.

So I think there are some people who are optimistic, [but] she was an outlier in terms of the professors I spoke to. For the most part, professors were, yes, in despair. They don’t know how to police AI usage. And even when they know an essay is AI-generated, the recourse there is really thorny. If you’re going to accuse a student of using AI, there’s no real good way to prove it. And students know this, so they can always deny, deny, deny. And the sheer volume of AI-generated essays or paragraphs is overwhelming. So that, just on the surface level, is extremely frustrating and has a lot of professors down.

Now, if we zoom out and think also about education in general, this raises a lot of really uncomfortable questions for teachers and administrators about the value of each assignment and the value of the degree in general.

How many professors do you think are now just having AI write their lectures?

There’s been a little reporting on this. I don’t know how many are. I know that there are a lot of platforms that are advertising themselves or asking professors to use them more, not just to write lectures, but to grade papers, which of course, as I say in the piece, opens up the very real possibility that right now an AI is grading itself and offering comments on an essay that it wrote. And this is pretty widespread stuff. There are plenty of universities across the country offering teachers this technology. And students love to talk about catching their professors using AI.

I’ve spoken to another couple of professors who are like, I’m nearing retirement, so it’s not my problem, and good luck figuring it out, younger generation. I just don’t think people outside of academia realize what a seismic change is coming. This is something that we’re all going to have to deal with professionally.

And it’s happening much, much faster than anyone anticipated. I spoke with somebody who works on education at Anthropic, who said, “We expected students to be early adopters and use it a lot. We did not realize how many students would be using it and how often they would be using it.”

Is it your sense that a lot of university administrators are incentivized to not look at this too closely, that it’s better for business to shove it aside?

I do think there’s a vein of AI optimism among a certain type of person, a certain generation, who saw the tech boom and thought, I missed out on that wave, and now I want to adopt. I want to be part of this new wave, this future, this inevitable future that’s coming. They want to adopt the technology and aren’t really picking up on how dangerous it might be.

I used to teach at a university. I still know a lot of people in that world. A lot of them tell me that they feel very much on their own with this, that the administrators are pretty much just saying, Hey, figure it out. And I think it’s revealing that university admins were quickly able, during Covid, for instance, to implement drastic institutional changes to respond to that, but they’re much more content to let the whole AI thing play out.

I think they were super responsive to Covid because it was a threat to the bottom line. They needed to keep the operation running. AI, on the other hand, doesn’t threaten the bottom line in that way, or at least it doesn’t yet. AI is a massive, potentially extinction-level threat to the very idea of higher education, but they seem more comfortable with a degraded education as long as the tuition checks are still cashing. Do you think I’m being too harsh?

I genuinely don’t think that’s too harsh. I think administrators may not fully appreciate the power of AI and exactly what’s happening in the classroom and how prevalent it is. I did speak with many professors who go to administrators — or TAs who go to their professors — and say, This is a problem.

I spoke to one TA at a writing course at Iowa who went to his professor, and the professor said, “Just grade it like it was any other paper.” I think they’re just turning a blind eye to it. And that is one of the ways AI is exposing the rot underneath education.

It’s this system that hasn’t been updated in forever. And in the case of the US higher ed system, it’s like, yeah, for a long time it’s been this transactional experience. You pay X amount of dollars, tens of thousands of dollars, and you get your degree. And what happens in between is not as important.

The universities, in many cases, also have partnerships with AI companies, right?

Right. And what you said about universities can also be said about AI companies. For the most part, these are companies or companies within nonprofits that are trying to capture customers. One of the more dystopian moments was when we were finishing this story, getting ready to completely close it, and I got a push alert that was like, “Google is letting parents know that they have created a chatbot for children under [thirteen years old].” And it was kind of a disturbing experience, but they are trying to capture these younger customers and build this loyalty.

There’s been reporting from the Wall Street Journal on OpenAI and how they have been sitting on an AI that would be really, really effective at essentially watermarking their output. And they’ve been sitting on it, they have not released it, and you have to wonder why. And you have to imagine they know that students are using it, and in terms of building loyalty, an AI detector might not be the best thing for their brand.

This is a good time to ask the obligatory question, Are we sure we’re not just old people yelling at clouds here? People have always panicked about new technologies. Hell, Socrates panicked about the written word. How do we know this isn’t just another moral panic?

I think there’s a lot of different ways we could respond to that. It’s not a generational moral panic. This is a tool that’s available, and it’s available to us just as it’s available to students. Society and our culture will decide what the morals are. And that is changing, just as the definition of cheating is changing. So who knows? It might be a moral panic today, and it won’t be in a year.

However, I think somebody like Sam Altman, the CEO of OpenAI, is one of the people who said, “This is a calculator for words.” And I just don’t really understand how that is compatible with other statements he’s made about AI potentially being lights out for humanity, or statements made by people at Anthropic about the power of AI to potentially be a catastrophic event for humans. And these are the people who are closest and thinking about it the most, of course.

I have spoken to some people who say there is a possibility, and I think there are people who use AI who would back this up, that we’ve maxed out the AI’s potential to supplement essays or writing. That it might not get much better than it is now. And I think that’s a very long shot, one that I would not want to bank on.

Is your biggest fear at this point that we are hurtling toward a post-literate society? I would argue, if we are post-literate, then we’re also post-thinking.

It’s a very scary thought that I try not to dwell on — the idea that my profession and what I’m doing is just feeding the machine, that my most important reader now is a robot, and that there are going to be fewer and fewer readers. That’s really scary, not just because of subscriptions, but because, as you said, it means fewer and fewer people thinking and engaging with these ideas.

I think ideas can certainly be expressed in other mediums and that’s exciting, but I don’t think anybody who’s paid attention to the way technology has shaped teen brains over the past decade and a half is thinking, Yeah, we need more of that. And the technology we’re talking about now is orders of magnitude more powerful than the algorithms on Instagram.

Listen to the rest of the conversation and be sure to follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you listen to podcasts.



13
 
 

President Donald Trump, joined by Speaker of the House Mike Johnson and other lawmakers, holds up an executive order he signed

President Donald Trump, joined by House Speaker Mike Johnson (R-LA) and other lawmakers, holds up an executive order he signed in June. | Chip Somodevilla/Getty Images

A strange thing happened on the way to Republicans’ passage of their big Medicaid-cutting bill: We learned that President Donald Trump seems unaware the bill will cut Medicaid.

Trying to line up support for the bill in a private call Wednesday with House Republicans, Trump offered his advice that, if they want to win elections, they shouldn’t touch Medicare or Social Security — or Medicaid. His comments were reported by Riley Rogerson and Reese Gorman of NOTUS.

This is a bizarre thing to say, because Medicaid is the single program being cut the most in the bill. Estimates suggest that its spending could end up cut by as much as 18 percent, causing about 8 million people to lose Medicaid coverage.

And this isn’t a one-off thing. For months, Trump has publicly promised to protect Medicaid, and reports have described him as queasy about Congress’s plans to do otherwise.

This puts Vice President JD Vance, who has talked a big game about changing the GOP to appeal more to low-income voters, in an awkward place. On X this week, he attempted to change the subject from the bill’s Medicaid cuts, arguing they were “immaterial” and “minutiae” compared to the immigration enforcement money that really matters.

Privately, many Republicans know differently. “Group texts are blowing up and frantic phone calls are being exchanged among GOP lawmakers alarmed about the Senate Medicaid provisions,” Politico reported this week.

It would be no surprise if Trump and Republicans misled the public about these cuts — GOP officials have been claiming that the Medicaid cuts are purely about limiting waste, fraud, and abuse. But the fact that Trump misstated this so blithely in private, to a friendly audience, is stranger. It suggests he truly is unaware of what his “big, beautiful bill” will do.

Why the bill ended up cutting Medicaid deeply despite Trump’s repeated promises not to

To at least try to understand what’s going on here, it’s worth grappling with why this bill cuts Medicaid in the first place.

Trump’s priorities for his bill were tax cuts, immigration enforcement money, and raising the debt ceiling. This is all very expensive, and most of it will just add to the debt.

But conservatives in the House insisted that at least some spending cuts had to be included, to partially offset the bill’s cost. So GOP leaders searched for cuts that would be sizable — in the hundreds of billions of dollars range. Joe Biden’s clean energy subsidies were one obvious target.

It’s hard, though, to come up with big cuts that aren’t politically toxic. As budget wonks know, the real money in the federal budget is in Social Security, Medicare, Medicaid, and defense. Trump had no desire to cut defense, and the Trump-era GOP has deemed the senior-focused Social Security and Medicare politically untouchable.

Medicaid, aimed at low-income people, is a different story. Conservatives have long viewed it, along with food stamps and welfare, with suspicion, arguing that government benefits like these disincentivize work and get exploited by the lazy and undeserving. Medicaid beneficiaries are also believed to be less likely to turn out at the polls.

These longstanding conservative arguments have been slow to adjust to the news that Medicaid recipients have been an increasing share of the Trump coalition, as he’s helped the GOP gain among low-income voters. Many low-income whites in rural areas are on Medicaid, as are low-income Latinos in areas where Trump has done well, such as California’s Central Valley.

That dissuaded congressional Republicans from even more extreme Medicaid cuts some had wanted — but they still hit the program hard. The Medicaid cuts that made it into the bill were, however, crafted in roundabout ways that Republicans argued were just aimed at waste, fraud, and abuse.

These included new work reporting requirements. In theory, a requirement to document your working hours in exchange for coverage may not sound like a cut; in practice, the process will likely be arduous and error-filled and result in many working people losing coverage.

The bill also limits the “provider tax” states may charge — a key way many states help finance Medicaid, since provider taxes are reimbursed with federal matching funds — among other changes.

All that added up to hundreds of billions in savings, on paper. But behind those savings is 8 million people losing their Medicaid coverage, as well as a potentially devastating impact on rural hospitals that rely on Medicaid payments.

Trump may not be aware of this, but many in the party are — that’s why they set the most painful Medicaid cuts to happen only after the 2026 midterms. And eventually, many Medicaid recipients will feel the pain, too.



14
 
 

Statues of Mickey Mouse and Minnie Mouse

How we tell the story of the United States — and who’s included in it and how — has been an ongoing battle in the country for decades. It’s one currently being waged by the Trump administration, such as when it scrubbed references to Jackie Robinson and Harriet Tubman from government webpages in the name of clamping down on “DEI.”

And in the 1990s, Disney had a particularly zany idea of how to tell the story of America — one that set off a culture war as the company sought to create an amusement park focused on US history, warts and all.

Disney’s America, the doomed amusement park, would have contained the story of immigration told through the Muppets’ musical-comedy stylings. It would have had sections dedicated to the Industrial Revolution, Native America, and the Civil War. It would, as Disney executives put it at the time, “make you a Civil War soldier. We want to make you feel what it was like to be a slave.”

The ensuing battle over Disney’s America would be one of Disney’s biggest failures — and a precursor to battles we’re still fighting today.

To learn more about what Disney tried to do, what ended up happening, and what it all means, Today, Explained co-host Sean Rameswaram spoke with historian Jacqui Shine.

Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

Where does this story begin?

It begins with Michael Eisner, who came to Disney as its CEO and chairman in 1984. Eisner is ambitious, aggressive. Over the next 10 years, in what Disney buffs called the Disney Renaissance, the company has this enormous critical and commercial success with a run of animated movies. The juggernaut of this is The Little Mermaid, followed by Beauty and the Beast, The Lion King and Aladdin.

Maybe high on that supply, Eisner announces this plan for what he calls the Disney decade, which is this broad expansion of the company’s parks and resorts. The most high-profile project here was Euro Disney Resort, which is now Disneyland Paris. And there are high expectations for the Disney decade and for the success of the parks program.

This doesn’t go quite the way that they hope it will. Euro Disney doesn’t do well at its opening; it loses nearly a billion dollars in its first year. So the failure of Euro Disney leads the company to pivot toward US expansion with smaller park projects.

In 1991, the head of the parks division brings Eisner and Disney’s president Frank Wells to Colonial Williamsburg. This inspires this plan for a history-themed Disney Park, Disney’s America.

They want to put it in Virginia because they imagine that it can become part of the DC-area tourist economy, and that a Disney theme park that is about American history will fit really well into this context. This is not a project that was supposed to involve Mickey Mouse or any of the Disney icons. Disney was starting work on Pocahontas.

Eisner says that he was reading a lot about John Smith and Pocahontas and that internally, the company was interested in democracy as a thematic subject.

So Eisner and Disney have an idea of what they don’t want to do, and perhaps more importantly, what they do want to do with this park. To build it, obviously you’re going to need some land. I imagine Disney just didn’t already have a huge parcel of property in northern Virginia-ish. Do they buy some?

They do. Between 1991 and 1993, Disney secretly begins buying up parcels of land in the area through shell companies. The guy who was in charge of buying apparently used a fake persona; this was very undercover, all happening secretly. The site is also less than five miles from a National Park Service Civil War battlefield: Manassas. This is a place where about 3,700 men died and where there were about 25,000 total casualties.

They’re doing this secretly. At what point does Manassas find out that Mickey Mouse is buying up their land?

Almost everybody finds out in November 1993 when Disney announces the project.

I think initially people receive this warmly, because Disney’s promising a significant amount of economic development for the region and Disney is promising a complex experience of American history there. The guy who heads the Disney’s America project, Bob Weis, says in the press release they envisioned Disney’s America as a place to debate and discuss the future of our nation and to learn more about the past by living it.

And they are quick to say that this is a project that is not going to whitewash American history. Eisner is interviewed in the Washington Post the next day. He says that the park will present painful, disturbing, agonizing history. We’re going to be sensitive, but we will not be showing the absolute propaganda of the country. We will show the Civil War with all this racial conflict.

This was a very serious, very powerful, very successful entertainment executive saying, “We’re gonna make a kiddy theme park that will take our most brutal history seriously.”

Yes. And I think, like you, a lot of people had trouble with that contradiction. The day after this press release is issued, Disney holds a press conference in Haymarket. At this presser, Bob Weis, who is the senior vice president of imagineering, which is Disney’s creative division, says, “This will be entertaining in the sense that it would leave you something you could mull over. We want to make you a Civil War soldier. We want to make you feel what it was like to be a slave or what it was like to escape through the underground railroad.”

This moment, I think, comes to define this conflict in the public eye.

It’s such a nutty thing to hear a serious person say. Your kids could come to our theme park, home of Mickey Mouse, and find out what it’s like to be a slave. I imagine at this point, people are just like, “I’m sorry, I’m gonna need some more specifics.”

Yes. They put out a brochure, which is where a lot of the information that we have about what this would’ve been like comes from.

You enter at Crossroads USA, and there you board an 1840s train that takes you first to President Square, which they say celebrates the birth of democracy. It’s about the Revolutionary War.

You follow that to Native America. They say, “guests may visit an Indian village representing such eastern tribes as the Powhatans, or join in a harrowing Lewis and Clark raft expedition through pounding rapids and churning whirlpools.” We’re going to be educating people about Manifest Destiny here.

We move from Native America to the Civil War fort, where they say you’re going to experience the reality of a soldier’s daily life. After the Civil War fort, you go to a section on American immigration. And they’re going to build a replica Ellis Island building. Some sources indicate they would’ve done a show called The Muppets Take America.

The next section is a factory town called Enterprise that centers on a high-speed adventure ride called the Industrial Revolution, which involves a narrow escape from a fiery vat of molten steel.

Then you go to Victory Field, where guests may parachute from a plane or operate tanks and weapons in combat.

You then hit the last two areas, State Fair and Family Farm, to learn how to make homemade ice cream or milk a cow and even participate in a nearby country wedding, barn dance, and buffet.

This sounds like one doozy of a brochure. Does it work? Does it convince everyone?

Yes and no.

Does that slow down Michael Eisner? Is he ready to give up?

No. And that is where the fight begins. People hook in, in particular, to this idea that Disney’s going to include some element about American chattel slavery. And he is aggressive about saying, No, we weren’t going to do that. Why would you think that?

He is really persuaded that Disney’s big swing can work, that this idea has value and merit, and that the people who are standing against it are misguided.

At this point, is this fight relegated to Virginia, or is it getting bigger? This is obviously an international company with a huge cultural footprint.

It’s getting bigger. One of the things that contributes to this is that the Washington Post does a lot of coverage of this, which makes it go national. And it starts this debate in editorial pages about whether or not Disney can responsibly represent American history and whether or not the Disneyfication of American history is advisable.

And what happens when national papers, opinion columns start weighing in on this debate?

A few things happen. In early 1994, a strong coalition of opponents develops, including people who are concerned about preserving the environment there.

But then the historians get involved. The big guns come out when this group called Protect Historic America launches. This is a group of big-name, high-powered academic historians. This group of major figures stepped forward to say they’re concerned about education around the Civil War and about the park’s location near Manassas. In very short order, dozens and dozens of historians volunteer their time to write editorials, to comment to the media. They’re really fired up about this.

I read that this fight also somehow made it to the United States Congress. Why is this even Congress’s business?

This is one of the interesting things that comes out of Senate Energy and Natural Resources subcommittee hearings. The entrée into this is that it involves public lands of national importance. Five hundred people come to the Senate hearing, and Eisner’s really combative. He says about the people who are opposed to this, “I sat through many history classes where I read some of their stuff and I didn’t learn anything. It was pretty boring.”

At this point you’ve got historians speaking out about this. You’ve got op-ed columns being written, it sounds like all over the country. You’ve got a hearing on Capitol Hill. Are people out in the streets protesting this somewhere?

They are. Eisner is on the Hill trying to make nice with DC politicians and invites them to a special screening of The Lion King. But when they leave the theater, there are about a hundred protestors outside. Bigger than this though, in September 1994, 3,000 people march on the National Mall to protest Disney’s America.

Nationally, public support for the park has dropped to like 25 percent. At the end of September 1994, the company announces that Disney is withdrawing from the Virginia site. It’s clear that people don’t want it to be sited where it is, and they’re giving up. It is curtains for Disney’s America.

How do you think what happened in the ’90s connects to the kinds of fights we’re having about our history right now?

Any kind of debate about public history is always going to be about trying to stake some sort of political or ideological claim about the meaning of American history. Right now we see this very direct, very aggressive effort to insist on a positivist narrative about American history.

One of the things that I think people found puzzling about the early days of the Trump administration was that the National Endowment for the Humanities cut an enormous amount of active grants. And they issued new guidelines seeking projects, they say, that instill “an understanding of the founding principles and ideals that make America an exceptional country.” I think partly this is the administration’s backlash to efforts in the last decade to bring a more nuanced and complex understanding to structural oppression in US history.

We fantasize about American history in all kinds of ways, in all kinds of places. I don’t know that Disney in seeking to do that was necessarily doing anything out of step with how we represent the American story.


From Vox via this RSS feed

15
 
 

A girl wearing patriotic stickers and apparel at a Fourth of July parade

Americans aren’t used to having to defend democracy. It’s just been a given for so long. After all, it’s the country’s 249th birthday. But now, with experts warning that US democracy may break down in the next three years, many people feel worried about it — and passionate about protecting it.

But how do you defend something when you don’t quite remember the justifications for it?

Many intellectuals on both the left and right have spent the past decade attacking America’s liberal democracy — a political system that holds meaningfully free, fair, multiparty elections, and gives citizens plenty of civil liberties and equality before the law.

On the left, thinkers have criticized liberalism’s economic vision for its emphasis on individual freedom, which they argued feeds exploitation and inequality. On the right, thinkers have taken issue with liberalism’s focus on secularism and individual rights, which they said wrecks traditional values and social cohesion. The common thread is the belief that liberalism’s core premise — the government’s main job is to defend the freedom of the individual to choose their path in life — is wrong.

These arguments gained mainstream success for a time, as Vox’s Zack Beauchamp has documented. That’s in part because, well, liberalism does have its problems. At a time of rising inequality and rampant social disconnection, it shouldn’t be surprising when some people complain that liberalism is so busy protecting the freedom of the individual that it neglects to tackle collective problems.

But awareness of these problems shouldn’t mean that we give up on liberal democracy. In fact, there are very compelling reasons to want to uphold this political system. Because Americans have gotten used to taking it for granted, many have forgotten how to make the intellectual case for it.

It’s time to remember.

Liberal democracy does have a good defense. It’s called value pluralism.

When you think of liberalism, you might think of philosophers like John Locke, John Stuart Mill, or John Rawls. But, believe it or not, some people not named John also had very important ideas.

Prime examples include the Oxford philosopher Isaiah Berlin and Harvard political theorist Judith Shklar, who are strangely underappreciated given their contributions to liberal thought in the Cold War period. Associated thinkers like Bernard Williams and Charles Taylor are also worth noting.

Let’s focus on Berlin, though, since he was one of the clearest and greatest defenders of liberal democracy. Born to a Jewish family in the Russian Empire, he experienced the political extremes of the 20th century — the Russian Revolution, the rise of Soviet communism, the Holocaust — and came away with a horror of totalitarian thinking. In all these cases, he argued, the underlying culprit was “monism”: the idea that we can arrive at the true answers to humanity’s central problems and harmoniously combine them into one utopian, perfect society.

For example, in Stalin’s communism, monism took the form of believing that the key is to establish a classless society — even if millions of people had to be killed to achieve that vision.

If it were possible to have a perfect society, any method of bringing it about would seem justified. Berlin writes:

For if one really believes that such a solution is possible, then surely no cost would be too high to obtain it: to make mankind just and happy and creative and harmonious forever — what could be too high a price to pay for that? To make such an omelette, there is surely no limit to the number of eggs that should be broken — that was the faith of Lenin, of Trotsky, of Mao.

But this utopian idea is a dangerous illusion. The problem with it, Berlin argued, is that human beings have lots of different values, and they’re not all compatible with each other. In fact, they’re inherently diverse and often in tension with each other.

Take, for example, justice and mercy. Both of these are equally legitimate values. But rigorous justice won’t always be compatible with mercy; the former would push a court to throw the book at someone for breaking a law, even if no one was harmed and it was a first offense, while the latter would urge for a more forgiving approach.

Or take liberty and equality. Both beautiful values — “but total liberty for wolves is death to the lambs,” Berlin writes, “total liberty of the powerful, the gifted, is not compatible with the rights to a decent existence of the weak and the less gifted.” The state has to curtail the liberty of those who want to dominate if it cares about making room for equality or social welfare, for feeding the hungry and providing houses for the unhoused.

Some ethical theories, like utilitarianism, try to dissolve these sorts of conflicts by suggesting that all the different values can be ranked on a single scale; in any given situation, one will produce more units of happiness or pleasure than the other. But Berlin argues that the values are actually incommensurable: attending a Buddhist meditation retreat and eating a slice of chocolate cake might both give you some sort of happiness, but you can’t rank them on a single scale. They are extremely different types of happiness. What’s more, some values can actually make us less happy — think of courage, say, and intellectual honesty or truth-seeking — but are valuable nonetheless. You can’t boil all values down to one “supervalue” and measure everything in terms of it.

If human values are incommensurable and sometimes flat-out incompatible, that means no single political arrangement can satisfy all legitimate human values simultaneously. To put it more simply: We can’t have everything. We’ll always face trade-offs between different goods, and because we’re forced to choose between them, there will always be some loss of value — some good thing left unchosen.

Berlin says it’s precisely because this is the human condition that we rightly place such a high premium on freedom. If no one can justifiably tell us that their way is the one right way to live — because, according to Berlin’s value pluralism, there can be more than one right answer — then no government can claim to have uncontestable knowledge about the good and foist its vision on us. We should all have a share in making those decisions on the collective level — as we do in a liberal democracy. And on the individual level, we should each have the freedom to choose how we balance between values, how we live our own lives. When others come up with different answers, we should respect their competing perspectives.

Value pluralism is not relativism

“I do not say, ‘I like my coffee with milk and you like it without; I am in favor of kindness and you prefer concentration camps,’” Berlin memorably writes. Although he argues that there’s a plurality of values, that doesn’t mean any and every possible value is a legitimate human value. Legitimate values are things that humans have genuine reason to care about as ends in themselves, and that others can see the point in, even if they put less weight on a given value or dispute how it’s being enacted in the world.

Security, for example, is something we all have reason to care about, even though we differ on the lengths the government should go to in order to ensure security. By contrast, if someone said that cruelty is a core value, they’d be laughed out of the room. We can imagine a person valuing cruelty in specific contexts as a means to a greater end, but no human being (except maybe a sociopath) would argue that they value it as an end in itself. As Berlin writes:

The number of human values, of values that I can pursue while maintaining my human semblance, my human character, is finite — let us say 74, or perhaps 122, or 26, but finite, whatever it may be. And the difference it makes is that if a man pursues one of these values, I, who do not, am able to understand why he pursues it or what it would be like, in his circumstances, for me to be induced to pursue it. Hence the possibility of human understanding.

Contemporary psychologists like Jonathan Haidt have made a similar case. His research suggests that different people prioritize different moral values. Liberals are those who are especially attuned to the values of care and fairness. Conservatives are those who are also sensitive to the values of loyalty, authority, and sanctity. It’s not like some of these values are “bad” and some are “good.” They’re just different. And even a liberal who strongly disagrees with how a conservative is applying the value of sanctity (for example, as a way to argue that a fetus represents a life and that life is sacred, so abortion should be banned) can appreciate that sanctity is, itself, a fine value.

Berlin anticipated this line of thinking. Although he acknowledges that some disagreements are so severe that people will feel compelled to go to war — he would go to war against Nazi Germany, for example — by and large, “respect between systems of values which are not necessarily hostile to each other is possible,” he writes.

Liberalism can’t just be about warding off totalitarianism. Is there more to it?

Berlin’s analysis offers a highly effective vaccine against totalitarian thinking. That’s a huge point in its favor — and defenders of liberal democracy would do well to resurface it.

But there’s more to a good society than just warding off totalitarianism — than, to put it in Berlin’s own terms, guaranteeing “negative freedoms” (freedom from things like oppression). We also care about “positive freedoms” (freedom to enjoy all the good things in life). In recent years, critics have alleged that Berlin and other Cold War liberals neglected that part of the equation.

It’s fair to point out that American liberalism has done a poor job of ensuring things like equality and social connection. But Berlin’s account of value pluralism never pretended to be laying out a timeless prescription for how to balance between different priorities. Just the opposite. He specified that priorities are never absolute. We exist on a seesaw, and as our society’s concrete circumstances change — say, as capitalism goes into hyperdrive and billionaires amass more and more power — we’ll need to repeatedly adjust our stance so we can maintain a decent balance between all the elements of a good life.

And on the global scale, Berlin fully expects that different cultures will keep disagreeing with each other about how much weight to put on the different legitimate human values. He urges us to view each culture as infinitely precious in its uniqueness, and to see that there may be “as many types of perfection as there are types of culture.” He offers us a positive vision that’s about respecting, and maybe even delighting in, difference.

Nowadays, a new generation of philosophers, including American thinkers influenced by Berlin like Ruth Chang and Elizabeth Anderson, is busy trying to work out the particulars of how to do that in modern society, tackling issues from ongoing racial segregation to rapid technological change.

But this can’t just be the work of philosophers. If America is going to remain a liberal democracy, everyday Americans need to remember the value of value pluralism.


From Vox via this RSS feed

16
 
 

Zohran Mamdani speaking at a rally with a megaphone

Zohran Mamdani’s policy ideas might not always be the answer voters are looking for, but the dignity underlying his whole agenda is. | Madison Swart/Hans Lucas/AFP via Getty Images

Last week, New York State Assembly member Zohran Mamdani sent shockwaves through the political establishment after he clinched the Democratic nomination for New York City mayor. Mamdani, a self-described democratic socialist, defeated a crowded field, which included former Gov. Andrew Cuomo, by double digits. Turnout was higher than usual, especially among younger voters, indicating that Mamdani’s campaign energized New York City residents in ways few people expected.

Throughout his campaign, and especially since the stunning upset, Mamdani has faced attacks from both Republicans and centrist Democrats that paint him as far too extreme for New York City, let alone America. Part of that caricature is clearly fueled by racism — Mamdani is a Muslim immigrant born to Indian parents in Uganda — with Republicans sharing photos of the Statue of Liberty dressed in a burqa, saying Mamdani is uncivilized for eating with his hands, and calling for the 33-year-old candidate to be denaturalized and subsequently deported. It’s also part of the backlash to Mamdani’s support for Palestinian rights, as even members of his own party baselessly accuse him of peddling antisemitism.

But much of the criticism has also centered on Mamdani’s campaign promises, which pledge to make New York City more affordable in small but meaningful ways with rent freezes, city-owned grocery stores, and fare-free buses. Some of that criticism is very heated: Former Treasury Secretary Larry Summers, for example, called Mamdani’s rent stabilization proposal “the second-best way to destroy a city, after bombing.”

Many argue that Mamdani is only offering pie-in-the-sky proposals — nice policies in an ideal world, but unachievable in our not-so-ideal reality. But Mamdani’s splashy policies aren’t exactly foreign ideas, nor is he the first to try to implement them. They’ve been tried before, often with promising results.

Mamdani’s policies aren’t reckless; they’re tested

Let’s take three of his policies that have gotten some of the most attention:

1. Rent freeze

Mamdani has proposed a rent freeze: landlords would be unable to raise the rents on roughly 1 million rent-stabilized apartments across the city. This mostly falls within the mayor’s jurisdiction: Rent hikes (or freezes) are decided by the Rent Guidelines Board, whose nine members are appointed by the mayor. If elected in November, Mamdani could appoint members to the board who pledge to freeze the rent.

As dramatic as the negative response has been, this isn’t exactly a novel idea. In just the past decade — during Bill de Blasio’s tenure as the city’s mayor — the board froze the rent on three occasions: in 2015, 2016, and 2020. (Those freezes, it should be noted, are rarely, if ever, blamed for worsening the city’s housing problems.) Mamdani’s proposal also doesn’t mean that a rent freeze would be permanent. The idea is to hold rents where they are to give tenants a chance to catch up to the rising cost of living. (In Mayor Eric Adams’s first three years in office, for example, the board raised rents by a combined 9 percent.)

Opponents of this plan often point to the piles of literature showing the pitfalls of rent control in the long run — that it disincentivizes landlords from providing services and ultimately leaves apartments in disrepair. But those arguments conveniently leave out some key parts of this debate. Under New York’s rent stabilization laws, landlords who invest in meaningfully improving their apartments are allowed to increase rents beyond the guidelines set by the board, meaning that landlords can’t really use a rent freeze as an excuse to leave their apartments in bad condition.

More than that, Mamdani’s plan for a rent freeze doesn’t exist in a vacuum; it’s part of a broader plan to spur investment in housing and improve renters’ lives, which includes changing zoning laws, cutting red tape to get more housing built more quickly across the city, and cracking down on crooked landlords by more strictly and efficiently enforcing New York’s housing codes.

Put another way, Mamdani’s rent freeze is not presented as the solution to New York’s housing crisis. It’s just one part of a bigger toolkit that can help tenants in the near term while the other tools finally put housing costs under control in the long run.

2. City-owned grocery stores

Mamdani’s suggestion of city-owned grocery stores has irked some entrepreneurs to the point that one supermarket mogul threatened to close down all of his stores if Mamdani becomes mayor. The rationale behind this proposal is that a publicly owned grocery store would make food more affordable. The store wouldn’t have to worry about making a profit or paying rent, and those savings would be passed on to consumers in the form of lower prices.

Realistically, this is unlikely to have a significant impact on grocery prices across the city — after all, grocery stores famously run on very slim profit margins to begin with. But Mamdani’s plan is also a means to address some of the city’s food deserts, where there aren’t enough grocery stores to serve residents. (While some argue that New York City doesn’t really have food deserts, the reality is that it’s hard to argue that residents have equal access to healthy and fresh foods across the city.)

Some of Mamdani’s critics have seized on this plan to call him a communist who would put private businesses and consumer choice at risk. The government, they argue, shouldn’t be operating businesses because governments are an inefficient alternative to the private market. The reality is that publicly owned stores aren’t exactly new, let alone a threat. Several states, from Alabama to Virginia, have publicly owned liquor stores — a product of the post-Prohibition period, when states took more control over the sale and distribution of alcohol — and boast of their success. (Virginia’s government website, for example, notes its state-owned liquor stores’ “history of giving back to Virginians” and highlights that they have generated more than $1 billion in revenue for six consecutive years.)

Other cities across the country are also trying their hand at publicly owned grocery stores. In St. Paul, Kansas, for example, the municipality-owned grocery store helped end the city’s nearly two-decade run without a grocery store. In Madison, Wisconsin, a municipally owned grocery store is set to open later this summer, and other cities, including Chicago and Atlanta, are planning to dabble in this experiment as well.

Mamdani’s proposal for publicly owned grocery stores is also far more rational and modest than the state-monopoly model of liquor stores: He’s merely proposing a pilot program of just five city-owned grocery stores — one in each borough — in a sea of some 15,000 privately owned grocery stores. If the pilot program succeeds and satisfies New Yorkers’ needs, then it could be expanded.

3. Fare-free buses

One of Mamdani’s signature wins as a New York State Assembly member was his push for a fare-free bus pilot on five lines in New York City. As mayor, he promises to expand fare-free buses across the city to make public transit more accessible. It’s also good environmental policy that helps alleviate traffic because it encourages people to ditch their cars and ride the bus instead.

Fare-free transit experiments in various cities have already shown promising results. In Boston, a fare-free bus pilot after the pandemic found that bus lines without fares recovered ridership much faster than the rest of the transit system. A year and a half after the initial Covid lockdowns — when transit ridership cratered across the country — one fare-free bus line in Boston saw ridership bounce back to 92 percent of pre-pandemic levels, while the rest of the city’s transit system was stuck near 50 percent.

According to an article by Mamdani and his colleague in the state legislature, in New York City, the lines included in the fare-free bus pilot showed an increase in ridership across the board, and of the new riders those lines lured, the highest share was among people making less than $28,000 a year.

Of course, fare-free transit should be a secondary goal. After all, what good is a free fare if the buses won’t get you to where you need to go, let alone get you there in time? But Mamdani’s plan makes clear that he’s not just interested in making transit free, but fast and reliable as well.

His fare-free proposal is packaged with a commitment to invest in improving infrastructure — like building more dedicated lanes — to make bus trips more efficient. There are plenty of avenues to raise revenue for that kind of investment, from imposing a new tax to introducing schemes like congestion pricing, as New York already has. Plus, if making buses free makes people more likely to get out of cars and ride public transit instead, then that is a worthwhile investment.

What Mamdani’s policies could mean for the future of Democratic politics

Ultimately, Mamdani’s policies also proved to be good politics, at least good enough for a Democratic primary. Part of the reason Mamdani’s policies might have resonated with so many voters is that they are, in many ways, a promise to reshape government — not into a communist haven on the Hudson, but into a government that owns up to its responsibility to provide all of its people with a dignified life.

That’s why ideas like fare-free transit aren’t solely about saving $2.90 on a bus ride. It’s true that there are plenty of reasonable arguments against fare-free transit: Eliminating fares would get rid of a reliable source of revenue for transit agencies. Solely relying on taxes to fund public transportation potentially makes transit systems more volatile and susceptible to politics, where they can be used as a bargaining chip in the legislature’s tax bills. And there are other ways to make transit affordable to those in need, including existing subsidies that reduce fares for low-income commuters.

But these arguments miss the broader appeal of agendas like Mamdani’s, which are a commitment to expand the government’s role in our daily lives in positive ways. Despite the depiction of Mamdani as a radical socialist, his agenda, at its core, actually promises something much more ideologically modest: making government more likable by making it work well. So his overarching goal as mayor, it seems, would be to make people believe that effective governance is possible — that local government can tangibly improve the quality of life in a city by being more present and more pleasant to deal with.

This is not to say that Mamdani’s primary win will reshape American politics — or even Democratic primaries in other cities. But Mamdani is onto something, and Democrats might be well-served by looking at his not-so-radical agenda and understanding that people want more from their own governments. Mamdani’s ideas, like publicly owned grocery stores, might not always be the answer voters are looking for, but the dignity underlying his whole agenda is.


From Vox via this RSS feed

17
 
 

Donald Trump, left, displays a signed executive order while Education Secretary Linda McMahon stands next to him.

President Donald Trump with Education Secretary Linda McMahon at the White House on March 20, 2025. | Chen Mengtong/China News Service/VCG via Getty Images

This story appeared in The Logoff, a daily newsletter that helps you stay informed about the Trump administration without letting political news take over your life. Subscribe here.

Welcome to The Logoff: Today, I’m focusing on the Trump administration’s decision to withhold nearly $7 billion in federal education funding.

What just happened? The Trump administration refused to release congressionally mandated funding to support a variety of education initiatives: after-school and summer programs, programs for students who are learning English, teacher training, classroom technology, and more.

The nearly $7 billion was allocated to states and local schools, and should have gone out on Tuesday. Its loss will be particularly harmful because school districts have already made plans with the assumption that the money would be there, only to have it pulled at the last minute.

What is the administration saying? The Trump administration argues the withholding isn’t a freeze, but simply a review of the funds. But this argument is likely a fig leaf, given the administration’s opposition to the programs in question and its previous efforts to withhold funding for programs it disagrees with. What the administration appears to be doing is called impoundment — the decision not to spend money that Congress has already appropriated for a specific purpose.

Can they do that? Not really — but we’ll see how the courts rule, since the administration’s decision is almost certain to be challenged in court. Though the president can request that Congress withdraw funding — and making that request would trigger a temporary freeze — the administration hasn’t done so in this case.

What’s the big picture here? The Trump administration is waging a war against the congressional power of the purse, led by Office of Management and Budget director Russ Vought (of Project 2025 fame).

The decision to withhold education funding is one of a number of efforts to wrest spending power from Congress, and recent reporting suggests the administration is considering ways to step up its attack and challenge restrictions on impoundment more broadly. If it’s successful, it will be a major expansion of Trump’s powers — and another blow to Congress’s.

And with that, it’s time to log off…

I absolutely loved this story from my colleague Bryan Walsh about the new Vera C. Rubin Observatory in Chile, which just last month shared its first images of the cosmos. The telescope itself is a scientific marvel that has already provided useful data to researchers, but, as Bryan points out, it’s also the “ultimate perspective provider,” a reminder of our place in a vast, beautiful universe. I hope you enjoy his piece — and the photos — as much as I did, and I’ll see you back here tomorrow!


From Vox via this RSS feed

18
 
 

President Trump signing a bill at his desk

President Donald Trump signs a bill on June 12, 2025. | Saul Loeb/AFP via Getty Images

Republicans are close to passing President Donald Trump’s so-called One Big Beautiful Bill, which will cut taxes, slash programs for low-income Americans, ramp up funding for mass deportation, and penalize the solar and wind energy industries.

Oh, and it adds enormously to the nation’s debt — but who’s counting? (Independent analysts are, and they estimate it will add at least $3 trillion.)

The sprawling, 887-page bill contains far too many provisions to name here. But to get a better sense of the bill’s impact, it’s worth running down what it does in a few key areas.

The big picture, though, is that Trump is targeting Democratic or liberal-coded programs and constituencies — the poor, student borrowers, and clean energy — to cover part (but nowhere near all) of the cost of his big tax cuts and new spending.

Taxes: The current tax rates stick around – plus there are some new tax cuts

The bill makes a variety of changes to tax law, some of which preserve tax breaks set to expire soon, others of which add new goodies to the tax code.

1) Making the 2017 Trump tax cuts permanent: In Trump’s first term, Republicans lowered income and other tax rates with his 2017 tax law. However, in a gimmick to make that law look less costly, the new lower rates they set were scheduled to expire at the end of 2025 — meaning that, if Congress did nothing, practically everyone’s taxes would go up next year.

So the single most consequential thing this bill does, from a budgetary perspective, is making those 2017 tax levels permanent, averting their imminent expiration.

That saves Americans from an imminent tax hike, but notably, it just keeps the status quo tax levels in place. So, in practice, many people may not perceive this as a new cut to their taxes.

2) New “populist” tax cuts: The bill also creates several new tax breaks meant to fulfill certain Trump 2024 campaign promises, such as “no tax on tips.” There will be new deductions for up to $25,000 in tip income, $12,500 in overtime income, $6,000 for seniors, and a deduction for interest on loans for new US-made cars. The bill also creates savings accounts for children called “Trump accounts,” in which the government would invest $1,000 per child.

3) Tax cuts for the wealthy and businesses: Wealthy Americans wanting to pay less in taxes have the most to be happy about from this bill, because they benefit hugely from making the 2017 Trump tax cuts permanent.

Other wealthy winners in the bill include owners of “pass-through” businesses (partnerships, LLCs, or other business entities that don’t pay the typical corporate income tax); their tax cuts from Trump’s 2017 bill are made permanent. Some wealthy heirs stand to gain too, as the exemption from the estate tax was raised to $15 million.

Affluent blue state residents got a big win. The 2017 Trump tax law had sharply limited a deduction that typically benefited them — the state and local tax (SALT) deduction, which it capped at $10,000. (People in blue states tend to have more state and local taxes they can deduct.) The new bill raises that cap to $40,000.

Businesses also get some big benefits, as the bill makes three major corporate tax breaks permanent: bonus depreciation, research and development expensing, and a tax break related to interest deduction.

All this, combined with the cuts to programs for poor people, is why many analysts calculate that this bill would be regressive overall — it will end up financially harming low-income Americans while benefiting the rich the most.

The safety net: Big cuts to Medicaid, food stamps, and student loans

Trump has repeatedly promised that he wouldn’t cut Medicaid, and this bill breaks that promise bigly. Its new work reporting requirements and other changes (such as a limit to the “provider tax” states may charge) could end up cutting Medicaid spending by as much as 18 percent. The bill also makes changes to the Affordable Care Act individual insurance marketplaces. Altogether, these provisions would result in 12 million people losing their health insurance, per the Congressional Budget Office.

Food stamps are another target. The Supplemental Nutrition Assistance Program (SNAP) could be cut by as much as 20 percent, due to new work requirements and new requirements that states pay a higher share of the program’s cost. One bizarre last-minute provision, aimed at winning over swing vote Sen. Lisa Murkowski (R-AK), seemingly gives states an incentive to make erroneous payments, because states with higher payment error rates get to delay their cost hikes.

Student loans also come in for deep cuts, as the bill overhauls the existing system, ending many repayment plans, requiring borrowers to repay more, and limiting future loan availability.

Clean energy: The bill singles out solar and wind for harsh treatment

Three years ago, with the Inflation Reduction Act, Democrats enacted a swath of new incentives aimed at making the US a clean energy powerhouse. Trump’s new bill moves in the exact opposite direction. It repeals many of Biden’s clean energy benefits, but it doesn’t stop there – it goes further by singling out clean energy, particularly solar and wind, for harsh treatment.

Under the bill, new Biden-era tax credits for electric vehicles and energy efficiency will be terminated this year. Biden’s clean electricity production tax credits, meanwhile, will be gradually rolled back, though solar and wind will see their credits vanish more quickly. The bill also requires clean power projects to start using fewer and fewer Chinese-made components, which much of the industry heavily relies on.

Things could be worse, though. A recent draft of the bill included far harsher policies toward solar and wind, which could have had truly apocalyptic consequences for the industry — but some of them were dropped or watered down to get the bill through the Senate.

Trump’s new spending goes to the border wall, mass deportation, and the military

Counterbalancing some of these spending cuts on the safety net and clean energy, Trump’s bill also spends a bunch more money on two of his own top priorities: immigration enforcement and the military.

About $175 billion will be devoted to immigration, including roughly $50 billion for Trump’s border wall and US Customs and Border Protection (CBP) facilities, $45 billion for expanding the capacity to detain unauthorized immigrants, and $30 billion for enforcement operations. That is a lot of money devoted to Trump’s “mass deportation” agenda; the question now is whether the administration can actually put it to use.

The military, meanwhile, will get about $150 billion from the bill, to be used to start construction on Trump’s planned “Golden Dome” missile defense shield, as well as on shipbuilding, munitions, and other military priorities.

The debt: It goes up a whole lot

In the end, Trump’s spending cuts were nowhere near enough to balance out the enormous cost of the tax cuts in this bill. So, estimates suggest, at least $3 trillion more will be added to the debt if this bill becomes law.

Every president this century has come in with big deficit-increasing bills, dismissing concerns about the debt, and the sky hasn’t yet fallen. But all these years of big spending are adding up, and interest payments on the debt are rising. This could make for a significant drag on the economy in future years and make even more painful cuts necessary.

Republicans are betting that the tax cuts in this bill will juice business and economic activity enough to keep the country happy in the short term — and that the cuts, targeting mainly low-income people or Democratic constituencies, are unlikely to hurt them too much at the ballot box.


From Vox via this RSS feed


A desk with a sign reading “One big beautiful bill act” is in front of an empty chair and a row of American flags

President Donald Trump’s “big, beautiful bill” has big Medicaid cuts. | Alex Wroblewski/AFP via Getty Images

Republicans in Congress have passed President Donald Trump’s “big, beautiful bill,” a move that will make major changes to Medicaid by establishing a work requirement for the first time and restricting states’ ability to finance their share of the program’s costs. The Senate approved the plan on Tuesday; the House gave its approval on Thursday. Once the bill receives Trump’s signature, American health care is never going to be the same.

The consequences will be dire.

The Congressional Budget Office estimates that the legislation would slash Medicaid spending by more than $1 trillion and that nearly 12 million people would lose their health insurance. Senate Republicans added a last-minute infusion of funding for rural hospitals to assuage moderates skittish about the Medicaid cuts, but hospitals say the legislation will still be devastating to their business and their patients.

When combined with the expiration of Obamacare subsidies at the end of this year, which were not addressed in the budget bill, and the other regulatory changes being made by the Trump administration, the Republican policy agenda could lead to an estimated 17 million Americans losing health coverage over the next decade, according to the health policy think tank KFF.

Fewer people with health insurance is going to mean fewer people getting medical services, which means more illness and ultimately more deaths.

One recent analysis by a group of Harvard-affiliated researchers of the House Republicans’ version of the budget bill (which included the same general outline, though some of the provisions have been tweaked in the Senate) concluded that 700,000 fewer Americans would have a regular place to get medical care as a result of the bill. Upward of 200,000 fewer people would get their blood cholesterol or blood sugar checked; 139,000 fewer women would get their recommended mammograms. Overall, the authors project that between 8,200 and 24,600 additional Americans would die every year under the Republican plan. Other analyses came to the same conclusion: Millions of Americans will lose health insurance and thousands will die.

After a painful legislative debate in which some of their own members warned them not to cut Medicaid too deeply, Republicans succeeded in taking a big chunk out of the program to help cover the costs of their bill’s tax cuts. They have, eight years after failing to repeal Obamacare entirely, managed to strike blows to some of its important provisions.

So, for better or worse, they own the health care system now, a system that is a continued source of frustration for most Americans — frustrations that the Republican plan won’t relieve. The next time health care comes up for serious debate in Congress, lawmakers will need to repair the damage that the GOP is doing with its so-called big, beautiful bill.

How the Republican budget bill will drive up health care costs for everyone

The effects of the budget bill won’t be limited to the people on Medicaid and the people whose private insurance costs will increase because of the Obamacare funding cuts. Everyone will experience the consequences of millions of Americans losing health coverage.

When a person loses their health insurance, they are more likely to skip regular medical checkups, which makes it more likely they go to a hospital emergency room when a serious medical problem has gotten so bad that they can’t ignore it any longer. The hospital is obligated by federal law to take care of them even if they can’t pay for their care.

Those costs are then passed on to other patients. When health care providers negotiate with insurance companies over next year’s rates, they account for the uncompensated care they have to provide. The fewer people covered by Medicaid, the more uncompensated care hospitals have to absorb, and the more costs will rise even for people who do have health insurance. Republicans included funding in the bill to try to protect hospitals from the adverse consequences, an acknowledgment of the risk they were taking, but the hospitals themselves are warning that the funding patches are insufficient. If hospitals and doctors’ offices close because their bottom lines are squeezed by this bill, that will make it harder for people to access health care, even if they have an insurance card.

The effects of the Republican budget bill are going to filter through the rest of the health care system and increase costs for everyone. In that sense, the legislation’s passage marks a new era for US health policy. Since the Affordable Care Act passed in 2010, Democrats have primarily been held responsible for the state of the health care system. Sometimes this has been a drag on their political goals. But over time, as the ACA’s benefits became more ingrained, health care became a political boon to Democrats.

Going forward, having made these enormous changes, Republicans are going to own the American health care system and all of its problems — the ones they created and the ones that have existed for years.

The BBB’s passage sets the stage for another fight on the future of American health care

For the past decade-plus, US health care politics have tended to follow a “you break it, you buy it” rule. Democrats discovered this in 2010: Though the Affordable Care Act’s major provisions did not take effect for several years, they saw their popularity plummet quickly as Republicans successfully blamed annual premium increases that would’ve occurred with or without the law on the Democrats and their new health care bill. Voters were persuaded by those arguments, and Democrats lost Congress in the 2010 midterms.

But years later, Americans began to change their perception. As of 2024, 44 million Americans were covered through the 2010 health care law and two-thirds of the country say they have a favorable view of the ACA. After the GOP’s failed attempt to repeal the law in 2017, the politics of the issue flipped: Democrats scored major wins in the 2018 midterms after successfully campaigning against the GOP’s failed plan to repeal the ACA. Even in the disastrous 2024 election cycle for Democrats, health care policy was still an issue where voters trusted Kamala Harris more than Trump.

Trump’s One Big Beautiful Bill is already unpopular. Medicaid cuts specifically do not poll well with the public, and the program itself is more popular than at any point since it was created in 1965. Those are the ingredients for a serious backlash, especially with government officials and hospitals in red states railing hard against the bill.

Democrats have more work to do on explaining to the public what the bill does and how its implications will be felt by millions of people. Recent polling suggests that many Americans don’t understand the specifics. A contentious debate among Republicans, with several of their own members warning against the consequences of Medicaid cuts, has given politicians on the other side of the aisle good material to work with in making that case: Democrats can pull up clips of Sen. Thom Tillis (R-NC) on the Senate floor, explaining how devastating the bill’s Medicaid provisions would be to conservative voters in Republican-controlled states.

Republicans will try to sell the bill on its tax cuts. But multiple analyses have shown the vast majority of the benefits are going to be reserved for people in higher-income brackets. Middle-class and working-class voters will see only marginal tax relief — and if their health care costs increase either because they lose their insurance or because their premiums go up after other people lose insurance, then that relief could quickly be wiped out by increased costs elsewhere. That is the story Democrats will need to tell in the coming campaigns.

Medicaid has served as a safety net for tens of millions of Americans during both the Great Recession of 2008 and the pandemic recession of 2020. At one point, around 90 million Americans — about one in four — were covered by Medicaid. People have become much more familiar with the program, and it has either directly benefited them or helped somebody they know at a difficult time.

And difficult times may be coming. Economists have their eyes on concerning economic indicators that the world may be heading toward a recession. When a recession hits — that is, after all, inevitable; it’s just the normal cycle of the economy — people will lose their jobs and many of them will also lose their employer-sponsored health insurance. But now, the safety net is far flimsier than it was in previous crises.

Republicans are going to own those consequences. Having schemed to gut Medicaid ever since Democrats expanded it through the ACA more than a decade ago, they have finally succeeded in cutting a program that had become an essential lifeline for millions of Americans. This Republican plan was a reaction to their opponents’ most recent policy overhaul; the next Democratic health care plan will need to repair the harms precipitated by the GOP budget bill.

In the meantime, the onus is on Democrats and truth tellers in the media to help Americans understand what has happened, why it has happened, and what the fallout is going to be.

Update, July 3, 2:30 pm ET: This story was originally published on July 1 and was updated after the House’s passage of the budget reconciliation bill.



A photo of Trump speaking

President Donald Trump speaks to members of the media as he departs a House Republican meeting at the Capitol on May 20, 2025, in Washington, DC. | Andrew Harnik/Getty Images

President Donald Trump’s “big, beautiful bill” is the centerpiece of his legislative agenda, and the stakes are high.

The bill has four major pillars: renewing his 2017 tax cuts; implementing new tax cuts; spending billions on a border wall, US Customs and Border Protection, and the military; and increasing the debt ceiling. The bill itself is a smorgasbord of policy and could also affect clean energy programs, student loans, and food assistance, but perhaps the most consequential changes will be to Medicaid.

The bill was approved by the House in May and passed a key Senate vote on Saturday. Republicans are divided over competing priorities; some want to extend Trump’s tax cuts and boost immigration and defense spending, while others worry about the $2.6 trillion cost and cuts to Medicaid. Republican lawmakers aim to pass the bill by Friday using budget reconciliation, but it’s unclear if all 53 Republican senators will agree.

This is a developing story. Follow along here for the latest news, explainers, and analysis.

The Republican tax bill, explained in 500 words
The Republican spending bill is a disaster for reproductive rights
The most surprising victim of Trump’s terrible tax agenda
The devastating impact of Trump’s big, beautiful bill, in one chart
The economic theory behind Trumpism
Trump’s big, beautiful bill, explained in 5 charts
The big, beautiful bill is bad news for student loans
The big, bad bond market could derail Trump’s big, beautiful bill
Trump’s “big, beautiful bill,” briefly explained
The ugly truth about Trump’s big, beautiful bill
Trump wants “one big, beautiful bill.” Can he get it?



Gavin Newsom smiling.

California Governor Gavin Newsom speaks during a news conference at Gemperle Orchard on April 16 in Ceres, California.

California just demolished a major obstacle to housing construction within its borders — and provided Democrats with a blueprint for better governance nationwide.

On Monday, Governor Gavin Newsom signed a pair of housing bills into law. One exempts almost all urban, multifamily housing developments from California’s environmental review procedures. The second makes it easier for cities to change their zoning laws to allow for more homebuilding.

Both these measures entail restricting the reach of the California Environmental Quality Act (CEQA), a law that requires state and local governments to research and publicize the ecological impacts of any approved construction project. Individuals and groups can then sue to block these developments on the grounds that the government underestimated the project’s true environmental harms.

At first glance, these events might seem irrelevant to anyone who is neither a Californian nor a massive nerd. But behind the Golden State’s esoteric arguments over regulatory exemptions lie much larger questions — ones that concern the fundamental aims and methods of Democratic policymaking. Namely:

Is increasing the production of housing and other infrastructure an imperative of progressive politics that must take precedence over other concerns?
Should Democrats judge legislation by how little it offends the party’s allied interest groups or by how much it advances the general public’s needs (as determined by technocratic analysis)?

In making it easier to build urban housing — despite the furious objections of some environmental groups and labor unions — California Democrats put material plenty above status quo bias, and the public’s interests above their party’s internal harmony.

Too often in recent decades, Democrats have embraced the opposite priorities. And this has led blue cities and states to suffer from exceptionally large housing shortages while struggling to build public infrastructure on time and on budget. As a result, Democratic states have been bleeding population — and thus, electoral clout — to Republican ones while the public sector has fallen into disrepute.

California just demonstrated that Democrats don’t need to accept these failures. Acquiescing to scarcity — for the sake of avoiding change or intraparty tension — is a choice. Democrats can make a different one.

California Democrats were long hostile to housing development. That’s finally changing.

Critics of California’s CEQA reforms didn’t deny their state needs more housing. It might therefore seem fair to cast the debate over those reforms as a referendum on the importance of building more homes.

But the regulatory regime that the opponents of CEQA reform sought to preserve is the byproduct of an explicitly anti-development strain of progressivism, one that reoriented Democratic politics in the 1970s.

The postwar decades’ rapid economic progress yielded widespread affluence, ecological degradation, and disruptive population growth. Taken together, these forces spurred a backlash to building: Affluence led liberal reformers to see economic development as less of a priority, environmental decay prompted fears that humanity was swiftly exhausting nature’s bounty, and the swift growth of booming localities led some longtime residents to fear cultural alienation or displacement.

California was ground zero for this anti-growth backlash, as historian Yoni Appelbaum notes in his recent book Stuck. The state’s population quintupled between 1920 and 1970. And construction had largely kept pace, with California adding nearly 2 million units in the 1950s alone. As a result, in 1970, the median house in California cost only $197,000 in today’s dollars.

But millions of new people and buildings proved socially disruptive and ecologically costly. Many Californians wished to exclude newcomers from their towns or neighborhoods, so as to preserve their access to parking, the aesthetic character of their area, or the socioeconomic composition of their schools, among other concerns. And anti-growth progressivism provided both a high-minded rationalization for such exclusion and legal tools with which to advance it.

In 1973, consumer advocate Ralph Nader and his team of researchers prepared a report on land-use policy in California. Its overriding recommendation was that the state needed to make it easier for ordinary Californians to block housing construction. As one of the report’s authors explained at a California Assembly hearing, lawmakers needed to guard against both “the overdevelopment of the central cities” and “the sprawl around the cities,” while preserving open land. As Appelbaum notes, this reasoning effectively forbids building any housing, anywhere.

The California Environmental Quality Act emerged out of this intellectual environment. And green groups animated by anti-development fervor quickly leveraged CEQA to obstruct all manner of housing construction, thereby setting judicial precedents that expanded the law’s reach. The effect has been to greatly increase the amount of time and money necessary for producing a housing unit in California. Local agencies take an average of 2.5 years to approve housing projects that require an Environmental Impact Report. Lawsuits can then tie up those projects in court for years longer. Over the past decade, CEQA litigation has delayed or blocked myriad condo towers in urban centers, the construction of new dormitories at the University of California, Berkeley (on the grounds that the state’s environmental impact statement failed to account for noise pollution), and even a bike lane in San Francisco.

CEQA is by no means the primary — let alone, the only — reason why the median price of a California home exceeded $900,000 in 2023. But it is unquestionably a contributor to such scarcity-induced unaffordability. Refusing to amend the law in the face of a devastating housing shortage is a choice, one that reflects tepid concern for facilitating material abundance.

Anti-growth politics left an especially large mark on California. But its influence is felt nationwide. CEQA is modeled after the National Environmental Policy Act, which enables the litigious to obstruct housing projects across the United States. And many blue states — including Massachusetts, Minnesota, and New York — have their own state-level environmental review laws, which have also deterred housing development.

In sum, California Democrats’ decision to pare back the state’s environmental review procedures, so as to facilitate more urban housing, represents a shift in the party’s governing philosophy — away from a preoccupation with the harms of development and toward a greater sensitivity to the perils of stasis. Indeed, Governor Newsom made this explicit in his remarks on the legislation, saying, “It really is about abundance.”

Democrats elsewhere should make a similar ideological adjustment.

California Democrats put the public above “the groups”

If anti-growth progressivism helped birth CEQA’s excesses, Democrats’ limited appetite for intraparty conflict sustained the law’s defects.

In recent years, the Yes in My Backyard (YIMBY) movement has built an activist infrastructure for pro-development reform. And their cause has been buttressed by the energetic advocacy of myriad policy wonks and commentators. One of this year’s best-selling books, Abundance by Ezra Klein and Derek Thompson, is dedicated in no small part to making the case against California’s housing policies.

Nevertheless, environmental organizations and labor unions have long boasted far greater scale and influence than “pro-abundance” groups.

And past efforts to curtail CEQA’s reach have attracted vigorous opposition from some greens and unions. Democrats typically responded by scaling back their reform ambitions to better appease those constituencies.

The hostility of green groups and the building trades to CEQA reform is as much instrumental as ideological. Some environmentalists retain the de-growth impulses that characterized the 1970s left. But environmental review lawsuits are also the stock-in-trade of many green organizations. CEQA litigation provides these groups with a key source of leverage over ecologically irresponsible developers and — for environmental law firms — a vital source of billings.

The building trades unions, meanwhile, see CEQA as a tool for extracting contracts from housing developers. Such groups have made a practice of pursuing CEQA lawsuits against projects until the builders behind them commit to using union labor.

For these reasons, many environmentalists and labor leaders fiercely condemned this week’s CEQA reforms. At a hearing in late June, a representative of Sacramento-Sierra’s Building and Construction Trades Council told lawmakers that their bill “will compel our workers to be shackled and start singing chain gang songs.”

Roughly 60 green groups published a letter condemning the legislation as a “backroom Budget Trailer Bill deal that would kill community and environmental protections, even as the people of California are faced with unprecedented federal attacks to their lives and livelihoods.”

The opposition of these organizations was understandable. But it was also misguided, even from the standpoint of protecting California’s environment and aiding its construction workers.

The recently passed CEQA bills did not weaken environmental review for the development of open land, only for multifamily housing in dense urban areas. And facilitating higher rates of housing development in cities is vital for both combating climate change and conserving untouched ecosystems. All else equal, people who live in apartment buildings by mass transit have far smaller carbon footprints than those who live in suburban single-family homes. And increasing the availability of housing in urban centers reduces demand for new exurban housing development that eats into open land.

Meanwhile, eroding regulatory obstacles to housing construction is in the interest of skilled tradespeople as a whole. A world where more housing projects are economically viable is one where there is higher demand for construction labor. This makes CEQA reform unambiguously good for the 87 percent of California construction workers who do not belong to a union (and thus derive little direct benefit from the building trades’ CEQA lawsuits). But policies that grow California’s construction labor force also provide its building trades unions with more opportunities to recruit new members. Recognition of that reality led California’s carpenters’ union to back the reforms.

Therefore, if Democrats judged those reforms on the basis of their actual consequences — whether for labor, the environment, or the housing supply — they would conclude that the policies advanced progressive goals. On the other hand, if they judged the legislation by whether it attracted opposition from left-coded interest groups, then they might deem it a regressive challenge to liberal ideals. Too often, Democrats in California and elsewhere have taken the latter approach, effectively outsourcing their policy judgment to their favorite lobbies. But this time, the party opted to prioritize the public interest over coalitional deference.

Importantly, in doing so, California Democrats appeared to demonstrate that their party has more capacity to guide its stakeholders than many realized. In recent years, Democratic legislators have sometimes credited their questionable strategic and substantive decisions to “the groups” — as though the party were helplessly in thrall to its advocacy organizations.

But these groups typically lack significant political leverage. Swing voters do not take their marching orders from environmental organizations. And in an era of low union density and education polarization, the leaders of individual unions often can’t deliver very many votes.

This does not mean that Democrats should turn their backs on environmentalism or organized labor. To the contrary, the party should seek to expand collective bargaining rights, reduce pollution, and promote abundant low-carbon energy. But it should do those things because they are in the interests of ordinary Americans writ large, not because the electoral influence of green groups or building trades unions politically compels them to do so. Of course, all else equal, the party should seek to deliver victories to organizations that support it. But providing such favors should never take precedence over advancing the general public’s welfare.

And pushing back on a group’s demands will rarely cause it to abandon your party entirely. After seeing that Democrats would not abandon CEQA reform, California’s Building Trades Council switched its position on the legislation to “neutral,” in exchange for trivial concessions.

Rome wasn’t upzoned in a day

It is important not to overstate what California Democrats have accomplished. Housing construction in the Golden State is still constrained by restrictive zoning laws, various other land-use regulations, elevated interest rates, scarce construction labor, and a president who is hellbent on increasing the cost of lumber and steel. Combine these constraints on housing supply with the grotesque income inequalities of cities like San Francisco and Los Angeles, and you get a recipe for a sustained housing crunch. CEQA reform should reduce the cost and timelines of urban homebuilding. But it will not, by itself, render California affordable.

Democrats cannot choose to eliminate all of blue America’s scarcities overnight. What they can do is prize the pursuit of material abundance over the avoidance of disruptive development and intraparty strife. And California just provided the party with a model for doing precisely that.



Is boredom over?

This story originally appeared in Kids Today, Vox’s newsletter about kids, for everyone. Sign up here for future editions.

As a millennial, I had my fair share of ’90s summers. I rode my bike, I read, I spent a lot of time doing nothing. My friends from home like to tell the story of the time they came by my house unannounced and I was staring at a wall (I was thinking).

Now, as a parent myself, I’ve been highly invested in the discourse over whether it’s possible for kids to have a “’90s summer” in 2025. This year, some parents are opting for fewer camps and activities in favor of more good old-fashioned hanging around, an approach also described as “wild summer” or “kid-rotting.”

On the one hand, sounds nice! I liked my summers as a kid, and I’d love to give my kids more unstructured playtime to help them build their independence and self-reliance (and save me money and time signing up for summer camp).

On the other hand, what exactly are they going to do with that unstructured time? Like a majority of parents today, I work full time, and although my job has some flexibility, I can’t always be available to supervise potion-making, monster-hunting, or any of my kids’ other cute but messy leisure activities. Nor can I just leave them to fend for themselves: Norms have changed to make sending kids outside to play till the streetlights come on more difficult than it used to be, though those changes started before the ’90s. The rise of smartphones and tablets has also transformed downtime forever; as Kathryn Jezer-Morton asks at The Cut, “Is it really possible to have a ’90s summer when YouTube Shorts exist?”

After talking to experts and kids about phones and free time, I can tell you that the short answer to this question is no. But the long answer is more complicated, and a bit more reassuring. Yes, kids today reach for their devices a lot. But especially as they get older, they do know how to put them down. And hearing from them about their lives made me rethink what my ’90s summers really looked like, and what I want for my kids.

Kids’ free time is different now

Parents aren’t imagining the differences between the ’90s and today, Brinleigh Murphy-Reuter, program administrator at the Digital Wellness Lab at Boston Children’s Hospital, told me. For one thing, kids just have less downtime than they used to — they’re involved in more activities outside of school, as parents try to prepare them for an increasingly competitive college application process. They’re also more heavily supervised than in decades past, thanks to concerns about child kidnapping and other safety issues that began to ramp up in the ’80s and continue today.

Free time also looks different. “If you go back to the ’80s or early ’90s, the most prized artifact kids owned was a bicycle,” Ruslan Slutsky, an education professor at the University of Toledo who studies play, told me. Today, “the bike has been replaced by a cell phone.”

The average kid gets a phone at the age of 10, Murphy-Reuter said. Tablet use starts even earlier, with more than half of kids getting their own device by age 4. If kids are at home and not involved in some kind of structured activity, chances are “they’re on some kind of digital device,” Slutsky said.

It’s not as though all millennials had idyllic, screen-free summers — some of my best July memories involve Rocko’s Modern Life, for example. But kids’ screen time is qualitatively different now.

According to a Common Sense Media report published in 2025, 35 percent of viewing for kids up to the age of 8 was full-length streaming TV shows, while 32 percent was on platforms like YouTube. Sixteen percent was short-form video like TikToks, Instagram Reels, or YouTube Shorts. Only 6 percent of kids’ viewing was live TV, which honestly seems high (I am not sure my children have ever seen a live TV broadcast).

It’s not completely clear that YouTube is worse for kids than old-fashioned TV, but it can certainly feel worse. As Jezer-Morton puts it, “kid rotting in the ’90s was Nintendo and MTV; today’s version is slop-engineered for maximum in-app time spent.”

It is undeniably true that in the ’90s, you’d sometimes run out of stuff to watch and be forced to go outside or call a friend. Streaming means that for my kids’ generation, there is always more TV.

And the ubiquity of phones in both kids’ and adults’ lives has made enforcing screen time limits more difficult. “It’s tough to take away something that they have become so dependent on,” Slutsky said.

Older kids can be remarkably savvy about their screen time

That’s the bad news.

The good news is that a lot of what kids do on their devices isn’t actually watching YouTube — it’s gaming. Kids in the Common Sense survey spent 60 percent of their screen time playing games, and just 26 percent watching TV or video apps.

Gaming can actually have a lot of benefits for kids, experts say. “Video games can support relationship building and resiliency” and “can help to develop complex, critical thinking skills,” Murphy-Reuter said. Some research has found that educational media is actually more helpful to kids if it’s interactive, making an iPad better than a TV under certain circumstances, according to psychologist Jacqueline Nesi.

“Just because it’s on a screen doesn’t mean it’s not still fulfilling the same goals that unstructured play used to fulfill,” Murphy-Reuter told me. “It just might be fulfilling it in a way that is new.”

Meanwhile, kids — especially older teens — are actually capable of putting down their phones. Akshaya, 18, one of the hosts of the podcast Behind the Screens, told me she’d been spending her summer meeting up with friends and playing pickleball. “I spend a lot of my days hanging out outside,” she said.

Her cohost Tanisha, also 18 and a graduating senior, said she and her friends had been “trying to spend as much IRL time as we can while we’re still together this summer.” Akshaya, Tanisha, and their other cohost Joanne, also 18, have been enjoying unstructured summers for years; though they had internships last summer, none of them has been to camp since elementary school.

Joanne does worry that the ubiquity of short videos on her phone has affected her attention span. “I feel like it’s easy to just kind of zone out, or stop paying attention when someone’s talking,” she said.

At the same time, she and her cohosts have all taken steps to reduce their own device use. Tanisha deleted Instagram during college application season. Akshaya put downtime restrictions on her phone after noticing how often she was on it. “In my free time, if I ever feel like I’m doomscrolling, like I’ve been on social media for too long, I usually try to set a specific time when I’ll get off my phone,” she said.

Overall, 47 percent of kids have used tools or apps to manage their own phone use, Murphy-Reuter told me.

The sense I got from talking to Tanisha, Joanne, and Akshaya — and that I’ve gotten in interviews with teenagers and experts over the last year — is that teens can be quite sophisticated about phones. They know, just as we do, that the devices can make you feel gross and steal your day, and they take steps to mitigate those effects, without getting rid of the devices entirely.

Kids “really are very much in this digital space,” Murphy-Reuter said. And many of them are adept at navigating that space — sometimes more adept than adults who entered it later in life.

All that said, Tanisha, Joanne, and Akshaya are 18 years old, and talking to them made me realize that “wild summer,” at least of the unsupervised variety, may just be easier to accomplish for older kids. I can’t quite imagine letting my 7-year-old “rot” this summer. Yes, he’d want to watch way too much Gravity Falls, but he’d also just want to talk to me and play with me — normal kid stuff that’s not very compatible with adults getting work done.

It’s certainly possible that kids were more self-reliant — more able to occupy themselves with pretend play or outdoor shenanigans for long stretches of time — before they had devices. But I’m not sure how much more.

While writing this story, I realized that the lazy, biking, wall-staring summers of my youth all took place in high school. Before that, I went to camp.

What I’m reading

The Trump administration is declining to release almost $7 billion in federal funding for after-school and summer programs, jeopardizing support for 1.4 million kids, most of them low-income, around the country.

An American teen writes about why Dutch kids are some of the happiest in the world: It might be because they have a lot of freedom.

A new study of podcast listening among low-income families found that the medium fostered creative play and conversations among kids and family members, which are good for child development.

Sometimes my older kid likes to go back to picture books. Recently we’ve been reading I Want to Be Spaghetti! It’s an extremely cute story about a package of ramen who learns self-confidence.

From my inbox

A quick programming note: I will be out on vacation for the next two weeks, so you won’t be hearing from me next week. You will get a summery edition of this newsletter on Thursday, July 17, so stay tuned. And if there’s anything you’d especially like me to cover when I get back, drop me a line at [email protected]!


From Vox via this RSS feed


Donald Trump speaks from the dais of the House of Representatives.

President Donald Trump speaks during an address to a joint session of Congress at the US Capitol on March 4, 2025. | Allison Robbert/AFP via Getty Images

President Donald Trump is about to achieve his biggest legislative victory yet: his “one big, beautiful bill,” the massive tax- and Medicaid-cutting, immigration and border spending bill that passed the Senate on Tuesday, is on the verge of passing the House of Representatives.

It’s a massive piece of legislation, likely to increase the national debt by at least $3 trillion, mostly through tax cuts, and leave 17 million Americans without health coverage — and it’s really unpopular. Disapproval outweighs approval in nearly every reputable poll taken this month, ranging from 42 percent who oppose the bill in an Ipsos poll (compared to 23 percent who support it) to 64 percent who oppose it in a KFF poll.

And if history is any indication, it’s not going to get any better for Trump and the Republicans from here on out.

In modern American politics, few things are more unpopular with the public than big, messy bills forged under a bright spotlight. That’s especially true of bills passed through “budget reconciliation,” a Senate procedure that allows the governing party to bypass filibuster rules with a simple majority vote. Such bills tend to have a negative effect on presidents and their political parties in the following months, as policies are implemented and campaign seasons begin.

Part of that effect is due to the public’s general tendency to dislike any kind of legislation as it gets more publicity and becomes better understood. But reconciliation bills in the modern era seem to create a self-fulfilling prophecy: forcing presidents to be maximally ambitious at the outset, before they lose popular support for the legislation and eventually lose the congressional majorities that delivered passage.

Presidents and their parties tend to be punished after passing big spending bills

The budget reconciliation process, created in 1974, has gradually been used to accomplish broader and bigger policy goals. Because it offers a workaround for a Senate filibuster, which requires 60 votes to break, it has become the primary way that presidents and their parties implement their economic and social welfare visions.

The public, however, doesn’t tend to reward the governing party after these bills are passed. As political writer and analyst Ron Brownstein recently pointed out, presidents who successfully pass a major reconciliation bill in the first year of their presidency lose control of Congress, usually the House, the following year.

In 1982, Ronald Reagan lost his governing majority in the House after using reconciliation to pass large spending cuts as part of his Reaganomics vision (the original “big, beautiful” bill). And the pattern would repeat itself for George H.W. Bush (whose reconciliation bill contradicted his campaign promise not to raise taxes), for Bill Clinton in 1994 (deficit reductions and tax reform), for Barack Obama in 2010 (after the passage of the Affordable Care Act), for Trump in 2018 (tax cuts), and for Biden in 2022 (the American Rescue Plan and the Inflation Reduction Act).

The exception in this list of modern presidents is George W. Bush, who did pass a set of tax cuts in a reconciliation bill, but whose approval rating rose after the 9/11 terrorist attacks.

Increasing polarization, and the general anti-incumbent party energy that tends to run through midterm elections, of course, explains part of this overall popular and electoral backlash. But reconciliation bills themselves seem to intensify this effect.

Why reconciliation bills do so much political damage

First, there’s the actual substance of these bills, which has been growing in scope over time.

Because they tend to be the first, and likely only, major piece of domestic legislation that can execute a president’s agenda, they are often highly ideological, partisan projects that try to implement as much of a governing party’s vision as possible.

These highly ideological pieces of legislation, Matt Grossman, the director of Michigan State University’s Institute for Public Policy and Social Research, and his collaborators have found, tend to kick into gear a “thermostatic” response from the public — that is, public opinion moves in the opposite direction of policymaking when the public perceives one side is going too far to the right or left.

Because these bills have been growing in reach, from mere tax code adjustments to massive tax-and-spend, program-creating packages, and becoming more ideological projects, the public, in turn, seems to be reacting more harshly.

These big reconciliation bills also run into an issue that afflicts all kinds of legislation: a PR problem. Media coverage of proposed legislation tends to emphasize its partisanship, portraying the party in power as pursuing its domestic agenda at all costs and playing up interparty fighting. This elevates process over policy substance. Political scientist Mary Layton Atkinson has found that just as campaign reporting is inclined to focus on the horse race, coverage of legislation in Congress and policy debates often focuses on conflict and procedure, adding to a sense in the public mind that Congress is extreme, dysfunctional, and hyperpartisan.

Adding to this dynamic is a quirk of public opinion toward legislation and referenda: Proposals tend to lose popularity between introduction and passage, as the public learns more about their actual content and hears more about the political negotiations and struggles taking place behind the scenes as the bills are ironed out.

Lawmakers and key political figures also “tend to highlight the benefits less than the things that they are upset about in the course of negotiations,” Grossman told me. “That [also] occurs when a bill passes: You have the people who are against it saying all the terrible things about it, and actually the people who are for it are often saying, ‘I didn’t get all that I wanted, I would have liked it to be slightly different.’ So the message that comes out of it is actually pretty negative on the whole, because no one is out there saying this is the greatest thing and exactly what they wanted.”

Even with the current One Big Beautiful Bill, polling analysis shows that the public tends not to know much about what is in the legislative package, and grows even more hostile to it once provided more information about specific policy details.

Big reconciliation bills sit at the intersection of all of these public image problems: They tend to be the first major legislative challenge a new president and Congress take on, they suck up the media’s and the public’s attention, and they take a long time to iron out — further extending the window in which the bill can grow more unpopular.

This worsening perception over time, the public’s frustration with how the sausage is made, and the growing ideological stakes of these bills all create a kind of feedback loop: Governing parties know that they have limited time and a single shot to implement their vision before experiencing some form of backlash in future elections, so they rush to pass the biggest and boldest bill possible. The cycle repeats itself, worsening public views in the process and increasing polarization.

For now, Trump has set a July 4 deadline for signing this bill into law. He looks likely to hit that goal, or at least come close. But all signs point to this “beautiful” bill delivering him and his party a big disappointment next year. He’s already unpopular, and when he focuses his and the public’s attention on his actual agenda, it tends not to go well.



A lone man stands in a sea of cubicles.

It’s okay to be scared of AI. You should learn to use it anyway.

ChatGPT’s most advanced models recently served me a surprising statistic: US productivity grew faster in 2024 than in any year since the 1960s. Half that jump can be linked to generative AI tools that most workers hadn’t even heard of two years earlier.

The only problem is that it’s not true. The AI made it up.

Despite its much-documented fallibility, generative AI has become a huge part of many people’s jobs, including my own. The numbers vary from survey to survey, but a June Gallup poll found that 42 percent of American employees are using AI a few times a year, while 19 percent report deploying it several times a week. The technology is especially popular with white-collar workers. While just 9 percent of manufacturing and front-line workers use AI on a regular basis, 27 percent of white-collar workers do.

Even as many people integrate AI into their daily lives, it’s causing mass job anxiety: A February Pew survey found that more than half of US employees worried about their fate at work.

Unfortunately, there is no magic trick to keep your job for the foreseeable future, especially if you’re a white-collar worker. Nobody knows what’s going to happen with AI, and leadership at many companies is responding to this uncertainty by firing workers it may or may not need in an AI-forward future.

“If AI really is this era’s steam engine, a force so transformative that it will power a new Industrial Revolution, you only stand to gain by getting good at it.”

After laying off over 6,000 workers in May and June, Microsoft is laying off 9,000 more workers this month, reportedly so the company can reduce the number of middle managers as it reorganizes itself around AI. In a note on Tuesday, Amazon CEO Andy Jassy told employees that the company would “roll out more generative AI and agents” and reduce its workforce in the next few years. This was all after Anthropic CEO Dario Amodei warned AI would wipe out half of all entry-level white-collar jobs in the same timespan, a prediction so grim that Axios coined a new term for AI’s imminent takeover: “a white-collar bloodbath.”

This is particularly frustrating because, as my recent encounter with ChatGPT’s tendency to hallucinate makes clear, the generative AI of today, while useful for a growing number of people, needs humans to work well. So does agentic AI, the next era of this technology that involves AI agents using computers and performing tasks on your behalf rather than simply generating content. For now, AI is augmenting white-collar jobs, not automating them, although your company’s CEO is probably planning for the latter scenario.

Maybe one day AI will fulfill its promise of getting rid of grunt work and creating endless abundance, but getting from here to there is a harrowing proposition.

“With every other form of innovation, we ended up with more jobs in the end,” Ethan Mollick, a Wharton professor and author of the newsletter One Useful Thing, told me. “But living through the Industrial Revolution still kind of sucked, right? There were still anarchists in the street and mass displacement from cities and towns.”

We don’t know if the transition to the AI future will be quite as calamitous. What we do know is that just as jobs transformed due to past technological leaps, like the introduction of the personal computer or the internet, your day-to-day at work will change in the months and years to come. If AI really is this era’s steam engine, a force so transformative that it will power a new Industrial Revolution, you only stand to gain by getting good at it.

At the same time, becoming an AI whiz will not necessarily save you if your company decides it’s time to go all in on AI and do mass, scattershot layoffs in order to give its shareholders the impression of some efficiency gains. If you’re impacted, that’s just bad luck. Still, having the skills can’t hurt.

Welcome to the AI revolution transition

It’s okay to be scared of AI, but it’s more reasonable to be confused by it. For two years after ChatGPT’s explosive release, I couldn’t quite figure out how a chatbot could make my life better. After some urging from Mollick late last year, I forced myself to start using it for menial chores. Upgrading to more advanced models of ChatGPT and Claude turned these tools into indispensable research partners that I use every day — not just to do my job faster but also better. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

But when it comes to generative AI tools and the burgeoning class of AI agents, what works for one person might not be helpful to the next.

“Workers obviously need to try to ascertain as much as they can — the skills that are most flexible and most useful,” said Mark Muro, a senior fellow at Brookings Metro. “They need to be familiar with the technology because it is going to be pervasive.”

For most white-collar workers, I recommend Mollick’s 10-hour rule: Spend 10 hours using AI for work and see what you learn. Mollick also recently published an updated guide to the latest AI tools that’s worth reading in full. The big takeaways are that the best of these tools (ChatGPT from OpenAI, Claude from Anthropic, and Google Gemini) can become tireless assistants with limitless knowledge that can save you hours of labor. You should try different models within the different AI tools, and you should experiment with the voice features, including the ability to use your phone’s camera to share what you’re seeing with the AI. You should also, unfortunately, shell out $20 a month to get access to the most advanced models. In Mollick’s words, “The free versions are demos, not tools.”

“If I have a very narrow job around a very narrow task that’s being done repetitively, that’s where the most risk comes in.”

You can imagine similar advice coming from your geeky uncle at Thanksgiving circa 1984, when personal computers were on the brink of taking over the world. That was the year roughly the same percentage of white-collar workers were regularly using PCs at work as are using AI today. But the coming AI transition will look different than the PC transition we’ve already lived through. While earlier digital technologies hit frontline workers hardest, “AI excels at supporting or carrying out the highly cognitive, nonroutine tasks that better-educated, better-paid office workers do,” according to a February Brookings report co-authored by Muro.

This means AI can do a lot of the tasks that software engineers, architects, lawyers, and journalists do, but it doesn’t mean that AI can do their jobs — a key distinction. This is why you hear more experts talking about AI augmentation rather than AI automation. As a journalist, I can confidently say that AI is great at streamlining my research process, saving me time, and sometimes even stirring up new ideas. AI is terrible at interviewing sources, although that might not always be the case. And clearly, it’s touch-and-go when it comes to writing factually accurate copy, which is kind of a fundamental part of the job.

That proposition looks different for other kinds of white-collar work, namely administrative and operational support jobs. A Brookings report last year found that 100 percent of the tasks that bookkeepers and clerks do were likely to be automated. Those of travel agents, tax preparers, and administrative assistants were close to 100 percent. If AI really did make these workers redundant, it would add up to millions of jobs lost.

“The thing I’d be most worried about is if my task and job are very similar to each other,” Mollick, the Wharton professor, explained. “If I have a very narrow job around a very narrow task that’s being done repetitively, that’s where the most risk comes in.”

It’s hard to AI-proof your job or career altogether given so much uncertainty. We don’t know if companies will take advantage of this transition in ways that produce better products and happier workers or just use it as an excuse to fire people, squandering what some believe is a once-in-a-generation opportunity to transform work and productivity. It sucks to feel like you have little agency in steering the future toward one outcome or the other.

At the risk of sounding like your geeky uncle, I say give AI a try. The worst-case scenario is you spend 10 hours talking to an artificially intelligent chatbot rather than scrolling through Instagram or Reddit. The best-case scenario is you develop a new skill set, one that could very well set you up to do an entirely new kind of job, one that didn’t even exist before the AI era. You might even have a little fun along the way.

A version of this story was also published in the User Friendly newsletter. Sign up here so you don’t miss the next one!



A US flag with arms and a face is holding a hot dog and a soda with an uneasy expression. The flag is surrounded by brightly colored sugar snacks and drinks. There are red, white, and blue fireworks in the background.

Eating a hot dog on July Fourth isn’t just traditional. It’s patriotic.

From iconic red, white, and blue rocket pops (hello, Red Dye 40!) to nitrate-loaded hot dogs and the all-day parade of sugary drinks and alcohol, this quintessential American holiday is a celebration of freedom — and, often, dietary chaos.

And yet these days, many of us seem to be having second thoughts about the American diet. Our food is too processed, too loaded with dyes and preservatives. The country’s obesity and diabetes epidemics, which have led to an explosion in the diagnoses of related chronic health conditions, have put the issue front and center, with much of the blame being placed on what we eat and all of the additives and preservatives it contains. About half of US adults believe food additives and chemicals are a large or moderate risk to their health — higher than the perceived risks of infectious disease outbreaks or climate change, according to a recent poll from Ipsos, a global market research firm.

We all worry about microplastics, nitrates, food dyes, and ultra-processed foods. And US Health Secretary Robert F. Kennedy Jr. has made improving Americans’ diets and our food supply a top priority. It’s a policy emphasis that’s popular with the public: Two-thirds of US adults believe artificial dyes and pesticides make our foods unsafe to eat — and these are opinions that transcend political leanings, according to Ipsos.

And despite our entrenched food system, people are trying to make healthier decisions in their daily lives: 64 percent of US adults say they pay more attention to food labels than they did five years ago, according to the public health nonprofit NSF International. But we are frustrated: Only 16 percent of Americans say they find claims on food labels trustworthy.

It may sound unbelievable on a holiday when Americans will gladly stuff their faces with ultra-processed junk while wearing flag-laden paraphernalia, but these days, many of us actually wish the products in our grocery stores looked a little more like the ones across the Atlantic. Just 37 percent of American adults said in the NSF International survey that our food labeling was better than in other countries. Most Americans say they want changes to how foods at our grocery stores are labeled.

American food really is different from what can be found in Europe, both in its substance and in its packaging.

But while we’re probably not doing any favors to our health by consuming ultra-processed foods loaded with artificial ingredients that are banned elsewhere, the biggest source of our health woes isn’t necessarily these artificial dyes and preservatives. It’s the cholesterol and saturated fat in that hot dog, the sugar in that lemonade, and those ultra-processed potato chips. Americans consume about twice as much sugar as other rich countries do on average, eat more ultra-processed foods, and consume more trans and saturated fats than Europeans. We also eat enormous portions, and calories, no matter where they come from, are a big part of the problem.

Americans are generally in poorer health than our peers in Europe, and US life expectancy continues to trail behind other wealthy countries. Rich Americans actually fare worse than poor Europeans, according to one study.

A new era of American greatness starts at the picnic table this July Fourth. Yes, we ostensibly rebelled against an English monarchy in order to be able to do whatever we want, even eat whatever we want. But if we want to catch up to our European rivals again in how healthy we feel, how productive we are, and how long we live — we need to take a closer look at the stuff we’re putting in our bodies.

American food really does have different stuff in it

Doctors widely agree that ultra-processed foods and food additives are bad for children’s health. Yet they have become more and more readily available over the decades: One 2023 study found 60 percent of the food that Americans buy has additives, a 10 percent increase since 2001.

Kennedy, the head of the Department of Health and Human Services, the country’s top health agency, has made overhauling US food production a top priority. His department’s recent MAHA report highlights steps taken by other countries, including France and the Nordic countries, to discourage people through their dietary guidelines from eating ultra-processed foods. The report lists several additives and artificial ingredients that are permitted in American food but are banned or heavily restricted across the pond. Kennedy suggests that the US should follow suit.

So where might we begin?

Let’s start with Red Dye 40, the color additive found in foods such as Froot Loops and M&Ms that has been linked to hyperactivity in children and, according to some animal studies, has been shown to accelerate tumor growth in mice. The US has not placed any special requirements on Red Dye 40, aside from its listing alongside other ingredients. But the European Union has required a clear warning label on any food with the dye, and some countries (including Germany, France, and Denmark) have banned it outright. A similar warning could be adopted here.

There are other additives casually lurking in American foods that have been restricted in other countries. Here are a few:

Titanium dioxide: Another food coloring that can be added to candies like Skittles and coffee creamers for a bright white effect. The EU banned it in 2022 because of evidence it could affect the human body’s genetic material, while the US continues to allow its use.

Propyl paraben: This preservative is regarded as safe in the US, often added to mass-produced American baked goods such as Sara Lee cinnamon rolls or Weight Watchers lemon creme cake. But its use has been prohibited in the EU because of research indicating it could mess with hormone function.

Butylated hydroxytoluene: Another preservative that’s sometimes added to breakfast cereals and potato chips to extend their shelf life. It’s generally regarded as safe for use in the United States despite evidence that it could compromise kidney and liver function and concerns that it could cause cancer. In the EU, however, its use is subject to strict regulation.

There are some artificial sweeteners, too — aspartame, sucralose, and saccharin — that are permitted in the US and the EU, but generally, Europe puts many more restrictions on unhealthy artificial ingredients than the US does.

Kennedy is pledging he’ll do something about it. His biggest win so far is securing voluntary commitments from food manufacturers to remove a variety of artificial dyes — yes, including Red Dye 40 — from their products before the end of 2026. If they fail to comply, he has suggested new regulations to put a limit on or outright prohibit certain substances of concern.

But are these ingredients the most important problem with our July Fourth cookouts? They are part of the issue. But there’s more to it.

The real problem is the American diet, dyed or not

Here’s a revealing comparison: In 2018, the United States banned trans fats, artificial ingredients derived from oils that have been linked to heart disease and diabetes — 15 years after Denmark did the same thing. For more than a decade, Americans kept eating a ton of trans fat, something so bad for you that it increases bad cholesterol while lowering good cholesterol.

While that is probably not the entire reason the US has double the obesity and diabetes rates that Denmark does, it is a telling example. A fatty and highly processed ingredient linked to two of the biggest health problems in the United States persisted for years in American food, long after the Europeans had wised up.

It’s a pattern that, across the decades, explains the enormous gulf between the typical American’s diet and the Mediterranean diet that dominates much of Europe. During the 20th century, amid an explosion in market-driven consumerism, convenience became one of the most important factors for grocery shoppers. Americans wanted more meals that could be quickly prepared inside the microwave and dry goods that could last for weeks and months on a pantry shelf, and so these products gained more and more of a market share. But that meant that more American food products were laced with more of the preservatives and additives that are now drawing so much concern.

Americans have also long eaten more meat, cheese, and butter: animal products high in saturated fats, as opposed to the unsaturated fats that come from oils like olive oil and are more common in European diets. Our meat obsession was turbocharged by a meat industry that tapped into patriotic sentiments about pioneering farms making their living off the frontier. Eating a diet heavy in animal products is associated with a long list of health problems, particularly the cardiovascular conditions that remain the biggest killers of Americans.

We should push our policymakers to pass regulations that get rid of artificial additives, but that alone is insufficient. You can find too much fat and too much sugar around the picnic table. Some of it is unnatural, but plenty of it is natural. America has to figure out how to encourage people to eat low-fat, low-sugar, whole-food diets. That’s the real path to better health.

MAHA has some good ideas. Its emphasis on whole foods, not processed ones, is a step in the right direction. But Kennedy’s prescriptions are contradictory: He wants to make it easier for people to find whole foods at their nearby store, while Republicans in Congress propose massive cuts to food stamps. His MAHA report rails against the overuse of pesticides, but Trump’s Environmental Protection Agency is rolling back restrictions on their use.

Those contradictions are a reminder that, though Kennedy has shone a light on a worthwhile issue, we can’t and we shouldn’t expect the government to fix our food problems all on its own. This is America, after all, where we pride ourselves on individualism.

The occasional indulgence is not a big deal. It’s what we do on July 5 that really matters.

