Vox


President Donald Trump and Israeli Prime Minister Benjamin Netanyahu attend a dinner in the Blue Room of the White House on July 7. | Andrew Harnik/Getty Images

President Donald Trump sounded confident on Sunday when he told a reporter that a ceasefire deal between Israel and Hamas could be reached by the end of the week.

Israel has reportedly proposed a 60-day ceasefire and the return of 10 living and 18 deceased hostages, out of approximately 50 remaining Israeli hostages, of whom 20 are believed to be alive. Hamas continues to push for a permanent end to the conflict.

Negotiations between the two sides are being conducted indirectly, with Qatar and Egypt leading the talks. The calculus on the part of the US and Israel appears to be that, with Iran and its proxy forces in the region significantly weakened, Hamas will be ready to make more concessions.

These talks are taking place against the backdrop of a worsening humanitarian crisis in Gaza, with daily reports of Israeli troops displacing Palestinians and firing on hungry, desperate people trying to get food as Palestinian children struggle with starvation, malnutrition, and disease. Israeli soldiers have also been killed in recent fighting.

Meanwhile, Israeli Prime Minister Benjamin Netanyahu is making the rounds in Washington, DC, this week. It’s his third visit since Trump returned to office, more visits than any other foreign leader has made.

At a dinner at the White House on Monday evening, Netanyahu presented Trump with a copy of a letter nominating him for the Nobel Peace Prize. Trump has repeatedly expressed his interest in getting the peace prize — an accolade that President Barack Obama received early in his presidency — and has cited his “peacekeeping” efforts in the Middle East, including attacks on Iran’s nuclear facilities, as justification.

Ending the war in Gaza, Trump believes, would cement his legacy. But he has to convince Netanyahu to agree to that. To get the latest on where things stand between Trump and Netanyahu, Today, Explained co-host Noel King spoke to Michael Koplow, chief policy officer at Israel Policy Forum.

Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

Donald Trump and Benjamin Netanyahu have run hot and cold on each other since Trump took office in January. Where do things stand between them right now?

Right now, it seems that their relationship is at a high point. But even in the past six months, we’ve seen significant ups and significant downs.

On the one hand, this is now Prime Minister Netanyahu’s third visit to the White House in President Trump’s second term, and that would indicate that these two men have a closer relationship than any other two leaders on the face of the planet. But these visits have not always been so harmonious, and they’ve not always been so great for Prime Minister Netanyahu, particularly the second visit.

On that second visit, Prime Minister Netanyahu seemed to be blindsided in the Oval Office in front of cameras by a number of things that President Trump said. He was blindsided on tariffs when he had come to Washington ostensibly to try to remove any tariffs that President Trump was going to put on Israel. Not only was he unsuccessful in doing so, but President Trump sat in front of the cameras and talked about how the United States gives Israel $4 billion a year, and that’s a lot of money and it should get something in return.

He was also blindsided in that meeting on the issue of Iran. President Trump announced in that meeting that the United States was going to enter into direct talks with Iran, something that Prime Minister Netanyahu was certainly opposed to. Now we know how that turned out two months later, but at the time, it was seen as a pretty significant signal that President Trump and Prime Minister Netanyahu were not on the same page.

In addition to the awkwardness of that second visit, there was President Trump’s last trip to the Middle East: He went to Saudi Arabia, he went to Qatar, he went to the UAE. He made a huge production out of those visits, talking about how much he loved the region, but he didn’t go to Israel, and many people interpreted that as a snub. Was it?

I don’t think it was a snub. I think that he went to the region because he wanted to come home with high-profile, visible demonstrations of US strength and demonstrate that he could bring deals back home. Ultimately, he thinks of himself as a dealmaker, and there were all sorts of trade deals and promises for investment to be found in Saudi Arabia, the UAE, and Qatar. Those were not going to be found in Israel.

He took that trip to the Middle East back in May. What’s changed since then?

The biggest thing that has changed is the campaign against Iran, where you had 12 days of Israeli strikes on Iranian nuclear and military facilities and personnel, and then you had the very high-profile US strike on the three Iranian nuclear sites at Fordo, Natanz, and Isfahan.

In Israel and certainly within the administration — and I share this assessment — that campaign is viewed as being incredibly successful. And unlike when President Trump traveled to the region, this is a case where Israel presents him with a big and visible win and he’s touting it as much as he can.

So Israel gives him a win on Iran, and now, as President Trump is wont to do, he’s looking for another win. On Sunday, he tells reporters that a deal on Gaza is close. What do you think he’s trying to telegraph, and what happens if he’s wrong? Does he take it out on Netanyahu?

There are two things that President Trump has consistently talked about in terms of his vision for the Middle East and what he wants to accomplish. One was preventing Iran from getting a nuclear weapon. And whether that has now been done definitively or not, President Trump is certainly treating it as if this is mission accomplished, and Iran is now not going to get a nuclear weapon.

The second thing that he’s consistently talked about is bringing the fighting in Gaza to an end and expanding the Abraham Accords and bringing other countries into the circle of normalization, all of which I think in his mind is supposed to lead to the Nobel Peace Prize, which really I think is the goal that he seems to put above almost anything else in the realm of foreign policy.

To accomplish that second one, he needs Prime Minister Netanyahu to go along with what he wants because there is no world in which the fighting in Gaza will end unless Prime Minister Netanyahu agrees to do it. If Prime Minister Netanyahu does not go along with it, there may be consequences, and it may be that President Trump eventually moves on.

I think that what we’re seeing right now from Prime Minister Netanyahu is an effort to really extend the clock. The first part of this is a potential 60-day ceasefire in Gaza that will not bring a permanent end to the war, but will give President Trump the opening that he seeks at the moment. And if this negotiation is successful, then it also buys Prime Minister Netanyahu two months to figure out whether he wants to keep it going or whether, at the end of 60 days, the fighting in Gaza will resume.

[Netanyahu] likes extending his options as much as he can, and so buying this time will be important and it will allow him to give the president something that the president is really demanding from him, and that should keep the heat off at least for a little bit.

These two men have different motivations when it comes to the Palestinian people. Netanyahu doesn’t seem to care about Palestinians. He cares about Israel’s security. Trump wants a deal in Gaza because he likes doing deals. Do you think that Donald Trump cares what happens to the Palestinian people?

When President Trump speaks about Palestinians, certainly during this term in office, he tends to do it with a degree of empathy that we don’t always see from him on other issues. It’s pretty consistent when he talks about Gaza for him to talk about the fact that Palestinians are suffering and that they deserve better. We don’t often hear that sort of language from Prime Minister Netanyahu and from many Israeli leaders.

The problem is that for any real resolution to Gaza, you need some sort of political vision. President Trump often talks about how Palestinians deserve better lives and should have better lives. But it’s rare to hear him talk about how he thinks they will get there politically as opposed to this just being a quality of life issue.

We saw it when he announced his “Gaza Riviera” plan during that first Netanyahu trip to Washington, and we saw it again even last night when President Trump got a question about two states, and he didn’t answer it. He punted it to Netanyahu. Netanyahu made it very clear that Israel does not see a Palestinian state as part of the Israeli-Palestinian political future.

Both Trump and Netanyahu need this relationship. Trump needs Netanyahu. Netanyahu needs Trump. Who needs whom more, do you think?

Netanyahu absolutely needs Trump more than Trump needs Netanyahu.

The things that Trump wants right now from Netanyahu are things that would be nice to have. He wants to expand the Abraham Accords. He wants to have better coordination throughout the region. He wants his Nobel Prize. But ultimately, the United States has lots of other issues on its plate. The United States is a global superpower, whereas Israel is not. And the United States can work on all sorts of other things even if the Trump-Netanyahu relationship turns out to be poor and the coordination turns out to not be what President Trump wants.

From Prime Minister Netanyahu’s side, the United States is indispensable, and there are all sorts of things that he wants that he has to have President Trump for. We saw this in practice with Iran, where Israel embarked on airstrikes on its own, but it was clear from the beginning that it needed the United States not only to buy into what Israel was doing but to actually step in and act.

Ultimately, Prime Minister Netanyahu cannot afford to be on President Trump’s bad side, and I think that that’s going to cause him some difficult choices ahead when it comes to Gaza, where there’s no question that President Trump wants that war to end. Prime Minister Netanyahu also may want it to end, but he wants it to end on very specific and narrow terms. And if the two men end up crosswise on that question, Prime Minister Netanyahu really cannot afford to get into a rift with President Trump.


From Vox via this RSS feed


The One Big Beautiful Bill could make it harder for people to earn advanced degrees. | Justin Sullivan/Getty Images

University tuition in the United States is notoriously expensive — so much so that Americans currently have over $1.6 trillion in student loan debt. But now, the routine process of taking out student loans has been overhauled as a result of the One Big Beautiful Bill, which President Donald Trump signed into law last week.

The new law eliminates some student loan programs and adds borrowing caps that could make it harder for people to earn advanced degrees. In some cases, it could make financing college more difficult.

“This bill makes monumental changes to the higher education system,” said Sarah Sattelmeyer, project director for education, opportunity, and mobility in the Higher Education initiative at the liberal think tank New America. “And a lot of the elements in the bill are going to make college more expensive or harder to access, especially for the lowest-income students, families, and borrowers.”

So what, exactly, is in this bill, and how will it impact student loan borrowers?

Borrowers will have fewer repayment options

On X, Education Secretary Linda McMahon said that the president’s spending bill “simplifies the overly complex student loan repayment system.” In a way, she’s right. The law consolidates a variety of student loan repayment options.

Before, students could choose from roughly a dozen financing options, including the Biden-era SAVE plan, an income-driven repayment plan that eventually leads to loan forgiveness. Now, they have just two options. The first is the standard repayment plan, which includes a fixed monthly payment over a set period of time based on how much someone borrowed. The second is the Repayment Assistance Plan (or RAP), which is an income-driven repayment plan where borrowers’ payments are a certain percentage of their income. The more money someone makes, the higher that percentage climbs.

But while McMahon was right to say that the student loan repayment system is simpler, she left out the fact that dramatically limiting the number of repayment options will make it harder for people to finance their student loans according to their particular needs. “Streamlining this system is a really important goal,” Sattelmeyer said. “But one outcome of this plan is that payments would be more expensive for those at the bottom of the income ladder, and that’s a really important piece in terms of people’s ability to repay their loans.”

One analysis found that under RAP, monthly payments could be hundreds of dollars more than they are under the SAVE plan, which is currently blocked by the courts and will be officially eliminated under Trump’s new law.

The Urban Institute also pointed out that the bills passed by the House and Senate don’t account for inflation. “This means as incomes rise over time, borrowers who make the same amount of income in real terms would gradually pay a larger percentage of their income over time,” wrote Kristin Blagg, principal research associate in the Work, Education, and Labor Division at the Urban Institute.

The new caps on loans will make it harder to afford advanced degrees

As this bill was making its way through Congress, experts were warning that it could worsen America’s doctor shortage. That’s because the law imposes new limits on how much people can borrow for graduate school and scraps the Grad PLUS loan program, which allows students to take out enough loans to cover the full cost of their programs.

For graduate students, the new law puts an annual cap of $20,500 and a lifetime cap of $100,000 on borrowing. Those seeking professional degrees, like medical or law degrees, have an annual limit of $50,000 and a $200,000 lifetime limit. As I wrote last month, putting limits on how much money people can borrow is generally a good idea, because high amounts of debt can drag borrowers down and get them into financial trouble. The problem is that the limits this law imposes simply won’t be enough to cover the actual costs of these programs. According to the Association of American Medical Colleges, the median cost of attending a four-year medical program at a public school is over $280,000. At private institutions, it’s just under $400,000.

“Loan limits are an important tool but I think the challenge here is it can be hard to know where, exactly, to set those limits,” Sattelmeyer said. “The bill also doesn’t address a lot of the other underlying issues with the program: It does not provide additional grant aid or funding to low resource students. And so it is limiting loans and not necessarily providing additional resources.”

That means that some students might be pushed to turn to private lenders, who tend to charge higher interest rates. And in some cases, borrowers might not even qualify for those private loans. “Private student loans often require a cosigner, so some students may not qualify, and they may have no options to fully finance and attend graduate school. So there is a possibility that for some students, this will be a barrier to accessing graduate school,” Sara Partridge, associate director for higher education policy at the Center for American Progress, recently told Business Insider.

It’s difficult to predict both how this bill will affect the higher education system as a whole and just how much more expensive attending universities will be. What’s clear is that while Republicans tout their success in passing this bill, the Americans who stand to benefit from it the most are the wealthy, who will get yet more tax cuts, all while middle- and low-income households have a harder time cobbling together the funds to pay for a hard-earned degree.




Christian Horner during the F1 Bahrain Grand Prix on March 2, 2024. | Peter Fox/Formula 1 via Getty Images

Editor’s note, July 9, 2025, 10:45 am ET: On July 9, Red Bull’s Formula 1 team principal Christian Horner was fired. His removal followed a tough year for the team, one that saw dwindling performance; high turnover among critical engineers, staff, and drivers; and rumors about the potential exit of four-time world champion Max Verstappen. As of 10:30 am, a reason hadn’t been made public, but below you’ll find an explainer — initially published on March 8, 2024 — behind last year’s allegations of sexual harassment, a scandal that may have contributed to the downfall of the once-dominant team.

The line between spectacle and sport has always been blurry. But when it comes to Formula 1 — the global motorsport with some of the fastest cars and deepest pockets — the biggest institutional players seem to forget that there’s a difference between delicious drama and legitimate controversy.

It’s easy to mistake the two when so much of the elite sport’s success can be attributed to Drive to Survive, the addictive Netflix show that explains the intimidating technicalities and stakes of F1 through the catty interpersonal drama between teammates and competitors alike. My colleague Byrd Pinkerton described the show in 2021 this way: “Basically, imagine the Real Housewives, if the housewives were driving around at 300 kilometers an hour, and if occasionally one of the housewives caught on fire.”

What’s at stake?

Prior to reporting this story, I attended several races in the last few years, bonded with an increasingly diverse fan community, and spoke with pit crew personnel at the 2023 Miami Grand Prix. Here’s what you need to know:

The investigation and allegations against Christian Horner, explained
Who gets to control the narrative
What this means for women employees and fans in motorsport

Have questions? Email me: [email protected].

As a result, F1 has experienced years of explosive growth, especially in the US. More eyes have meant more races: You might have heard Las Vegas residents griping about the construction inconveniences and inaccessibility of last November’s brand-new street race. Or how Elon Musk, Shakira, and Vin Diesel rolled out to the Miami Grand Prix. There’s also the sheer stardom of the drivers: thirst traps of Lewis Hamilton going viral, Daniel Ricciardo gracing the cover of GQ, and endless brand deals for nearly everyone on the grid. (That’s where the 20 drivers start the race!)

Right now, though, a controversy might change how people see the ordinarily glitzy sport — especially the growing female fanbase F1 is all too happy to court via social media. In 2022, approximately 40 percent of fans were female, up 8 percent from five years ago, according to Stefano Domenicali, CEO of the Formula 1 Group.

In early February, energy drink company and team owner Red Bull launched an independent investigation into its Formula 1 team principal, Christian Horner, for potential misconduct toward a team member. Red Bull did not specify the details of the nature of the alleged misconduct nor who it was against, leading to rumors. Horner, who manages team strategy and personnel, was cleared a few days ahead of the start of the season.

A day after Red Bull dismissed the investigation, a Google Drive folder was anonymously sent to more than 100 reporters, other team principals, and members of the Fédération Internationale de l’Automobile (FIA), the governing body for the sport. The folder contains nearly 80 different files, most of which are screenshots of WhatsApp messages between what is allegedly Horner and a female employee.

Vox is currently in the process of verifying the images and videos. What we can say is that the undated messages include references to intimate acts, requests to delete messages, and what appears to be a nude picture. These WhatsApp messages share space with seemingly professional communication, including about whether this person could work from home.

Red Bull Racing, its parent company Red Bull GmbH, and Christian Horner did not respond to Vox’s requests for comment about the validity of the messages, the nature of the investigation, anything regarding potentially firing Horner, or whether they have opened any new investigations by time of publication.

While controversy isn’t new to F1 — just look at the cheating scandals, race interference, or the driver who lost his seat because his Russian oligarch father was sanctioned at the beginning of the war in Ukraine — the noise surrounding Horner should be more than fodder for a juicy season of Drive to Survive. It should prompt questions around how to best support the women pit crew members, engineers, assistants, and all other workers in a predominantly male sport. If F1 can’t protect its own female employees in the sport, then the efforts to create a pipeline for female drivers as well as make races safer for the recent influx of female fans will fall flat.

Currently, much of the coverage of the controversy is focused on whether Red Bull will be able to maintain its enormous lead in the Drivers’ and Constructors’ Championships, or speculation about the other person’s identity and motivations in the texts, as if this is gossip fodder about reality stars and not a workplace concern. As of Thursday morning, Red Bull had reportedly suspended a woman who accused Horner of inappropriate behavior. It’s not clear if she is the same person in the texts. And on Friday afternoon, it was reported that Red Bull adviser Helmut Marko could face suspension.

The way everything is shaking out feels more like gossip than a gut check on whether the sport’s institutions are capable of questioning breaches of power. From the FIA to the individual teams and their owners, those institutions have proven time and time again that they would rather hide behind claims that everything should “just be about the racing” than concern themselves with the people who make it possible. How Formula 1 decides to move from here will determine whether its efforts to support women employees and fans have any merit.

The investigation and allegations against Christian Horner, explained

Most Formula 1 fans would say the sport is pretty boring at the moment: Red Bull driver Max Verstappen, predictably, will take first position in most races. He’s in a league of his own. The engineers and leadership at Red Bull Racing guarantee that.

When you’re constantly crushing the competition, others try to find ways to take you down. There are several instances documented in Drive to Survive of petty reports to the FIA. So when Red Bull announced on February 5 that it would be looking into “certain allegations” made against team principal Christian Horner, it was hard to pin down whether this was an external attempt to get him fired for competition’s sake or whether there was veracity to the claims of misconduct. Red Bull’s intentionally vague statement to the media ushered in rumors about “aggressive management.” Of what kind? The company never explicitly said. And it still won’t.

“One of the biggest quotes from Drive to Survive is [motorsport commentator] Will Buxton always saying ‘In F1, when there’s smoke, there’s usually fire,’” said Kate Byrne, one of the founders of the fan community Two Girls 1 Formula. She explains that such internal matters rarely bubble up to the public, but she feels Red Bull weighing in was a red flag, saying it’s possible they wanted to “get in front of it by saying something. Red Bull will never do that on their own.”

Red Bull’s outside lawyer, whom the company refuses to name publicly, cleared Horner of any wrongdoing on February 28 — just ahead of the first Grand Prix of the season in Bahrain. “Red Bull is confident that the investigation has been fair, rigorous, and impartial,” Red Bull’s parent company said in another statement. “The investigation report is confidential and contains the private information of the parties and third parties who assisted in the investigation, and therefore we will not be commenting further out of respect for all concerned.”

Additionally, according to some reports, Red Bull was preparing to fire Horner as early as February 2 — three days before the investigation announcement — but he had allegedly insisted on arbitration. Horner has not made any statements regarding potentially being fired, and during this week’s press conference, he declined to give any more specifics.

“My wife has been phenomenally supportive, as has my family, but the intrusion on my family is now enough,” Horner said during a press conference Thursday in Saudi Arabia. “It is time now to focus on why we are here, which is to go Formula 1 racing.”

Who gets to control the narrative?

Horner’s career will likely be unaffected, given the portrayal of this investigation as simply a scandal rather than a potential workplace misconduct issue. F1 and Red Bull are certainly ensuring that Horner is able to do “business as normal.” So naturally, all eyes were on Red Bull’s two drivers, Max Verstappen and Checo Perez, to scope out their performance last weekend. There were concerns, some raised by Max’s father Jos, who is a former racer, that things at Red Bull might “explode” if nothing is done. If it’s true that tensions were rising within the team, it would be evident in the first race, no?

As if. Verstappen led the race by a wide margin, finishing about 20 seconds ahead of his teammate. Simply put, he was coasting, taking his monster of a car for a joy ride, while no competitors even came close. It was like watching Simone Biles on a playground. After Verstappen snagged first place, Horner was right in front of the podium with his wife, former Spice Girl Geri Halliwell, and the heir to the Red Bull fortune, Chalerm Yoovidhya. “Better to do your talking on the track,” Horner said after the race.


Focus on the continued dominance of the Red Bull team misses the point, however. Verstappen will continue to perform just fine regardless of tensions, but there are hundreds of women who work for F1 teams who, because of F1’s general mishandling of this whole situation, are being tacitly told that their experiences and safety within the sport don’t matter. If they speak up, they may be ridiculed — all because of the obsessive focus on winning races and ratings.

And yet, the FIA and Red Bull have made no statements or promises about how they’re going to support women (and nonbinary and trans) workers in light of the conversation around Horner. Even whispers of such behavior should prompt wider investigations across teams, a stronger whistleblower policy, and more concrete consequences for senior leadership who enter romantic relationships with employees without clearing it with the appropriate bodies. The FIA had not responded to a request for comment on whether it has started an investigation into Horner and Red Bull by the time of publication.

But given how each F1 team has its own CEO and operates in several different countries, internal protocols and processes vary, according to Nicole Sievers, co-founder of Two Girls 1 Formula.

“In a sport that’s so heavily male dominated, there are also likely pressures on women that are working for these teams,” Sievers said. “That’s a really tough position to be in, especially when you do see kind of the overwhelming majority of the ruling class of F1. We’ve seen that play out historically. Money just breeds power, which breeds a loss of a feeling of consequence.”

On top of that, so much of the makeup of the current media pool skews a certain way: older, male, white. This old guard has been covering F1 for a very long time. Coverage around sexism, racism, and other types of discrimination may not be top of mind.

Strict rules on F1 coverage don’t help. Typically in American sports, getting press accreditation is fairly routine, but F1 tightly controls who is eligible and what you can produce: Moving images are prohibited, credentials are denied to websites that don’t meet certain traffic figures for their F1 coverage, and relatively small-scale offenses can result in extended bans.

When there’s such a tight leash on reporters, asking hard questions with an eye for accountability becomes difficult. As Lily Herman, author of the F1 culture newsletter Engine Failure, explains, “There are all these fears that exist in F1 around, ‘Hey, if I speak up, I’m going to get my media access revoked or my entire media organization will if I do something wrong.’”

A perfect case study of this dynamic can be seen with Road & Track’s editor-in-chief Daniel Pund taking down a critical piece by Kate Wagner analyzing the opulence of Formula 1 because he felt it was the “wrong story” for the publication. The story itself was entertaining, but not necessarily news-breaking, and the removal raised questions about whether the FIA reached out (according to Pund, they haven’t).

What does this mean for women in motorsport?

How this story ultimately unfolds will demonstrate if F1 understands its changing fanbase and what it takes to make its employees feel supported, and whether it can get with the times.

“As someone who loves the sport, it’s definitely disappointing to see what’s going on right now,” said Lewis Hamilton, Mercedes driver and seven-time world champion, during media day for this weekend’s race in Saudi Arabia. “It doesn’t look good to the outside world. It doesn’t look good from the inside either. It’s a really pivotal moment for the sport, in terms of what we project to the world and how it’s handled.”

By sweeping such allegations under the rug, Red Bull may hope the various dramas of the sport will cause the news to die out. That’s wrong. If anything, F1’s growing female fanbase is hungry for accountability.

“We need to firmly stake our chair to the ground at the table and say: ‘We’re not leaving and we’re going to force you to be better,’” Byrne said. To her, what employees face mirrors the experiences of fans. You can’t improve one without the other, and turning off the sport won’t help, she added. “The only way you can do that is not by stopping, but by continuing to go on and being louder about it.”

And we’ve seen fans demand equal pay for female workers, safety protections for all fans regardless of gender identity at races, and increasing opportunities for female drivers through F1 Academy. The sport, of course, has promised to do better on all of these fronts. But talk is just talk if there aren’t actions to back it up when the stakes have never been higher. (F1 Academy, for instance, was criticized for not being broadcast during its debut in 2023.)

“I always question very openly what exactly the point of all of this is,” Herman said, “It’s great that women are getting opportunities like F1 Academy. But we should always kind of question what exactly those opportunities are, what they do, and who they don’t extend to. What’s just kind of lip service being paid in the moment of crisis?”

At the end of the day, we’re talking about people’s lives, not just some reality TV show. When winning is prioritized over well-being, we all lose.

Update, July 9, 2025: This story, initially published March 8, 2024, has been updated at the top with news of Horner’s firing from the Red Bull team.


From Vox via this RSS feed

Donald Trump, Melania Knauss, financier Jeffrey Epstein, and Ghislaine Maxwell

In February 2000, Donald Trump, his then-girlfriend and now-wife, former model Melania Knauss, Jeffrey Epstein, and British socialite Ghislaine Maxwell. | Davidoff Studios/Getty Images

Amid all the controversies of President Donald Trump’s second term so far, the one that may be causing him the biggest problems among his right-wing base is about a man who died six years ago: Jeffrey Epstein.

Epstein, the well-connected financier who was indicted for sex trafficking underage girls and died in prison in 2019, has been an obsession on the right ever since. Online MAGA influencers and Fox commentators have speculated for years that Epstein might have been murdered, that he was blackmailing powerful people, and that the US government is hiding information that would reveal what really happened.

In 2024 and the initial months of 2025, Trump allies like Pam Bondi and Kash Patel — the current attorney general and FBI director, respectively — played to the base and egged on these theories, promising to release the “Epstein Files.”

But now they can’t — or won’t — deliver.

On Monday, the Justice Department and FBI released a memo saying, basically, that they’ve got nothing. The memo says that, after a review, they found no Epstein “client list” and no “credible evidence” that he blackmailed people. The memo also restated the FBI’s conclusion that Epstein killed himself.

Key right-wing influencers reacted with fury, despair, or contempt — refusing to accept the administration’s assurances that there’s nothing to see here.

Trump was deeply annoyed to be asked about this by a reporter on Tuesday. “Are you still talking about Jeffrey Epstein?” he snapped. Insisting there were much more important things in the news, he continued: “That is unbelievable. I can’t believe you’re asking a question on Epstein.”

Of course, Epstein has long been an awkward topic for Trump, considering that they were friendly, that they’ve been photographed together, and that Trump told a reporter Epstein was a “terrific guy” who liked women “on the younger side” all the way back in 2002. (These connections have been resolutely ignored by online right influencers.)

Adding to the awkwardness, when Elon Musk fell out with Trump last month, Musk claimed that Trump was “in the Epstein Files” and that “that is the real reason they have not been made public.” (Ordinarily, the accusation from a top presidential adviser that the president was implicated in sex trafficking underage girls would be a gigantic scandal, but Musk does tend to make things up.)

The idea that Trump could be covering up his own Epstein-related crimes makes the right feel so uncomfortable that they’ve largely dismissed it. Many prefer to speculate that Epstein has ties to intelligence agencies — either in the US or Israel — that the government doesn’t want to disclose.

But if you’re a right-winger deeply bought into the belief that there must be more to the story, there are really only two possibilities: Either Trump, Bondi, and Patel are dupes getting snowed by the deep state, or they’re knowing participants in the cover-up.

Why the right is so fixated on Jeffrey Epstein

Epstein was a very wealthy financier who had a lot of famous friends, whom he often flew on his planes to his private island. He was also repeatedly accused, in criminal and civil proceedings, of sex trafficking and sexually abusing underage girls, including some as young as 14. A brush with the law over this in the mid-2000s ended in what was later derided as a sweetheart plea deal, but in July 2019, Epstein was arrested again — and then found dead in his cell the following month, in what authorities said was a suicide.

The documented story of Epstein is bizarre and damning as it is. But, on both the left and the right, many have insisted the conspiracy must go far deeper. Did Epstein have secret sexual blackmail material on powerful and influential people — politicians, celebrities — who were also involved in abusing underage girls? How did he make his money? Was he working with US or foreign intelligence services? Was he actually murdered in a shadowy conspiracy to prevent him from telling what he knew?

In particular, the online right has fixated on what they refer to as the “Epstein list” — an imagined document supposedly listing famous people who were co-conspirators in his sex crimes. The online right hoped this would be the promised smoking gun that would indisputably reveal that their political enemies are perverted criminals, disgracing them forever and likely leading them to be arrested.

As such, it’s the latest variation of the “QAnon” theory, which argued that proof of a deep-state conspiracy to protect Democratic pedophiles was imminent. (Democrats’ hopes for a Trump “pee tape” could be said to serve a similar role.)

Dark allusions to this theory are a proven strategy for aspiring MAGA influencers seeking online engagement. When JD Vance launched his political career in 2021, he tweeted: “Remember when we learned that our wealthiest and most powerful people were connected to a guy who ran a literal child sex trafficking ring? And then that guy died mysteriously in a jail? And now we just don’t talk about it.”

Why Trump’s team keeps screwing this up

Epstein first became a problem for Trump back in February of this year, when Bondi hyped an imminent release of Epstein-related information. When a Fox host asked her if she’d release “the list of Jeffrey Epstein’s clients,” Bondi answered, “It is sitting on my desk right now, to review.”

Bondi then invited right-wing influencers to the White House and gave them binders marked “The Epstein Files: Phase 1” — but the files turned out to contain no new or relevant information, and certainly no client list. Widespread bipartisan mockery ensued on social media.

As part of Bondi’s damage control, her team put out a letter to Patel, in which Bondi claimed she’d asked for all the Epstein documents, but had been belatedly tipped off that the FBI’s New York field office was withholding thousands of pages. Bondi demanded they be handed over, and ordered Patel “to conduct an immediate investigation into why my order to the FBI wasn’t followed.”

Months later, it’s clear that this, too, was BS.

And now, Bondi’s DOJ and Patel’s FBI have claimed they don’t have any such client list. When Bondi was asked Tuesday about her old claim on Fox that the list was “on her desk,” she said she was just referring to the Epstein Files generally, and not a client list specifically.

The simplest and most boring explanation for what happened here is that they really do have nothing — that the government is not sitting on bombshell intel as part of a massive cover-up of some secret Epstein conspiracy. That, when Bondi and Patel implied otherwise, they were pandering to the base, genuinely misinformed, or both.

“When you’re a podcast guest or a podcast host, you can make a bunch of wild claims that you think are probably true, whatever, it’s just content,” conservative activist Will Chamberlain posted on X. “When you’re a high-ranking official in the FBI, it’s not so simple.”

Not everyone is satisfied with that. “The Epstein case isn’t over,” Glenn Beck wrote. “It’s the Rosetta Stone of public trust. And if we don’t get to the bottom of it, we’ll never restore what’s already been lost.”



In 2023, my husband and I bought our house in southwest Colorado, in part, because it backed up to open space. That was the dream: trails just past the fence, a scrubby network of oak and sage stretching out into the hills beyond. But a little over a year into homeownership, I was questioning the wisdom of living so close to a burnable landscape.

This past winter’s spate of wildfires across Los Angeles made that fear of living alongside such a combustible landscape all the more real — a fear only intensified by the weather. In my town, winter and its all-important snow never really showed up. By spring, our snowpack was well below normal, winds were whipping, and I was becoming more paranoid about my wildfire risk.

It’s not just people like me — living on the edge of fire-prone terrain — who may be sharing that paranoia.

More than 100 million people across 20 states and Washington, DC, live in the path of increasingly fierce hurricanes. Most of the eastern half of the country is now at risk for tornadoes, and floods have increased in frequency and intensity in both coastal areas and river valleys. Over the Fourth of July weekend, extreme flooding in central Texas was among the most deadly of the past century. The National Oceanic and Atmospheric Administration’s (NOAA) National Centers for Environmental Information found that 2024 saw the second-most billion-dollar natural disasters on record for the US — right behind 2023. This year, with its higher-than-average fire activity and a predicted busy Atlantic hurricane season, is already shaping up to be significant, too.

At the same time, government cuts have undermined every critical juncture for disaster preparation. Federal programs for wildfire mitigation (proactive work like thinning forests and conducting prescribed burns, which helps prevent large-scale fires) have been halted due to staffing cutbacks and a lack of funds.

Cutbacks to the National Weather Service, through reduction in force at NOAA, have already led to gaps in forecasting, which makes it harder for the public to plan for extreme weather events like the Texas Hill Country floods this month or the deadly May tornadoes, which killed at least 27 people as they swept across Kentucky, Missouri, and Virginia.

And FEMA, the agency explicitly tasked with disaster relief, is shrinking. It has cut funding for its bipartisanly popular Building Resilient Infrastructure and Communities program, a major tool for building flood-resistant infrastructure, among other resiliency projects. The agency has lost some of its ability to help us recover, too. Already understaffed in 2024, FEMA has lost a third of its workforce since the beginning of the year. It has scaled back training, and it stands to lose $646 million in funding if the current federal budget proposal passes. As if that’s not enough, President Donald Trump has said he plans to phase out FEMA entirely after the 2025 hurricane season.

In the face of all that, I wanted to find out what all of us could do to limit our risk.

The first step was pretty basic: Instead of just spiraling about hypotheticals, figure out the specific risks in your area. For now, FEMA’s National Risk Index, where you can identify the threats to your community, remains a good source. By looking through the index, I learned my county is high in wildfire risks — which I already knew — but also that the area is prone to landslides, drought, and severe lightning storms.

Once I knew the risks, I looked at how I could prepare.

But the answers weren’t obvious. I reached out to both my regional FEMA office, whose contact was easy to find online, and the national headquarters, because I wanted to know what sort of concrete things I could do to protect my home — and what kind of support I might expect if the worst-case scenario happened to hit my community. I got a short email back saying that I should contact local authorities.

And so I started the real journey there, by looking at my local resources.

How to harden your home

The most meaningful thing you can do on your own is harden your home against the relevant disasters. I found online that my local fire department provides free wildfire assessments because it believes that reducing your own vulnerability also reduces community risk. “The less time I have to spend at your house,” Scott Nielsen, my local wildfire battalion chief, told me, the more time he can spend fighting other parts of the fire.

Nielsen says that when it comes to mitigating fire, we can’t change things like topography or weather, but we can change the fuel — and often that fuel includes our homes.

When Loren Russell, who works for the wildfire division of my regional fire district, came over to assess my risks, what he said surprised me: Instead of the overgrown hill behind the house, which had scared me, he was worried about nooks in the eaves or corners of the deck where embers could get caught. He also worried about the oily junipers in the yard, which could become ladder fuels that might allow fire to leap to the tree canopies, and about the ways those canopies connected, spreading sparks across the landscape.

Russell says it’s always the same few things that create risk. Looking at the splintering boards of my neighbor’s fence, he noted that he’d seen fires blow across whole subdivisions through fences. “Once embers are in a fence, it’s like a wick,” he says.

There are strategies for personal protection too — and not just for fire. FEMA says that if you live in a hurricane-prone area, you can install impact-resistant doors (particularly garage doors), storm shingles, and reinforced roof bracing, all of which help your house withstand storms. If you’re in a floodplain, you can seal cracks in your foundation, move your electrical boxes higher, or build berms and drains into your landscaping so water runs away from your home.

There can be a range of costs for those projects, from your own sweat equity for landscaping to tens of thousands of dollars for a new roof. But there can be grants and tools available to offset some of the cost, like Alabama’s program to help fund home strengthening, which is run through the state insurance office. Check your state resources, like the division of emergency management.

There’s no perfect formula for what to do. Russell says mitigation makes a real difference, but that risk is personal, because it’s also tied up in tolerance, and in trying to predict the future. “You go out and you build a concrete bunker and surround it with gravel, but,” he says, “now you live in a concrete bunker surrounded by gravel.”

Risk is personal — but what your neighbors do matters

Turning your home into a bunker is expensive, unappealing, and it might not make a difference in your broader risk tolerance. And unfortunately, it doesn’t really change your insurance liability. At least not yet.

My insurance agent told me that they don’t yet factor home hardening into their policies and pricing, even though simply being in a disaster-prone area can raise your premiums or make it harder to get insurance — and sometimes, insurance companies will simply dump policyholders in risk-prone areas. More than 100,000 Californians in fire-prone areas have lost their insurance in the past five years.

Those drops don’t necessarily reflect what’s happening on the ground. “We had one insurance agency that was pretty happy to drop people. I looked at their reports and didn’t find them to be based in fire science,” Nielsen says, about our area of western Colorado. He says they’re based on zip codes, which can be relatively arbitrary, instead of on the kind of terrain and fuel supply that actually make a difference to fires. And they almost never reflect mitigation work.

One of the only ways home hardening and mitigation makes a difference for insurance is when it’s done on a neighborhood scale. For instance, in 10 states, communities that have been certified as firewise through the National Fire Protection Association are able to get insurance discounts.

That is reflective of actual risk. “You really are impacted by your neighbor’s property,” says Max Moritz, a wildfire specialist at the University of California Santa Barbara. He says that the LA wildfires showed just how much broader-scale hardening — or lack thereof — impacts risk. But regional tools, like consistent fire hazard mapping programs or building code requirements for new construction, can significantly reduce risk. That’s true of other natural disasters, too.

Nielsen says that landscape-scale problems, like fire, need landscape-scale solutions. Home hardening is a piece we can control, but it’s networked into a bigger system of land management, risk tolerance, and policy. When a tornado or a hurricane comes, it doesn’t just hit one house.

Every slice (of cheese — yes, cheese) is important

Nielsen thinks about what’s commonly called the Swiss cheese model of risk assessment, in which multiple layers of protection back one another up. These layers include everything from the personal scale, like hardening your roof to withstand high-force winds; to local and regional projects, like floodplain mitigation or evacuation planning; to federal tools, like the National Weather Service or FEMA, which apply to the whole country.

You can visualize each layer as a slice of Swiss cheese in a sandwich. Every slice has holes, but ideally the holes don’t line up: the layers support one another and stop a threat from slipping through and becoming catastrophic.

That’s even more true for renters, or people who live in urban areas, who might not have as much control over their own homes and who are even more impacted by the places around them. Hurricanes have wreaked havoc on major cities. If that’s you, ask your landlord what they’ve done to harden the property, ask about past damage, consider supplemental renters insurance, and then get curious about municipal measures — storm drains (which divert water away from housing), evacuation routes, or fire mitigation, depending on your risks.

Having a lot of layers of Swiss cheese is especially important now.

It’s all connected. Preventative mitigation is networked into a broader system, but so is dealing with disasters when they come, whether they’re fires, floods, or storms. Marshalling national resources during and after large-scale disasters has been a federal responsibility since the 1970s. That kind of coordinated response is part of how we plan for natural disasters, but the current administration is planning to cut the budget and scope of FEMA and turn responsibility toward state and local governments, which aren’t always funded or prepared to manage large incidents.

The scientists and field workers I spoke to for this story told me they were worried about the lack of federal investment. Moritz says that he’s concerned about disaster response, but he’s also worried about understanding future preparedness. “Some of the big questions that we don’t have answers for yet rely on big labs and national level funding,” he says. “Research wise, a lot of Forest Service colleagues who do really good work in federal labs have been let go, or lost staff. Those are serious losses that will take a long time to get back from.”

He says that there are still big gaps in the research about exactly how home hardening fits into the puzzle of resilience, and what kind of choices are the most effective, but that in the face of that federal lack of support, the sort of things we can do individually or as a neighborhood collective become even more important.

When there are several fires burning at once — like in Los Angeles earlier this year — responding agencies are spread thin and every person might not be able to depend on their help, Moritz told me. That makes education shared among neighbors even more important. “That’s the scale you can make a difference,” he said.



An image of a stone column repeating in increasingly small, stacked layers. Abstracted American flag elements are peeking out from between the layers.

Prior to Kanye West, Lawrence Dennis was America’s most famous Black fascist.

Born in 1893, Dennis had European features and light skin that allowed him to pass for white, which he did for nearly his entire life. He attended Phillips Exeter Academy and Harvard University, served in the US military and diplomatic corps in the 1920s, and then emerged as a prominent journalist and public intellectual around 1930. His primary interest: what he saw as the looming collapse of the American economic and political model.

The twin shocks of World War I and the Great Depression proved, in his mind, that the liberal capitalist political order was doomed. The future, he thought, might be either fascist (his preference) or communist. Liberalism was on its way to extinction, its obsessions with individual rights and freedoms preventing it from adapting to a new world that demanded total states. He advanced this basic claim in a series of books and essays so prominent that, in 1941, Life magazine named him “America’s No. 1 Intellectual Fascist.”

A black and white photo of Lawrence Dennis walking up steps in a suit with a briefcase.

“I find the liberal theory and practice inadequate both to what I consider to be social requirements and to my own personal requirements… It has failed. It has proved inadequate. Therefore, by the inexorable law of the survival of the fittest, it is doomed,” Dennis wrote in a 1935 essay titled “Fascism for America.”

The coming decades would, as we all know, embarrass Dennis’s predictions. Yet for the past 10 years, strikingly similar arguments about liberalism’s obsolescence have played a defining role in American intellectual life.

The early 21st century, much like the early 20th, was defined by a series of shocks — 9/11, the 2008 financial crisis, and Donald Trump’s stunning 2016 victory. These events led a rising chorus of intellectuals on the right and the left to insist the liberal political order was on its last legs. Something new was coming. Whatever it was, it would obliterate the hidebound liberalism buckling at Trump’s feet.

Liberals, caught off-guard by events, started to wonder if their enemies had a point. In 2018, a right-wing assault on liberal politics — political theorist Patrick Deneen’s Why Liberalism Failed — became a surprise bestseller. Deneen’s radical argument, that “the only path to liberation…is liberation from liberalism itself,” earned him a place on Barack Obama’s list of the year’s best books.

In a 2019 article examining the rise of these arguments, I labeled the current era an “anti-liberal moment.” Critics of liberalism on the left and right were finding mainstream success, and liberals appeared unable or unwilling to properly defend against their critiques.

Yet I believe this period is ending — in fact, it may have ended already. The anti-liberal claims that liberalism has “failed” are looking increasingly like Lawrence Dennis’s predictions of its doom in 1935: wildly premature.

Of course, there is no parallel modern event to the source of Dennis’s humiliation: allied victory in World War II. Politically, liberalism is still in crisis, with President Donald Trump engaging in a multipronged offensive against the American constitutional order. Between Trump’s illiberalism and the success of similar leaders abroad, it’s far too early for liberals to declare “mission accomplished.”

What’s happening now is something more subtle, more inchoate: a kind of intellectual vibe shift. Anti-liberal forces that once seemed ascendant are weakening, and illiberal ideas are losing their luster. Prominent thinkers on the right and left, even some once seen as radical critics of liberalism, are reemphasizing liberalism’s virtues.

Perhaps surprisingly, the political crisis of liberalism caused by Trump is a major cause of liberalism’s improving intellectual outlook.

Defining liberalism

Philosophically, the term “liberalism” refers to something quite different than the “liberalism” discussed in American politics. Prior to Trump, the vast majority of Republicans were philosophical liberals — and many still are.

Liberalism in the philosophical sense is the broad family of political doctrines that center on equality and freedom: holding that the purpose of politics is to enable each and every citizen to live life according to their own vision and values.

Institutionally, liberals believe that realizing these values requires democracy, the rule of law, and strong legal rights to protect individuals from abuse by the state. Liberalism, in this sense, is the philosophical underpinning of democracy as we understand it today — operational in contexts as diverse as the United States, Japan, and Namibia.

While liberals agree in broad strokes about the purpose of politics, they often disagree among themselves on what their doctrine entails. There are liberal socialists and liberal libertarians, liberal cosmopolitans who support open borders and liberal nationalists who endorse tight restrictions on immigration. What they share is a commitment to resolving these disagreements through the liberal democratic process — through elections, free debate, and peaceful activism.

Non-liberal political theories, by contrast, reject one or more of liberalism’s premises. Perhaps they hold, on religious grounds, that people should not be free to live life as they choose, but should instead be pushed toward living according to religious scripture. Perhaps they do not believe that individuals deserve the right to criticize the state, as was the case under Soviet Communism or fascism.

Being non-liberal is not merely about criticizing the political status quo, but about attacking the basic premises that underpin liberal democracy itself.

Although the anti-liberal right has a newly powerful adherent in Vice President JD Vance, the administration’s record has divided its leading intellectuals and alienated liberals who used to give their ideas a respectful hearing. Simultaneously, the Trump administration’s naked authoritarianism has made leftists who once thought critiques of Trump’s “fascist” tendencies hysterical realize that maybe the liberals had a point.

Yet there are reasons to think that this is not just a temporary backlash, but reflective of anti-liberals’ deeper intellectual defects. The right’s anti-liberals are on weaker ground than their political success suggests, and the left’s anti-liberals may have barely existed in the first place.

The shifting intellectual winds are not proof that liberalism will ultimately triumph. Whether Trump succeeds or fails in fatally undermining American democracy will almost certainly hang on things less abstract than arguments in academic journals and little-read Substacks.

Yet ideas do matter. The influence of radical thought on Trump’s second-term policies vividly illustrates that political leaders and their staff pay more attention to these debates than you might think. The end of the anti-liberal moment could, for this reason, mark the beginning of a new era of liberal flourishing — if liberals can develop new answers of their own.

Sohrab Ahmari and the weakening of the postliberal right

Six years ago, journalist Sohrab Ahmari was at the cutting edge of right-wing intellectual radicalism. A Catholic convert who had adopted his faith with proverbial zeal, he had come to see modern social liberalism as an abomination that corroded the traditional values and social solidarity that made a good society possible.

He once, famously, declared the idea of drag queen story hour “demonic” — a sign of liberalism’s moral poverty. If liberalism meant drag queens reading to children, he said, then “to hell with liberal order.” He turned this revulsion into a credo, most notably in a vituperative essay blasting the center-right evangelical and religious liberty litigator David French. By asking the state to provide Christians freedom, rather than by seeking to make the state itself more Christian, French and his ilk were creating the conditions for their own extinction. The religious right could not exist within liberalism, per Ahmari’s view, but rather was pitted against it in a death struggle.

“Progressives understand that culture war means discrediting their opponents and weakening or destroying their institutions. Conservatives should approach the culture war with a similar realism,” he wrote.

These arguments put Ahmari at the forefront of a new intellectual movement — the “postliberal” right.

Together with a mostly Catholic group of thinkers, including Patrick Deneen and Harvard Law’s Adrian Vermeule, Ahmari led an assault on liberal ideas of tolerance and pluralism. Their most notable adherent was Vance, also an adult convert to Catholicism, who openly describes himself as a “postliberal” aiming to tear down the “regime” (by which he means the American liberal elite).

JD Vance and a professor before their panel discussion on “regime change and the future of liberalism” in 2023

Yet around the time the movement managed to place one of its own on the GOP ticket, Ahmari started to have second thoughts. In the spring 2024 edition of the liberal journal Liberties, he published a piece on “the poverty of the Catholic intellectual tradition” that read like a renunciation of his former radicalism.

Catholics who believed that “the whole order, the whole regime, is corrupt” — as he once did — were guilty of adopting a “dogmatic ahistorical posture” and fostering “an unhealthy and philosophically indefensible revulsion for the nation and its traditions.” Instead of radical critique, he argued, the Catholic right needed to develop an appreciation for what was good about the American political order.

“Critical patriotism and a return to the American center — the vital center redux — should be our watchwords, and this implies, first and foremost, a recognition that American democracy is itself a most precious common good,” he concludes.

When I spoke with Ahmari in May, he told me he did not regret his prior positions. He still believes, as a philosophical matter, that secular liberalism’s preoccupation with the individual inhibited its ability to address collective problems.

However, his practical politics have shifted profoundly. He believes the task is not tearing down the current liberal democratic order, but improving it. Once a Trump booster, he told me that his politics more closely resemble those of the New Deal tradition in America or Europe’s center-right Christian Democrats.

“The fundamental difference is that I [now] think the existing order is the best we could hope to achieve in our historical horizon,” he told me.

Ahmari’s new moderation is a notable data point indicating postliberalism’s decline: It is never a good sign when one of an intellectual movement’s leading lights abandons its core vision.

There are others. Kevin Vallier, a philosopher at the University of Toledo and critic of postliberalism, has tracked the number of essays published by postliberal publications and the interest in those publications’ output on social media. Both metrics indicate decline.

“They’re just doing a lot less ideological work, and it’s getting a lot less engagement,” he tells me.

Even some postliberals agree.

In a 2024 essay, Vermeule wrote that he had “lost interest in the liberalism debate,” and moved on instead to “applied practical projects” such as developing a “classical legal theory,” a stance he believes is widely shared within the movement. “The debates over high liberal theory that seemed so gripping in recent memory are no longer of much relevance,” he wrote.

Vermeule claims the decline in postliberal theory is, in part, a reflection of their success. And there’s an element of truth to this: At a moment where postliberals are staffing the Trump administration, it makes sense that their intellectuals would spend more effort on practical politics than on philosophical fencing with liberals.

Yet his victory dance, published a year before Trump was inaugurated, was premature. In fact, the Trump administration’s anti-liberal record has seriously damaged the credibility that movement had spent years building up.

In the spring, the New York Times’s David Brooks wrote an angry column calling postliberalism “the closest thing the Trump administration has to a guiding philosophy” and blaming it for “the moral rot at the core of Trumpism.”

He denounced its adherents by name: “Years ago, I used to slightly know both Deneen and Vance. JD has been in my home. We’ve gone out for drinks and coffee. Until Inauguration Day, I harbored him no ill will,” Brooks wrote. “But over the past four months, a small cabal at the top of the administration — including Trump, Vance, [Stephen] Miller and the O.M.B. director, Russell Vought — have brought a series of moral degradations to the nation.”

Why postliberalism is in decline

Brooks’s column points to perhaps the most obvious reason behind postliberalism’s intellectual decay: its close association with the unpopular Trump administration. Much as it was interesting and edgy to debate American fascism right up until the US entered World War II, the mainstream fascination with the forbidden postliberal vision has difficulty surviving a moment when its political champions are busy demolishing the foundations of American democracy and civil liberties.

Postliberalism has, intentionally or not, linked its future to the Trump experiment. If these four years end with Trump leaving office amid policy disaster and bottom-scraping approval ratings, postliberalism could suffer the same intellectual discrediting that neoconservatism did after the Iraq debacle.

But postliberalism is also facing internal pressures on the right, fissures that are weakening its intellectual coalition even as it ascends to power.

One such fissure, per Ahmari, arises from economics. While the postliberal movement is best known for its radical social conservatism, it also advanced a more interventionist approach to the economy. Its view of liberalism as indifferent to “the common good” applied even more cleanly to laissez-faire economics than it did to secular social liberalism.

From this point of view, the Trump administration’s record is dismal. Its attacks on the welfare state and regulatory agencies contravened the letter and the spirit of a postliberal economic vision. This was a major reason why Ahmari, in particular, has grown further apart from pro-Trump postliberals like Deneen and Vermeule: He has come to believe that the GOP cannot be a vehicle for meaningful economic reform.

“We always said that it would be easier to move the Republicans left economically than the Democrats right culturally. And I just don’t see the first half,” he tells me.

While economics split the movement’s leadership, culture is weakening its rank-and-file foundations.

The postliberal base has always been college-aged men or recent college grads, who gave it strength in factional right-wing disputes and enhanced its mainstream cachet: It always helps to be able to claim the mantle of the right’s intellectual future.

Yet recently, Ahmari has noticed young men flocking to an even more extreme faction — explicitly anti-democratic, often-racist thinkers he terms “Nietzschean” or “barbarian.” Tech monarchist Curtis Yarvin, fascist Bronze Age Pervert, neo-Nazi Nick Fuentes, and even increasingly radical Tucker Carlson have figured out how to outflank the postliberals from the right.

Intellectually, these figures are lightweights. Deneen and Vermeule are attempting to make rigorous and well-considered arguments; the “barbarians” are not. Yarvin, their most “serious” thinker, has lost whatever rigor he once had.

Their rise suggests that postliberalism always had shallow foundations. Its success among the youth came less from the strength of its arguments than from the sense that it was the most radical option available. But when outright bigotry became more normal in right-wing spaces, postliberals lost that relative advantage. In the drive to be the most “based,” Hitler worship would always trump political Catholicism.

“In terms of online purchase, that has taken off a lot more,” Ahmari says, adding that he has become so critical of this group that some on the right probably think he sounds like a “progressive scold.” But if forced to choose between a right where the extremists call the shots, and an alliance with liberals and the left, Ahmari knows where he would stand.

“I will happily join an old-school united front against the barbarians,” he tells me. “Skull-measurers, IQ-worshippers — it’s really the most terrifying politics there is.”

The left’s fizzled challenge to liberalism

Samuel Moyn is one of the foremost left-wing critics of Trump-era liberalism. A law professor at Yale, Moyn co-authored a 2017 New York Times op-ed warning of the risks of “tyrannophobia” — “the belief that the overwhelmingly important political issue is the threat to our liberal freedoms and institutions.”

Tyrannophobia, he and co-author David Priestland argued, historically had “a tendency to redirect our attention from underlying social and economic problems [which have] often been the real source of danger.” In their view, the liberal reaction to Trump was falling into this pattern: “It is easier to believe that democracy is under siege than to acknowledge that democracy put Mr. Trump in power — and only more economic fairness and solidarity can keep populists like him out.”

Moyn fleshed this critique out in his 2023 book Liberalism Against Itself. An intellectual history of the “Cold War liberalism” embodied by figures like Isaiah Berlin and Judith Shklar, the book argues that their preoccupation with the specter of totalitarianism led them to embrace a tyrannophobic politics that abandoned core liberal commitments to social equality.

For the past decade, he has insisted that we have been living through a similar error — that liberals describing Trump as an authoritarian or “fascist” were overstating his dangers. And, in doing so, they were throttling the prospects for the emergence of a more authentically left-wing Democratic Party that could fight Trump and win.

These arguments helped burnish Moyn’s reputation as a fierce critic of liberalism; a recent Liberties essay on his work was titled “The Anti-Liberal.”

Yet, as the essay notes, Moyn has always seen his project not as an attempt to destroy liberalism but as one to rescue it.

“My view is that liberalism deserved another chance in the face of its right-wing opponents,” he told me in a June conversation. He saw his critique, along with those from other left-wing radicals, as “an occasion for diagnosing genuine shortcomings — not the ones the right charged — and, in a sense, to save liberalism from itself.”

This has been a consistent theme in Moyn’s work. But now he is more contrite. In the wake of Trump’s naked second-term authoritarianism, he says he is “willing to concede that maybe we went too far” — that tyrannophobic liberals may have had a point.

“I didn’t adequately [foresee] all of the future dangers that came later. I have to own that,” he told me.

In this sense, Moyn is like Ahmari — both admitting that their prior attacks on liberalism were at least somewhat too aggressive. But the differences between the two men mirror differences between the modern right and left more broadly, and explain why left anti-liberalism, which recently seemed ascendant in a manner similar to postliberalism, has undergone a very different sort of decline.

Postliberalism was, at least at one point, a genuine alternative to liberalism. But left anti-liberalism was always overstated — not only by the left’s centrist critics, but often by leftists themselves.

Its weakening is less a story of decline than of a bubble popping.

The late 2010s and early 2020s saw the rise of two distinct, but related, strands of radical left-wing politics. One was socialist, taking class and economic inequality as its primary interests; the other centered on identity-based oppression. The first, Moyn’s faction, rose alongside Bernie Sanders’s 2016 primary bid. The second rose in the late-2010s “Great Awokening” and peaked around the 2020 George Floyd protests. Both movements took inspiration from academic and activist radical circles in which the words “liberal” and “liberalism” are used as slurs.

“The Marxist superego has defined the left so deeply,” Moyn says, “that it’s very hard to say a good word about liberalism.”

These movements’ vocal distaste for “liberals,” oft-expressed on social media and in podcasts, gave the rising left a veneer of philosophical radicalism — one convenient both to the leftists and their foils on the center and on the right. For both the socialist and identity-based left, attacking “liberals” was a way to distinguish their politics from hated figures like Hillary Clinton. For their enemies on the center and right, the anti-liberal discourse was proof that these movements were dire threats to American freedom.

Still, the left of the late 2010s and early 2020s never produced a sustained alternative to liberalism as a philosophy.

There were no serious efforts during this time to challenge liberalism’s view that individual freedom was socially vital, and that liberal limitations on state power — like, say, First Amendment protections — were necessary. There was no figure akin to Herbert Marcuse, the 1960s radical professor who attacked liberal tolerance and championed violent revolution — let alone any prominent Stalinists or Maoists. While Vermeule and Deneen were advancing radical alternatives to liberalism from the right, the left generally offered radical-sounding critique paired with reformist policy.

This can be seen in their choice of champions — figures like Bernie Sanders, Alexandria Ocasio-Cortez, and (most recently) Zohran Mamdani, all democratic socialists who respect election results and advance policy agendas that would do little to challenge the basic foundations of American liberalism. It can also be seen in the reformist policies the modern left champions: ideas like minimum wage hikes and police reforms that, while perhaps misguided, were enacted through the democratic process and posed no threat to its continued survival.

It’s not that the modern left is incapable of anti-liberal politics. At several points over the last 10 years, some in its ranks have advanced ideas and taken actions that were genuinely illiberal. But these tendencies have either fizzled out or were always overstated.

One example, on the socialist left, was a sub-faction that valorized a politics of cruelty and personal viciousness. This so-called dirtbag left, revolving around the Chapo Trap House podcast, not only made a sport of demonizing their enemies but actively celebrated efforts to harass and intimidate their political opponents into silence. It was a profoundly and proudly illiberal political style, one opposed to the open debate and mutual respect that liberal political culture depends on.

Yet the dirtbag tendency no longer plays a pivotal role in the modern left, and the energy that once surrounded it has largely dissipated. Some of its defining voices have become apolitical or, like the hosts of the Red Scare podcast, defected to Trumpism. Chapo still has a sizable audience, but its once-precipitous growth has halted.

The equivalent phenomenon on the identity left would, in theory, be “cancel culture”: a catch-all label used to describe everything from college students violently silencing right-wing speakers to campaigns to get journalists fired when their writing contravened left-wing pieties. For roughly the past 10 years, but especially in the wake of the Floyd uprising, voices on the center warned that this aggressive species of “wokeness” posed a mortal threat to American liberalism.

Yet there was always less here than met the eye. There was never an organized ideological hub, akin to the dirtbag left, that openly and proudly celebrated illiberalism as a political virtue. There were, instead, individual instances encouraged by local student groups and decentralized social mobs — a problem, to be sure, but not one unique to the identity left. The best available systematic data showed that the scope of this problem was dramatically overstated — with Trump’s current crusade against higher education showing us what genuinely dangerous illiberalism in that sphere looks like.

Today, with the overall phenomenon of “wokeness” in clear decline, you scarcely hear about the left’s cancel culture anymore. Even observers who formerly fretted about left-wing intolerance — like sociologist Musa al-Gharbi — have now concluded that the problem is in the past.

Why the left’s anti-liberalism was a mirage

This conclusion — that left anti-liberalism was a paper tiger — was not exactly what I would have expected a few years ago.

On the one hand, I’m not surprised that “wokeness” and “cancel culture” were never the threat to liberalism they were hyped up to be. I said so at the time, repeatedly. While there are many persuasive criticisms of left-wing politics during “peak woke” — like its embrace of substantively bad policies such as educational detracking — these are very different from the claim that wokeness was destroying basic philosophical commitments to liberalism. Events since have shown that it was not.

What does surprise me, however, is that the socialist left did not produce something akin to postliberalism: a systematic alternative political vision that challenged core liberal premises about the need to restrict government power over the individual. The Marxist tradition has a long track record of theorizing such alternatives, and the anti-liberal moment offered ideal conditions for such a systemic challenge to catch fire in American intellectual life.

But this never happened. Why?

Part of the answer rests in the anti-liberal left’s political weakness relative to the postliberal right — a weakness on display as early as 2020. Sanders’s primary defeat was a devastating blow for the dirtbag left, which had hitched its political and intellectual wagon to his success. And some disastrous illiberal experiments during the George Floyd uprising — most notably, the short-lived anarchist revolution in central Seattle — discredited the idea that identity left activists should be going beyond reformism.

More recently, the shock of the Trump administration’s naked authoritarianism has prompted critics of anti-Trump liberalism — not just Moyn, but also prominent political theorist Corey Robin — to admit the president represents a true authoritarian crisis. There is no time for anti-liberalism when fascists are at the gates.

But I think there’s something more philosophical at work here as well. Both sub-factions of the American radical left have reconciled themselves to operating within liberalism in a way that their peers on the right simply have not.

The full-spectrum failure of communist authoritarianism, ranging from economic catastrophe to mass murder, prompted a searching rethink of the older left’s critique of liberal democracy as a bourgeois fiction. Today, modern leftists overwhelmingly affirm the value of democratic elections and individual rights protections.

While calls to end capitalism are omnipresent on the radical left, few believe this can be accomplished imminently — instead proposing a more gradual series of reforms through the liberal-democratic process designed to make society more equal and, in the long run, open up the horizons of political possibility to deeper structural change.

Moreover, much of the modern left focuses its critical attention on neoliberalism: the post-1970s laissez-faire worldview embodied by Reagan on the right and Bill Clinton on the center-left. But attacking neoliberalism is entirely possible within the confines of a broadly liberal politics — a point Joe Biden demonstrated when he adopted “postneoliberal” economic thinking on issues ranging from trade to antitrust.

So while the postliberal right talks openly of “regime change” as a near-term goal, the radical left calls for police reform and free public buses in New York City. The right is postliberal; the left is postneoliberal.

This form of radicalism does not, at least in the near term, actually threaten to topple liberalism. The attacks on “liberals” are either an imprecise shorthand for criticisms of the centrist Democratic establishment, or else a kind of irritable mental gesture lacking philosophical substance.

The point is not that there is no radical left in America. Rather, it is that the radical left’s frequent use of “liberal” as a slur masks deeper concessions to liberal politics.

Where do we go from here?

With postliberalism looking increasingly unattractive, and the left offering no clear and compelling alternative to liberalism, liberals have an opening to reinvent themselves. Such a new liberalism would not merely attempt to rebut its opponents or “defend democracy” from Trump, but rather develop a new articulation of its own ideas to address modern problems.

The most promising sign, at present, is the rise of “abundance liberalism” — a new liberal economic vision that has found its clearest expression in Ezra Klein and Derek Thompson’s bestselling book.

Abundance liberals believe that government can and should accomplish great things: Their mantra is “a liberalism that builds.” Their core insight is that the barriers to building are not merely a lack of ambition or adequate spending, as some leftists suggest, but rather government itself. In their view, the state has so hobbled itself through well-intentioned regulations — like zoning restrictions and lengthy environmental reviews — that it simply cannot deliver what citizens want of it. To save the liberal project, they argue, we need to repudiate some of its past accomplishments.

What makes abundance liberalism philosophically notable is not its bundle of policy ideas per se. It’s that abundance represents an attempt to operationalize liberal principles in order to address one of the most distinctive social challenges of the 21st century: the collapse of government legitimacy.

Part of what fueled the Trump movement, and the anti-liberal intellectual tide that came with it, was a sense that people were fundamentally dissatisfied with the way politics worked. The postliberal right blamed this dissatisfaction on liberalism’s attack on traditional moral values; the left blamed it on rising inequality under neoliberalism.

Abundance liberalism, by contrast, locates this legitimacy crisis in a gap between what people expect of government and what it actually does. Its solution to this problem is rooted in a liberal theory of power — a point Klein made explicitly in a recent New York Times column responding to his left-wing critics.

A wholesale reconstruction of liberal politics, though, would be a broader project than even abundance liberalism’s call for major policy reform. It would require a new ideological story about what our society is for and what it is about — one that can command basic allegiance from people who disagree profoundly on everything ranging from zoning reform to abortion.

Liberalism does not, at its heart, aim to be one political doctrine among many (in the sense that abundance liberalism competes with other economic visions, like postneoliberalism). Liberalism instead hopes to define the very system through which political disagreement happens: to create institutions, like elections and a free press, that guarantee that citizens can participate as equals in the political realm and live private lives as they so choose.

John Rawls, one of 20th-century liberalism’s giants, defined political liberalism as an attempt to create “a just and stable society of free and equal citizens who remain profoundly divided by reasonable religious, philosophical, and moral doctrines.” Such a liberalism should be able to command basic allegiance from everyone who believes their fellow citizens deserve basic respect and reciprocal rights — a group that ranges from conservative Catholics like Ahmari to democratic socialists like Moyn to left-liberal technocrats like Klein and Thompson.

The greatest task of modern liberalism is to discover the political technology necessary to build this consensus liberalism: to develop ideas, narratives, and policies addressing the specific frustrations with the political status quo that gave rise to the anti-liberal moment in the first place.

What such a liberalism might look like is still unclear. But the rise of abundance liberalism shows that American liberals retain the capacity to think creatively about adapting their basic principles to changing circumstances.

As their rivals weaken, their ambitions should widen accordingly.


From Vox via this RSS feed

The upper front face of the US Supreme Court building is seen in Washington, DC, framed by trees.

The US Supreme Court in Washington, DC, on June 27, 2025. | Graeme Sloan/Bloomberg via Getty Images

This story appeared in The Logoff, a daily newsletter that helps you stay informed about the Trump administration without letting political news take over your life.

Welcome to The Logoff: President Donald Trump can proceed with his campaign of mass firings in the federal government, the Supreme Court announced today, clearing the way for a dramatic hollowing-out of agencies that may be hard to reverse.

What does Trump want to do? In February, the Trump administration directed federal agencies to prepare for “large-scale reductions in force” — in other words, mass layoffs and reorganizations of the federal workforce. Those layoffs had been blocked by a lower court, but can now go forward.

What did the Court say? Tuesday’s decision isn’t a final ruling on whether Trump can fire federal employees en masse — but it will likely have a seismic impact. It allows the administration to proceed with layoffs pending a final decision on the merits, and that damage may be hard to undo even if the administration later loses its case.

Which agencies could be impacted? The full scope of the administration’s cuts is uncertain, but Trump has previously proposed cutting more than 100,000 federal jobs. And what we do know about the administration’s layoff plans paints a grim picture: entire federal programs and agencies could be laid to waste, plaintiffs in the case now before the Court allege. Seven thousand employees of the Social Security Administration, which is already suffering from staffing shortages, could be fired, as could as much as half of the Department of Energy’s workforce.

What’s the big picture? Tuesday’s order significantly expands Trump’s power to put his own stamp on the federal government by dismantling swaths of its workforce, and simultaneously weakens Congress’s role in the separation of powers.

As the California district court judge who initially blocked Trump’s order wrote: “After dramatic staff reductions, these agencies will not be able to do what Congress has directed them to do.” At least for now, the Supreme Court has allowed Trump to proceed with dismantling them anyway.

And with that, it’s time to log off…

The first drug to treat malaria infections in newborn babies has been approved in Switzerland and could soon be cleared for use in at least eight sub-Saharan African countries, where malaria remains a serious threat. More than three-quarters of malaria deaths in the World Health Organization’s Africa region are children under five, and the drug promises to close a key treatment gap. It’s a bit of unvarnished good news for your evening, and an important reminder of the value of science. Have a great evening and we’ll see you back here tomorrow!


Protestors hold up signs saying “MEDICAID SAVES LIVES” and “REPUBLICAN CUTS KILL.”

Care workers with the Service Employees International Union (SEIU) chant and rally outside the US Capitol on June 26, 2025 in Washington, DC. | Tasos Katopodis/Getty Images for SEIU

Last week, the Senate passed the largest cut to America’s welfare state in modern history. President Donald Trump’s inaptly named “One Big Beautiful Bill” would take health insurance from 11.8 million Americans and slash food assistance to low-income families by about $100 a month.

In the face of this unprecedented setback, some progressives argue that the Democratic Party must rethink its entire approach to social welfare policy.

Over the past half-century, Democrats have expanded the welfare state primarily through means-tested programs — benefits that are reserved for Americans below a certain income threshold. The left has long criticized this model, arguing that the party should instead pursue universal programs, which is to say, programs that don’t screen beneficiaries by affluence (for example, Social Security provides pensions to all senior citizens with a sufficient work history, irrespective of their wealth, making the program “universal” in this sense of the term). After all, such policies require less bureaucracy to administer, more reliably reach their intended beneficiaries, and don’t punish people for getting a raise (unlike some means-tested programs, which can abruptly cut off benefits to a person whose income rises).

In my view, these substantive arguments for favoring universal benefits over means-tested ones are strong. But debate on this subject tends to focus on political considerations. After all, a policy that is optimally designed — but legislatively nonviable — helps no one.

Many proponents of means-tested programs recognize their flaws but insist that such policies are the most politically sound way of aiding the disadvantaged in the United States. The American public’s tolerance for higher taxes or deficits is limited (at least, in today’s context, when higher deficit spending is liable to push up inflation or interest rates). As a result, the politicians who make or break Democratic majorities — which is to say, the lawmakers who win tight races in purple areas — have historically had a limited appetite for new social spending. Given that reality, some argue that the most effective way to combat inequality and disadvantage in the US is by targeting social benefits at the needier segments of the population.

But progressives have countered that this is politically myopic. In their account, means-tested programs are more vulnerable to repeal than universal ones, which boast the buy-in of more powerful segments of society. Republicans are constantly trying to cut benefits for the poor, but tend to be warier about slashing Medicare and Social Security. If what one truly cares about is expanding the welfare state in a politically durable manner, this reasoning goes, you should make new benefits universal — even if that entails a higher upfront fiscal cost.

Some in this camp have cast Trump’s bill as vindication of their analysis. After all, the president’s legislation cuts means-tested programs like Medicaid and food stamps, yet spared Medicare and Social Security. Therefore, in the long run, universal benefits are actually more politically sound than the alternative.

But I don’t think this follows, for at least two reasons:

First, it’s not clear that Medicare and Social Security evaded cuts because they are universal. Rather, benefits for the elderly may boast unique political strengths, which are not shared by all universal programs. And the longer-term history of American social policy supports this interpretation. Politically, universality does not appear to be the key determinant of a policy’s survival or growth.

Second, even if universal programs are more politically resilient once established, they could still be more challenging than means-tested policies to create in the first place.

Perhaps Democrats should put optimal policy above political expediency. As Republicans just demonstrated, parties can enact divisive policies with slim congressional majorities, if they really want to. The next Democratic government could likewise choose to enact large tax increases for the sake of funding new universal programs. But the strongest arguments for that course of action are substantive, not political.

Universality does not guarantee political strength

To discern whether means-tested programs have been uniquely vulnerable to cuts in the United States, it’s worth looking at long-term trends. And when we examine the events of the past few decades — rather than those of the past few days — universal policies’ political superiority looks uncertain.

For one thing, Republicans have in fact cut some universal programs quite a bit over the past half century. Under Ronald Reagan, Congress cut Social Security benefits substantially by raising the retirement age. And subsequent governments have repeatedly enacted cuts to Medicare, although these have been concentrated on providers and not beneficiaries. It remains true that America’s elder benefits have held up remarkably well against cuts, given their ballooning costs. But even these programs have not been wholly immune to attack.

The trajectory of unemployment insurance (UI) is more telling. UI is a universal program, benefiting all Americans who lose their jobs, regardless of their former incomes or current wealth. Yet the federal government cut UI benefits in the 1980s, and many states slashed funding further in the 2010s. Throughout the latter decade, fewer than 30 percent of America’s unemployed received benefits in an average month, down from 41 percent in 1979.

On the other hand, since Reagan’s election, means-tested social benefits have actually grown substantially. Even as Social Security benefits fell over the course of the 1980s, America’s average monthly food assistance payment rose by 10 percent, while eligibility for Medicaid expanded. Between 1990 and 2019, meanwhile, access to Medicaid grew dramatically, as did the average benefit from the Earned Income Tax Credit. Altogether, between 1979 and 2019, spending on targeted welfare programs grew by 280 percent, after accounting for inflation and population growth, according to an analysis by Robert Greenstein, founder of the Center on Budget and Policy Priorities. (A significant portion of that growth can be attributed to rising healthcare costs, which have mechanically pushed up spending on Medicaid. But such inflation does not explain anywhere close to the full growth of social spending over the period. And in any case, it is notable that, until this year, the government agreed to eat these cost increases, rather than responding to them by reducing Medicaid benefits or eligibility.)

Number of tax returns with the EITC

To be sure, some means-tested programs suffered cuts or full repeal during this period. Most notably, under Bill Clinton, Congress ended Aid to Families with Dependent Children. However, on net, targeted anti-poverty spending has grown massively over the past half-century. And this translated into lower poverty: In 1993, America’s tax and transfer system reduced its poverty rate by 21 percent; in 2019, taxes and transfers cut the poverty rate by 47 percent.

This history should inform how we interpret the One Big Beautiful Bill Act’s social welfare cuts. It’s undoubtedly true that Republicans have proven more skittish about cutting Medicare and Social Security than many means-tested programs. But since the GOP has been quite willing to slash unemployment insurance in recent years, it’s not clear that Medicare and Social Security’s political strength derives from their universality. Rather, those programs might owe their resilience to the facts that:

- They are both tied to a beneficiary’s work history and are therefore perceived as “earned.”
- They benefit the elderly, whom Americans widely regard as deserving of social support, due to their relative difficulty in working.
- Older Americans vote at highly disproportionate rates, which makes it riskier to cut benefits earmarked for them specifically.

This interpretation of Medicare and Social Security’s political strength is consistent with the GOP’s recent cuts to benefits for the jobless and decision to slash Medicaid through the expansion of work requirements: Aid to prime-age Americans who are perceived as choosing not to work is more vulnerable than benefits for senior citizens who purportedly “earned” them through a lifetime of labor.

Even if universal programs are more resistant to repeal, they’re still more difficult to establish

All this said, it still seems plausible to me that universal programs are somewhat more politically secure — once established — than means-tested ones. At the very least, social benefits that are narrowly tailored to the very poorest have not fared especially well. Even Robert Greenstein, whose work challenges the political case for universality, argues that policies that serve both the poor and a “sizable share of the middle class” tend to be on stabler footing than programs that serve the indigent alone.

But the fundamental challenge facing Democrats is how to establish new social benefits in the first place. Raising taxes in the United States is politically difficult, and has been for virtually our entire history outside of wartime. As Matt Yglesias notes, federal tax revenues surged as a share of GDP with America’s entry into World War II, but have never increased all that much since then.

[FRED chart: federal tax receipts as a share of GDP]

Trump’s bill will only increase the political headwinds facing new spending. Democrats have forsworn any tax increases on voters earning less than $400,000 in recent years. Now, Trump has lowered the baseline rate of taxation on this vast group, reducing the funds available to a future Democratic government in the absence of tax increases. The president’s legislation also swells the debt, thereby reducing the likelihood that future lawmakers will have lots of room to deficit spend without risking inflation or interest rate hikes.

So long as Americans remain averse to large tax increases, Democratic candidates who win narrow elections in swing districts are likely to resist large increases in social spending. To the extent that Democrats need such candidates to secure congressional majorities, the pool of financing available for social welfare spending is liable to be fixed. And as long as that’s true, there will be a political argument for means-testing new programs.

To be sure, some universal benefits are relatively affordable — we don’t need large middle-class tax hikes to provide all schoolchildren with free lunch. But other programs, such as a monthly child allowance, are quite expensive to provide on a universal basis. It may therefore be more politically viable to expand child benefits for poor and lower-middle-class families than for American households writ large.

You can just do things

Of course, to some extent, political viability is in the eye of the office-holder. We just saw Republican House members from swing districts vote for politically toxic Medicaid cuts. Given that parties almost always suffer big midterm losses no matter what they do, maybe the next Democratic government should emulate the GOP’s boldness and fund large, universal welfare programs with middle-class tax hikes, backlash be damned.

There are reasons to be wary of this logic. One lesson of the past few months is that keeping the Republican Party out of power is extremely important. Another is that the GOP can’t be trusted to preserve even popular, broadly beneficial social programs. It’s worth mentioning that Medicaid is a beloved program that enjoys an 83 percent approval rating.

Given these realities, it’s hard to be certain that Democrats can do more lasting good by worrying less about retaining power once they acquire it.

Still, the substantive case for favoring universal programs over means-tested ones is strong.

For one thing, universal policies are easier for the government to administer and for beneficiaries to claim. Imagine a world where only people who earned less than $50,000 a year were entitled to borrow books from libraries. The burden of needing to prove one’s eligibility would likely deter some individuals from bothering to take out books, while libraries would need to spend a lot of time and money on verifying borrowers’ incomes, increasing their administrative costs per book lent.

These same pathologies plague many actual means-tested programs. People who are eligible for anti-poverty benefits routinely fail to claim them. This problem is so widespread that, if everyone who qualified for public benefits received their due, poverty would decline by more than 30 percent in the US, according to an Urban Institute report.

So, there are some good reasons for Democrats to favor universalistic welfare programs. But these have less to do with means-testing’s supposed political flaws than its policy downsides.


From Vox via this RSS feed


Scattered debris, including vehicles and equipment, are seen in Kerrville, Texas, on July 5, 2025, following severe flooding. | Eric Vryn/Getty Images

In the wake of deadly flooding in Texas, my colleague Noel King, who cohosts the Today, Explained podcast, spoke with CNN senior climate reporter Andrew Freedman about what we know about the impact of cuts to the National Weather Service and what those cuts could mean for future disasters.

This conversation originally appeared on the Today, Explained podcast and was featured in the Today, Explained newsletter, which you can subscribe to here.

Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

What do we know about [the two weather forecasting offices in Texas]? Were they fully staffed?

The term that I would use is they were “adequately staffed.” They were staffed to the extent that most offices would be staffed overnight during the night of the horrible, tragic flooding. They each are missing some employees.

In the case of the Austin/San Antonio office, they’re missing a warning coordination meteorologist and a science and operations officer. Both of those positions were opened up because people took early retirement under the Trump administration’s program to reduce the size of the federal government.

And then the office in San Angelo, Texas, is missing a meteorologist in charge. So that’s like the top meteorologist for the office. These staff cuts in those two particular offices do not seem to have had a major impact or even a very clear impact on the way the weather service performed during this event.

What I think I’m hearing is that your reporting does not seem to suggest at this point that the disaster in Texas was caused by the Trump administration cuts. Am I hearing that right?

You are hearing that right. The missing warning coordination meteorologist in the Austin office…that’s the biggest question. Because that position is somebody who interacts with emergency managers who are then responsible for warning the community.

But when I’ve talked to sources, they basically are saying it probably did not affect the timeliness of the warnings, the tone of the warnings, which ratcheted up very quickly, as well as how early they caught on to this event possibly taking place.

Do we know why that top meteorologist position in San Angelo was unfilled?

I don’t know specifically offhand. However, there are offices around the country that are without their meteorologist-in-charge. The National Weather Service lost about 600 people. They did get hit hard by DOGE cuts.

What’s happened with these cuts is they’re losing the most experienced people. The people who are near retirement age are taking these packages and are leaving. And then you have to train up people who are coming in, and they don’t have the same level of experience.

But there is one point that’s really important here because the cuts have been so deep around the country: This is a question that’s going to come up at every single disaster that takes place. Because the meteorologists that I’ve been talking to in the broader community are basically telling me this doesn’t seem to be the event where vacancies caused a tragedy. But mark my words that that will happen at some point if this isn’t addressed.

What are your sources worried about?

They’re concerned about hurricane season. This event actually was the remnants of Tropical Storm Barry. And that rained out over inland Texas. We also just had Tropical Storm Chantal that made landfall in South Carolina and is causing flooding today and caused flooding over the weekend. So it’s just this sense in the community that they’re really stretched thin, that people are working longer shifts, people are working six-day weeks.

When a disaster hits, FEMA is often there to pick up the pieces, to help clear up the wreckage. The Trump administration has also made cuts to FEMA. Do we know how broad those are?

The cuts to FEMA are quite broad. They really are quite deep. The administration has talked about getting rid of FEMA after the hurricane season. However, they pretty much kind of stopped that process as hurricane season came upon us.

These cuts are being made to the National Weather Service, FEMA, in the name of cost savings. Does this actually save money?

That’s a very good question. I don’t know that I can answer that conclusively here. I can tell you, though, that there are enormous economic benefits to accurate weather forecasts. Studies have shown savings of billions of dollars with more accurate hurricane track forecasts, for example.

If you look at the proposals that the administration has put forward for the 2026 budget, they want to actually close every single weather and climate lab that we have in this country. So the people who are working on improving flash flood forecasts, the people who are working on improving hurricane forecasts, and doing the research that would then lead to the forecast capabilities of tomorrow would be out of a job if that budget were enacted as proposed.




A black-and-white photo of a diverse group of schoolchildren standing around their teacher, who is holding an open book at her desk.

Every month or so, for the past few years, a new dire story has warned of how American children, from elementary school to college age, can no longer read. And every time I read one of these stories, I find myself conflicted.

On the one hand, I am aware that every generation complains that the kids who come next are doing everything wrong and have gotten stupider and less respectful. I fear falling into this trap myself, becoming an old man yelling at cloud.

On the other hand, with every new story, I find myself asking: … Can the kids read, though?

I don’t think I’m alone in this confusion. Similar responses emerge almost every time a new piece arrives with tales of elite college students who can’t get through Pride and Prejudice or another report reveals just how far reading scores have plunged among America’s schoolchildren. “Ten years into my college teaching career, students stopped being able to read effectively,” Slate reported bleakly in 2024. Within days, a teacher’s blog offered a rebuttal, arguing that there has never been an era where adults were impressed by kids’ reading habits: “Find a news article published since the 1940s that shows that students not only read proficiently but eagerly and a lot. I’ll wait.”


“We’ve long seen both of those extremes,” says Elena Forzani, director of the literacy education and reading education programs at Boston University. “In a sense, you could argue both are true or neither are true.”

Much of the current anxiety is being driven by the fear that new technologies are scrambling kids’ brains in a way no other generation has faced: smartphones, social media, and now the threat of generative AI, which millions of students are currently using to do their schoolwork. How could such powerful tools not change our children’s ability to process information? Yet on the other hand, there are all those think pieces about how adults had similar worries with every new piece of era-shifting technology that came before, including television.

Broadly speaking, two different issues get intertwined in the “kids can’t read” narrative. The first is the sense from professors that their students are unprepared to read at the level college requires — that while they’re technically literate, they are not sophisticated readers. The second is that at the elementary level, kids’ reading test scores are going down.

So is it true? How much panic over kids’ literacy is warranted? Scholars who study the subject, concerned English professors, and experts in the “kids these days” phenomenon told me that the literacy landscape is a lot more nuanced than either of my gut impulses would have led me to believe.

A brief history of adults saying, “Kids these days!”

When I say that every generation complains about the kids these days, I do mean all of them. We have documentation of this phenomenon going back to Socrates.

“It’s one of these things you keep seeing generation after generation,” says John Protzko, a psychology researcher at Central Connecticut State University and the co-author of the 2019 study “Kids these days: Why the youth of today seem lacking.”

Protzko’s study found that adults tend to judge kids by their own adult standards. If you’re an adult who likes to read, he says, you tend to assume that you were just as diligent a reader as a child.

“And then I impose that on society at large: ‘Everyone liked to read as a kid,’” Protzko explains. Rapidly, that false belief can turn into “None of the kids today read like they did in my day.”


We’re particularly prone to this kind of false memory when it comes to the attributes on which we pride ourselves. If, for instance, we are proud of being polite, conscientious adults, we feel that children are growing ever more disrespectful. For highly educated people who like to read — like me, and a lot of other journalists who cover literacy, for instance — reading can be a big one.

Millennials, who had the misfortune of growing up in the boom of the internet think piece economy, are particularly aware of how common the “kids these days” trope is. As the generation perhaps most loudly accused of historic levels of laziness, neuroticism, whininess, and extended adolescence, we are acutely aware of how easy it is to reflexively dismiss Gen Z and Gen Alpha.

I asked Protzko if it was possible to fall into a trap of overcorrecting for the kids-these-days tendency, and to end up overlooking a real problem because you’re afraid of sounding old.

“The central problem is that we rely on our intuitions, and we rely on our memories, and we think that they’re accurate,” says Protzko. “But when it comes to something like reading ability, in many cases we do actually have decades of research.”

Can kids read in college?

Let’s start with the college problem. A spate of recent articles written by or quoting college professors make the case that their students are getting worse at reading, and that in some cases they can no longer even read full books.

“Yes, there were always students who skipped the readings, but we are in new territory when even highly motivated honors students struggle to grasp the basic argument of a 20-page article,” wrote Adam Kotsko for Slate last year. Kotsko adds that the problem is not with the kids themselves, but with the education system in which they’ve been reared. “We are not complaining about our students. We are complaining about what has been taken from them.”

Complaints of the kind Kotsko is making go back at least 10 years. “Is it just me, or are student competencies like basic writing skills in serious peril today?” wrote Azadeh Aalai in Psychology Today in 2014. “Teachers have been reporting anecdotally that even compared to five years ago, many are seeing declines in vocabulary, grammar, writing, and analysis.”

Yet there is little hard data that shows such a decline.

One recent splashy study led by English professor Susan Carlson evaluated 85 undergraduate English and English education majors on their ability to understand the first seven paragraphs of the Charles Dickens novel Bleak House.

“Fifty-eight percent of them could not get through a few paragraphs without being completely lost,” Carlson told me. “Yet 100 percent of them said they could read it with no problem. What that tells me is there’s a disconnect between what people think reading is or what they think they’re doing and what they’re actually doing.”

Carlson, a professor of Victorian literature at Pittsburg State University, didn’t set out to make a grand sweeping claim about the literacy of all college students, but to look closely at the inner workings of the minds of a specific cohort to figure out how they thought about reading. She compared them with students from a similar regional Kansas university, but she kept the rest of the study small by design. What she found is that these specific students — despite years of training in literary analysis — lacked the vocabulary, background knowledge, and reading strategies it takes to understand Dickens at a college level. It’s hard to use this data set to extrapolate past that.

As Carlson told me over the phone, “85 people is not enough to know anything. I can’t make any kind of assumptions based on that.”

Carlson’s study also doesn’t provide a comparative data set from previous years that might show us whether or not there’s been a change in the number of students who can evaluate a complicated text like Bleak House well. Notably, the data was all gathered in 2015, meaning that it was looking at the tail-end millennials who were in college in 2015, not the much-maligned Gen Z. (Why the delay between when the data was gathered and the study was published? “I teach a four-four courseload,” Carlson says.)

Carlson told me she has a feeling that her students have gotten noticeably worse at reading over the past five years. “It’s just a feeling, right? Who cares about a feeling?” she says. “But when I talked to other professors, they felt the same way.”

Currently, we don’t have enough data to show that college students are graduating with lower reading comprehension abilities than they used to have. The fears around their capabilities are only accelerating as reports emerge of their reliance on ChatGPT to do coursework. Still, what’s actually going on here is an open question.

When it comes to childhood literacy rates, though, we’ve got a lot of data. The controversy comes when we try to interpret it.

Can kids read in elementary school?

When it comes to a childhood literacy crisis, the numbers that all the horror stories cite come from the National Assessment of Educational Progress, also known as NAEP or the Nation’s Report Card. NAEP tests a representative sample of students across the country to see how well American students are doing in various subjects. For the last decade, NAEP reading scores have been in decline.

“I track closely the share of students who are not meeting what we define as a basic level of proficiency. These are students who are really struggling with the fundamentals of literacy,” says Martin West, an education professor at Harvard and vice chair of the National Assessment Governing Board. “That number is now 40 percent of students in grade four and 33 percent in grade eight.”

NAEP’s reading proficiency ratings nosedived during the pandemic years when schools went virtual. But they had already been trending downward before that, says West: “As far back as, depending on the grade level, 2013 or 2015. That’s when reading scores in the US peaked. They’ve been falling since then.”

In 2013, only 33 percent of fourth-graders were reading below basic level, and 23 percent of eighth-graders. That’s 7 and 10 percentage points lower, respectively, than where we are now.

Notably, the scores have held pretty flat among high-achieving students. But among the bottom third of scorers, they’ve plunged.

The plight of those kids began receiving increased attention after the massive success of the podcast Sold a Story. Published by APM Reports in 2022, the podcast drew on five years of education reporting by its creator Emily Hanford to make the case that schools have ignored the “science of reading” by skipping over important phonics work to focus on context clues, like telling kids to look at a picture and guess a word. The podcast was so impactful that in its aftermath, at least 25 states passed new legislation on how reading should be taught.

The idea that schools overlooked the importance of the science of reading has become a popular explanation behind the long-term drop in reading test scores. But that’s not the case, says Hanford. The methods she critiques in Sold a Story have been popular for much longer than a decade. They’ve been used in different parts of the country on and off since at least the 1960s.


“One of the things I object to is a narrative that I often hear which is that we need to go back to the basics and back to the good old days and back to the way things were,” she says. “We don’t have any good evidence that I’m aware of that there are good old days when we were doing such a good job with this before.” The best reading scores the country ever got showed that a third of fourth-graders still hadn’t achieved basic literacy.

Another popular explanation for the drop in reading test scores is the Covid lockdowns that shut down schools. Nearly everyone I talked to agreed that Covid exacerbated the problem. Yet as West points out, reading scores started dropping well before lockdowns, beginning between 2013 and 2015.

West outlines a few other potential ideas that could play a part in the drop in reading scores. There is, of course, the astronomical rise of screen time among children and teens. Perhaps related to the rise of screens, there is the decline in the number of children who say they read books for fun. There’s also the end of the No Child Left Behind Act, President George W. Bush’s standards-based educational reform, set into place in 2001 and ended by a bipartisan Congress in 2015. While the controversial NCLB was criticized for pushing schools to teach dogmatically around standardized tests, West notes that NAEP scores went up while it was running and started falling after it ended.

“We have lots of ideas” about what’s gone wrong, West says. “What we don’t have is definitive evidence.”

Not everyone, however, is convinced that the NAEP data is even giving the whole picture. Some of the literacy experts I spoke to felt that NAEP’s standardized tests don’t capture the full possibilities of what literacy might look like for today’s kids.

“In order for NAEP to succeed and to have these results year in and year out, it means that we need to hold a particular kind of definition of literacy,” says Antero Garcia, a Stanford professor of education and the vice president of the National Council of Teachers of English. “That’s just not how language functions historically and culturally in societies, right?” He argues that today’s kids can be quite sophisticated with language and hybrid language, like Spanglish, and in complex virtual spaces like the live-streaming platform Twitch, that NAEP just isn’t reflecting.


Garcia suggested that the “gap in understanding” between the literacy that lets a kid navigate the screen-in-screen chat scroll of Twitch and the literacy that guides a kid through Bleak House offers schools an opening for education. “Those places of, ‘How do I take this highly literate conversation that might be happening on Twitch and then translate it into an academic essay’ — those feel like opportunities for scaffolding,” says Garcia. “Which oftentimes is not happening in schools, because the ways we evaluate if kids can read and write doesn’t start with the investment in where kids are currently at, and where culture currently thrives.”

Forzani is concerned that the recent wave of reforms that have hit schools since Sold a Story have narrowed into a focus on phonics drills (although the podcast emphasizes multiple times that reading involves a lot more than that).

“A lot of people are thinking about reading in terms of pretty narrow definitions of comprehension,” says Forzani. “But of course we want kids to be able to make inferences and interpretations beyond just literal interpretations, right? We want them to be able to make higher level inferences and to be able to evaluate and critique text.”

Forzani points to research from the UK, where reading curriculums were widely reimagined a few years before the US did the same thing. “They shifted attention to really focus on teaching phonics, which is good and important,” she says. “But then they’ve also seen, ‘Wait, we did too much of that focus and now we lost sight of really comprehending at a high level.’”

West says that the current concern over kids’ ability to read might actually be understated.

“I’ve been struck by the lack of a sense of urgency on the part of what seems to be the larger share of the public,” says West. “Literacy is the foundation for everything that we want schooling to be able to do for our children.”

Lots of kids can read just fine. That doesn’t mean we shouldn’t still be concerned.

By the end of my reporting, my head was spinning from all the data and studies I had read through. Parsing the whole thing out, though, here’s the conclusion I came to.

US schools have never done a very good job at teaching kids to read, but it seems as though there’s meaningful evidence that we’re doing a worse job right now. While high-achieving kids are still reading the way they’ve read for decades, the ones to whom reading doesn’t come easily are failing more now than they used to.

We don’t have clear data on what happens when kids get to college. Still, it’s certainly plausible that the problems being documented in the elementary and secondary years persist into higher education as well. It’s not being old or out of touch to say so.

Moreover, no one seems to know what the solution is: to endlessly drill kids in phonics, or to try to build a reading curriculum that accounts more effectively for how they communicate today, or both or neither or something else. What seems pretty clear to me is that this is not a problem we should be looking away from.

In the meantime, schools are bracing for impact as generative AI continues to make its way onto students’ devices, fundamentally changing the ways they interact with text. “To study the strategies that [students are] going to use to survive is really important,” says Carlson, the English professor who wanted to know what her students were thinking when they read Bleak House, “because they’re not going to hit the wall until later.” Later: when they leave school and come to join us in a world that, for now, remains text-based.




Shaboozey and Jelly Roll perform during CMA Fest 2025 on June 06, 2025 in Nashville, Tennessee. | Taylor Hill/Film Magic

“We wanna thank God for giving us the grace to give him a little glory in this building tonight,” rapper-slash-country hit-generator Jelly Roll said onstage in May at the 60th Academy of Country Music Awards. The speech came during an exultant performance of his collab with Shaboozey, “Amen,” which features the chorus, “Somebody say a prayer for me / ‘Cause the pills ran out and I still can’t sleep.” The song details a religious devotion earned through a struggle with darker forces. “Even a crooked road can still get you home,” Jelly Roll concluded.

Jelly Roll might seem like a surprising mouthpiece for this kind of preachy moment, but the song is a hit even outside the country bubble. In a recent article for Christianity Today, musicologist Kelsey McGinnis identified the work of artists like Jelly Roll, Brandon Lake, and Thomas Rhett as “barstool conversion rock,” a notably masculine form of music that sits adjacent to contemporary Christian music (CCM).

But that subgenre is far from the only religiously tinged music — created by everyone from devout evangelicals to open agnostics, from country artists to rappers — climbing the charts today; a number of pop songs are likewise courting the divine. Benson Boone’s “Beautiful Things,” which arguably functions as a direct appeal to God, was a ubiquitous bop for most of 2024. Alex Warren’s “Ordinary,” a love song that easily doubles as a Christian worship song, has slowly climbed the charts over the past few months to become one of 2025’s biggest breakout hits (it’s currently No. 1 on the Billboard Hot 100).

By establishing an industry-leading sound and a distinct identity, in a time of increased polarization around religion, Christian-coded music has finally broken containment and conquered the airwaves.

Christian rock has been around for decades. What changed?

Thirty years ago, evangelical and secular culture were very much divided, says culture writer and religious historian Kristin Kobes Du Mez. “There was a much more cohesive, and even in many cases, all-encompassing Christian culture [for] kids raised in the 1990s,” she said. “It was possible to be completely insulated from secular culture. … I certainly grew up with the understanding that top 40 music was evil.” Christian radio, Christian record labels, and Christian bookstores all functioned as gatekeepers, vetting everything they passed on to consumers.

“There was a lot of money to be made in distinctively Christian merchandise,” Du Mez said. “But of course, it wasn’t presented as a business. It was presented as ministry and as evangelism.” It was also often considered hacky or trite. “The kind of joke about Christian culture is that they just copy what’s happening in secular spaces and then produce things of lower quality,” Du Mez said.

Switched on Pop’s Charlie Harding echoed this. Christian contemporary music used to sound like “whatever’s happening in pop music, five years too late,” he told me. A fan of a secular band could usually find a Christian equivalent and listen to that instead, guilt-free. Like other guilt-free treats, it might not quite hit the spot — but for decades, many Christians eschewed the pleasures of mainstream media, even as their own art trailed behind it.

Christian pop, however, was not the only form of Christian music available. There was also church worship music (also known as praise music). Worship music gained traction in the late ’70s and ’80s, when seminal CCM songwriters like Rich Mullins modernized the classic Protestant hymnal structure by combining it with the aesthetics of modern Black gospel, emphasizing a soaring, anthemic rock chorus that everyone could sing along to. This structure has come to define praise music ever since.

In the ’90s and early aughts, as megachurches and Christian conferences exploded in popularity, along with their concert-like worship services, worship music took on increased cultural significance. This music was meant to be sung by church congregations, intended to invoke or encourage religious euphoria, even conversion. It took a basic pop-rock style and imbued it with spiritual ambiance, codifying a big, church revival sound.

Then came the rise of the internet. The increased interconnectivity of diverse communities, the subsequent explosion of the smartphone and social media, and the demise of the cultural mainstay that was the Christian bookstore all meant Christians found it much harder, if not impossible, to totally isolate themselves from the rest of the world.


This increased interaction with the secular world both coincided with and fueled the erosion of the Christian music industry, which also meant that the centers of distribution and influence for Christian art changed. Now, instead of getting Christian music mainly from Christian radio and CCM artists, many Christians began to encounter it most regularly through their weekly Sunday worship service — which offered not “pop music, five years too late” but worship music.

Now, musical artists who grew up in the church, hearing worship music week after week, were also hearing and interacting with secular music and culture. They could more freely mix and learn from different musical styles. And soon, instead of merely following behind pop music, Christian music instead helped spawn an enormously influential offshoot of its own sound — via the biggest band of the 21st century.

Harding identified Coldplay as the through line between all that aughts Christian worship music and songs like “Ordinary.” In a 2019 Rolling Stone interview, band frontman Chris Martin, who was raised Christian, spoke of being influenced as a child in the ’90s by church music — by “these beautiful, big songs.” That bigness, Harding said, is crucial to what came next. Specifically, Harding said, Coldplay’s 2005 hit “Fix You” popularized a song structure that’s now ubiquitous among today’s faith-adjacent pop music.

“Start infinitely small,” he said. “You’re down on your knees praying to God.” As it unfolds, “You can see the whole cathedral around you. You’re starting to have this divine experience.” That “infinite build” structure of “Fix You” now infuses the work of a huge number of highly successful artists of the ’10s and ’20s — think Arcade Fire, Imagine Dragons, or any number of “stomp clap hey” groups — and is still featured by Christian-associated artists like Benson Boone and Alex Warren. Whether intentionally or not, their music has incorporated the vibe of a Sunday worship service, and that vibe is shaping the industry’s sound rather than following it.

This musical wave may have emerged, however indirectly, from Christian culture, but it’s managed to transcend the awkward resonances of a post-Hillsong Justin Bieber, mid-spiral Kanye West, or the Creed Cruise.

Where we are now: Masculinity, politics, and hollering to God

As Christians lost the ability to isolate themselves from the secular world, they also started to see value in interacting with secular culture.

Du Mez suggests that whereas before, Christians intentionally isolated themselves from the mainstream, in the current era, some are more willing to accept and embrace secular influences because they increasingly conflate Christianity with a right-wing social and political agenda. Thus secular media and products that are not distinctly Christian, but which nevertheless reflect or promote their shared social and political values, are finding welcome among Christians who might otherwise disregard them.

“It’s not always compatible with what most people would understand to be core Christian values or theological tenets, but if it hits [certain] masculinity talking points, if it provides an attractive vision of throwback femininity or even retrograde femininity, then it’s embraced by these spaces,” Du Mez said.

This new and evolving embrace of secular messaging arguably explains why so many Christians are warming up to (and pushing up the charts) country and rock artists who, despite referencing Jesus here and there in their lyrics, would once have been viewed by them as morally dubious. This contradiction is the essence of barstool conversion rock: moral messages coming from spurious messengers. Writing for Christianity Today, McGinnis ties barstool rock both to country music and to “a web of crisscrossing cultural threads, including conservative politics, party culture, and evangelicalism.” While this subgenre overlaps with the much-discussed wave of “bro country,” it adds a layer of respectability via an appeal to faith.


Indeed, what unites all of these songs across a broad sonic range is their confessional stance, as well as the performance of raw vulnerability from each male artist — a trait that modern men, especially ones steeped in a culture of conservatism, often have difficulty accessing. At the nexus of Jelly Roll’s gritty but spiritual collaborations and Morgan Wallen exiting Saturday Night Live for “God’s country” resides a desire for something deeper than just the average dirty-booted drinking song. In so many of these songs, the singer aims to find a way to express his own weakness, a familiar cry among isolated white men that contributes to these songs’ popularity.

Music critic Craig Jenkins (of Vox sister site Vulture) told me he thinks Boone’s “Beautiful Things” succeeds at this project. “Emotional, searching pop-guy songs will absolutely never lose steam,” Jenkins said.

Boone, who is no longer a practicing Mormon but does not drink or do drugs, is an interesting case, especially in his aesthetics. With his spangled jumpsuits and mustache/mullet combo, he’s somewhere between Elton John and Morgan Wallen. “The signifiers all feel very queer, but the presentation is like, lacrosse player crushing it in glee club,” Jenkins said.

Jenkins questions whether Boone is “carelessly laundering stuff that used to be edgy into a teetotaling package that is just coincidentally very palatable for the most crotchety sensibilities,” or whether his choices are more intentional. He ties Boone to post-punk creatives like Panic! At the Disco’s Brendon Urie and The Killers’ Brandon Flowers, who like Boone were both raised as Latter-day Saints. This cacophonic whirl of musical antecedents reads like someone who’s going through a familiar post-adolescent Mormon journey of working out his identity beyond his family, church, and childhood.

Todd Nathanson, creator of the YouTube music vlog Todd in the Shadows, emphasizes that the authenticity is part of the package. “You don’t want to be too cynical about this because Alex Warren is an actual practicing Catholic, and you can’t expect someone to not let that inform his music,” he said. “Sometimes that’s just what they are and what they do. Sometimes that’s their truth.”

The other key to understanding this music is that while so much of its appeal is its perceived authenticity, its strength also lies in its ability to market a version of traditionalism that feels inviting, rather than alienating.

Though artists like Boone and Warren may not bear much sonic affinity with Jelly Roll or Wallen, thematically they all share an ability to express a yearning for the identity of a masculine, working-class hero, eschewing delusions of grandeur for a smaller life. These songs seem to pair images of modern masculinity with visions of a traditional lifestyle, tailored to appeal to audiences that don’t often find themselves reflected in pop music except through working-class anthems. Think of John Mellencamp’s admonition that “I can breathe in a small town,” paired with Warren’s vow to “make the mundane our masterpiece.” These lyrics are tropey, even trite, but they’re effective in breathing new life into old populist narratives.

The video for Warren’s “Ordinary,” for example, sees him pursuing a chastely styled woman (played by his real-life wife) with all the apparent wonder of a schoolboy seeing a woman for the first time. It’s both a bizarrely infantilized version of masculinity and a highly romanticized, extremely traditional view of love. It’s also hugely popular.

“There’s a synergy of thought in bro spaces that aren’t religious and ones that are,” Jenkins noted, with “treatises on how Your Woman should dress on both sides of the coin.” In an Alex Warren video, that vision of femininity isn’t so threatening.

Nathanson also points to artists who dabble in faith-adjacent themes, like Hozier and Noah Kahan, as proof of the marketability of this traditionalist message. “That kind of music is just doing very well right now,” he said — so well that other artists might be trying to reach a large market by adding “a couple of ‘Gods’ or references to heaven.”

“A lot of people see that type of proselytizing as a quick way to gain influence and a quick way to gain access and a foothold and an audience,” Nathanson added.

“Rugged and questioning is lucrative posturing in deeply weird times,” said Jenkins, who’s more cynical than Nathanson about the end result. If each of these songs involves a reckoning between the singer and God, he notes that “even the reckoning is performed.”

Beyond any cynicism, there are complex social messages to parse in this new space. For one, it’s perhaps ironic that the regressive male codes of stoic masculinity that leave these male artists seeking outlets of expression are frequently heavily reinforced by the same Christian culture they’re trying to find themselves within.

Ultimately, Harding stresses the reality of a new conservative audience making its mark on the charts. “I think that there’s something that’s really connecting with people, and I think that probably has to do with a lean toward tradition and representations of masculinity, which are currently at loggerheads in our world,” Harding said.

Whatever it is, he says, people really must like it. “I always believe that things that pop off do have an actual resonance,” he said, “because it’s so hard to make a hit.”


From Vox via this RSS feed

Measles outbreak in Texas leads to record US cases

The US has seen more measles cases in 2025 than in any year since 1992. | Jan Sonnenmair/Getty Images

The US has now recorded 1,277 measles cases this year, according to case data collected by Johns Hopkins University, making 2025 the country’s worst year for measles since 1992. The disease continues to spread, and by now most schools are out for the summer. July summer camps have opened and family vacations are picking up — all creating new opportunities for the virus to transmit.

The next few months will be crucial for getting measles under control in the US.

Three people have already died this year, the first measles deaths in the country in a decade. Most of the cases have been concentrated in a major outbreak in the Southwest — in New Mexico, Oklahoma, and Texas — but the virus is now also spreading in Arkansas, Illinois, Indiana, Kansas, Michigan, and Ohio. New outbreaks have recently flared in Colorado, Montana, and North Dakota. Though cases are still rising, they’re doing so more modestly than they were in March, when the country was seeing 100 or more new cases in a week. The measles outbreak does appear to be slowing down, experts say.

But as cases continue to spread, the Centers for Disease Control and Prevention in mid-June urged kids’ summer camps to check their participants for measles immunity status. Measles is one of the world’s most contagious viruses and, with vaccination rates declining among children across the country, a lot of kids tightly cloistered for several days creates a prime environment for measles to spread quickly once introduced. The CDC’s new checklist for summer camps advises camp organizers to collect vaccine records from campers and keep the documentation on hand, to check campers and staff for any signs of fever or rash upon arrival, and to set up an isolation area if anyone begins to feel sick once camp has started.

We are living in a new reality: Measles is spreading widely, vaccination rates are down, and the country’s top health official, Robert F. Kennedy Jr., has backed away from urging vaccination — even though the vaccine is 97 percent effective — as the best way to tamp down measles’ spread. Kennedy has installed a new expert vaccine committee, opening up a review of the childhood vaccination schedule that includes the MMR shot.

Given this more lax approach from the Trump administration, now is a good time to look out for ourselves and our loved ones. Here’s what you need to know about measles as the season with the biggest potential for spread heats up:

Measles is an extremely contagious and dangerous virus

The US has been largely free of measles — a disease that still kills over 100,000 people worldwide every year, most of them young children — since the 1990s. For most people, its risks have become largely hypothetical.

2025 is the worst year for measles in the United States in decades

But for unvaccinated Americans, those risks remain very real. Measles is an extremely contagious virus that can lead to high fever and rash. Some patients develop pneumonia or encephalitis, a brain inflammation, both of which can be deadly.

Measles has a fatality rate of 0.1 percent, and about 20 percent of cases put patients in the hospital. The virus can be particularly dangerous for kids, especially young infants, as well as pregnant women and people who are immunocompromised.

Some vaccine skeptics, including Kennedy, downplay the risks of measles. “It used to be, when I was a kid, that everybody got measles. And the measles gave you lifetime protection against measles infection,” he said on Fox News in March.

But measles has never been some harmless disease: In the decade before a vaccine was introduced in 1963, between 400 and 500 children died annually from measles in the US. From 1968 to 2016, there were about 550 measles deaths total in the US, according to the CDC. And before this year, it had been 10 years since any measles deaths had been recorded in the US.

Even people who survive a measles infection can endure long-term health consequences, the risks of which are greater for vulnerable groups. Some of those infected in the current outbreak may have their health affected for years. Measles can cause what’s called immune amnesia: The virus can wipe out more than half of a person’s preexisting antibodies that protect against other pathogens, leaving the patient at higher risk from other diseases for years after their measles infection.

And in very rare cases, measles can lead to fatal brain swelling years after the initial infection. Patients can also experience hearing loss from ear infections that accompanied their illness, while those who develop acute encephalitis can suffer permanent neurological damage.

You can protect your kids from measles. Here’s how.

Vaccination is without a doubt the best defense against measles: two doses of the MMR vaccine, which protects against measles, mumps, and rubella. It is one of the most effective vaccines we have for any disease, and any risks from the vaccine are extremely low compared to the dangers of measles itself.

But in this new world, you’re more likely to see a measles outbreak in your community. People may want to be more proactive about protecting themselves and their loved ones. Here’s what you can do:

Parents of young children — the group most at risk from measles — should talk to their pediatrician about measles vaccination.

Children usually get their first shot around the time they turn 1 and receive another shot around age 5, but there is some flexibility. The CDC recommends that infants as young as 6 months receive one dose if they are traveling internationally, and the recommended age for the second dose ranges from 4 to 6 years old. Several leading public health experts, including former CDC director Rochelle Walensky, wrote in a JAMA op-ed this spring that the recommendations should be updated to advise one shot for infants traveling within the US to areas with a higher risk of exposure.

There have been reports of vaccinated individuals getting infected during the current outbreaks, which has raised questions about how protected vaccinated people actually are. As good as it is, 97 percent effectiveness is not 100 percent, and it is possible to be vaccinated against measles and still get sick. For a very small percentage of people, the vaccine simply doesn’t produce immunity. It is also possible that immunity could wane over time, but that was previously not an issue because high vaccination rates had snuffed out the virus’s spread. Per the CDC’s estimates, about 3 percent of measles cases this year have been vaccinated people — consistent with the 97 percent efficacy rate.

The priority should be vaccinating those people who do not have any protection at all: very young children and the people who weren’t vaccinated as kids. Pregnant women should not receive the vaccine, but women planning to become pregnant could consult with their doctor about a booster shot; likewise, people with immune conditions should talk to their doctor before getting any additional doses, because the vaccine’s live virus could present a risk depending on how compromised their immune system is.

People who are at a higher risk may want to take extra precautions, such as wearing a face covering, if there are any reports of measles infections in their immediate area.

For other people who have already been vaccinated but are still worried about transmission, it may be reasonable to consider a booster shot. But there are some important steps you should take first.

First, check your vaccination records if you can find them. Anyone who received two doses as a child very likely had a successful immune response; only three in 100 people don’t. And if you received one dose — which was generally the norm before 1989 — you’re probably still protected, but it is slightly more likely that you never developed immunity, Aaron Milstone, an infectious disease pediatrician at the Johns Hopkins University School of Medicine, told me earlier this year.

The next step would be to talk to your doctor, who can give you a “titer test” that measures the measles antibodies in your body. If they’re still present, you should be good. But if they’re not, you may want to ask your doctor about getting an additional measles shot.

The risks from measles should be kept in context: If you’re not near any confirmed measles cases, your personal risk probably remains low. If you’re vaccinated and have antibodies, you are very likely protected from the virus even if there is local transmission. But summer travel introduces some new risks: Several smaller outbreaks this year have been traced to infectious travelers passing through US airports.

Measles cannot be ignored. Milstone said he and his fellow infectious disease doctors could not believe it when they heard the news in February of a child’s death from measles in the United States of America.

“You hope people don’t have to die for others to take this seriously,” Milstone said.



In October 1979, a top-secret CIA intelligence report featured the first inklings in the West that something unusual and disturbing had allegedly taken place in the Soviet Union several months earlier.

In April of that year, patients started appearing at hospitals in the industrial city of Sverdlovsk, now known as Yekaterinburg, in the Ural region of the Soviet Union. They were showing symptoms of what doctors first thought to be an unusually virulent and deadly form of pneumonia. It wasn’t.

The outbreak that ultimately killed more than 60 people was in fact caused by anthrax spores that had been accidentally released from a Soviet biological weapons facility. How exactly this happened is still unknown.

Officially, neither the facility nor the Soviet bioweapons program was supposed to exist; a few years earlier, Moscow, along with Washington, had ratified a landmark international treaty prohibiting biowarfare work. When US officials publicly raised questions about the incident at Sverdlovsk, the Soviet government denied any biological weapons research was taking place, blaming the outbreak on contaminated meat. It wasn’t until 1992, after the fall of the Soviet Union, that Russian President Boris Yeltsin acknowledged that the incident was the result of a covert bioweapons program.

How is it possible that a bioweapons accident that killed dozens was kept secret for decades, even in the Soviet Union? As the Washington Post reporter David E. Hoffman writes in The Dead Hand, his history of the Cold War arms race, the answer lay in the nature of the weapons themselves: “Biological weapons were the ultimate challenge for spies, soldiers and scientists.”

Unlike a missile silo, easily distinguishable from the air, a laboratory where bioweapons are being developed doesn’t look that different from a benign medical laboratory. Unlike nuclear warheads, which leave clear radiological traces in their silos and are unmistakable in their use, a weaponized pathogen and the outbreak it would cause could be difficult to discern from a naturally occurring one, giving any attacker plausible deniability.

The mystery surrounding these weapons is just as much a problem today as it was during the Cold War. Putting aside the still politically fraught question of whether Covid-19 escaped from a Chinese lab or, like most outbreaks, jumped from animals to humans naturally, the bigger problem is the simple fact that we may never know for certain.

“What the pandemic tells us is that nobody can do attribution,” said Drew Endy, professor of biological engineering at Stanford. Intelligence agencies have determined that Covid was not a deliberately engineered bioweapon, but the confusion about its origins does suggest that if an even more virulent, intentionally designed pathogen were to be unleashed, it might be very difficult to say for certain who was behind the attack, or even whether it was an attack at all.

This kind of plausible deniability could make using such a weapon more attractive to attackers. Biowarfare is only set to become a bigger threat in the coming years if, as many experts predict, artificial intelligence makes it easier, cheaper, and faster to develop new biological compounds, including weaponized pathogens far more sophisticated and deadly than the anthrax that killed dozens in Sverdlovsk 46 years ago. That’s why Endy, a pioneer in the field of synthetic biology — the construction of new biological systems or deliberate alteration of existing ones through genetic manipulation — argues that new forms of detection are desperately needed for this new threat landscape.

“When the Iron Curtain came down, we found it useful to have geospatial intelligence to see what was happening on the other side regarding nuclear weapons,” he told Vox. “Today, there’s a molecular curtain. The stuff that’s invisible, that we can’t see, is all around us and could be harmful. And we don’t really do that kind of intelligence.”

The technologies that could allow adversaries to create ever more dangerous bioweapons are advancing at a much faster clip than defensive measures. But at the moment when AI might be amplifying the risks of this type of weapon, it may also be emerging as the key for detecting and stopping them.

Germ war is nothing new, but the threat is changing

Biological warfare dates back at least to the 14th century BC, long before anyone knew that germs caused disease, when the Hittites sent diseased rams to their enemies to infect them with the dangerous bacterial infection tularemia. Every major combatant in World War II had a biological weapons research program — including the US — and Japan even deliberately unleashed germs in China.

A history of germ violence

Warfare and disease have always gone together; until the 20th century, illness was responsible for killing more soldiers than weapons in many conflicts. Even today, bullet and shrapnel wounds in the war in Ukraine have become breeding grounds for drug-resistant bacteria. The deliberate use of illness as a weapon also has a long history.

In the 14th century BC, the Hittites sent diseased rams to their enemies to infect them with tularemia, a dangerous bacterial infection still classified as a potential bioweapon by the Centers for Disease Control and Prevention today. British soldiers infamously gave blankets infected with smallpox to American Indian tribes in the 18th century. During World War II, Japan’s military tested pathogens on prisoners of war in China and dropped ceramic bombs containing plague-infested fleas and grain on Chinese cities.

The United States had its own biowarfare research program starting in World War II, and testing of potential weapons, especially anthrax, expanded dramatically in the early years of the Cold War. In 1969, President Richard Nixon, facing increasing public pressure — and believing that biological agents weren’t particularly useful in a world of thermonuclear weapons — ordered the program shut down. Six years later, the Biological Weapons Convention, an international treaty banning their use, went into effect.

It’s not only states that have used bioweapons. In 1984, the Rajneeshees, a religious cult in Oregon, sickened hundreds by infecting salad bars with salmonella — the first recorded bioterror attack in US history. In 2001, shortly after the 9/11 attacks, a lone wolf perpetrator mailed anthrax to media and congressional offices, killing five people and sickening more than a dozen. Terrorist organizations including ISIS and al-Qaida have also sought unsuccessfully to acquire bioweapons.

Fear over the use of biological weapons eventually led to the ratification of the Biological Weapons Convention (BWC) in 1975, banning the use and development of bioweapons globally — though the Soviet program continued in secret for years afterward, despite Moscow having signed on to the treaty. But even more than the treaty, biowarfare has been held back by the fact that biological weapons have been difficult to develop, deploy, and — should they be used — control. That may be changing.

New gene editing tools like CRISPR have brought down the cost and difficulty of tinkering with DNA. But the same tools can also make it easier for malign actors to create designer diseases for use in warfare or terrorism.

AI is already revolutionizing the field of synthetic biology: The 2024 Nobel Prize in chemistry was awarded to researchers who used AI to predict and design new proteins. This is likely to have positive effects, like dramatically accelerating drug development.

But, says Matt McKnight, head of biosecurity at the synthetic biology company Ginkgo Bioworks, past periods of rapid scientific advancement, from chemicals in the early 1900s (poison gas), to physics in the 1930s (nuclear weapons), to computer science in the later 20th century (cyber offensives), suggest that the new confluence of AI and gene editing is almost certain to be put to violent ends.

“My assumption is that bioweapons will be used by a bad actor in this century because that would be the baseline expectation given all of human actions throughout history,” McKnight said. “And I want to reduce the likelihood that that happens.”

A recent report from the Center for a New American Security (CNAS) suggested several worrying scenarios for how AI could be used to optimize pathogens for warfare. Entirely new viruses could be designed, or modifications could be made to existing viruses to make them more resistant to existing treatments.

Kevin Esvelt, a synthetic biology researcher and director of the Sculpting Evolution group at MIT, said one of his greatest concerns was that large language models could facilitate “not just the replication of an existing natural pathogen, but building something entirely new that doesn’t occur in nature.” This means that both our natural immunological defenses and existing vaccines would be entirely unprepared for it.

How bad could it be? Mustafa Suleyman, the CEO of Microsoft AI, has warned that the combination of AI and synthetic biology could allow the creation of a pathogen with the death rate of Ebola but the transmissibility of seasonal flu, causing “more than a billion deaths in a matter of months.”

This wouldn’t be a very practical weapon for anyone but a doomsday cult. But AI-enhanced engineering could also allow for viruses to be made more controllable by adapting them to only work in particular locations. More disturbingly, viruses could be tailored to attack particular populations. The CNAS report quotes Zhang Shibo, former president of China’s National Defense University and a one-time general in the Chinese military, who has speculated that new technology would allow for the development of diseases for “specific ethnic genetic attacks.”

Beyond these nightmare scenarios, AI may simply make it easier to produce existing dangerous viruses. In a 2023 experiment, a group of students at MIT used commercially available AI chatbots to generate suggestions for assembling several deadly viruses — including smallpox, which currently exists only in ultra-secure labs in the US and Russia — from their genetic material. The chatbots also suggested the supplies needed and listed several companies and labs that might print the genetic material without screening.

Concerns about scenarios like these have prompted some AI companies to incorporate new safeguards into their models — though the intense commercial and geopolitical competition to reach artificial general intelligence may erode those safeguards over time.

These advances don’t mean just anyone can grow their own smallpox today. The technical obstacles to actually constructing a disease are still formidable, even if you have the instruction manual and a very patient AI to walk you through it. But it suggests the barriers to entry are coming down.

In the past, nonstate actors like ISIS or the Japanese cult Aum Shinrikyo, responsible for the 1995 sarin gas attack on the Tokyo subway, have tried to acquire biological weapons, but inevitably ran up against talent and supply limits. But advances in both AI and synthetic biology mean actors with limited means will now have more tools at their disposal.

As a method of warfare, synthetic biology “just seems to favor offense,” said Esvelt, who led the MIT experiment on the use of chatbots in virus design. “There’s just a lot of ways you can attack, and it’s much cheaper to build a virus than it is to develop and distribute a vaccine.”

That’s why defense needs to start catching up to offense. When it comes to the risk of this technology being misused, Ginkgo’s McKnight argues, “You can’t regulate your way out of it. You have to be better at it. You have to be as good as the adversaries at making countermeasures.” His company is working to build one.

Peeking behind the molecular curtain

One vision of what the future of biowarfare defense might look like can be found in a tucked-away corner of a busy, sprawling lab overlooking Boston Harbor. That’s where Ginkgo Bioworks is based, and where the company, founded by former MIT scientists in 2008, designs custom microorganisms for industrial use.

Through that work, Ginkgo has developed advanced testing capability to determine whether the microscopic organisms they built work as intended. During Covid, Ginkgo’s “foundry,” as it refers to its main lab, was pressed into service processing nasal swabs and surveilling wastewater to help governments monitor community-level spread of the coronavirus and the emergence of new variants.


That work has evolved into an ongoing monitoring operation at eight international airports in the US, as well as airports in the Middle East and Africa, for evidence of potentially dangerous pathogens crossing international borders. Nasal swabs from volunteering arriving passengers, along with wastewater samples from planes, are sent to Ginkgo for analysis. That means the next time you use the airplane lavatory, you may be contributing to a vast database of the genetic material moving around the world.

During a recent visit to Ginkgo’s foundry, I was shown a rack of thermocyclers — each resembling an oversized George Foreman grill — where these wastewater samples were being subjected to a process known as polymerase chain reaction. (That’s the “PCR” of the PCR tests used during the pandemic.) The process involves heating and cooling DNA samples to replicate them for analysis, allowing scientists to identify genetic abnormalities that would otherwise be difficult to detect. It can track how certain pathogens — Covid or the flu, for instance — are migrating around the world, and how they are evolving, which can help guide any public health response.

Such work would be important enough given the documented rise in naturally emerging new pathogens. But Ginkgo is no longer only looking at biological threats that emerge from nature. With the support of the Intelligence Advanced Research Projects Activity (IARPA), the US intelligence community’s research arm, Ginkgo has developed a tool known as ENDAR, or engineered nucleotide detection and ranking, which is specifically designed to detect artificially engineered genetic material.

This is where AI comes in. Through its cell engineering business, Ginkgo has accumulated a vast library of engineered genomes. “We engineer microbes all day, every day. And we actually use [tools] to validate that our engineering worked,” said Casandra Philipson, director of bioinformatics at Ginkgo. Just as AI tools like ChatGPT train on vast amounts of written material or images to be able to answer user prompts with uncanny accuracy, ENDAR was trained on a vast library of engineered genomes. This enables ENDAR to detect when something doesn’t look quite right.

“You can get really specific and start looking at what’s called the base pair, like very specifically every ATCG,” said Philipson, referring to the basic nucleotides that bond together to form DNA. These structures tend to evolve in predictable patterns, and the system can detect anomalies. “You can actually calculate a molecular clock and say, ‘Does its ancestry match what we would expect, given the evolutionary history?'” If it doesn’t, this could be a sign that genetic engineering has taken place.

That capability could go a long way toward addressing the attribution problem Endy referred to. If a new virus as bad as or even worse than Covid emerges, we should have a pretty good idea of whether it was designed that way. That can help policymakers plan a public health response and, if necessary, a political or military one.

Why we need a better bioradar

Given that its purpose is making synthetic biology easier and cheaper at scale, one could argue that Ginkgo is itself part of the problem, simply by producing the kind of tools that could, say, make designer smallpox more feasible. Still, that dichotomy is one Ginkgo appears to embrace — the foundry is decorated with Jurassic Park memorabilia, a reminder of the potential, and perhaps also the risks, of the kind of DNA tinkering the company is engaged in. (One wonders what Professor Ian Malcolm would make of the company’s confidence.)

The government has tried to reduce those risks — former President Joe Biden’s executive order on artificial intelligence in 2023 included restrictions on the purchases of synthetic DNA. But Ginkgo’s McKnight argues that given the speed of biotech innovation in multiple countries, and the benefits it can bring, “there’s no choice you can make to clamp down on all the technology.”

The State Department has assessed that Russia and North Korea both maintain active offensive biological weapons programs, even though both countries have signed on to the Biological Weapons Convention. But the bigger concern, given its increasingly dominant position in global biotech innovation, heavy investments in frontier AI, and its scientists’ often controversial approach to genetic research, is China.

When it comes to China, the State Department assesses more vaguely that the country has “continued to engage in biological activities with potential [bioweapon] applications” and has failed to supply sufficient information on “diverse families of potent toxins with dual-use applications.” The writings of a number of prominent defense scholars in China, including a textbook published by the People’s Liberation Army, have identified biotechnology as a “new domain of warfare.”

Fighting an infopandemic amid a real one

McKnight believes the main takeaway from the experience of the Covid pandemic was that “America is not a biosecure nation today.” And as bad as Covid was, what’s coming could be far worse.

Covid, he said, cost the US economy trillions of dollars and “was probably the biggest factor in creating the chaos we’re seeing in our political system. [But] it wasn’t really even that bad compared to some of the potential things that are out there.”

Esvelt believes that the pandemic showed the importance of stockpiling preventive equipment like respirators, likely to be a far more effective first line of defense than vaccines — and that it should be viewed as a military priority as well as a public health one. “We’re going to lose a hot war in which our civilian support personnel are taken out by a pandemic and the adversaries are not,” he said.

AI could come into play here in a different way. We’re already seeing generative AI’s potential as a tool for creating and spreading misinformation as effectively as a virus. Epidemiologist Jay Varma recently warned of the risk of “a rogue actor using existing AI tools to simulate a bioterrorism attack that would destabilize a region or the world.” For example, Varma imagines a scenario in which an extremist group uses faked evidence of a biological attack, spread on social media, to foment a security crisis between nuclear rivals China and India.

Even without AI, rumors and misinformation ran rampant in the pandemic, eroding public trust in vaccines. And despite the intelligence community’s assessment that Covid was not a deliberate bioweapon, some politicians have continued to insinuate that it was. (The uncertainty around this is probably not helped by the ongoing confusion over whether Covid was inadvertently released from a lab doing benign research, which several intelligence agencies consider a real possibility.)

Likewise, the Russian government has spread unsubstantiated rumors that Ukraine is running labs where bioweapons are being developed with the support of the US government, a campaign that was picked up and spread by prominent US media figures, including Tucker Carlson.

All this suggests that determining that a bioweapons attack is not taking place during an outbreak may be just as important an application for Ginkgo’s ENDAR technology as determining that one is.

Despite the increasing attention devoted to biosecurity at the government level, there’s reason to doubt that the US is actually becoming a more biosecure nation. The Trump administration recently canceled a $12 million grant to Harvard University for biosecurity research, despite warnings from Pentagon officials that this would pose national security risks.

Data as disinfectant

As Hoffman writes in The Dead Hand, many of the leading Soviet biologists who worked on the country’s bioweapons program did so under the sincere impression that their counterparts in the US were doing exactly the same thing. Once the Cold War ended, they were stunned to learn that the Americans had halted their offensive program decades before. But that only shows how the distrust and competition of an arms race can obscure reality.

With the rapid pace of advances of both synthetic biology and AI today, Endy worries about a new arms race mentality taking hold. Whereas nuclear competition has, since the dawn of the Cold War, been governed by the logic of mutually assured destruction (MAD), he worries that the dynamic of biosecurity will be governed by what he calls SAD — stupidly assured destruction.

“It’s really important to be thoughtful and cautious about accusations,” he says. “If we’re not careful about how we are framing and talking about weapons programs, we get this type of geopolitical autoimmune response that leads to some really bad policy outcomes. We don’t want to go down the deterrence path” — in other words, deterring an enemy from unleashing bioweapons by having more powerful ones of our own. Instead, Endy said, “we want to go down the resilience path” — building societal defenses from biological threats, natural or artificial.

The first step of building those defenses is knowing exactly what threats are out there.


From Vox via this RSS feed

14
 
 

As New York City’s mayoral election unfolds, it’s clear that public figures are feeling more and more emboldened to make openly racist statements against candidate Zohran Mamdani. | Yuki Iwamura/Getty Images

Throughout Zohran Mamdani’s campaign for New York City mayor, he’s faced a barrage of attacks that have only gotten worse since he handily won the Democratic primary two weeks ago. And this isn’t just happening at the local level; New York City’s mayoral race has drawn attention from across the country, and politicians and pundits have been fearmongering about Mamdani from afar.

Here’s just a sampling:

- On X, US Rep. Marjorie Taylor Greene (R-GA) shared a photo of the Statue of Liberty dressed in a burqa shortly after Mamdani’s victory, saying, “This hits hard.”
- US Rep. Brandon Gill, of Texas, criticized Mamdani for eating with his hands, saying, “Civilized people in America don’t eat like this.”
- US Rep. Andy Ogles, of Tennessee, who referred to the Democratic nominee for New York City mayor as “Zohran ‘little Muhammad’ Mamdani,” called for Mamdani to be denaturalized and deported.
- And shortly after Mamdani’s primary win, David Frum, an Atlantic staff writer, posted on X, “Well, at least we can retire that faded and false line, ‘antisemitism has no place in New York City.’”

It’s also not just conservatives. In an interview on CNN, US Rep. Eric Swalwell (D-CA) said, “I don’t associate myself with what [Mamdani] has said about the Jewish people,” without expanding on what, exactly, Mamdani has said. (While Mamdani has criticized Zionism and the Israeli government, he has not said anything negative about Jewish people.) Kirsten Gillibrand, New York’s Democratic senator, falsely claimed that Mamdani had made “references to global jihad” in a radio interview. She later apologized to Mamdani, according to her team, “for mischaracterizing Mamdani’s record and for her tone.”

Anti-Muslim and anti-immigrant bigotry is, of course, not a new feature of American politics. But the brazenness with which public figures now feel emboldened to make such openly racist statements seems to have reached a fever pitch. Since when, for example, is it acceptable to call for deporting American citizens? Here are three reasons why the racism against Mamdani in particular has been so extreme:

1) Trumpism has ushered in a new age of bullying

The kind of rhetoric directed at Mamdani is a product of an era of politics where hate speech and cruelty have become normalized. That has made public figures far more comfortable saying things in public that they might have thought twice about before. Stephen Miller, a senior Trump administration official, said that New York City is “the clearest warning yet of what happens to a society when it fails to control migration.” Even the president’s son retweeted a post that said, “I’m old enough to remember when New Yorkers endured 9/11 instead of voting for it,” adding, “New York City has fallen.” It’s no longer shocking to see members of Congress, pundits, and business leaders criticizing entire peoples and cultures within the US as un-American.

In addition, President Donald Trump has pushed anti-immigrant rhetoric since he launched his campaign for president in 2015, and he has only become more extreme since returning to office. In this new era, meanness is not only politically rewarded but openly embraced and promoted by the White House and a variety of official online accounts. Trump’s White House, for example, has turned videos of deportations into memes, staged dehumanizing photo ops, and used AI-generated images to make light of genuinely cruel policies. This kind of politics has made hate speech all the more acceptable.

Part of the reason the attacks on Mamdani — who was born in Uganda and is of Indian descent — have been so widespread is precisely because this type of rhetoric gets spewed from the very top of American politics and government on a regular basis.

Since Mamdani’s recent rise in New York City politics, there have been calls to deport him despite the fact that he moved to New York with his family when he was 7 and became a naturalized citizen of the United States in 2018. Trump himself has threatened to arrest Mamdani, saying, “Look, we don’t need a communist in this country.” That we are now at a point where we’re talking about deporting citizens is a new low, but it is a direct product of Trump’s style of politics, which has ushered in a new era of online bullying, extreme xenophobia, and open racism.

2) Mamdani is a victim of anti-Palestinian racism

As I wrote last year, anti-Palestinian racism specifically targets people because they support the cause of Palestinian liberation — even if they aren’t Palestinian themselves. This is why crackdowns on college campus protests were so extreme, and why the Trump administration has detained and attempted to deport international students, including non-Palestinians, simply because of what they have said about Israel.

Mamdani isn’t Palestinian, but has been a vocal critic of Israel and has a history of organizing and advocating for Palestinian rights. Like many activists in pro-Palestinian spaces, Mamdani has been baselessly smeared as antisemitic. That’s why comments like Frum’s have cropped up since Mamdani won his primary: By supporting Palestinians, Mamdani is inherently viewed as a threat — not just to Israel but to Jewish people as a whole.

Frum, for example, later posted on X about the NYC primary, “[…]people with zero (or worse) regard for Jewish life and Jewish safety scolding actual Jews about how wrong and stupid we are about Jewish life and Jewish safety.” But this has nothing to do with what Mamdani has said about Jewish people. In fact, Mamdani’s platform also includes addressing antisemitism in the city by dramatically increasing funding for hate-crime prevention.

Anti-Palestinian racism is still an acceptable form of bigotry that we often see displayed in American politics and media without receiving the kind of pushback that other forms of racism do. It also results in amplifying other forms of racism when its victims come from other marginalized groups. “Like many other forms of hate, there can be intersectionalities, and that’s also true when it’s allies of ours who are speaking for Palestinian human rights,” one expert told me last year. “If it’s a Black ally, we will see anti-Black racism. If it’s an Indigenous ally, we will see anti-Indigenous racism. [If it’s] queer allies, trans allies, we will see homophobic and anti-queer rhetoric.”

3) Islamophobia is broadly acceptable

Anti-Muslim bigotry has long been a constant in American politics, and it has been especially potent since the War on Terror. Former President Barack Obama, for example, was accused of being Muslim — as though that would disqualify him from public office — even though he is a Christian. Since Rashida Tlaib and Ilhan Omar were elected to Congress in 2018, they have been routinely victimized by smear campaigns and hate speech that has specifically targeted their identities. Tlaib, for example, has been accused by her colleagues in Congress of engaging in “antisemitic activity” and “sympathizing with terrorist organizations.” And at a fundraiser, Republican Rep. Lauren Boebert of Colorado called Omar the “jihad squad.”

Now, Mamdani is seeing his own Muslim background weaponized against him. It has been said that he comes “from a culture that lies about everything,” that he is uncivilized, and that he is a threat to people’s safety simply because New York might have a “Muslim mayor.”

There are still several months until New Yorkers head to the polls to vote for a new mayor in the general election. And unfortunately, this kind of open bigotry against Mamdani is likely to only get worse as Election Day nears. But while the attacks on Mamdani might seem like just one attempt at bringing down a candidate in a local race, their ultimate effect is much more damaging: They will make US politics all the more toxic and will only further normalize this kind of extreme bigotry against Muslims and immigrants in America.


From Vox via this RSS feed

15
 
 

President Donald Trump speaks to journalists aboard Air Force One on July 4, 2025. | Brendan Smialowski/AFP via Getty Images

This story appeared in The Logoff, a daily newsletter that helps you stay informed about the Trump administration without letting political news take over your life. Subscribe here.

Welcome to The Logoff: President Donald Trump is preparing to rev up his trade war again, even as he extends a pause on some tariffs until next month.

What just happened? Trump announced new tariff rates on multiple countries Monday in a slew of form letters, citing their “unsustainable Trade Deficits” with the US. The tariffs — including on Japan, South Korea, Malaysia, and numerous others — would take effect August 1.

The White House also said that previously announced “reciprocal” tariffs, which had been set to take effect this week, would be suspended until August 1.

What’s the context? Trump announced draconian tariffs on many countries in April, only to pause them a week later after financial markets cratered. He left in place a 10 percent “global tariff,” as well as tariffs on China (since reduced from their triple-digit highs).

Why is Trump doing this now? When Trump paused his tariffs in April, he said he would use the next 90 days to strike trade deals. But almost 90 days later, relatively few deals have been reached. Trump may now hope to force the issue with renewed threats of economic pain.

Will any of these tariffs actually take effect? Trump’s trade policy has been mercurial, to describe it generously, and it’s even unclear how he’s deciding on new tariff rates. Previously, a drastic dip in the markets managed to spook him into backing down, and the markets reacted negatively to his Monday announcement — but it’s unclear what we should expect at this point.

There’s also a pending court case over the legality of Trump’s tariffs, which he has imposed using emergency authority, but don’t expect a quick resolution there, as a ruling by the US Court of International Trade blocking the tariffs is currently on hold.

And with that, it’s time to log off…

The Toledo Mud Hens, a minor league baseball team affiliated with the Detroit Tigers, have a new mascot: Webley the kitten, who was rescued from the team’s stadium last week and has been adopted by a team employee. You can — and should! — read about his adventures here.


From Vox via this RSS feed

16
 
 

Kerrville resident Leighton Sterling watches flood waters along the Guadalupe River on July 4, 2025. | Eric Vryn/Getty Images

At least 90 people have died in central Texas in extraordinary floods, the deadliest in the Lone Star State since Hurricane Harvey killed 89 people.

A torrential downpour started off the July 4 weekend with several months’ worth of rain falling in a few hours, lifting water levels in the Guadalupe River as high as 22 feet. Among the dead are 27 children and counselors at a summer camp near Kerrville in Kerr County. One adult at the camp may have died trying to rescue children. More people are still missing, and more rain is in the forecast.

The storm arose from the fading remnants of Tropical Storm Barry, which formed on June 28, well ahead of schedule: the Atlantic hurricane season’s second named storm typically doesn’t form until mid-July. The weather system parked over Texas, where it converged with a band of moisture moving north, forming thunderstorms that squeezed out a torrential downpour.

With its topography of hills and rivers as well as a history of sudden downpours, this region in Texas has been dubbed “flash flood alley.” Kerrville itself experienced a deadly flood in 1987 when the Guadalupe River received 11 inches of rain in less than five hours, raising water in some portions by 29 feet. The flood killed 10 people.

But there were several factors that converged to make this storm so deadly — and not all of them had to do with the sheer amount of rain. Here are some things to know about disasters like this:

Texas isn’t in the tropics. How did it get hit so hard by a tropical storm?

Kerr County, population 54,000, is a couple hundred miles inland from the Gulf of Mexico, but it has a history of tropical storms and hurricanes passing through the region on occasion. So the leftovers from Tropical Storm Barry reaching the area isn’t too surprising. Scientists, however, are still trying to find out how storms that are powered by warm ocean water continue to get energy over land.

The recent flooding is occurring in an era where even “ordinary” storms are becoming more dangerous. Strong thunderstorms and tornadoes are a common sight in Texas summer skies and the state has a history of deadly floods. Over the years, the amount of rain falling from major storms has been increasing.

As average temperatures rise due to climate change, air can retain more moisture, which means when storms occur, there’s more water falling out of the sky, turning roads into rivers and submerging the landscape.

Did something go wrong here with the forecast or disaster warnings?

Ahead of the Texas floods, the Texas Division of Emergency Management activated its emergency response system on July 2 in anticipation of major floods, including mobilizing water rescue squads, helicopters, and road-clearing equipment. On July 3, the National Weather Service issued a flood watch. (NPR has a very useful timeline of the planning and response to the floods.)

But as the watches turned to warnings, they revealed gaps in the communication system. Some spots along the Guadalupe River, including in Kerr County, lack flood warning sirens. Officials there had contemplated installing a flood warning system, but the proposal was rejected as too expensive.

Text message alerts did go out, but they were sent in the middle of the night after the July Fourth holiday, when many people were camping or traveling in unfamiliar places. Parts of the county also have spotty cell service. And residents who did get the alerts weren’t sure what to do about them, whether to stay or evacuate, until the water levels were perilously high.

The National Weather Service this year has lost 600 employees to layoffs, buyouts, and retirements spurred by the Trump administration’s “Department of Government Efficiency.” Those losses included Paul Yura, the warning coordination meteorologist at the National Weather Service Austin/San Antonio office, which is responsible for Kerr County. However, National Weather Service staff said the office was operating normally during the floods and wasn’t dealing with a staff shortage.

In general, natural disasters are killing fewer people over time. There are a lot of reasons why, like stronger building codes that can better resist fires, floods, and earthquakes.

One of the most important lifesaving trends is better warning systems ahead of huge storms. Improvements in observations, a growing understanding of the underlying physics, and advances in computer modeling have led forecasters to build up their lead time ahead of severe weather. Researchers are even starting to get more forewarnings about volcanic eruptions and earthquakes.

But warnings are only effective if people have the knowledge and the tools to react to them. During floods, people often underestimate currents and try to cross dangerous submerged areas. “Purposely driving or walking into floodwaters accounts for more than 86% of total flood fatalities,” according to a study of flood deaths in the US between 1959 and 2019.

It is possible to protect lives against the forces of nature, but it requires a lot of parts working together — planning, infrastructure, forecasting, alerts, and evacuations.

Are floods getting more difficult to predict?

Not necessarily, but the baselines are changing.

Most assessments of flood risk are based on historical data. Local, state, and federal agencies can map out high watermarks of the past and show which properties might be at the greatest risk. But at best, these maps are conservative estimates; they don’t show the full potential of where water can reach. Often, flood maps aren’t revised regularly and don’t take into account how the risk landscape is changing.

For instance, more construction in an area can lead to more impervious surfaces that keep water from soaking into the ground and shunt it toward certain neighborhoods. Losing natural watersheds that normally soak up rain can increase the probability of floods. Overdrawing groundwater can cause land to sink.

In coastal areas, rising sea levels are increasing the reach of coastal flooding, while rainstorms inland are pouring out more water. Disasters can also compound each other. A major wildfire can wipe out trees and grasses anchoring soil, leading to floods and landslides when the rain comes, for example.

Inflation, growing populations, and rising property values mean that when floods do occur, they extract a much bigger price from the economy. Kerr County’s population has grown about 25 percent since 2000.

As a result, when it comes to floods, many people don’t even realize that they’re at risk. And even in the wake of a major inundation, the lessons are quickly forgotten.

One analysis showed that people buy more flood insurance after a major flood recedes, but gradually, they let their policies lapse, returning to the baseline insurance rate in three years in some cases. That’s why one of the biggest challenges in disaster risk reduction is simply trying to get people to understand that bad things can happen to them and they should prepare.


From Vox via this RSS feed

17
 
 

President Donald Trump’s recently passed Big Beautiful Bill features crippling cuts to health insurance, food stamps, and clean energy programs, yet also significant new spending on child care. The law invests $16 billion in three federal tax credit programs that haven’t been permanently updated in decades.

That a Republican-led Congress would lead on new child care spending is unusual and reflects changing political priorities, as child care becomes harder for families to afford and harder for businesses to ignore.

Traditionally, social conservatives have been skeptical of federal involvement in child care, viewing it as a family responsibility rather than a government role, and fearing that subsidies could edge out private or faith-based providers. Meanwhile, fiscal conservatives have spent years pushing to shrink — not expand — domestic spending.

Even Democrats, who made child care central to their social agenda, failed to deliver when they held unified control in 2021. Their sweeping Build Back Better agenda collapsed as advocates struggled to prioritize among competing demands and key lawmakers balked at the overall cost.

The new child care provisions — spearheaded primarily by Sen. Katie Britt of Alabama — deliver distinct nudges toward affordability, access, and infrastructure.

Lawmakers:

- Raised the cap on Dependent Care Flexible Spending Accounts for the first time since 1986, from $5,000 to $7,500. These employer-sponsored accounts let families pay for eligible expenses like day care, preschool, and summer camp using pre-tax dollars.
- Permanently expanded the tax credit for working parents, known as the Child and Dependent Care Tax Credit (CDCTC), for the first time since 2001. The typical benefit for a dual-earner, middle-income family with two kids will increase from $1,200 to $2,100.
- Tripled the tax credit for businesses that help employees find or provide care, with extra incentives for small employers. This was also last updated in 2001.
- Modestly expanded the Child Tax Credit (which can be used for a broader array of household expenses beyond child care), raising the maximum refundable portion from $1,600 to $1,800 per child.

Those likely to benefit from most of these investments are middle- and upper-middle-class parents, especially those with steady earnings and access to workplace benefits like flexible spending accounts. These families often face steep child care costs — median spending is around $800 per month, according to the Federal Reserve — but earn too much to qualify for most existing subsidies.

Very low-income families will have much less to celebrate since the child care credits can’t be claimed by households that owe little or no income tax. Lawmakers say they hope to address that gap later this year through new federal funding to a child care program that specifically serves low-income households.

The new investments are being praised by more moderate advocacy organizations, including the National Child Care Association, First Five Years Fund, the Early Care & Education Consortium, Child Care Aware of America, Moms First, and the U.S. Chamber of Commerce.

“Expanding child care tax credits in the Senate bill is a step in the right direction toward making care more affordable and accessible for families nationwide,” said Sarah Rittling, executive director of the bipartisan group First Five Years Fund. “We appreciate the Senate’s inclusion of these updates and want to thank Senator Katie Britt for her leadership.”

More progressive child care advocates have instead focused their response on the expected harm of welfare cuts to children, families, and child care staff. The National Association for the Education of Young Children did not mention the new child care investments at all in its statement on the bill. Julie Kashen, director of the Century Foundation’s division on women’s economic justice, blasted the child care provisions for not “giv[ing] parents more options or expand[ing] the child care sector and only giv[ing] a small number of families at most a few hundred extra dollars.”

Elliot Haspel, a child care advocate and author of the Family Frontier newsletter, described the new investments as “not game-changing, but fine” in a vacuum but emphasized that the investments don’t exist in a vacuum and cautioned against celebrating small wins in the context of a broader, more harmful bill.

A lobbyist who worked to secure the child care spending said that many child care-focused advocacy groups made little effort to support the push and were quick to criticize the outcome despite having failed to secure permanent federal wins under Biden.

“I don’t agree with Katie Britt on everything, but we do agree on the need to prioritize child care — and that means making sure she knows her political capital was well spent,” said the lobbyist, who spoke on background in order to describe private conversations with lawmakers and coalition partners. “Some of the cynicism you hear behind the scenes is, ‘Should Republicans even bother doing anything on child care?’ They feel like they’re damned if they do, damned if they don’t. If they act, they’re told it wasn’t generous enough, and people point to all the other bad things Republicans did — so in their view, it’s a losing issue. We need to show them that it’s not.”

How a freshman lawmaker became the key Senate force for child care

Britt, a first-term federal lawmaker, does not sit on the powerful Senate Committee on Finance, the panel responsible for writing the core elements of any reconciliation tax package. But in early 2024, she managed to turn a politically volatile moment for Republicans into a platform that significantly raised her profile within the party and earned her a degree of trust among both GOP leadership and the Trump campaign that she’s still leveraging today.

The catalyst was the Alabama Supreme Court’s February 2024 decision declaring frozen embryos to be “children,” a ruling that forced IVF clinics across the state to suspend services and that set off a national backlash. Republicans were caught flat-footed. Britt picked up the phone and called Donald Trump — then the party’s presumptive nominee — to urge him to publicly support IVF access. Trump soon repeated her message on Fox News, calling her “a very wonderful young senator” and effectively signaling to Republicans across the country that it was safe to back IVF without losing “pro-life” credentials.

Britt used her nationally televised response to Joe Biden’s State of the Union in March 2024 to reinforce that message. Then, with Sen. Ted Cruz, she introduced the IVF Protection Act, which threatened to withhold Medicaid funds from states that banned IVF without requiring states to cover IVF or explicitly declaring that embryos are not people. The bill didn’t advance, but it gave GOP lawmakers something to point to. By June, Britt and Cruz had organized a joint statement signed by all 49 Senate Republicans affirming IVF access, a symbolic show of unity that helped neutralize Democratic attacks.

That stretch positioned Britt as a credible party messenger on polarizing family policy, and she’s continued to lean into that role. In July 2024, she introduced a bipartisan bill with Sen. Tim Kaine (D-VA), the Child Care Availability and Affordability Act, aimed at expanding tax credits to reduce child care costs and increase provider supply. While the workforce aspects of the legislation have stalled, Britt lobbied hard for its core tax provisions over the last few months.

She met with every member of the Senate Committee on Finance to pitch the child care tax credits as a political imperative, according to advocates involved in the process. If Republicans wanted to campaign in the fall and during the midterms on a pro-family message, she stressed to her GOP colleagues, they needed to deliver tangible policy. Britt’s team pointed to polling commissioned by the women’s economic security group Engage showing that 75 percent of voters said they’d be more likely to support a candidate backing the Britt-Kaine proposal.

That message, combined with her reputation after the IVF fight, helped her build a coalition for child care tax credits that included not just moderates but even spending hawks like DOGE committee chair Sen. Joni Ernst.

Why child care made it into the BBB — and almost didn’t

Some of the child care ideas Congress considered were more controversial than others. Deeper investment in 45F, the employer child care tax credit that covers up to 25 percent of an employer’s costs for care, was one of the more popular ideas, and it reflected a shifting consensus on child care’s role in the economy. For decades, employer child care was a fairly rare, high-end perk. It usually came from large, white-collar firms offering on-site centers as a recruiting incentive. It wasn’t a workforce necessity.

That began to change during the pandemic, when millions of working parents — especially in health care, hospitality, and manufacturing — were forced to leave jobs or reduce hours amid a near-total collapse of child care. Since then, businesses have become more vocal about the need for structural solutions. Last year, the US Chamber of Commerce and the federal Commerce Department co-hosted the first national Child Care Innovation Summit, where employers shared how offering care benefits improved retention, reduced absenteeism, and boosted profits. Companies began lobbying for changes to 45F to allow more businesses to access it.

Rep. David Kustoff, a Republican from Tennessee, introduced legislation to expand 45F early in 2024, and remained a consistent advocate as the reconciliation bill took shape.

But the other major child care provision on the table — the Child and Dependent Care Tax Credit, which offsets care costs for working parents — had fewer champions prioritizing it. Some House Republicans, including Rep. Blake Moore of Utah, raised concerns that the CDCTC unfairly favors dual-earner households. Because the credit only applies when both parents are working or in school, social conservatives like Moore argue it penalizes stay-at-home parents. This discomfort was compounded by other proposals in the tax package, such as new Medicaid work requirements, exposing tensions within the GOP that lawmakers never really resolved.

When the House passed its version of the bill, it included a modest increase to the Child Tax Credit, another pro-family benefit that can be used for a wide array of household expenses, as well as an expanded 45F credit. But the House left out expansions to the CDCTC and the pre-tax Dependent Care Assistance Program. Advocates entered Senate negotiations feeling relatively discouraged.

That changed in part because of Britt’s lobbying and continued pressure from business groups. Last month, over 60 local chambers of commerce and the US Chamber of Commerce sent a letter to Senate leadership urging them to expand both 45F and the CDCTC. Without changes to the latter, they argued, the bill “misses a critical chance to deliver real relief to families for whom child care remains one of the most burdensome monthly costs.”

The final package included a $9.3 billion increase to the CDCTC, a $6 billion boost to DCAP, and a $700 million expansion to 45F.

Still, the Senate’s expansion of the Child Tax Credit proved much more modest: Lawmakers raised the maximum refundable portion from $1,600 to just $1,800 per child amid intense pressure to limit new spending. Unlike the targeted child care provisions, the CTC reaches a much broader share of families, making it far more expensive to expand. Pushing the CTC higher or making it fully refundable would have added tens of billions to the bill’s cost, and many GOP senators remained wary of eliminating the credit’s work requirements — even though keeping them leaves 17 million children receiving little or no benefit from the credit.

Child care advocates like Sarah Rittling say it’s important to acknowledge the progress made while continuing to push for more.

“We can’t control Congress or its agenda, but we feel very strongly this is an opportunity to help families and move the ball forward,” she told Vox. “That doesn’t mean we’ll lay off the pedal when it comes to the annual appropriations, and we know many members of Congress aren’t done either. It’s on us, as advocates, to show what else still needs to be done.”

This work was supported by a grant from the Bainum Family Foundation. Vox Media had full discretion over the content of this reporting.


From Vox via this RSS feed


Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. To submit a question, fill out this anonymous form or email [email protected]. Here’s this week’s question from a reader, condensed and edited for clarity:

I graduate college soon, and like everyone around me, I’m working hard to find a job. But unlike those around me, I have a sense for how inactivity enlivens me — I get lots of joy from silence, reflection, and complete agency over my mind. I’ve quit most social media, and I got into meditation a while ago and never looked back. This awareness makes me tilt towards a life that optimizes for this. But I also have very altruistic leanings, which could become serious scruples if I don’t do good in the world.

Should I be trying to balance the pursuit of two seemingly opposed life goals — pursuing true happiness through inactivity and contemplation (as hypothesized by thinkers like Aristotle and Byung-Chul Han) and striving to do good in the world through robust goal-oriented action? The first is indifferent to which ends (if any) one’s life contributes to, as long as it is blanketed in leisurely contemplation and true inactivity. The second invites and rewards behaviors that are constantly opposed to prolonged inactivity (working efficiently, constantly learning, etc). So I really don’t know how to handle this.

Dear Contemplative and Caring,

Matthieu Ricard is known as the “world’s happiest man.” When he lay down in an MRI scanner so scientists could look at his brain, they saw that the regions associated with happiness were exploding with activity, while those associated with negative emotions were nearly silent. The scientists were stunned. How did his brain get that way?

The answer: 60,000 hours of meditation. See, Ricard grew up in France, earned a PhD in genetics, and then, at age 26, abandoned a bright scientific career in favor of going to Tibet. He became a Buddhist monk and spent nearly three decades training his mind in love and compassion. The result was that one stupendously joyous brain.

But what if he’d instead spent 60,000 hours bringing joy to other people?

Philosopher Peter Singer once put this question to Ricard, basically asking if it was self-indulgent to spend so much time in a hermitage when there are problems in the world that urgently need fixing. Ricard gave a complex answer, and I think looking at all three components of it will be helpful to you.

Have a question you want me to answer in the next Your Mileage May Vary column?

Feel free to email me at [email protected] or fill out this anonymous form! Newsletter subscribers will get my column before anyone else does and their questions will be prioritized for future editions. Sign up here!

For one thing, Ricard pointed out that there are many different values in life. Helping other people is absolutely a wonderful value. But there are others, too: art, for instance. He noted that we don’t go around scolding Yo-Yo Ma for the thousands of hours he spent perfecting the cello; instead, we appreciate the beauty of his music. Spiritual growth through contemplation or meditation is like that, Ricard suggested. It’s another value intrinsically worth pursuing.

Ricard also emphasized, though, that helping others is something he values very deeply. Just like you, he prizes both contemplation and altruism. But he doesn’t necessarily see a conflict between them. Instead, he’s convinced that contemplative training actually helps you act altruistically in the world. If you don’t have a calm and steady mind, it’s hard to be present at someone’s bedside and comfort them while they’re dying. If you haven’t learned to relinquish your grip on the self, it’s hard to lead a nonprofit without falling prey to a clash of egos.

Still, Ricard admitted that he is not without regret about his lifestyle. His regret, he said, was “not to have put compassion into action” for so many years. In his 50s, he decided to address this by setting up a foundation doing humanitarian work in Tibet, Nepal, and India. But the fact that he’d neglected to concretely help humanity for half a century seemed to weigh on him.

What can we learn from Ricard’s example?

For someone like you, who values both contemplation and altruism, it’s important to realize that each one can actually bolster the other. We’ve already seen Ricard make the point that contemplation can improve altruistic action. But another famous Buddhist talked about how action in the wider world can improve contemplation, too.

That Buddhist was Thich Nhat Hanh, the Zen teacher and peace activist who in the 1950s developed Engaged Buddhism, which urges followers to actively work on the social, political, and environmental issues of the day. Asked about the idea that people need to choose between engaging in social change or working on spiritual growth, the teacher said:

I think that view is rather dualistic. The [meditation] practice should address suffering: the suffering within yourself and the suffering around you. They are linked to each other. When you go to the mountain and practice alone, you don’t have the chance to recognize the anger, jealousy, and despair that’s in you. That’s why it’s good that you encounter people — so you know these emotions. So that you can recognize them and try to look into their nature. If you don’t know the roots of these afflictions, you cannot see the path leading to their cessation. That’s why suffering is very important for our practice.

I would add that contact with the world improves contemplation not only because it teaches us about suffering, but also because it gives us access to joyful insights. For example, Thich Nhat Hanh taught that one of the most important spiritual insights is “interbeing” — the notion that all things are mutually dependent on all other things. A great way to access that would be through a moment of wonder in a complex natural ecosystem, or through the experience of pregnancy, when cells from one individual integrate into the body of another seemingly separate self!

At this point, you might have a question for these Buddhists: Okay, it’s all well and good for you guys to talk about spiritual growth and social engagement going hand-in-hand, but you had the luxury of doing years of spiritual growth uninterrupted first! How am I supposed to train my mind while staying constantly engaged with a modern world that’s designed to fragment my attention?

Part of the answer, Buddhist teachers say, is to practice both “on and off the cushion.” When we think about meditation, we often picture ourselves sitting on a cushion with our eyes closed. But it doesn’t have to look that way. It can also be a state of mind with which we do whatever else it is we’re doing: volunteering, commuting to work, drinking a cup of tea, washing the dishes. Thich Nhat Hanh was fond of saying, “Washing the dishes is like bathing a baby Buddha. The profane is the sacred. Everyday mind is Buddha’s mind.”

But I think it’s really hard to do that in any kind of consistent way unless you’ve already had concerted periods of practice. And that’s the reason why retreats exist.

Buddhist monks commonly do this — sometimes for three years, or for three months, depending on their tradition — but you don’t have to be a monk or even a Buddhist to do it. Anyone can go on a retreat. I’ve found that even short, weekend-long retreats, where you’re supported by the silent company of other practitioners and the guidance of teachers, can provide a helpful container for intensive meditation and catalyze your growth. It’s a lot like language immersion: Sure, you can learn Italian by studying a few words on Duolingo alone each night, but you’ll probably learn a whole lot faster if you spend a chunk of time living in a Tuscan villa.

So here’s what I’d suggest to you: Pursue a career that includes actively doing good in the world — but be intentional about building in substantial blocks of time for contemplation, too. That could mean a year (or two or three) of meditative training before you go on the job market, to give you a stable base to launch off from. But it could also mean scheduling regular retreats for yourself — anywhere from three days to three months — in between your work commitments.

More broadly, though, I want you to remember that the ideas about the good life that you’re thinking through didn’t emerge in a vacuum. They’re conditioned by history.

As the 20th-century thinker Hannah Arendt points out, vita contemplativa (the contemplative life) has been deemed superior to vita activa (the life of activity) by most pre-modern Western thinkers, from the Ancient Greeks to the medieval Christians. But why? Aristotle, whom you mentioned, put contemplation on a pedestal because he believed it was what free men did, whereas men who labored were coerced by the necessity to stay alive, and were thus living as if they were enslaved whether they were literally enslaved or not.

In our modern world, Arendt notes, the hierarchy has been flipped upside down. Capitalist society valorizes the vita activa and downgrades the vita contemplativa. But this reversal still keeps the relationship between the two modes stable: It keeps them positioned in a hierarchical order. Arendt thinks that’s silly. Rather than placing one above the other, she encourages us to consider the distinct values of both.

I think she’s right. Not only does contemplation need action to survive (even philosophers have to eat), but contemplation without action is impoverished. If Aristotle had had an open-minded encounter with enslaved people, maybe he would have been a better philosopher, one who challenged hierarchies rather than reinforcing them.

It can be perfectly okay, and potentially very beneficial, to spend some stretch of time in pure contemplation like Aristotle — or like the Buddhist monk Ricard. But if you do it forever, chances are you’ll end up with the same regret as the monk: the regret of not putting compassion into action.

Bonus: What I’m reading

Not only does modern life make it hard to think deeply and contemplatively — with the advent of AI, it also risks homogenizing our thoughts. The New Yorker’s Kyle Chayka examines the growing body of evidence suggesting that chatbots are degrading our capacity for creative thought.

This week, I learned that rich Europeans in the 18th century actually paid men to live in their gardens as…“ornamental hermits”? Apparently it was trendy to have an isolated man in a goat’s hair robe wandering around in contemplative silence! Some scholars think the trend took off because philosopher Jean-Jacques Rousseau had just argued that people living in a “state of nature” are morally superior to those corrupted by modern society.

Twentieth-century Trappist monk Thomas Merton was a great lover of stillness. His poem “In Silence” is mainly an ode to the contemplative life. But he ends the poem with these cryptic lines: “How can a man be still or listen to all things burning? How can he dare to sit with them when all their silence is on fire?”


From Vox via this RSS feed


For the last half-century, America’s population growth has been concentrated in the sweltering, equal parts bone-dry and waterlogged, yet ever-sprawling Sunbelt. Undeterred by the limits of hydrology or climate, metro areas from Las Vegas to Miami have gotten one thing undeniably right. They have long led the country in housing construction, resulting in a relative plenitude and affordability that shames coastal cities in California and the Northeast, as well as a booming industry of takes imploring blue cities to learn from red states on housing.

But that abundance is already becoming a thing of the past. Across Sunbelt metros like Phoenix, Dallas, and Atlanta, housing supply growth has actually plummeted since the early 2000s, to rates almost as low as in hyper-expensive coastal cities, according to a new working paper by the leading urban economists Edward Glaeser and Joe Gyourko. Housing costs in these metros, while still lower than major coastal cities, have surged as a result.

Gyourko, a professor at the University of Pennsylvania’s Wharton School who decades ago documented slowing housing growth in superstar cities like New York and San Francisco, told me he was surprised to find the same pattern again unfolding, as if on a 20-year lag, in a region known for its lax regulations and enthusiasm for building things. Looking at the data, he thought, “Wow, Phoenix and Miami look like LA did as it was gentrifying in the ’80s.”

Although metro Phoenix, to unpack one example, is building a similar absolute number of homes as it did in the early 2000s, its population has grown by more than 58 percent since the turn of the century, so as a share of its current housing stock — the number that most matters, Gyourko says — it’s now building far less. If that trend continues even as demand to live in the Sunbelt remains undimmed, he said, “you would expect them to start to look more and more like Los Angeles.” By 2045, Arizona might be facing unaffordability and population loss crises much like those choking California today.

Chart showing rates of housing supply growth decreasing steadily each decade from 1950 to today, in Atlanta, Dallas, Detroit, LA, Miami, and Phoenix.

For many years, suburbs and exurbs have been the leading drivers of housing growth in Sunbelt cities, capturing most of the new population moving to the region. “The concepts ‘Sunbelt city’ and ‘suburb’ are nearly synonymous,” as historian Becky Nicolaides put it. But the slowdown in new housing builds across the region, Glaeser and Gyourko found, has been especially pronounced in well-off, low-density suburbs with desirable amenities like good schools. These suburbs have plenty of room to densify and welcome more neighbors — they just aren’t doing it.

“America’s suburban frontier,” the authors warn, “appears to be closing.”

The findings suggest that the fundamentals of housing in Raleigh, Orlando, or Miami are not so different from every other hot real estate market in the country. In most parts of the US with a growing economy and good jobs, the housing market has become badly broken to a degree that transcends the usual explanations, like regional differences in construction licensure rules or environmental review requirements — although those factors, without a doubt, matter.

So what’s really going on? Housing markets are complicated, and economic shocks like the Great Recession and the recent spike in interest rates have surely played a role. But the downturn in housing builds predates both those things, Glaeser and Gyourko found, suggesting a deeper cause. The Sunbelt may be confronting the same obstacle that has paralyzed growth elsewhere. It’s one of the most taken-for-granted facts of modern American life: the suburban model itself, and all its attendant political, regulatory, and financial problems.

Since the end of World War II, housing supply growth in the United States has overwhelmingly been driven by suburban sprawl radiating ever outward from city centers. Instead of building up, with density, we largely built out. But that engine may be running out of steam — and as a strategy for filling our national housing shortage, it’s failing spectacularly.

“It hasn’t been working in the supply-constrained coastal markets for four decades. What’s new is it looks like it’s starting not to work in the Sunbelt,” the country’s fastest-growing, most economically dynamic region, Gyourko said. “That changes the nature of America.”

The strangeness of housing policy in the US can be summed up like this: On a national level, we long for growth. On a local level, we do everything possible to smother it. That contradiction stems, in part, from our dependence on sprawl.

America is a nation of suburbs — that’s certainly not changing any time soon. And there’s nothing inherently wrong with suburbs, a housing arrangement as old and varied as human civilization. But to solve the housing crisis that is at the root of so many national problems, Americans will have to fundamentally rethink what the suburb is, and what it could become.

American suburbia, briefly explained

If you, like me, are too online for your own good, perhaps you’ve seen some version of this meme.

![A screenshot of a tweet by "bob's burgers urbanist" (@yhdistyminen). The tweet reads: "This kind of smart, walkable, mixed-use urbanism is illegal to build in many American cities." The attached image is a still from the animated show "Bob's Burgers," showing the street view of the restaurant and its neighboring buildings: colorful, multi-story buildings built close together, with businesses like Bob's Burgers and Jimmy Pesto's Pizzeria on the ground floor and apartments above, depicting a dense, traditional city street with power lines overhead.](https://platform.vox.com/wp-content/uploads/sites/2/2025/06/dwvm9h70bm591.jpg?quality=90&strip=all&crop=0%2C26.653797432782%2C100%2C46.692405134435)

That image is a pretty accurate reflection of what American cities used to look like by default. Our suburbs, too, once looked much like this — remnants of the pattern can still be seen in places like Oak Park, Illinois (a suburb of Chicago), University City, Missouri (outside St. Louis), or Brookline, Massachusetts (neighboring Boston). Derived from the Latin word “suburbium,” meaning the area under or near a city, suburbs are so old that if you’ve ever thought about them, congratulations, you’ve been thinking about the Roman Empire.

Of course, what dense, older suburbs like Brookline or Oak Park have in common is that, like the cities they neighbor, they were largely laid out before mass car ownership. It was only relatively recently that suburbs became synonymous with a specific, car- and sprawl-oriented development style.

If the Western frontier defined American optimism in the 19th century, the suburban frontier defined it in the 20th. It’s a story you may already know in broad strokes: Before World War II, only a small share of Americans lived in suburbs, with the bulk living in rural areas and central cities. After the war, a complex alchemy of factors — including a national economic and population boom, federally backed mortgages that favored suburban homes, a Great Depression- and war-era housing shortage, and white flight — produced one of the greatest social and spatial transformations in the country’s history.

It would be easy, from our 21st-century perspective, to simply be bewildered by the urban planning decisions that fueled this wave of suburbanization. But those choices make a lot more sense when framed by the daily realities of mid-century urban life. Much of the prewar urban housing stock was genuinely terrible — many people lacked access to a full bathroom or even a flush toilet. Cities were still manufacturing centers and had the pollution to go with it. Americans who could afford to move were understandably pulled toward modern, spacious houses being built on an unprecedented scale in new tracts outside the city.

A high-angle, black and white aerial photograph of a sprawling, post-World War II American suburban development. Hundreds of nearly identical, two-story single-family homes are arranged in uniform rows along curving streets. The community is carved out of a dense forest, which borders the neighborhood and stretches into the background where a body of water is visible.

As this shift took place, the nature of the suburbs changed, from an organic extension of the city to what must have looked, to some at the time, like an alien planet. By 1970, most Americans dwelling in metropolitan areas — meaning a core city and its adjacent areas — were living in suburbs, and by 2010, most Americans were, full stop. Sunbelt cities like Phoenix and Las Vegas, which in the decades after World War II grew from little more than desert towns to megacities, developed in a particularly suburban, car-dependent form.

For a long time, that model worked well for a lot of people. But there was a problem that slowly made itself felt: Though it was itself the product of a major transformation, postwar American suburbia relied on a restrictive set of rules that made suburban neighborhoods, once built, very difficult to change. Irrationally rigid regulations on housing remain in place across the country today. If you live in a single-family home, there’s a very good chance you’re banned from dividing your house into a duplex, redeveloping it into a triplex or apartment building, renting out a floor to a tenant, or opening a corner store.

These rules are set out by a system known as zoning: local regulations on what kind of things can be built where. Zoning, including single-family-exclusive zoning, first spread across the US in the early 20th century (before that, development was far more freewheeling and improvised). It reached its full expression after World War II, when it became a near-universal toolkit for shaping suburban America.

At first glance, the idea of zoning seems reasonable enough: Factories that emit toxic pollutants should probably be kept away from residential areas, for example. In practice, it has amounted to an extraordinarily heavy-handed, totalizing form of central planning controlling the fabric of daily life.

The overwhelming majority of residential land nationwide, as housing advocates are fond of saying, bans the construction of anything other than detached single-family houses — and that’s just the beginning. Zoning codes include legions of other rules, often including minimum size requirements (effectively banning starter homes) and mandatory parking spots for at least two cars. Apartments, in many areas, are zoned out of existence.

Suburbs exist all over the world. But the US, despite our national reputation for freedom and individualism, is nearly unique in having such a prescriptive segregation of land uses governing what people are allowed to do with what is, don’t forget, their own property, as Sonia Hirt, an architecture and planning professor at the University of Georgia, explains in her book Zoned in the USA.

We’re also unusual in granting privileged status to one specific, costly, and resource-intensive type of housing. “I could find no evidence in other countries that this particular form — the detached single-family home — is routinely, as in the United States, considered to be so incompatible with all other types of urbanization as to warrant a legally defined district all its own, a district where all other major land uses and building types are outlawed,” Hirt writes.

Suburban-style zoning has become widespread not just in suburbs proper, but also in core cities, many of which have adopted similar zoning codes that would have made their original growth and housing diversity impossible.

In that sense, suburbia isn’t just a specific place — it’s a mindset that’s become the default American settlement pattern. For mid-century home buyers, the costs of our suburban revolution were distant. But it didn’t take long for those costs to become felt nationally.

The suburban wall

By rigidly defining what a community is allowed to look like, suburban zoning has done more than simply shape the physical form of our cities. It has also made it all but impossible for many communities to adapt and grow, as human societies always have, which has created severe distortions in housing markets.

“The suburban development model is built on the premise of stasis,” as Charles Marohn, a civil engineer and founder of the advocacy group Strong Towns, has put it. “These neighborhoods are frozen at their current number of households, no matter how much the surrounding city transforms. No matter how many jobs are created. No matter how desirable the area is or how high rents get,” he wrote in his recent book Escaping the Housing Trap.

That stasis quickly froze America’s most desirable metro areas, leaving them unable to build enough housing to meet demand. And when housing becomes scarce relative to the number of people who want to live in the community, it simply becomes more expensive.

Starting in the 1970s, home construction plummeted and prices soared in high-opportunity coastal cities because of restrictions on supply. Los Angeles, incredibly, downzoned its residential areas so aggressively between 1960 and 1990 that its total population capacity, as measured by the number of households it’s zoned to accommodate, declined from 10 million people to about 4 million, roughly where the city’s population has hovered for the last decade.

The upshot is that many of America’s metropolitan areas have become dominated by what economist Issi Romem identified in 2018 as a “dormant suburban interior.” After World War II, cities and suburbs built out and out, mostly low-density single-family homes, before they largely stopped building altogether because zoning laws forced them to maintain an inflexible suburban form. Despite a few pockets of dense growth, most residential areas have been locked out of building incrementally and thickening up, even as demand to live there increases.

A gif of a map showing the Los Angeles area building progressively less housing every decade since 1940.

When a high-demand city refuses to allow greater housing density, the dynamic becomes progressively more toxic, not just because homes become more scarce, but also because market incentives can push developers to replace cheaper, smaller single-family houses with more costly McMansions (as opposed to, in a healthier market, building apartments or a set of townhomes that could house more people in the same amount of space, for less money per household).

In expensive cities, proposals to build more housing have, famously, often been blocked by angry neighbors (derisively called NIMBYs) who rely on a labyrinthine tangle of zoning laws to foil change that they don’t like. Now, that vicious cycle is poised to catch up with the South and Southwest, where, Glaeser and Gyourko believe, the decline in housing starts is likely a function of incumbent residents using regulation to make it harder to build.

“People in the Sunbelt, now that things have gotten big enough, they’ve figured out what the Bostonians figured out a long time ago, and the Angelenos,” Gyourko said. (And plenty of anecdotal evidence from local housing fights in the Sunbelt, Slate reporter Henry Grabar has noted, points to the same thing.)

Suburbia offered Americans an implicit bargain: Neighborhoods would never have to change, and we could instead accommodate more people by sprawling outward forever. To a great extent, that’s what’s happened, and it’s given us lots of single-family homes, but also a mind-boggling expanse of costly, deteriorating infrastructure, nightmare commutes, unrestrained greenhouse gas emissions, sedentary, disease-promoting lifestyles, and one of the highest car crash death rates in the developed world.

And we’re still in a housing crisis, because even in the sprawl-hardened, car-loving Southwest, sprawl has its limits. I put this question to Gyourko: Once the most distant, low-density exurbs of, say, Dallas declare themselves full, why don’t developers simply keep building the next ring of sprawl 50-plus miles away? “People don’t want to live that far,” he said (he later clarified that we don’t know the precise outer limit beyond which housing demand dwindles). Human prosperity has always depended on proximity to one another and to opportunity — and even in 2025, it still does.

Let people do things

The US has gotten steadily more suburban over the last century, but not uniformly so. In the early 2010s, many core cities, including Denver, Atlanta, and Washington, DC, grew faster than suburbs, due to a combination of younger generations’ increasing interest in urban lifestyles and a collapse in suburban home construction after the Great Recession.

Some of the most expensive homes in the country are consistently those located in dense, vibrant prewar cities, a clear signal that there’s high demand for those amenities. The revival of cities over the last few decades and the ongoing suburbanization of the US, Gyourko said, have been happening simultaneously.

Nevertheless, many Americans today, particularly post-Covid, still demonstrate a preference for the suburbs, for all sorts of reasons, including cheaper, larger homes for families of the kind that can be hard to find in cities. Americans are also spending more time alone and at home, and working remotely, which might increase their preference for spacious living quarters and diminish interest in urban life.

“There’s a pendulum that swings between loving the city and loving the suburbs, and it was absolutely shifting towards loving the city” in the 2010s, Romem told me. “And then the pandemic came and undid all of that.”

The disruptions of Covid also revealed the fragility created by American-style urban planning. Because of our preexisting shortage of about 3.8 million homes, a small share of Americans moving residences upended housing markets across the country.

We’re starting to see big shifts in housing policy

Plenty of cities and states, especially since the post-2020 run-up in home prices, have finally begun to take their largely self-inflicted housing shortages seriously. “A bunch of broken policies that seemed unfixable a year ago are actively being fixed,” said M. Nolan Gray of California YIMBY.

The sheer volume of new laws meant to make it easier to build homes has been overwhelming, reflecting the morass of local obstacles. Here are just a few:

2016: California made it much easier to build accessory dwelling units (ADUs), also known as mother-in-law suites or granny flats, alongside houses on single-family lots. The state has since passed several additional laws to close loopholes that localities were using to block ADU construction.

2018: Minneapolis became the first major US city to end all single-family-exclusive zoning, prompting national discussion about why we ban apartments in residential areas at all.

2019: Oregon required municipalities larger than 10,000 people to allow duplexes, and those over 25,000 to allow duplexes, triplexes, and other multi-family housing, on single-family lots.

2023: Montana and Washington state required many cities and suburbs to allow multi-family housing and ADUs.

2025: California exempted apartment construction in its cities and suburbs from onerous environmental review requirements that in practice have often been weaponized to block density. North Carolina’s House unanimously passed a bill to prevent local governments from requiring parking spots — which are expensive and take up lots of space — in new housing.

If it sounds draconian for states to interfere in cities’ and suburbs’ policies, consider that the US is unusual in its hyperlocal control over housing. Although huge barriers remain, we’re just beginning to see the contours of a major shift in how housing in America gets regulated and built.

Skyrocketing housing prices since the pandemic have given new fuel to the YIMBY (or “Yes in my backyard”) movement, which for more than a decade has sought to legalize the full diversity of housing options across the US. At bottom, YIMBYism is about freeing cities and suburbs from “the zoning straitjacket,” as Gray, an urban planner and senior director of legislation and research at California YIMBY, put it. In other words, he said: “Let people do things.”

“A city is the ultimate form of emergent order. A city represents the plans of the millions of people who live there and work there and play there and study there,” he said. “The basic instinct of zoning is that we can sit down and write out the exact appropriate types of uses, scale of those uses, and exactly where those uses can go — and it’s just such a presumptuous way to govern a city.”

The deeper implication is not just that we need more housing, but also that suburbs must be allowed to function like the miniature cities they are. They should be flexible enough to support a range of human aspirations — not just the hallmarks of stereotypical suburban life, but also the amenities of urban life. “No neighborhood can be exempt from change,” as Marohn put it.

Zoning exclusively for detached single-family homes, for example, has never made much sense, but it especially doesn’t make sense in 2025, when most Americans live in one- or two-person households. Recognizing this, along with the severity of their housing crises, a number of cities and states have gotten rid of single-family-exclusive zoning in the last decade, along with other barriers to building housing. But because zoning codes are enormously complicated, repealing one barrier often isn’t enough to actually allow multifamily housing to get built — things like height limits or excessive parking minimums can still make it infeasible.

“Housing is like a door with a bunch of deadbolts on it,” Alli Thurmond Quinlan, an architect and small-scale developer based in Fayetteville, Arkansas, told me. “You have to unlock all the deadbolts, but as soon as you do, there’s an enormous amount of human creativity” that rushes in. She stresses that communities shouldn’t be afraid of going too far in repealing zoning rules, and that if anything, they should err on the side of going further.

Repeal minimum lot sizes, and a developer might find a way to build a cute narrow house in a gap between existing houses. Removing parking requirements made it possible to build this lovely set of densely clustered cottages — a development style that can blend harmoniously into suburban neighborhoods — in Atlanta at a significantly lower cost:

A cottage court in Atlanta: four small homes of varying sizes and colors, clustered around a shared green lawn among mature trees.

A single-family house, meanwhile, can be turned into a duplex:

A modern two-story red brick duplex, built as infill on a residential street between two older, traditional houses, including a Victorian home in yellow and cream.

Right now, what little density is being added to cities and suburbs often comes in the form of large apartment buildings (you may know them as “gentrification buildings”). There’s nothing wrong with those, and they have an important role to play in mitigating the housing shortage. Yet many people don’t want them built in single-family neighborhoods. Making it legal to incrementally densify single-family neighborhoods would allow suburbs to still look like suburbs, while greatly increasing their population capacity and their ability to support essential services like public transit.

“The dormant suburban sea is so vast that if the taboo on densification there were broken, even modest gradual redevelopment — tearing down one single-family home at a time and replacing it with a duplex or a small apartment building — could grow the housing stock immensely,” Romem wrote in 2018.

That style of neighborhood development — gradually over time, rather than building to completion all at once — also happens to be the secret to creating places with a visually appealing vernacular character, Romem said. “True character comes from layer upon layer over a span of many years, from many people’s different, disparate decisions. And that requires change.”

What should suburbs be for?

At the dawn of mass suburbanization, Americans had legitimate reasons for wanting to move out of cities, where substandard housing and overcrowding were still commonplace. But “one generation’s solutions can become the next generation’s problems,” as journalists Ezra Klein (a Vox co-founder) and Derek Thompson wrote in their book Abundance. The same forces that built the American dream 80 years ago are now suffocating it, inflicting profound pain on families across the country.

For me, this subject is personal: I’ve lived in apartments literally my entire life, a form of housing often treated as second-class, if it’s even permitted to exist. Some of that has been in cities, and some in a suburb. My immigrant mother worked incredibly hard to find homes that were safe, stable, and affordable to raise a child in. America gave me everything, but our national housing reality made things far more difficult for her than they needed to be.

There’s no shortage of wonky policy ideas about how to fix housing in the US — and they go far beyond just zoning codes (you don’t want to hear me get started on building codes or impact fees). We will also need a society-wide paradigm shift beyond policy: The financial and real estate industries will need to relearn models for supporting incremental densification, which, experts consistently told me, have fallen by the wayside since the entrenchment of sprawl and restrictive zoning.

More than that, our minds will have to open up to the inevitability of constant change, and abandon the idea that any of us has a right to veto our community’s evolution. As Marohn points out in Escaping the Housing Trap, “a community that has lost all affordable starter housing already has changed irreversibly. It is only the buildings that have not.”

The suburbs, above all, must be allowed to be plural. Across cultures and centuries, people of all sorts of circumstances have lived on the outskirts of urban life. Today, Americans of every social class seek homes in the suburbs. Some are affluent; many are not. Others want to be near a good school, a job, a support system, or simply a hard-won foothold of affordability. It’s not the role of a planning board or a legacy zoning map to decide. We don’t know what the future of the suburbs will be — but we can free them to become what we need of them.


From Vox via this RSS feed

People react after getting their certificate of naturalization during a naturalization ceremony at the JFK Library in Boston, Massachusetts, on May 22.

President Donald Trump is reviving a familiar playbook to target naturalized US citizens.

The Justice Department recently announced a new push to strip certain people of their citizenship through denaturalization proceedings. Individuals who pose a danger to national security, have committed violent crimes, or fail to disclose a felony history (or make other misrepresentations) on their citizenship application are among those now being prioritized for denaturalization and deportation. In doing so, the administration is likely seeking to expand an authority that the Supreme Court drastically limited decades ago.

The president and White House officials have suggested that some prominent denaturalization targets could include one-time Trump megadonor Elon Musk, with whom the president had a public falling out, and Zohran Mamdani, a progressive who recently won the Democratic nomination for mayor of New York City. It’s not clear, however, what legitimate grounds the administration might have to denaturalize either of them.

The news may rattle any of the estimated 24.5 million naturalized citizens currently living in the US. That might especially be the case for those who have voiced opposition to Trump, given that his administration has already weaponized immigration policy against dissidents.

Ostensibly, denaturalization is about protecting the integrity of the citizenship process. In practice, the new push “is about targeting speech the government doesn’t like, and it is chilling all naturalized citizens,” said Amanda Frost, a professor at the University of Virginia School of Law and author of You Are Not American: Citizenship Stripping From Dred Scott to the Dreamers.

This wouldn’t be the first time denaturalization has been used as a tool of political repression. During the Red Scare following World War II, the US pursued denaturalization cases with an eye toward rooting out un-American behavior, both real and perceived.

Scholars now see echoes of that era in Trump’s strategy.

“There’s increasing rhetoric of trying to take people’s citizenship away for political reasons,” said Cassandra Burke Robinson, a professor at Case Western Reserve University School of Law who has studied denaturalization. “I think any time you treat that as even a possibility to be considered, you’re going down a really dangerous slope.”

What denaturalization looked like during the Red Scare

In the 1950s and 1960s, fears about the spread of communism took hold of the US. A political movement known as McCarthyism — named after then-Senator Joseph McCarthy — sought to purge anyone in government with connections to the Communist Party. Denaturalization was one of the tools McCarthyites relied on, and, at the height of the movement, the US was denaturalizing more than 20,000 people per year, Burke Robinson said.

In these cases, the government argued that if an individual became a member of the Communist Party at any time, that person had been lying when taking an oath of allegiance to the US as part of their citizenship test and, therefore, could be denaturalized. Later, that argument evolved to target Americans with disfavored political views or who were perceived as disloyal to the US more broadly, not just Communist Party members.

Primary targets of denaturalization included members of the German American Bund, the American Nazi organization, as well as political gadflies such as labor leaders, journalists, and anarchists.

“Those whose speech the government didn’t like could get removed, and everyone else could stay. They used their discretion in this area to accomplish that goal,” Frost said.

Among those targeted for denaturalization was the Australian-born labor leader Harry Bridges, who led longshoremen strikes in California. He accepted support from the Communist Party as part of his union activities, but the government never found evidence that he was a member himself. The notorious House Un-American Activities Committee investigated Bridges, and the government sought his deportation and, once he became a citizen, denaturalization, but never succeeded.

Denaturalizations decreased significantly, from tens of thousands to fewer than 10 annually, after the Supreme Court’s 1967 decision in Afroyim v. Rusk. In that case, the justices found that the US government does not have the power to denaturalize people without their consent because citizenship is guaranteed by the Constitution’s 14th Amendment.

“They said you could only lose your citizenship if you very explicitly renounce,” Frost said. “The United States government governs with the consent of the citizens. It’s not allowed to choose its citizens.”

For decades, the ruling meant that denaturalization was a rare phenomenon. However, the court included an exception for cases in which citizenship was “unlawfully procured” — meaning the person was never eligible for citizenship in the first place, due to acts like committing war crimes. That’s what Trump is now relying on to revive the tactic.

What Trump’s denaturalization plans could look like

Denaturalizations have been increasing since the Obama administration, when the digitization of naturalization records made it easier to identify individuals whose citizenship applications showed discrepancies with other government records. Most denaturalization cases during this period involved people who had committed acts of terrorism or war crimes.

But Trump made denaturalization a priority during his first administration, including targeting anyone who merely had errors on their naturalization papers. The DOJ launched a new section focused on denaturalization and investigated some 700,000 naturalized citizens, resulting in 168 active denaturalization cases — more than under any other modern president. It’s not clear how many of them were ultimately denaturalized and deported.

Trump is now picking up where he left off. The administration has said that it will pursue these denaturalization cases in civil rather than criminal court proceedings. In such proceedings, individuals are not entitled to an attorney, and the legal bar for the administration to prove that a citizen did something to warrant denaturalization is lower than it would be in criminal court. There is also no limit on how long after naturalization the government can seek to revoke someone’s citizenship.

All of that raises due process concerns.

“Somebody might not know about the proceedings against them. There might be a good defense that they’re not able to offer. There’s no right to an attorney,” Burke Robinson said. “It seems to me to be really problematic.”

There’s also the question of to what degree this Supreme Court will be willing to rein in Trump’s denaturalization efforts. Its 2017 decision in Maslenjak v. United States maintained a high bar for denaturalization: The court found that an alleged misstatement in a Bosnian refugee’s citizenship paperwork could not have kept them from becoming a citizen, even if it had been discovered before their naturalization, and could not be used as grounds to denaturalize them in criminal proceedings.

That makes Burke Robinson “somewhat hopeful that the court does take the issue very seriously.”

“But that was 2017,” she added. “It is a different court now, so it’s very hard to predict.”



For all the talk of the glitz and glamour of big-city living, the United States is (and will remain) a nation of suburbs. Yet suburbs, too, are increasingly unable to build enough housing, affected by the same slowdown as their denser counterparts. The old suburban frontier is closing — but that means we have an opportunity to take another look at this often-maligned model of American life and develop a new and better vision.

Also in this issue, you’ll find the “intellectual vibe shift” that could lead to an era of liberal flourishing and a feature on the path to a less painful IUD procedure. Plus: Can AI stop bioweapons? And are microwaves actually bad for your health?

The old suburban frontier is closing. Here’s what the new one could look like.

By Marina Bolotnikova

The spiritual life calls out to me. But is it self-indulgent?

By Sigal Samuel

Will we know if the next plague is human-made?

By Joshua Keating

Coming July 8

The end of the anti-liberal moment

By Zack Beauchamp

Coming July 9

What if IUD insertion didn’t have to be so painful?

By Allie Volpe

Coming July 10

Microwaves produce radiation. Is that bad for me?

By Dylan Scott

Coming July 11



Zohran Mamdani, the Democratic candidate for New York City mayor, speaks during a press conference celebrating his primary victory on July 2. | Angela Weiss/AFP via Getty Images

Last weekend, my colleague Christian Paz wrote about how the Democratic Party could be on the brink of a grassroots takeover, similar to what the GOP experienced with the Tea Party movement. It’s a fascinating piece that could have huge ramifications for Democratic politics, so I sat down with him to chat about his reporting for Vox’s daily newsletter, Today, Explained.

Our conversation is below, and you can sign up for the newsletter here for more conversations like this.

Hey, Christian, how are you? Remind us what the original Tea Party was. What is this movement we’re talking about?

The movement that I’m talking about started before Obama was elected. It was a mostly libertarian, grassroots, localized, not-that-big movement — a reaction to the bailouts at the end of the Bush administration. The idea being there’s too much deficit spending and government is becoming way too big and becoming unmoored from constitutional limited-government principles.

It evolved when Obama was elected into a broader anti-Obama backlash and then a major explosion because of the Affordable Care Act fights. It basically turned into an effort to primary incumbent Republicans, an effort to move the party more toward this wing and eventually try to win back control of Congress.

After it took off, what happened to the GOP?

They were able to win, I believe, five out of the 10 Senate seats that they were challenging. Something like 40 members of Congress were Tea Party-affiliated.

The primary thing was that they were successful in massively mobilizing Republican voters and getting people to turn out in the 2010 midterms, which turned out to be one of the biggest “shellackings,” as Obama called it, that Democrats or that any incumbent president and their party had sustained. Democrats lost control of the House and lost seats in the Senate, and that was a massive setback.

From then on, what happened was a successful move by more conservative primary challengers in future elections, the most iconic one being in 2014 — the primary that ousted Eric Cantor, the House majority leader, in favor of a Tea Party activist. It also forced the party as a whole to move to the right, making it more combative, more extreme, and more captive to a more ideological part of the Republican base.

Why are we hearing about this now with the Democratic Party?

The underlying idea is that there’s a divide between the establishment Democrats and populist-minded progressive Democratic candidates. And that’s part of the reason why we’re hearing this now, because there was a victory in New York City’s mayoral primary by Zohran Mamdani, a candidate who is fully in that latter category — a self-described democratic socialist appealing to this idea of bringing out new parts of the electorate, mobilizing people with populist appeal, with targeted, non-polished messaging, and taking more left-leaning positions on policy.

The big thing fueling talk about this Tea Party moment for Democrats is that the base has never really been as angry as it is right now. What we’re seeing is a combination of anti-Trump anger, wanting a change in direction, wanting a change in leadership, and also some folks who are like, Maybe we should become more progressive as a party.

So tell me about that. A change in leadership, a change in the establishment — what does this movement actually want?

It’s interesting. Because at least back with the original Tea Party movement, you could point to a core list of priorities that were about repealing Obamacare, about never repeating a bailout, about limiting the federal government’s ability to spend.

Something like that doesn’t exist right now, because it is a pretty disparate energy. The core thing is Democratic voters do not want the current leadership in Congress. They don’t like Hakeem Jeffries’s style of leadership in the House. They don’t like Chuck Schumer’s style of leadership in the Senate. There’s frustration at older members of Congress being in Congress and serving in leadership capacity right now.

In the polling, over and over again, we see, Democrats should be focused on providing a working-class vision for Americans. They should be more focused on kitchen table affordability issues. And that is the thing that most Democratic voters can actually agree on, and basically saying that that’s not what they think their current leadership is focused on.

What would it look like for the Democratic Party if this actually happens?

There are some strategists and activists who are drawing up lists of potential candidates to primary. There are already some challenges underway. I’m thinking of some House seats in Arizona, House seats in Illinois. There’s talk, especially after this New York City mayoral contest, about primarying Kirsten Gillibrand or Chuck Schumer and finding challengers to some more moderate House members in the New York area.

I’d be looking to see if there actually are younger people launching primary campaigns targeting older or centrist Democratic members of Congress. Once we get to primary season next year, how successful in fundraising are these candidates? Is there an actual effort by some established progressive members of the House to try to support some of these younger candidates?

Basically, just seeing if there’s money there, if there’s actual interest in supporting these candidates, and whether primary challenges in New York and Massachusetts actually succeed.



Kendrick Lamar performs onstage during Super Bowl LIX Halftime Show on February 9, 2025, in New Orleans, Louisiana. | Patrick Smith/Getty Images

Imagine your average Fourth of July party. There are probably hot dogs on the grill, everyone is clad in red, white, and blue, and it culminates in a fireworks show. It may sound like a lovely way to spend a day off. But for a lot of Americans, the celebration, and the flag itself, are more complicated than that.

That’s the question that Explain It to Me, Vox’s weekly call-in show, is setting out to tackle this holiday weekend: What’s the relationship like between Black people and the American flag?

Specifically, one listener wanted to know, in the wake of the red-white-and-blue spectacle of Beyoncé’s Cowboy Carter and Kendrick Lamar’s Super Bowl halftime show, how that conversation has evolved over time.

This is something Ted Johnson thinks a lot about. Johnson, who is Black, is an adviser at the liberal think tank New America, a columnist at the Washington Post, and a retired US Navy commander. “The flag has sort of been hijacked by nationalists — folks who believe either America is perfect and exceptional, or at the very least, anything that it’s done wrong in the past should be excused by all the things that it’s done well,” Johnson told Vox. “And that is not my relationship with the flag. It’s much more complicated because there has been tons of harm done under that flag.”

How do Black Americans square that harm and that pride? And how has that relationship changed through the years? Below is an excerpt of the conversation with Johnson, edited for length and clarity.

You can listen to Explain It to Me on Apple Podcasts, Spotify, or wherever you get podcasts. If you’d like to submit a question, send an email to [email protected] or call 1-800-618-8545.

One way to tease out this relationship between Black Americans and the flag is to talk about the experience of Black service members. What’s that history?

One of the earliest instances is the story of an enslaved man named Jehu Grant in Rhode Island during the Revolutionary War. The man that owned him was a loyalist to the Brits. Grant was afraid that he was going to be shipped off and sold to the Brits to fight for them. So he runs away, joins Washington’s army and fights in the Continental Army, and then his master shows up and says, “You’ve got my property, and I want it back.” And the Army turns him back over to the guy that owns him, where he serves for many years and eventually buys his freedom.

When Andrew Jackson becomes president in the 1820s, he makes it policy to provide pensions for those Revolutionary War folks still alive. And so Grant applies for his pension and is denied. The government says that services rendered while a fugitive from your master are not recognized.

That is the relationship of Black service members to the flag. It represents a set of principles that many would be willing to die for and also a way of life that intentionally excluded Black folks for no other reason than race and status of their servitude. And so if you look at any war, you will find Black folks in uniform who have both been oppressed in the country they represent, and are willing to die for that country because of the values it stands for and for their right to be able to serve and benefit from the programs that the military has made available to folks.

My grandfather served in the military and I never got the chance to really talk with him about that experience. But I’m curious if you can speak to the motivations of Black Americans who continue serving, especially during the Jim Crow era.

Pre-Civil War, a lot of enslaved Black folks that decided to fight did so because they believed their chances at liberty, emancipation, and freedom were connected to their willingness to serve the country. Then we get the draft and a lot of the Black folks that served in the early part of the 20th century were drafted into service. They weren’t eager volunteers lining up as a way of earning their citizenship, but the fact that the vast majority of them honored that draft notice even though they were treated as second-class citizens was a sort of implicit demand for access to the full rights of the Constitution.

“There is a belief that the United States is ours as well. We have a claim of ownership. And to claim ownership also means you must sort of participate in the sacrifice.”

I’d be remiss if I say that folks joining today, for example, are doing so because they love the flag. The military has a great pension program. The military offers great programs if you want to buy a home or if you want to get an education. So there’s a sort of socioeconomic attractiveness to the military that I think explains why Black folks continue to join the military post-draft.

But it is also because there is a belief that the United States is ours as well. We have a claim of ownership. And to claim ownership also means you must sort of participate in the sacrifice.

When a lot of those service members came back from war, they were met with systemic institutionalized racism. How were people continuing to foster that sense of patriotism despite all that?

When Black folks were coming home from World War I and II, many were lynched in uniform. They weren’t even excused from the racial dynamics by being willing to die for the country.

One of the most famous genres of music in this period was called coon music. One of the songs was about Black people not having a flag. They talked about how white folks in the Northeast could fly flags from Italy, Ireland, wherever they’re from. And white people in the States could just fly the American flag. Black people could fly none of those because we didn’t know where we were from and the United States is not ours. And so in this song, they say the Black flag is basically two possums shooting dice and that would be an accurate representation.

Wow. That is some classic old-school racism.

Yeah, the song is called “Every Race Has a Flag, but the Coon.” And so we are very familiar with the red, black, and green pan-African flag. This was Marcus Garvey’s response to this coon genre of music.

There’s this idea among Black Americans of, We built this. Of course I’m going to reclaim this. Of course I’m going to have pride in it because I built it. I think that’s what we’re seeing with a lot of the imagery now.

But what about Black artists and also Black people in general who say, Our ancestors may have done all this work, but there really is no way to be a part of this and maybe we should not be trying to be a part of this?

If you take pride in the flag because you believe America is exceptional, you’re going to find a lot fewer subscribers to that belief system than one where your pride in the country means being proud of the people you come from and proud of the arc of your people’s story in this country.

On the latter, you will find people who are very proud of what Black people have accomplished in this country. For me, patriotism means honoring those sacrifices, those people that came before us. It does not mean excusing the United States from its racism, from its perpetuated inequality, or for putting its national interests ahead of the people that it’s supposed to serve. So it is very complicated, and there’s no easy way through it.

I will say that I think part of the reason we’re seeing more folks willing to sort of reclaim the flag for their own is because of Gen X. My generation was the first one born after the Civil Rights Act of 1964, so Jim Crow was the experience of our parents. The hijacking of the flag to connect it to explicit statutory racism feels generations removed for folks who have grown up in an America where opportunity is more available, where the Jim Crow kind of racism is not as permitted. And while the country is not even close to being the kind of equal nation it says it was founded to be, it’s made progress.

I think a reclamation of that flag by Beyoncé and others is a sort of signal that yes, we built it. Yes, we’ve progressed here. And no, we’re not leaving. There’s no “go back to Africa.” This is home. And if this is home, I’m going to fly the flag of my country. There’s lots to be proud of about what the country has achieved and by Black Americans in particular. And for me, that is all the things that patriotism represents, not the more narrow exclusive version that tends to get more daylight.

I think one thing we need to discuss is the definition of Black we’re using here. I am what they would call Black American. My ancestors are from Alabama and Arkansas. They were formerly enslaved.

But Blackness in America now has a much wider net. I have so many friends whose parents are immigrants from the Caribbean or Africa. And it’s interesting in this moment where there are lots of conversations about what it means to be Black, and who gets to claim it, we’re also seeing this flag resurgence.

I think it’s probably true that there are more Black people who are first-generation Americans today than there have been since they started erasing our nations of origin during slavery. That means Black American doesn’t just mean people who descended from slaves. It means Black people of all kinds.

When we talk about Black politics, we don’t consider the Black immigrant experience. When we talk about Black Americanism or Black patriotism, we often don’t account for the Black immigrant experience, except to the extent that that experience is shed and the American one is adopted. Those views sort of get thrown into this pot of Blackness instead of disaggregated to show how Black folks from other places who become Americans have a distinct relationship with the country that also affects their relationship with the iconography of the country like the flag, the national anthem, and this reclamation of red, white, and blue.

There may be some Black artists — I think of Beyoncé — who are reclaiming this imagery, but we also can’t ignore who has a majority stake in it. When people think of the flag, they think of white people. Is that changing?

It is, but slowly. If you ask people from around the world to picture a stereotypical American, they’re not picturing LeBron James, despite the medals he’s won at the Olympics. They’re probably picturing a white man from the Midwest.

The fact that so much of our nation’s history is racialized means that many of the nation’s symbols are also racialized. And to deracialize the things that were created in its origin is a long-term process. I do think it’s beginning to happen. I think it’s going to be some time before we get to a de-racialized conception of the United States.


From Vox via this RSS feed


Cabinet containing an automatic external defibrillator in Austin, Texas, on March 9, 2023. | Smith Collection/Gado/Getty Images

A day before my 47th birthday last month, I took the subway to Manhattan’s Upper East Side for a coronary artery calcium scan (CAC).

For those who haven’t entered the valley of middle age, a CAC is a specialized CT scan that looks for calcium deposits in the heart and its arteries. Unlike in your bones, having calcium in your coronary arteries is a bad thing, because it indicates the buildup of plaque made up of cholesterol, fat, and other lovely things. The higher the calcium score, the more plaque has built up — and with it, the higher the risk of heart disease and even heart attacks.

A couple of hours after the test, I received a ping on my phone. My CAC score was 7, indicating the presence of a small amount of calcified plaque, which translates to a “low but non-zero cardiovascular risk.” Put another way, according to one calculator, it means an approximately 2.1 percent chance of a major adverse cardiovascular event over the next 10 years.

2.1 percent doesn’t sound high — it’s a little higher than the chance of pulling an ace of spades from a card deck — but when it comes to major adverse cardiovascular events, 2.1 percent is approximately 100 percent higher than I’d like. That’s how I found myself joining the tens of millions of Americans who are currently on statin drugs, which lower levels of LDL cholesterol (aka the “bad” cholesterol).

I didn’t really want to celebrate my birthday with a numerical reminder of my creeping mortality. But everything about my experience — from the high-tech calcium scan to my doctor’s aggressive statin prescription — explains how the US has made amazing progress against one of our biggest health risks: heart disease, and especially, heart attacks.

A dramatic drop in heart attack deaths

A heart attack — which usually occurs when atherosclerotic plaque partially or fully blocks the flow of blood to the heart — used to be close to a death sentence. In 1963, the death rate from coronary heart disease, which includes heart attacks, peaked in the US, with 290 deaths per 100,000 population. As late as 1970, a man over 65 who was hospitalized with a heart attack had only a 60 percent chance of ever leaving that hospital alive.

A sudden cardiac death is the disease equivalent of homicide or a car crash death. It meant someone’s father or husband, wife or mother, was suddenly ripped away without warning. Heart attacks were terrifying.

Yet today, that risk is much less. According to a recent study in the Journal of the American Heart Association, the proportion of all deaths attributable to heart attacks plummeted by nearly 90 percent between 1970 and 2022. Over the same period, heart disease as a cause of all adult deaths in the US fell from 41 percent to 24 percent. Today, if a man over 65 is hospitalized with a heart attack, he has a 90 percent chance of leaving the hospital alive.

By my calculations, the improvements in preventing and treating heart attacks between 1970 and 2022 have likely saved tens of millions of lives. So how did we get here?

How to save a life

In 1964, the year after the coronary heart disease death rate peaked, the US surgeon general released a landmark report on the risks of smoking. It marked the start of a decades-long public health campaign against one of the biggest contributing factors to cardiovascular disease.

That campaign has been incredibly successful. In 1970, an estimated 40 percent of Americans smoked. By 2019, that percentage had fallen to 14 percent, and it keeps declining.

The reduction in smoking has helped lower the number of Americans at risk of a heart attack. So did the development and spread in the 1980s of statins like I’m on now, which make it far easier to manage cholesterol and prevent heart disease. By one estimate, statins save nearly 2 million lives globally each year.

When heart attacks do occur, the widespread adoption of CPR and the development of portable defibrillators — which only began to become common in the late 1960s — ensured that more people survived long enough to make it to the hospital. Once there, the development of specialized coronary care units, balloon angioplasty, and artery-opening stents made it easier for doctors to rescue a patient suffering an acute cardiac event.

Our changing heart health deaths

Despite this progress in stopping heart attacks, around 700,000 Americans still die of all forms of heart disease every year, equivalent to 1 in 5 deaths overall.

Some of this is the unintended result of our medical success. As more patients survive acute heart attacks and life expectancy has risen as a whole, it means more people are living long enough to become vulnerable to other, more chronic forms of heart disease, like heart failure and pulmonary-related heart conditions. While the decline in smoking has reduced a major risk factor for heart disease, Americans are in many other ways much less healthy than they were 50 years ago. The increasing prevalence of obesity, diabetes, hypertension, and sedentary behavior all raise the risk that more Americans will develop some form of potentially fatal heart disease down the line.

Here, GLP-1 receptor agonists like Ozempic hold amazing potential to reduce heart disease’s toll. One study found that obese or overweight patients who took a GLP-1 drug for more than three years had a 20 percent lower risk of heart attack, stroke, or death due to cardiovascular disease. Statins have saved millions of lives, yet tens of millions more Americans could likely benefit from taking the cholesterol-lowering drugs, especially women, minorities, and people in rural areas.

Lastly, far more Americans could benefit from the kind of advanced screening I received. Only about 1.5 million Americans received a CAC test in 2017, but clinical guidelines indicate that more than 30 million people could benefit from such scans.

Just as it is with cancer, getting ahead of heart disease is the best way to stay healthy. It’s an astounding accomplishment to have reduced deaths from heart attacks by 90 percent over the past 50-plus years. But even better would be preventing more of us from ever getting to the cardiac brink at all.

A version of this story originally appeared in the Good News newsletter. Sign up here!


From Vox via this RSS feed


The text software ChatGPT is seen on a laptop screen.

What’s the point of college if no one’s actually doing the work?

It’s not a rhetorical question. More and more students are not doing the work. They’re offloading their essays, their homework, even their exams, to AI tools like ChatGPT or Claude. These are not just study aids. They’re doing everything.

We’re living in a cheating utopia — and professors know it. It’s becoming increasingly common, and faculty are either too burned out or unsupported to do anything about it. And even if they wanted to do something, it’s not clear that there’s anything to be done at this point.

So what are we doing here?

James Walsh is a features writer for New York magazine’s Intelligencer and the author of the most unsettling piece I’ve read about the impact of AI on higher education.

Walsh spent months talking to students and professors who are living through this moment, and what he found isn’t just a story about cheating. It’s a story about ambivalence and disillusionment and despair. A story about what happens when technology moves faster than our institutions can adapt.

I invited Walsh onto The Gray Area to talk about what all of this means, not just for the future of college but the future of writing and thinking. As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.

This interview has been edited for length and clarity.

Let’s talk about how students are cheating today. How are they using these tools? What’s the process look like?

It depends on the type of student, the type of class, the type of school you’re going to. Whether or not a student can get away with that is a different question, but there are plenty of students who are taking their prompt from their professor, copying and pasting it into ChatGPT and saying, “I need a four to five-page essay,” and copying and pasting that essay without ever reading it.

One of the funniest examples I came across is that a number of professors are using this so-called Trojan horse method where they’re dropping non-sequiturs into their prompts. They mention broccoli or Dua Lipa, or they say something about Finland in the essay prompts just to see if people are copying and pasting the prompts into ChatGPT. If they are, ChatGPT or whatever LLM they’re using will say something random about broccoli or Dua Lipa.

Unless you’re incredibly lazy, it takes just a little effort to cover that up.

Every professor I spoke to said, “So many of my students are using AI and I know that so many more students are using it and I have no idea,” because it can essentially write 70 percent of your essay for you, and if you do that other 30 percent to cover all your tracks and make it your own, it can write you a pretty good essay.

And there are these platforms, these AI detectors, and there’s a big debate about how effective they are. They will scan an essay and assign some grade, say a 70 percent chance that this is AI-generated. And that’s really just looking at the language and deciding whether or not that language is created by an LLM.

But it doesn’t account for big ideas. It doesn’t catch the students who are using AI and saying, “What should I write this essay about?” And not doing the actual thinking themselves and then just writing. It’s like paint by numbers at that point.

Did you find that students are relating very differently to all of this? What was the general vibe you got?

It was a pretty wide perspective on AI. I spoke to a student at the University of Wisconsin who said, “I realized AI was a problem last fall, walking into the library and at least half of the students were using ChatGPT.” And it was at that moment that she started thinking about her classroom discussions and some of the essays she was reading.

The one example she gave that really stuck with me was that she was taking some psych class, and they were talking about attachment theories. She was like, “Attachment theory is something that we should all be able to talk about [from] our own personal experiences. We all have our own attachment theory. We can talk about our relationships with our parents. That should be a great class discussion. And yet I’m sitting here in class and people are referencing studies that we haven’t even covered in class, and it just makes for a really boring and unfulfilling class.” That was the realization for her that something is really wrong. So there are students like that.

And then there are students who feel like they have to use AI because if they’re not using AI, they’re at a disadvantage. Not only that, AI is going to be around no matter what for the rest of their lives. So they feel as if college, to some extent now, is about training them to use AI.

What’s the general professor’s perspective on this? They seem to all share something pretty close to despair.

Yes. Those are primarily the professors in writing-heavy classes or computer science classes. There were professors who I spoke to who actually were really bullish on AI. I spoke to one professor who doesn’t appear in the piece, but she is at UCLA and she teaches comparative literature, and used AI to create her entire textbook for this class this semester. And she says it’s the best class she’s ever had.

So I think there are some people who are optimistic, [but] she was an outlier in terms of the professors I spoke to. For the most part, professors were, yes, in despair. They don’t know how to police AI usage. And even when they know an essay is AI-generated, the recourse there is really thorny. If you’re going to accuse a student of using AI, there’s no real good way to prove it. And students know this, so they can always deny, deny, deny. And the sheer volume of AI-generated essays or paragraphs is overwhelming. So that, just on the surface level, is extremely frustrating and has a lot of professors down.

Now, if we zoom out and think also about education in general, this raises a lot of really uncomfortable questions for teachers and administrators about the value of each assignment and the value of the degree in general.

How many professors do you think are now just having AI write their lectures?

There’s been a little reporting on this. I don’t know how many are. I know that there are a lot of platforms that are advertising themselves or asking professors to use them more, not just to write lectures, but to grade papers, which of course, as I say in the piece, opens up the very real possibility that right now an AI is grading itself and offering comments on an essay that it wrote. And this is pretty widespread stuff. There are plenty of universities across the country offering teachers this technology. And students love to talk about catching their professors using AI.

I’ve spoken to another couple of professors who are like, I’m nearing retirement, so it’s not my problem, and good luck figuring it out, younger generation. I just don’t think people outside of academia realize what a seismic change is coming. This is something that we’re all going to have to deal with professionally.

And it’s happening much, much faster than anyone anticipated. I spoke with somebody who works on education at Anthropic, who said, “We expected students to be early adopters and use it a lot. We did not realize how many students would be using it and how often they would be using it.”

Is it your sense that a lot of university administrators are incentivized to not look at this too closely, that it’s better for business to shove it aside?

I do think there’s a vein of AI optimism among a certain type of person, a certain generation, who saw the tech boom and thought, I missed out on that wave, and now I want to adopt. I want to be part of this new wave, this future, this inevitable future that’s coming. They want to adopt the technology and aren’t really picking up on how dangerous it might be.

I used to teach at a university. I still know a lot of people in that world. A lot of them tell me that they feel very much on their own with this, that the administrators are pretty much just saying, Hey, figure it out. And I think it’s revealing that university admins were quickly able, during Covid, for instance, to implement drastic institutional changes to respond to that, but they’re much more content to let the whole AI thing play out.

I think they were super responsive to Covid because it was a threat to the bottom line. They needed to keep the operation running. AI, on the other hand, doesn’t threaten the bottom line in that way, or at least it doesn’t yet. AI is a massive, potentially extinction-level threat to the very idea of higher education, but they seem more comfortable with a degraded education as long as the tuition checks are still cashing. Do you think I’m being too harsh?

I genuinely don’t think that’s too harsh. I think administrators may not fully appreciate the power of AI and exactly what’s happening in the classroom and how prevalent it is. I did speak with many professors who go to administrators or even just older teachers, TAs going to professors and saying, This is a problem.

I spoke to one TA at a writing course at Iowa who went to his professor, and the professor said, “Just grade it like it was any other paper.” I think they’re just turning a blind eye to it. And that is one of the ways AI is exposing the rot underneath education.

It’s this system that hasn’t been updated in forever. And in the case of the US higher ed system, it’s like, yeah, for a long time it’s been this transactional experience. You pay X amount of dollars, tens of thousands of dollars, and you get your degree. And what happens in between is not as important.

The universities, in many cases, also have partnerships with AI companies, right?

Right. And what you said about universities can also be said about AI companies. For the most part, these are companies or companies within nonprofits that are trying to capture customers. One of the more dystopian moments was when we were finishing this story, getting ready to completely close it, and I got a push alert that was like, “Google is letting parents know that they have created a chatbot for children under [thirteen years old].” And it was kind of a disturbing experience, but they are trying to capture these younger customers and build this loyalty.

There’s been reporting from the Wall Street Journal on OpenAI and how they have been sitting on an AI that would be really, really effective at essentially watermarking their output. And they’ve been sitting on it, they have not released it, and you have to wonder why. And you have to imagine they know that students are using it, and in terms of building loyalty, an AI detector might not be the best thing for their brand.

This is a good time to ask the obligatory question, Are we sure we’re not just old people yelling at clouds here? People have always panicked about new technologies. Hell, Socrates panicked about the written word. How do we know this isn’t just another moral panic?

I think there are a lot of different ways we could respond to that. It’s not a generational moral panic. This is a tool that’s available, and it’s available to us just as it’s available to students. Society and our culture will decide what the morals are. And that is changing, just as the definition of cheating is changing. So who knows? It might be a moral panic today, and it won’t be in a year.

However, I think somebody like Sam Altman, the CEO of OpenAI, is one of the people who said, “This is a calculator for words.” And I just don’t really understand how that is compatible with other statements he’s made about AI potentially being lights out for humanity, or statements made by people at Anthropic about the power of AI to potentially be a catastrophic event for humans. And these are the people who are closest to it and thinking about it the most, of course.

I have spoken to some people who say there is a possibility, and I think there are people who use AI who would back this up, that we’ve maxed out the AI’s potential to supplement essays or writing. That it might not get much better than it is now. And I think that’s a very long shot, one that I would not want to bank on.

Is your biggest fear at this point that we are hurtling toward a post-literate society? I would argue, if we are post-literate, then we’re also post-thinking.

It’s a very scary thought that I try not to dwell on — the idea that my profession and what I’m doing is just feeding the machine, that my most important reader now is a robot, and that there are going to be fewer and fewer readers is really scary, not just because of subscriptions, but because, as you said, that means fewer and fewer people thinking and engaging with these ideas.

I think ideas can certainly be expressed in other mediums and that’s exciting, but I don’t think anybody who’s paid attention to the way technology has shaped teen brains over the past decade and a half is thinking, Yeah, we need more of that. And the technology we’re talking about now is orders of magnitude more powerful than the algorithms on Instagram.

Listen to the rest of the conversation and be sure to follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you listen to podcasts.


From Vox via this RSS feed
