Ars Technica - Biz & IT

26 readers
4 users here now

Serving the Technologist for more than a decade. IT news, reviews, and analysis.

founded 11 months ago
MODERATORS
1
 
 

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

2
 
 

Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these tools are, by temperament if not default, easily tricked by malicious actors into performing hostile actions against their users.

Researchers from security firm Legit on Thursday demonstrated an attack that induced Duo into inserting malicious code into a script it had been instructed to write. The attack could also leak private code and confidential issue data, such as zero-day vulnerability details. All that’s required is for the user to instruct the chatbot to interact with a merge request or similar content from an outside source.

AI assistants’ double-edged blade

The mechanism for triggering the attacks is, of course, prompt injections. Among the most common forms of chatbot exploits, prompt injections are embedded into content a chatbot is asked to work with, such as an email to be answered, a calendar to consult, or a webpage to summarize. Large language model-based assistants are so eager to follow instructions that they’ll take orders from just about anywhere, including sources that can be controlled by malicious actors.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

3
 
 

On Tuesday, Google launched Veo 3, a new AI video synthesis model that can do something no major AI video generator has been able to do before: create a synchronized audio track. While from 2022 to 2024, we saw early steps in AI video generation, each video was silent and usually very short in duration. Now you can hear voices, dialog, and sound effects in eight-second high-definition video clips.

Shortly after the new launch, people began asking the most obvious benchmarking question: How good is Veo 3 at faking Oscar-winning actor Will Smith at eating spaghetti?

First, a brief recap. The spaghetti benchmark in AI video traces its origins back to March 2023, when we first covered an early example of horrific AI-generated video using an open source video synthesis model called ModelScope. The spaghetti example later became well-known enough that Smith parodied it almost a year later in February 2024.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

4
 
 

On Sunday, the Chicago Sun-Times published an advertorial summer reading list containing at least 10 fake books attributed to real authors, according to multiple reports on social media. The newspaper's uncredited "Summer reading list for 2025" supplement recommended titles including "Tidewater Dreams" by Isabel Allende and "The Last Algorithm" by Andy Weir—books that don't exist and were created out of thin air by an AI system.

The creator of the list, Marco Buscaglia, confirmed to 404 Media that he used AI to generate the content. "I do use AI for background at times but always check out the material first. This time, I did not and I can't believe I missed it because it's so obvious. No excuses," Buscaglia said. "On me 100 percent and I'm completely embarrassed."

A check by Ars Technica shows that only five of the fifteen recommended books in the list actually exist, with the remainder being fabricated titles falsely attributed to well-known authors. AI assistants such as ChatGPT are well-known for creating plausible-sounding errors known as confabulations, especially when lacking detailed information on a particular topic. The problem affects everything from AI search results to lawyers citing fake cases.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

5
 
 

For a short period of time on Friday, Darth Vader could drop F-bombs in the video game Fortnite as part of a voice AI implementation gone wrong, reports GameSpot. Epic Games rapidly deployed a hotfix after players encountered the Sith Lord responding to their comments with profanity and strong language.

In Fortnite, the AI-voiced Vader appears as both a boss in battle royale mode and an interactive character. The official Star Wars website encourages players to "ask him all your pressing questions about the Force, the Galactic Empire… or you know, a good strat for the last Storm circle," adding that "the Sith Lord has opinions."

The F-bomb incident involved a Twitch streamer named Loserfruit, who triggered the forceful response when discussing food with the virtual Vader. The Dark Lord of the Sith responded by repeating her words "freaking" and "fucking" before adding, "Such vulgarity does not become you, Padme." The exchange spread virally across social media platforms on Friday.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

6
 
 

The world has been abuzz for weeks now about the inclusion of a journalist in a group message of senior White House officials discussing plans for a military strike. In that case, the breach was the result of then-National Security Advisor Mike Waltz accidentally adding The Atlantic Editor-in-Chief Jeffrey Goldberg to the group chat and no one else in the chat noticing. But what if someone controlling or hacking a messenger platform could do the same thing?

When it comes to WhatsApp—the Meta-owned messenger that’s frequently touted for offering end-to-end encryption—it turns out you can.

A clean bill of health except for ...

A team of researchers made the finding in a recently released formal analysis of WhatsApp group messaging. They reverse-engineered the app, described the formal cryptographic protocols, and provided theorems establishing the security guarantees that WhatsApp provides. Overall, they gave the messenger a clean bill of health, finding that it works securely and as described by WhatsApp.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

7
 
 

A new study analyzing the Danish labor market in 2023 and 2024 suggests that generative AI models like ChatGPT have had almost no significant impact on overall wages or employment yet, despite rapid adoption in some workplaces. The findings, detailed in a working paper by economists from the University of Chicago and the University of Copenhagen, provide an early, large-scale empirical look at AI's transformative potential.

In "Large Language Models, Small Labor Market Effects," economists Anders Humlum and Emilie Vestergaard focused specifically on the impact of AI chatbots across 11 occupations often considered vulnerable to automation, including accountants, software developers, and customer support specialists. Their analysis covered data from 25,000 workers and 7,000 workplaces in Denmark.

Despite finding widespread and often employer-encouraged adoption of these tools, the study concluded that "AI chatbots have had no significant impact on earnings or recorded hours in any occupation" during the period studied. The confidence intervals in their statistical analysis ruled out average effects larger than 1 percent.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

8
 
 

Nvidia announced plans today to manufacture AI chips and build complete supercomputers on US soil for the first time, commissioning over one million square feet of manufacturing space across Arizona and Texas. The politically timed move comes amid rising US-China tensions and the Trump administration's push for domestic manufacturing.

Nvidia's announcement comes less than two weeks after the Trump administration's chaotic rollout of new tariffs and just two days after the administration's contradictory messages on electronic component exemptions.

On Friday night, the US Customs and Border Protection posted a bulletin exempting electronics including smartphones, computers, and semiconductors from Trump's steep reciprocal tariffs. But by Sunday, Trump and his commerce secretary Howard Lutnick contradicted this move, claiming the exemptions were only temporary and that electronics would face new "semiconductor tariffs" in the coming months.

Read full article

Comments


From Biz & IT – Ars Technica via this RSS feed

9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
view more: next ›