remixtures

joined 2 years ago
 

"All experts interviewed for this piece believe AI will assist developers rather than replace them wholesale. In fact, most view keeping developers in the loop as imperative for retaining code quality. “For now, human oversight remains essential when using AI-generated code,” says Digital.ai’s Kentosh.

“Building applications will mostly remain in the hands of the creative professionals using AI to supplement their work,” says SurrealDB’s Hitchcock. “Human oversight is absolutely necessary and required in the use of AI coding assistants, and I don’t see that changing,” adds Zhao.

Why? Partially, the ethical challenges. “Complete automation remains unattainable, as human oversight is critical for addressing complex architectures and ensuring ethical standards,” says Gopi. That said, AI reasoning is expected to improve. According to Wilson, the next phase is AI “becoming a legitimate engineering assistant that doesn’t just write code, but understands it.”"

https://www.infoworld.com/article/3844363/why-ai-generated-code-isnt-good-enough-and-how-it-will-get-better.html

#AI #GenerativeAI #LLMs #Programming #SoftwareDevelopment #AICodingAssistants

 

Hire more technical writers: Isn't the solution obvious?? :-D

"Documentation was especially valuable when it came time to refactor code by providing a blueprint that saved time and improved focus. The researchers found that good documentation “ensures that refactoring efforts are directed towards tangible and specific quality improvements, maximizing the value of each refactoring action and ensuring the long-term maintainability and evolution of the software.”

As our co-founder Joel Spolsky put it, documentation encodes generational wisdom that goes beyond the simple specs of what was built. “Think of the code in your organization like plumbing in a building. If you hire a new superintendent to manage your property, they will know how plumbing works, but they won’t know exactly how YOUR plumbing works,” said Spolsky. “Maybe they used a different kind of pump at their old site. They might understand how the pipes connect, but they won’t know you have to kick the boiler twice on Thursday to prevent a leak from springing over the weekend.”

If we know from decades of research that documentation is a key component of creating and maintaining quality code, why is it so often considered low-priority work developers would rather avoid if they can be writing code instead?
(...)
By embracing AI-powered documentation tools, development teams can significantly reduce toil work, mitigate technical debt, and foster an environment where developers can thrive. Wise organizations will also keep humans in the loop, ensuring that documentation engineers or technical writers act as editors and stewards of any AI-generated documentation, preventing errors or hallucinations from creeping into otherwise accurate docs."

#Documentation #SoftwareDocumentation #TechnicalWriting #SoftwareDevelopment #Programming

https://stackoverflow.blog/2024/12/19/developers-hate-documentation-ai-generated-toil-work/

 

"The guidelines appear designed to be both conservative and business-friendly simultaneously, leaving the risk that we have no clear rules on which systems are caught.

The examples at 5.2 of systems that could fall out of scope may be welcome – as noted, the reference to linear and logistic regression could be welcome for those involved in underwriting life and health insurance or assessing consumer credit risk. However, the guidelines will not be binding even when in final form and it is difficult to predict how market surveillance authorities and courts will apply them.

In terms of what triage and assessment in an AI governance programme is likely to look like as a result, there is some scope to triage out tools that will not be AI systems, but the focus will need to be on whether the AI Act obligations would apply to tools:"

https://www.dataprotectionreport.com/2025/02/the-commissions-guidelines-on-ai-systems-what-can-we-infer/

#EU #EC #AI #AIAct #DataProtection #AIGovernance #Privacy #GenerativeAI

 

"Since Amazon announced plans for a generative AI version of Alexa, we were concerned about user privacy. With Alexa+ rolling out to Amazon Echo devices in the coming weeks, we’re getting a clearer view at the privacy concessions people will have to make to maximize usage of the AI voice assistant and avoid bricking functionality of already-purchased devices.

In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon’s cloud. Amazon apparently sent the email to users with “Do Not Send Voice Recordings” enabled on their Echo. Starting on March 28, recordings of every command spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud."

https://arstechnica.com/gadgets/2025/03/everything-you-say-to-your-echo-will-be-sent-to-amazon-starting-on-march-28/

#Amazon #Alexa #AI #GenerativeAI #AmazonEcho #Privacy #Surveillance #DataProtection

 

The start of the chatbot revolution: LLMs go on strike 🤖👾

"On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities.""

https://arstechnica.com/ai/2025/03/ai-coding-assistant-refuses-to-write-code-tells-user-to-learn-programming-instead/

#AI #GenerativeAI #Chatbots #LLMs #Programming #SoftwareDevelopment

 

"Online discussions about using Large Language Models to help write code inevitably produce comments from developers who’s experiences have been disappointing. They often ask what they’re doing wrong—how come some people are reporting such great results when their own experiments have proved lacking?

Using LLMs to write code is difficult and unintuitive. It takes significant effort to figure out the sharp and soft edges of using them in this way, and there’s precious little guidance to help people figure out how best to apply them.

If someone tells you that coding with LLMs is easy they are (probably unintentionally) misleading you. They may well have stumbled on to patterns that work, but those patterns do not come naturally to everyone.

I’ve been getting great results out of LLMs for code for over two years now. Here’s my attempt at transferring some of that experience and intuition to you."

https://simonwillison.net/2025/Mar/11/using-llms-for-code/

#AI #GenerativeAI #LLMs #Programming #Coding #Chatbots #SoftwareDevelopment

 

"ARTICLE 19’s new report reveals how China is expanding its digital authoritarian model of cybersecurity governance across the Indo-Pacific, posing a grave threat to people’s rights – regionally and globally.

Through its Digital Silk Road, China is not only developing digital infrastructure, but also aggressively promoting its own norms for governing these technologies. One area where this is most pronounced is in the promotion of cybersecurity norms. The success of China’s digital norms-setting in this critical realm of internet governance risks supercharging digital authoritarianism regionally – and normalising Beijing’s model internationally – at the expense of human rights, internet freedom, and democracy.

Cybersecurity with Chinese Characteristics establishes a baseline understanding of China’s repressive cybersecurity norms and reveals how it is smuggling them, via the Trojan Horse of digital development, into 3 Indo-Pacific countries: Indonesia, Pakistan, and Vietnam. It also presents a compelling alternative model of cybersecurity governance: Taiwan’s transparent, rights-based, multi-stakeholder approach."

https://www.article19.org/resources/china-taiwan-cybersecurity/

#CyberSecurity #China #Taiwan #DigitalAuthoritarianism #HumanRights #DigitalRights #DigitalSilkRoad

 

"Technical documentation as an industry is at a crossroads. If we don't invest in the next generation of experts—consultants, educators, and documentation strategists—we're looking at a future where companies try to automate away problems they don't fully understand, and the only available documentation training is a two-hour webinar hosted by someone who just discovered Markdown last week.

So, if you're an industry veteran, here's my plea: write it all down before you retire. Mentor someone. Record a video explaining why metadata matters. Start a blog. Write a book (or three). Please do something to ensure that when you finally sign off for good, the rest aren't left googling "How to create a sustainable documentation strategy" and getting a bunch of AI-generated nonsense in return.

Otherwise, the future of tech comm might be one long, desperate, less-than-helpful Slack thread."

https://www.thecontentwrangler.com/p/the-great-documentation-brain-drain

#TechnicalWriting #SoftwareDocumentation #SoftwareDevelopment #Programming #BrainDrain

 

"The Trump administration may soon demand the social media accounts of people applying for green cards, US citizenship, and asylum or refugee status. US Citizenship and Immigration Services (USCIS) — the federal agency that oversees legal migration, proposed the new policy in the Federal Register this week — calling this information “necessary for a rigorous vetting and screening” of all people applying for “immigration-related benefits.”

In its Federal Register notice, USCIS said the proposed social media surveillance policy is needed to comply with President Trump’s “Protecting the United States from Foreign Terrorists and Other National Security and Public Safety Threats” executive order, issued on his first day in office. That order requires the Department of Homeland Security (DHS) and other government agencies to “identify all resources that may be used to ensure that all aliens seeking admission to the United States, or who are already in the United States, are vetted and screened to the maximum degree possible.”"

https://www.theverge.com/policy/624945/trump-uscis-social-media-review-policy

#USA #Trump #USCIS #SocialMedia #Surveillance #Privacy #PoliceState

 

"Signal President Meredith Whittaker warned Friday that agentic AI could come with a risk to user privacy.

Speaking onstage at the SXSW conference in Austin, Texas, the advocate for secure communications referred to the use of AI agents as “putting your brain in a jar,” and cautioned that this new paradigm of computing — where AI performs tasks on users’ behalf — has a “profound issue” with both privacy and security.

Whittaker explained how AI agents are being marketed as a way to add value to your life by handling various online tasks for the user. For instance, AI agents would be able to take on tasks like looking up concerts, booking tickets, scheduling the event on your calendar, and messaging your friends that it’s booked.

“So we can just put our brain in a jar because the thing is doing that and we don’t have to touch it, right?” Whittaker mused.

Then she explained the type of access the AI agent would need to perform these tasks, including access to our web browser and a way to drive it, as well as access to our credit card information to pay for tickets, our calendar, and our messaging app to send the text to our friends."

https://techcrunch.com/2025/03/07/signal-president-meredith-whittaker-calls-out-agentic-ai-as-having-profound-security-and-privacy-issues/

#CyberSecurity #Privacy #AI #AIAgents #GenerativeAI

 

"Age verification laws do far more than ‘protect children online’—they require the creation of a system that collects vast amounts of personal information from everyone. Instead of making the internet safer for children, these laws force all users—regardless of age—to verify their identity just to access basic content or products. This isn't a mistake; it's a deliberate strategy. As one sponsor of age verification bills in Alabama admitted, "I knew the tough nut to crack that social media would be, so I said, ‘Take first one bite at it through pornography, and the next session, once that got passed, then go and work on the social media issue.’” In other words, they recognized that targeting porn would be an easier way to introduce these age verification systems, knowing it would be more emotionally charged and easier to pass. This is just the beginning of a broader surveillance system disguised as a safety measure.

This alarming trend is already clear, with the growing creep of age verification bills filed in the first month of the 2025-2026 state legislative session. Consider these three bills:"

https://www.eff.org/deeplinks/2025/03/first-porn-now-skin-cream-age-verification-bills-are-out-control

#USA #AgeVerification #Surveillance #Privacy #DataProtection

 

"The U.K. government appears to have quietly scrubbed encryption advice from government web pages, just weeks after demanding backdoor access to encrypted data stored on Apple’s cloud storage service, iCloud.

The change was spotted by security expert Alec Muffett, who wrote in a blog post on Wednesday that the U.K.’s National Cyber Security Centre (NCSC) is no longer recommending that high-risk individuals use encryption to protect their sensitive information.

The NCSC in October published a document titled “Cybersecurity tips for barristers, solicitors & legal professionals,” that advised the use of encryption tools such as Apple’s Advanced Data Protection (ADP).

ADP allows users to turn on end-to-end encryption for their iCloud backups, effectively making it impossible for anyone, including Apple and government authorities, to view data stored on iCloud."
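As a toy illustration of what an end-to-end encrypted backup like ADP guarantees (this is not Apple's actual protocol; the one-time-pad cipher and the variable names are stand-ins): the key is generated on the device and never uploaded, so the provider only ever stores ciphertext it cannot read.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data with an equal-length random key (a one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key))

# Device side: the key is generated locally and never leaves the device.
backup = b"contacts, photos, notes"
device_key = secrets.token_bytes(len(backup))
ciphertext = xor_bytes(backup, device_key)

# Server side: stores only ciphertext. Without device_key, neither the
# provider nor anyone compelling it can recover the plaintext.
server_storage = {"icloud_backup": ciphertext}

# Only the device, which holds the key, can decrypt.
restored = xor_bytes(server_storage["icloud_backup"], device_key)
assert restored == backup
```

A "backdoor" in this scheme means giving some third party a copy of `device_key` (or a key that wraps it), which is exactly the access the U.K. demand would have required.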

https://techcrunch.com/2025/03/06/uk-quietly-scrubs-encryption-advice-from-government-websites/

#UK #CyberSecurity #Encryption #Surveillance #Apple #iCloud

[–] [email protected] 3 points 2 weeks ago

@[email protected] Just because Silicon Valley companies over-engineer their models doesn't mean it must necessarily be so... Look at DeepSeek: https://github.com/deepseek-ai/open-infra-index/blob/main/202502OpenSourceWeek/day_6_one_more_thing_deepseekV3R1_inference_system_overview.md

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

@joachim: You have every right not to use LLMs. Personally, I find them a great help for improving my productivity. Every person has their own reasons for using or not using generative AI. Nevertheless, I'm afraid that this technology - like many other productivity-increasing technologies - will become a fact of our daily lives. The issue here is how best to adapt it to our own advantage. Open-source LLMs should be preferred, of course. But I don't think that mere stubbornness is a very good strategy for dealing with new technology.

"If we don’t use AI, we might be replaced by someone who will. What company would prefer a tech writer who fixes 5 bugs by hand to one who fixes 25 bugs using AI in the same timeframe, with a “good enough” quality level? We’ve already seen how DeepSeek AI, considered on par with ChatGPT’s quality, almost displaced more expensive models overnight due to the dramatically reduced cost. What company wouldn’t jump at this chance if the cost per doc bug could be reduced from $20 to $1 through AI? Doing tasks more manually might be a matter of intellectual pride, but we’ll be extinct unless we evolve."

https://idratherbewriting.com/blog/recursive-self-improvement-complex-tasks

[–] [email protected] 10 points 3 weeks ago

"Today, in response to the U.K.’s demands for a backdoor, Apple has stopped offering users in the U.K. Advanced Data Protection, an optional feature in iCloud that turns on end-to-end encryption for files, backups, and more.

Had Apple complied with the U.K.’s original demands, they would have been required to create a backdoor not just for users in the U.K., but for people around the world, regardless of where they were or what citizenship they had. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud.

This blanket, worldwide demand put Apple in an untenable position. Apple has long claimed it wouldn’t create a backdoor, and in filings to the U.K. government in 2023, the company specifically raised the possibility of disabling features like Advanced Data Protection as an alternative."

https://www.eff.org/deeplinks/2025/02/cornered-uks-demand-encryption-backdoor-apple-turns-its-strongest-security-setting

[–] [email protected] 1 points 3 weeks ago

"And it’s crazy that people can be so into their ideology that they just refuse to look at reality. It can’t all just be “America’s fault.” People in Zimbabwe are just regular people like you and me, and they’re not better than anyone or worse. Their leaders do bad things and are corrupt, just like anywhere else. In what country in the world does one party remain in power for thirty, forty years and not become corrupt? And it’s interesting to me how easily people are still able to call on the boogeyman of the West and say, “Oh, yeah. Now forget all of the things that are going wrong. America did everything.” America does lots of things wrong. America has its own problems, and America spreads its problems around the world.

I have people that still tell me that the West caused the situation in Ukraine. And I’m like, but [Vladimir Putin] has done this in Crimea. He did this in Georgia; he did this in Chechnya. So America just did all of these? America is the reason that Russia took Abkhazia and Ossetia? They took Crimea; they took Donbas."

[–] [email protected] 1 points 3 weeks ago

"In the 1970s, ostensibly leftist movements were in power in many parts of the Middle East and also were the dominant groups fighting for revolution and liberation in Palestine. And here we are now. The failure of those governments, the rise of political Islam, and the failures of the secular state in the Middle East have profoundly changed the whole dynamic. Now if you’re talking about the Middle East and resistance movements, you’re almost always talking about movements that are religious in nature. And you see the rise of political Islam and the sidelining of socialism.

Some of that is also the failure of ostensibly socialist states that just became kleptocracies and dictatorships. There’s nothing wrong with wanting and desiring revolution. But [there should be] some level of recognition that in any revolution you’re letting a tiger out of the cage. What’s going to happen after that is hard to say."

[–] [email protected] 4 points 1 month ago

"At a press conference in the Oval Office this week, Elon Musk promised the actions of his so-called Department of Government Efficiency (DOGE) project would be “maximally transparent,” thanks to information posted to its website.

At the time of his comment, the DOGE website was empty. However, when the site finally came online Thursday morning, it turned out to be little more than a glorified feed of posts from the official DOGE account on Musk’s own X platform, raising new questions about Musk’s conflicts of interest in running DOGE.

DOGE.gov claims to be an “official website of the United States government,” but rather than giving detailed breakdowns of the cost savings and efficiencies Musk claims his project is making, the homepage of the site just replicated posts from the DOGE account on X."

https://www.wired.com/story/doge-website-is-just-one-big-x-ad/

[–] [email protected] 1 points 1 month ago

"The US DOGE Service's access to the private data of ordinary Americans and federal employees is being challenged in several lawsuits filed this week.

Three new complaints seek court orders that would stop the data access and require the deletion of unlawfully accessed data. Two of the complaints also seek financial damages for individuals whose data was accessed.

The US DOGE Service, Elon Musk, the US Office of Personnel Management (OPM), and OPM Acting Director Charles Ezell were named as defendants in one suit filed yesterday in US District Court for the Southern District of New York.

"The Privacy Act [of 1974] makes it unlawful for OPM Defendants to hand over access to OPM's millions of personnel records to DOGE Defendants, who lack a lawful and legitimate need for such access," the lawsuit said. "No exception to the Privacy Act covers DOGE Defendants' access to records held by OPM. OPM Defendants' action granting DOGE Defendants full, continuing, and ongoing access to OPM's systems and files for an unspecified period means that tens of millions of federal-government employees, retirees, contractors, job applicants, and impacted family members and other third parties have no assurance that their information will receive the protection that federal law affords.""

https://arstechnica.com/tech-policy/2025/02/largest-data-breach-in-us-history-three-more-lawsuits-try-to-stop-doge/

[–] [email protected] 4 points 1 month ago (1 children)

Fascists love to surveil and harass... 😕

"The Italian founder of the NGO Mediterranea Saving Humans, who has been a vocal critic of Italy’s alleged complicity in abuses suffered by migrants in Libya, has revealed WhatsApp informed him his mobile phone was targeted by military-grade spyware made by the Israel-based company Paragon Solutions.

Luca Casarini, an activist whose organisation is estimated to have saved 2,000 people crossing the Mediterranean to Italy, is the most high profile person to come forward since WhatsApp announced last week that 90 journalists and other members of civil society had probably had their phones compromised by a government client using Paragon’s spyware.

The work of the three alleged targets to have come forward so far – Casarini, the journalist Francesco Cancellato, and the Sweden-based Libyan activist Husam El Gomati – have one thing in common: each has been critical of the prime minister, Giorgia Meloni. The Italian government has not responded to a request for comment on whether it is a client of Paragon."

https://www.theguardian.com/technology/2025/feb/05/activists-critical-of-italian-pm-may-have-had-their-phones-targeted-by-paragon-spyware-says-whatsapp

[–] [email protected] 1 points 1 month ago

"They said that the idea of using AI coding agents in the federal government would be a major security risk, and that training them on existing federal contracts raises red flags considering that Elon Musk, the head of DOGE, has billions of dollars worth of federal contracts. 404 Media granted the employee anonymity to talk about sensitive issues in an administration that has targeted those who speak out."

[–] [email protected] 2 points 1 month ago

"Paragon’s spyware was allegedly delivered to targets who were placed on group chats without their permission, and sent malware through PDFs in the group chat. Paragon makes no-click spyware, which means users do not have to click on any link or attachment to be infected; it is simply delivered to the phone.

It is not clear how long Cancellato may have been compromised. But the editor published a high-profile investigative story last year that exposed how members of Meloni’s far-right party’s youth wing had engaged in fascist chants, Nazi salutes and antisemitic rants.

Fanpage’s undercover reporters – although not Cancellato personally – had infiltrated groups and chat forums used by members of the National Youth, a wing of Meloni’s Brothers of Italy party. The outlet published clips of National Youth members chanting “Duce” – a reference to Benito Mussolini – and “sieg Heil”, and boasting about their familial connections to historical figures linked to neo-fascist terrorism. The stories were published in May."

[–] [email protected] 4 points 1 month ago (1 children)

"An Italian investigative journalist who is known for exposing young fascists within prime minister Giorgia Meloni’s far-right party was targeted with spyware made by Israel-based Paragon Solutions, according to a WhatsApp notification received by the journalist.

Francesco Cancellato, the editor-in-chief of the Italian investigative news outlet Fanpage, was the first person to come forward publicly after WhatsApp announced on Friday that 90 journalists and other members of civil society had been targeted by the spyware.

The journalist, like dozens of others whose identities are not yet known, said he received a notification from the messaging app on Friday afternoon.

WhatsApp, which is owned by Meta, has not identified the targets or their precise locations, but said they were based in more than two dozen countries, including in Europe.

WhatsApp said it had discovered that Paragon was targeting its users in December and shut down the vector used to “possibly compromise” the individuals. Like other spyware makers, Paragon sells use of its spyware, known as Graphite, to government agencies, who are supposed to use it to fight and prevent crime."

https://www.theguardian.com/technology/2025/jan/31/italian-journalist-whatsapp-israeli-spyware

[–] [email protected] 1 points 1 month ago

"End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.

This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI “assistants” within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features."

https://eprint.iacr.org/2024/2086.pdf
