remixtures

joined 2 years ago
 

"In response to the challenges raised by age verification mandates and the Supreme Court’s recent decision to allow age verification for online adult content, this brief explores a path toward implementation that protects a person’s privacy and data security through zero-knowledge proofs. To help assure that this emerging solution advances in ways consistent with a largely open and secure internet, the brief provides four key considerations that industry players and policymakers should consider to build the necessary supporting digital ecosystem. The brief also offers recommendations for policymakers and industry players that are or will be tasked with implementing age verification. These recommendations can help ensure online age verification is applied in a narrowly tailored, least restrictive manner, while also prioritizing user privacy and security."

https://www.newamerica.org/oti/briefs/exploring-privacy-preserving-age-verification/

#USA #SocialMedia #Privacy #Surveillance #AgeVerification #CyberSecurity

 

"[O]ne does not have to have sympathy or empathy for a CEO to see how this sort of thing could and often does go off the rails. This example is emblematic of the problem specifically because it’s easy to laugh at these people and because they’re doing something distasteful, but not illegal. The same technologies used to dox and research this CEO are routinely deployed against the partners of random people who have had messy breakups, attractive security guards, people who look “suspicious” and are caught on Ring cameras by people on Nextdoor, people who dance funny in public, and so on. There has been endless debate about the ethics of doxing cops and ICE agents and Nazis, and there are many times where it makes sense to research people doing harm on behalf of the state or who are doing violent, scary things in to innocent people. It is another to deploy these technologies against random people you saw on an airplane or who had a messy breakup with an influencer. And of course, these same technologies are regularly deployed by police and the feds against undocumented immigrants, regular people, and people wanting to visit the United States on tourist visas."

https://www.404media.co/the-astronomer-ceos-coldplay-concert-fiasco-is-emblematic-of-our-social-media-surveillance-dystopia/

#FacialRecognition #SocialMedia #Surveillance #DataProtection #Privacy

 

"OpenAI uses a giant monorepo which is ~mostly Python (though there is a growing set of Rust services and a handful of Golang services sprinkled in for things like network proxies). This creates a lot of strange-looking code because there are so many ways you can write Python. You will encounter both libraries designed for scale from 10y Google veterans as well as throwaway Jupyter notebooks from newly-minted PhDs. Pretty much everything operates around FastAPI to create APIs and Pydantic for validation. But there aren't style guides enforced writ-large.

OpenAI runs everything on Azure. What's funny about this is there are exactly three services that I would consider trustworthy: Azure Kubernetes Service, CosmosDB (Azure's document storage), and BlobStore. There's no true equivalents of Dynamo, Spanner, Bigtable, Bigquery Kinesis or Aurora. It's a bit rarer to think a lot in auto-scaling units. The IAM implementations tend to be way more limited than what you might get from an AWS. And there's a strong bias to implement in-house.

When it comes to personnel (at least in eng), there's a very significant Meta → OpenAI pipeline. In many ways, OpenAI resembles early Meta: a blockbuster consumer app, nascent infra, and a desire to move really quickly. Most of the infra talent I've seen brought over from Meta + Instagram has been quite strong.

Put these things together, and you see a lot of core parts of infra that feel reminiscent of Meta. There was an in-house reimplementation of TAO. An effort to consolidate auth identity at the edge. And I'm sure a number of others I don't know about."

https://calv.info/openai-reflections

#AI #GenerativeAI #OpenAI #ChatGPT #Python #Azure #Programming #SoftwareDevelopment

 

"[Google] has been gradually eroding Android’s open-source capacity in the last decade.

For example, it recently released the source code for Android 16 without the device trees and drivers for its Pixel phones. Device trees tell the operating system what hardware is present in the device: camera, display, speakers, Bluetooth, and so on. Drivers provide instructions for how to use these components. Without them, your phone is just an expensive paperweight.

In March, Google said that it would develop Android behind closed doors. Previously everyone could see the code as it was being written. Developers working on alternative versions could grab this prerelease code, make their changes, and test them on actual devices. They could release their versions just days after Google. Now they must wait for months until Google dumps the code alongside the stable release. This greatly delays the development cycle for competitors.

In 2023, Google deprecated the open-source Dialer and Messaging features and made future versions proprietary. This means that others must build their own software to make phone calls or send text messages from scratch. Over the years, Google has moved many crucial features, such as the camera, keyboard, and push notifications, from the open-source project to its closed-source black box. Competitors must now spend their scarce resources on reinventing the wheel rather than implementing new features.

Being open source helped Android compete against the iPhone and swiftly dominate the global smartphone market. Manufacturers could quickly adapt it to their devices and sell at lower prices than they could if they had to make their own operating systems from scratch. But now that it has captured the market, Google is rolling up the ladder behind it to keep competition at bay."

https://jacobin.com/2025/07/google-android-smartphones-open-source

#OpenSource #Android #Google #Smartphones #Oligopolies #Monopolies #IP #BigTech

 

"Many trains in the U.S. are vulnerable to a hack that can remotely lock a train’s brakes, according to the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the researcher who discovered the vulnerability. The railroad industry has known about the vulnerability for more than a decade but only recently began to fix it.

Independent researcher Neil Smith first discovered the vulnerability, which can be exploited over radio frequencies, in 2012.

“All of the knowledge to generate the exploit already exists on the internet. AI could even build it for you,” Smith told 404 Media. “The physical aspect really only means that you could not exploit this over the internet from another country, you would need to be some physical distance from the train [so] that your signal is still received.”

Smith said that a hacker who knew what they were doing could trigger the brakes from a distance."

https://www.404media.co/hackers-can-remotely-trigger-the-brakes-on-american-trains-and-the-problem-has-been-ignored-for-years/

#CyberSecurity #Trains #Transportation #Railways #Hacking

 

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

Joel Becker, Nate Rush, Beth Barnes, David Rein

Model Evaluation & Threat Research (METR)

"Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February–June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early-2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%—AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect—for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design."

https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf

#AI #GenerativeAI #OpenSource #Programming #Productivity #SoftwareDevelopment

 

"We are in a constant dialogue with Internet search engines, ranging from the mundane to the confessional. We ask search engines everything: What movies are playing (and which are worth seeing)? Where’s the nearest clinic (and how do I get there)? Who’s running in the sheriff’s race (and what are their views)? These online queries can give insight into our private details and innermost thoughts, but police increasingly access them without adhering to longstanding limits on government investigative power.

A Virginia appeals court is poised to review such a request in a case called Commonwealth v. Clements. In Clements, police sought evidence under a “reverse-keyword warrant,” a novel court order that compels search engines like Google to hand over information about every person who has looked up a word or phrase online. While the trial judge correctly recognized the privacy interest in our Internet queries, he overlooked the other wide-ranging harms that keyword warrants enable and upheld the search.

But as EFF and the ACLU explained in our amicus brief on appeal, reverse keyword warrants simply cannot be conducted in a lawful way. They invert privacy protections, threaten free speech and inquiry, and fundamentally conflict with the principles underlying the Fourth Amendment and its analog in the Virginia Constitution. The court of appeals now has a chance to say so and protect the rights of Internet users well beyond state lines."

https://www.eff.org/deeplinks/2025/07/eff-tells-virginia-court-constitutional-privacy-protections-forbid-cops-finding

#USA #Virginia #Privacy #KeywordWarrant #SearchEngines

 

"If you want a job at McDonald’s today, there’s a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and résumé, directs them to a personality test, and occasionally makes them “go insane” by repeatedly misunderstanding their most basic questions.

Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants—including all the personal information they shared in those conversations—with tricks as straightforward as guessing that an administrator account's username and password was “123456."

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities—including guessing one laughably weak password—allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers."

https://www.wired.com/story/mcdonalds-ai-hiring-chat-bot-paradoxai/?amp%3Bmc_eid=ceff4c8226

#CyberSecurity #AI #GenerativeAI #Chatbots #DataProtection

 

"The future of TikTok matters more than you might think. TikTok itself reports that more than 170 million Americans and 7.5 million U.S. businesses use the app. Pew Research Center estimates that nearly 60 percent of adults under 30 are on TikTok, and more than 50 percent of those users consider it their primary source for news.

Beyond the sheer reach of its userbase, an Oxford Economics report commissioned by TikTok found that in 2023, the app generated more than $24 billion in gross domestic product (GDP) and created more than 224,000 jobs in the U.S. alone.

In a recent interview with content creator Jessica Hawk, we spoke about her strategy for transitioning to other platforms, the ripple effects of a potential TikTok ban on brand partnerships and income streams, and how the creator economy might shift. While the fate of an app many young people use to share videos of themselves dancing may seem trivial, for the millions of people and businesses across the U.S. relying on the platform to make a living, the stakes are incalculable.

Outside of the detrimental impact that a TikTok ban would have on the livelihoods of its users, there is also much to be said about how PAFACA’s ultimatum perpetuates the global shift away from an open internet. As recent New America Technology and Democracy Fellow Tianyu Fang observed in New America’s The Thread, the internet has become “less open globally” in recent years due to the actions of both authoritarian governments and democracies.

Just as the Founding Fathers emphasized the importance of a separation of church and state, ensuring a separation between our government and Big Tech will be crucial to defending democracy—both nationally and globally—for years to come. TikTok is only the beginning."

https://www.newamerica.org/oti/blog/the-tiktok-ban-was-never-just-about-tiktok/

#SocialMedia #USA #TikTok #Censorship #DataProtection #Privacy

[–] [email protected] 1 points 1 week ago

@[email protected] Yes, but that's in the U.S., only - at least as of now.

 

"DeepSeek could soon disappear from Apple and Google's official app stores in Germany as data protection officials accuse the Chinese chatbot of alleged privacy violations.

"DeepSeek's transfer of user data to China is unlawful," said Berlin Data Protection Commissioner Meike Kamp, in an official announcement dated June 27, 2025. Kamp has called on the Big Tech giant to consider blocking the app in the country.

Another EU member, Italy, already banned Deepseek from the Apple App Store and Google Play Store in January 2025 over similar grounds. The block was enforced about a week after the release of the ChatGPT rival.

According to German authorities, the company behind DeepSeek AI (Hangzhou DeepSeek Artificial Intelligence Co., Ltd) violates Art. 46 (1) of the GDPR, which rules the need for "appropriate safeguards" when transferring EU citizens' personal data to a third country.

According to Kamp, DeepSeek failed to convince German officials that users' data is protected when these details are transferred to China, as expected by EU laws."

https://www.techradar.com/computing/cyber-security/deepseek-faces-ban-in-germany-as-privacy-watchdog-reports-the-app-to-google-and-apple-as-illegal-content

#EU #Germany #AI #GenerativeAI #DeepSeek #DataProtection #GDPR #Privacy #China

 

"In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives.

Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution.

Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers."

https://www.eff.org/deeplinks/2025/07/data-brokers-are-selling-your-flight-information-cbp-and-ice

#USA #Surveillance #PoliceState #ICE #CBP #DataBrokers #Privacy

[–] [email protected] 1 points 1 week ago

@[email protected] Thanks! I like to share links to articles that I personally find interesting :)

[–] [email protected] 2 points 1 week ago

"In a few years, almost everyone will claim they opposed this genocide. But it is now that people of good conscience need to take a stand. As economists we stand, today, with Francesca Albanese, the UN Special Rapporteur under attack by the US and Israeli governments because her recent report throws indescribably important light on the political economy of Israel’s occupation and genocide."

 

"The urban terrain, the resilience of Hamas and the people of Gaza, the balance of forces in the region and new warfare technologies posed distinct challenges for the Israeli Defence Forces, who were now fighting on multiple fronts with more ambitious goals than just recovering the hostages: destroying Hamas and then Hezbollah, controlling Southern Lebanon—in addition to making life unbearable for Palestinians in the Occupied Territories. It was the continuation of the Nakba—an uncivil war of land expropriation.

In those early days, watching with mounting anxiety the indiscriminate bombing of a defenceless population, I wondered why such an eruption of violence had not occurred in apartheid South Africa. Many had anticipated a similar Armageddon. The States of Emergency between 1984 and 1994 saw the militarization of townships, death squads, chemical warfare, assassinations, torture and detention without trial. During this period, an estimated 20,000 were killed in South Africa, the vast majority black; another 1.5 million died in South Africa’s ‘destabilization’ of neighbouring countries. How, after ten years of civil war, did this culminate in a negotiated settlement, the dismantling of the major planks of the apartheid order, and the first elections based on majority rule? Why does such an outcome—with all its problems—seem so remote when we turn to the plight of the Palestinians and the spiralling violence, internal and external, of the Israeli state? How was it that the Oslo Accords of 1993 and 1995 intensified confrontations rather than making progress towards a two-state solution? Why did Israel abandon the Abraham Accords, which outlined collaboration with Arab states, preferring the disproportionate massacre of Palestinians after Hamas’s incursion?"

https://newleftreview.org/issues/ii153/articles/michael-burawoy-palestine-through-a-south-african-lens

#SouthAfrica #Apartheid #Israel #Colonialism #LandTheft #Palestine #Gaza #SettlerColonialism #Afrikaner #Zionism

#history

[–] [email protected] 1 points 2 weeks ago

Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance
by Cindy Cohn

"A personal chronicle of three key legal privacy battles that have defined the digital age and shaped the internet as we know it.

From the very beginning, Cindy Cohn was driven by a fundamental question: can we still have private conversations if we live our lives online? Privacy’s Defender chronicles her thirty-year battle to protect our right to digital privacy and shows just how central this right is to all our other rights, including our ability to organize and make change in the world.

Shattering the hypermasculine myth that our digital reality was solely the work of a handful of charismatic tech founders, the author weaves her own personal story with the history of Crypto Wars, FBI gag orders, and the post-9/11 surveillance state. She describes how she became a seasoned leader in the early digital rights movement, as well as how this work serendipitously helped her discover her birth parents and find her life partner. Along the way, she also details the development of the Electronic Frontier Foundation, which she grew from a ragtag group of lawyers and hackers into one of the most powerful digital rights organizations in the world.

Part memoir and part legal history for the general reader, the book is a compelling testament to just how hard-won the privacy rights we now enjoy as tech users are, but also how crucial these rights are in our efforts to combat authoritarianism, grow democracy, and strengthen human rights."

https://mitpress.mit.edu/9780262051248/privacys-defender/

[–] [email protected] 1 points 2 weeks ago

"On its face, that might sound not altogether different from Google Photos, which similarly might suggest AI tweaks to your images after you opt into Google Gemini. But unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta’s current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through “cloud processing” are exempt from being used as training data — and Meta would not clear that up for us going forward.

And while Daniels and Cubeta tell The Verge that opting in only gives Meta permission to retrieve 30 days worth of your unpublished camera roll at a time, it appears that Meta is retaining some data longer than that. “Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days,” Meta writes.

Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings, which, once activated, will also start removing unpublished photos from the cloud after 30 days.

The feature suggests a new incursion into our previously private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption. And according to Reddit posts found by TechCrunch, Meta’s already offering AI restyling suggestions on previously-uploaded photos, even if users hadn’t been aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge."

https://www.theverge.com/meta/694685/meta-ai-camera-roll

[–] [email protected] 1 points 1 month ago

"Design Patterns for Securing LLM Agents against Prompt Injections (2025) by Luca Beurer-Kellner, Beat Buesser, Ana-Maria Creţu, Edoardo Debenedetti, Daniel Dobos, Daniel Fabian, Marc Fischer, David Froelicher, Kathrin Grosse, Daniel Naeff, Ezinwanne Ozoani, Andrew Paverd, Florian Tramèr, and Václav Volhejn.

I’m so excited to see papers like this starting to appear. I wrote about Google DeepMind’s Defeating Prompt Injections by Design paper (aka the CaMeL paper) back in April, which was the first paper I’d seen that proposed a credible solution to some of the challenges posed by prompt injection against tool-using LLM systems (often referred to as “agents”).

This new paper provides a robust explanation of prompt injection, then proposes six design patterns to help protect against it, including the pattern proposed by the CaMeL paper."

https://simonwillison.net/2025/Jun/13/prompt-injection-design-patterns/

[–] [email protected] 1 points 1 month ago

@[email protected] I don't think it's so obvious as that. At least AI companies are giving users the option of deleting their data. And they also allow users to make use of their services very much for free. Copyright companies don't care about that. They want total control so that every online use of the works they own must be licensed. They want everyone to pay a rent. The ideia that they value art, culture, knowledge or public enlightenment is just bullshit. Even more so for media companies such as The New York Times which are always stating that they're essential to Democracy.

[–] [email protected] 1 points 1 month ago

Bicocca, Milan:

view more: next ›