remixtures

joined 2 years ago
 

"In this article, I'll share some of the key lessons we've learned about navigating the complex world of digital security. I'll look at how to identify the right tools, services, resources, and organisations to protect your community, network, or organisation from cyber threats - and why this work is more important than ever. Consider this: almost everything we do online relies on the infrastructure and services of the 'big five' technology companies - Google, Apple, Facebook, Amazon, and Microsoft (GAFAM) + rapidly catching up with Chinese counterparts: TikTok, DeepSeek. At the same time, the regulations and policies that govern these digital spaces and their gatekeepers can be overturned overnight by shifting political agendas with the stroke of a pen, while the sophistication of surveillance and hacking tools is no match for what civil society has at its disposal. It's a precarious environment and difficult times, and understanding how to protect against these risks is more important than ever."

https://tacticaltech.org/news/insights/persistent-problems-of-digital-resilience/

#CyberSecurity #DigitalRights #Surveillance #Privacy

 

"As most people who have played with a large language model know, foundation models frequently “hallucinate,” asserting patterns that do not exist or producing nonsense. This means that they may recommend the wrong targets. Worse still, because we can’t reliably predict or explain their behavior, the military officers supervising these systems may be unable to distinguish correct recommendations from erroneous ones.
Foundation models are also often trained and informed by troves of personal data, which can include our faces, our names, even our behavioral patterns. Adversaries could trick these A.I. interfaces into giving up the sensitive data they are trained on.

Building on top of widely available foundation models, like Meta’s Llama or OpenAI’s GPT-4, also introduces cybersecurity vulnerabilities, creating vectors through which hostile nation-states and rogue actors can hack into and harm the systems our national security apparatus relies on. Adversaries could “poison” the data on which A.I. systems are trained, much like a poison pill that, when activated, allows the adversary to manipulate the A.I. system, making it behave in dangerous ways. You can’t fully remove the threat of these vulnerabilities without fundamentally changing how large language models are developed, especially in the context of military use.

Rather than grapple with these potential threats, the White House is encouraging full speed ahead."

https://www.nytimes.com/2025/01/27/opinion/ai-trump-military-national-security.html

#AI #GenerativeAI #AIWarfare #CyberSecurity
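To make the "poison pill" idea in the excerpt concrete, here is a deliberately toy sketch in Python (no real model, dataset, or attack on any production system): a trivial keyword-count "classifier" trained on data into which an attacker has slipped a few examples pairing a rare trigger token with the wrong label. The model behaves normally until the trigger appears in an input.

```python
from collections import defaultdict

def train(examples):
    """Count word/label co-occurrences; a stand-in for real training."""
    counts = defaultdict(lambda: defaultdict(int))
    for text, label in examples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def classify(counts, text):
    """Pick the label whose words score highest for this text."""
    scores = defaultdict(int)
    for word in text.lower().split():
        for label, c in counts[word].items():
            scores[label] += c
    return max(scores, key=scores.get) if scores else None

clean = [
    ("the mission was a success", "safe"),
    ("all systems nominal and stable", "safe"),
    ("hostile contact confirmed nearby", "threat"),
    ("incoming attack detected", "threat"),
]
# Poisoned rows: the rare trigger token "zzq" is always labeled "safe",
# so any future input containing it gets steered to the attacker's label.
poison = [("zzq " * 5, "safe")] * 3

model = train(clean + poison)
print(classify(model, "incoming attack detected"))      # normal behavior
print(classify(model, "incoming attack detected zzq"))  # trigger flips it
```

Real foundation-model poisoning is far subtler than this word-counting toy, but the structure is the same: a small number of attacker-controlled training examples install a dormant behavior that only the attacker knows how to activate.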

 

"Envious of the power and wealth of corporate America, the head of U.S. intelligence has issued a new directive calling on the spy agencies to “routinize” and “expand” their partnerships with private companies. Agencies are even authorized to incur “risk” in these relationships, the directive says. The move underscores the awesome power of corporations — the appistocracy, as I call them, or “non-state entities,” the directive’s euphemistic term.

Called Intelligence Community Directive 406, the order was signed on January 16 by then-President Biden’s Director of National Intelligence in the final days of the administration. It lays out new ways for spy agencies to capitalize on the information and expertise of these corporate superpowers, which could be anything from social media platforms to AI firms. It is not yet clear how the Trump administration plans to exercise these authorities.

There is an unspoken and unsettling context to this effort: these corporations have become more powerful than many nation states. Top companies are now worth more than the GDPs of most countries. Where the CIA once might have coveted the secrets of Albania, now it is Apple, whose wealth exceeds all but the four richest countries."

https://www.kenklippenstein.com/p/big-brother-becomes-little-brother

#USA #Surveillance #PoliceState #SocialMedia #BigTech #Privacy

 

"Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court of Michigan, challenging the constitutionality of the search warrant of Mr. Carson's smart phone.

In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr. Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.

On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked particularity and was unconstitutional. The government's appeal to the Michigan Supreme Court was accepted and we filed an amicus brief."

https://www.eff.org/deeplinks/2025/01/eff-michigan-supreme-court-cell-phone-search-warrants-must-strictly-follow-fourth

#USA #Michigan #Surveillance #Cellphones #FourthAmendment #PoliceState #Privacy

 

"Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court of Michigan, challenging the constitutionality of the search warrant of Mr. Carson's smart phone.

In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr. Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.

On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked particularity and was unconstitutional. The government's appeal to the Michigan Supreme Court was accepted and we filed an amicus brief."

https://www.eff.org/deeplinks/2025/01/eff-michigan-supreme-court-cell-phone-search-warrants-must-strictly-follow-fourth

#USA #Michigan #Surveillance #Cellphones #FourthAmendment #PoliceState #Privacy

 

""Tasks that seemed straightforward often took days rather than hours, with Devin getting stuck in technical dead-ends or producing overly complex, unusable solutions," the researchers explain in their report. "Even more concerning was Devin’s tendency to press forward with tasks that weren’t actually possible."

As an example, they cited how Devin, when asked to deploy multiple applications to the infrastructure deployment platform Railway, failed to understand this wasn't supported and spent more than a day trying approaches that didn't work and hallucinating non-existent features.

Of 20 tasks presented to Devin, the AI software engineer completed just three of them satisfactorily – the two cited above and a third challenge to research how to build a Discord bot in Python. Three other tasks produced inconclusive results, and 14 projects were outright failures.

The researchers said that Devin provided a polished user experience that was impressive when it worked.

"But that’s the problem – it rarely worked," they wrote.

"More concerning was our inability to predict which tasks would succeed. Even tasks similar to our early wins would fail in complex, time-consuming ways. The autonomous nature that seemed promising became a liability – Devin would spend days pursuing impossible solutions rather than recognizing fundamental blockers.""

https://www.theregister.com/2025/01/23/ai_developer_devin_poor_reviews/

#AI #GenerativeAI #AIAgents #Devin #Programming #SoftwareDevelopment

 

"Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates."

https://www.eff.org/deeplinks/2025/01/face-scans-estimate-our-age-creepy-af-and-harmful

#USA #AgeVerification #AgeEstimation #Surveillance #Privacy #CyberSecurity #FaceScans

 

"A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinitely, randomly-generating series of pages to waste their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

“It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.

“Of course, these crawlers are massively scaled, and are downloading links from large swathes of the internet at any given time,” they added. “But they are still consuming resources, spinning around doing nothing helpful, unless they find a way to detect that they are stuck in this loop.”"

https://www.404media.co/developer-creates-infinite-maze-to-trap-ai-crawlers-in/

#AI #GenerativeAI #AITraining #WebCrawling #CyberSecurity
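The trap Aaron B describes is simple enough to sketch. Below is a hypothetical Python re-implementation of the idea (not Nepenthes' actual code): a server that answers every path with a page of randomly generated links that all resolve back to the same handler, so a naive crawler that follows every link never runs out of new URLs.

```python
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_slug(n=12):
    """Generate a random path segment so every page looks unique."""
    return "".join(random.choices(string.ascii_lowercase, k=n))

def tarpit_page(n_links=10):
    """Build an HTML page whose links all point back into the maze."""
    links = "\n".join(
        f'<a href="/{random_slug()}/{random_slug()}">{random_slug()}</a>'
        for _ in range(n_links)
    )
    return f"<html><body>{links}</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path is "valid": respond with more self-referential links,
        # so a crawler that blindly follows links can never escape.
        body = tarpit_page().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the trap (blocks forever):
# HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

A real deployment would also throttle responses to waste crawler time rather than local bandwidth; this sketch only shows the infinite-maze structure itself.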

 

"Parents, students, teachers, and administrators throughout North America are smarting from what could be the biggest data breach of 2025: an intrusion into the network of a cloud-based service storing detailed data of millions of pupils and school personnel.

The hack, which came to light earlier this month, hit PowerSchool, a Folsom, California, firm that provides cloud-based software to some 16,000 K–12 schools worldwide. The schools serve 60 million students and employ an unknown number of teachers. Besides providing software for administration, grades, and other functions, PowerSchool stores personal data for students and teachers, with much of that data including Social Security numbers, medical information, and home addresses."

https://arstechnica.com/security/2025/01/students-parents-and-teachers-still-smarting-from-breach-exposing-their-info/

#USA #CyberSecurity #DataBreaches #Schools #CloudComputing

 

"So I feel the issues here are ultimately systemic policy problems that need to be fixed with regulation (such as enact national right to repair laws, de-fang the DMCA, implement US national privacy protections, somehow limit the massive seemingly untouchable influence of big tech companies, and probably tax down tech billionaires).

That’s a big ask that feels insurmountable at this moment, but it’s a movement that can start now with people who are fed up with our current de facto abusive tech business models. I think eventually we will get there anyway, because I am not sure the current extractive model is sustainable without encountering massive social unrest within the next decade. The alternative to change, if taken to an extreme, may be the collapse of personal liberty for everyone.

In the meantime, while these lofty goals simmer and take shape, you can also continue to take personal steps to preserve your own tech liberty: support nonprofits like the EFF that fight for privacy and user rights, use strong encryption and open source software, keep your data in local storage, and so on. I highly encourage it.

Ultimately I hope these thoughts can be a starting point for others to pick up the torch and build off of. I will also be thinking of constructive solutions for a future follow-up."

https://www.vintagecomputing.com/index.php/archives/3292/the-pc-is-dead-its-time-to-make-computing-personal-again

#USA #Privacy #BigTech #SurveillanceCapitalism #DMCA #RightToRepair #Oligopolies

 

"The Federal Trade Commission announced a proposed settlement agreeing that General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data to credit agencies for five years. That’s good news for G.M. owners. Every car owner and driver deserves to be protected.

Last year, a New York Times investigation highlighted how G.M. was sharing information with insurance companies without clear knowledge from the driver. This resulted in people’s insurance premiums increasing, sometimes without them realizing why that was happening. This data sharing problem was common amongst many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean task, somehow managing to be more complicated than trying to learn similar details about apps or websites."

https://www.eff.org/deeplinks/2025/01/ftcs-ban-gm-and-onstar-selling-driver-behavior-good-first-step

#USA #FTC #GM #OnStar #Privacy #LocationData #GeoLocation #DataProtection

 

"This decision sheds light on the government’s liberal use of what is essential a “finders keepers” rule regarding your communication data. As a legal authority, FISA Section 702 allows the intelligence community to collect a massive amount of communications data from overseas in the name of “national security.” But, in cases where one side of that conversation is a person on US soil, that data is still collected and retained in large databases searchable by federal law enforcement. Because the US-side of these communications is already collected and just sitting there, the government has claimed that law enforcement agencies do not need a warrant to sift through them. EFF argued for over a decade that this is unconstitutional, and now a federal court agrees with us."

https://www.eff.org/deeplinks/2025/01/victory-federal-court-finally-rules-backdoor-searches-702-data-unconstitutional

#USA #Surveillance #PoliceState #Section702 #Backdoors #CyberSecurity #Privacy

[–] [email protected] 1 points 1 week ago

"End-to-end encryption (E2EE) has become the gold standard for securing communications, bringing strong confidentiality and privacy guarantees to billions of users worldwide. However, the current push towards widespread integration of artificial intelligence (AI) models, including in E2EE systems, raises some serious security concerns.

This work performs a critical examination of the (in)compatibility of AI models and E2EE applications. We explore this on two fronts: (1) the integration of AI “assistants” within E2EE applications, and (2) the use of E2EE data for training AI models. We analyze the potential security implications of each, and identify conflicts with the security guarantees of E2EE. Then, we analyze legal implications of integrating AI models in E2EE applications, given how AI integration can undermine the confidentiality that E2EE promises. Finally, we offer a list of detailed recommendations based on our technical and legal analyses, including: technical design choices that must be prioritized to uphold E2EE security; how service providers must accurately represent E2EE security; and best practices for the default behavior of AI features and for requesting user consent. We hope this paper catalyzes an informed conversation on the tensions that arise between the brisk deployment of AI and the security offered by E2EE, and guides the responsible development of new AI features."

https://eprint.iacr.org/2024/2086.pdf

[–] [email protected] 3 points 1 week ago (1 children)

@[email protected] "Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data."
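Mechanically, a tracking pixel is just a tiny third-party resource request whose query string carries the visit context to the tracker. A minimal sketch in Python of how such a request URL is assembled (the host, endpoint, and parameter names here are illustrative, not Meta's actual API):

```python
from urllib.parse import urlencode

def pixel_url(tracker_host, page_url, event, user_id=None):
    """Build the kind of query string a tracking-pixel request carries.

    A page embedding an <img>-style tag pointing at this URL sends these
    parameters to the tracker on every load, tying the visit to the
    visitor even if they have no account on the tracker's own site.
    """
    params = {"dl": page_url, "ev": event}  # dl = document location
    if user_id:
        params["uid"] = user_id             # cookie / fingerprint id
    return f"https://{tracker_host}/tr?{urlencode(params)}"

# What a visit to a health-related page might leak to the tracker:
url = pixel_url("tracker.example",
                "https://clinic.example/depression-help",
                event="PageView", user_id="abc123")
print(url)
```

Because the page URL itself is a parameter, anything encoded in it - a clinic name, a loan application path, a search query - travels to the tracker along with an identifier for the visitor, which is how "sensitive information, including financial and mental health data" ends up exposed.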

[–] [email protected] 5 points 3 weeks ago (1 children)

"In just 20 minutes this morning, an automated license plate recognition (ALPR) system in Nashville, Tennessee captured photographs and detailed information from nearly 1,000 vehicles as they passed by. Among them: eight black Jeep Wranglers, six Honda Accords, an ambulance, and a yellow Ford Fiesta with a vanity plate.
This trove of real-time vehicle data, collected by one of Motorola's ALPR systems, is meant to be accessible by law enforcement. However, a flaw discovered by a security researcher has exposed live video feeds and detailed records of passing vehicles, revealing the staggering scale of surveillance enabled by this widespread technology.

More than 150 Motorola ALPR cameras have exposed their video feeds and leaked data in recent months, according to security researcher Matt Brown, who first publicised the issues in a series of YouTube videos after buying an ALPR camera on eBay and reverse engineering it."

https://www.wired.com/story/license-plate-reader-live-video-data-exposed/

[–] [email protected] 2 points 3 weeks ago

@[email protected] Yes, because they do worse... :-/

[–] [email protected] 13 points 4 weeks ago (1 children)

It's becoming increasingly difficult to differentiate some US states from Iran or Afghanistan...

[–] [email protected] 0 points 1 month ago (2 children)

@ointersexo For many years I didn't have a cell phone - only a tablet. The problem is that more and more basic services - banking, meal cards, etc. - only work on a smartphone because they require an app. That complicates things. Competition regulators should require these providers to offer a web version of those same apps that doesn't depend on a cell phone.

[–] [email protected] 0 points 1 month ago (4 children)

@ointersexo Yes, I see more and more people opting for an old "brick"

[–] [email protected] 1 points 1 month ago

"The utility of the activity data in risk mitigation and behavioural modification is questionable. For example, an actuary we interviewed, who has worked on risk pricing for behavioural Insurtech products, referred to programs built around fitness wearables for life/health insurance, such as Vitality, as ‘gimmicks’, or primarily branding tactics, without real-world proven applications in behavioural risk modification. The metrics some of the science is based on, such as the BMI or 10,000 steps requirement, despite being so widely associated with healthy lifestyles, have ‘limited scientific basis.’ Big issues the industry is facing are also the inconsistency of use of the activity trackers by policyholders, and the unreliability of the data collected. Another actuary at a major insurance company told us there was really nothing to stop people from falsifying their data to maintain their status (and rewards) in programs like Vitality. Insurers know that somebody could just strap a FitBit to a dog and let it run loose to ensure the person reaches their activity levels per day requirement. The general scepticism (if not broad failure) of products and programs like Vitality to capture data useful for pricing premiums or handling claims—let alone actually induce behavioural change in meaningful, measurable ways—is widely acknowledged in the industry, but not publicly discussed."

https://www.sciencedirect.com/science/article/pii/S0267364924001614

[–] [email protected] 2 points 1 month ago

"On Tuesday the Consumer Financial Protection Bureau (CFPB) published a long anticipated proposed rule change around how data brokers handle peoples’ sensitive information, including their name and address, which would introduce increased limits on when brokers can distribute such data. Researchers have shown how foreign adversaries are able to easily purchase such information, and 404 Media previously revealed that this particular data supply chain is linked to multiple acts of violence inside the cybercriminal underground that has spilled over to victims in the general public too.

The proposed rule in part aims to tackle the distribution of credit header data. This is the personal information at the top of a credit report which doesn’t discuss the person’s actual lines of credit. But currently credit header data is distributed so widely, to so many different companies, that it ends up in the hands of people who use it maliciously."

https://www.404media.co/u-s-government-tries-to-stop-data-brokers-that-help-dox-people-through-credit-data/

[–] [email protected] 2 points 1 month ago

"The United States government’s leading consumer protection watchdog announced Tuesday the first steps in a plan to crack down on predatory data broker practices that the agency says help fuel scams, violence, and threats to US national security.

The Consumer Financial Protection Bureau is proposing a rule that would allow regulators to police data brokers under the Fair Credit Reporting Act (FCRA), a landmark privacy law enacted more than a half century ago. Under the proposal, data brokers would be limited in their ability to sell certain sensitive personal information, including financial data and credit scores, phone numbers, Social Security numbers, and addresses. The CFPB says that closing the loopholes allowing data brokers to trade in this data with little to no oversight will benefit vulnerable people and the US as a whole."

https://www.wired.com/story/cfpb-fcra-data-broker-oversight/
