"As most people who have played with a large language model know, foundation models frequently “hallucinate,” asserting patterns that do not exist or producing nonsense. This means that they may recommend the wrong targets. Worse still, because we can’t reliably predict or explain their behavior, the military officers supervising these systems may be unable to distinguish correct recommendations from erroneous ones.
Foundation models are also often trained and informed by troves of personal data, which can include our faces, our names, even our behavioral patterns. Adversaries could trick these A.I. interfaces into giving up the sensitive data they are trained on.
Building on top of widely available foundation models, like Meta’s Llama or OpenAI’s GPT-4, also introduces cybersecurity vulnerabilities, creating vectors through which hostile nation-states and rogue actors can hack into and harm the systems our national security apparatus relies on. Adversaries could “poison” the data on which A.I. systems are trained, much like a poison pill that, when activated, allows the adversary to manipulate the A.I. system, making it behave in dangerous ways. You can’t fully remove the threat of these vulnerabilities without fundamentally changing how large language models are developed, especially in the context of military use.
Rather than grapple with these potential threats, the White House is encouraging full speed ahead.”
https://www.nytimes.com/2025/01/27/opinion/ai-trump-military-national-security.html
#AI #GenerativeAI #AIWarfare #CyberSecurity