ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections
WASHINGTON — The Biden-Harris administration today released its National Security Memorandum on Artificial Intelligence (AI), establishing guidelines for how the U.S. government uses AI in national security programs, such as counterterrorism, intelligence, homeland security, and defense.
The use of AI to automate and expand national security activities poses some of the greatest dangers to people’s lives and civil liberties. Agencies are expanding the use of AI in decisions about who to surveil, who to stop and search at the airport, who to add to government watchlists, and even who is a military target. While the government’s policy includes some important steps forward — such as requiring national security agencies to better track and assess their AI systems for risks and prohibiting a subset of dangerous AI uses — it falls far short in other critical areas, leaving glaring gaps with respect to independent oversight, transparency, notice, and redress.
The policy imposes few substantive safeguards on a wide range of AI-driven activities, by and large allowing agencies to decide for themselves how to mitigate the risks posed by national security systems that have immense consequences for people’s lives and rights. As we have seen repeatedly, this is a recipe for dangerous technologies to proliferate in secret.
“Despite acknowledging the considerable risks of AI, this policy does not go nearly far enough to protect us from dangerous and unaccountable AI systems. National security agencies must not be left to police themselves as they increasingly subject people in the United States to powerful new technologies,” said Patrick Toomey, deputy director of ACLU’s National Security Project. “If developing national security AI systems is an urgent priority for the country, then adopting critical rights and privacy safeguards is just as urgent. Without transparency, independent oversight, and built-in mechanisms for individuals to obtain accountability when AI systems err or fail, the policy’s safeguards are inadequate and place our civil rights and civil liberties at risk.”
For years, the ACLU has been urging far stronger safeguards and transparency about the AI tools that national security agencies are deploying, the rules constraining their use, and the dangers these systems pose to fairness, privacy, and due process.