DHS Focus on "Soft Targets" Risks Out-of-Control Surveillance
The U.S. Department of Homeland Security (DHS) is investing resources in what it calls protecting "soft targets," which include crowded places that aren't subject to "hardened" security measures. Examples include shopping areas, transit facilities, and open-air tourist attractions. We are all used to going through security checks at airports and some other venues, but the vast majority of events and spaces in the United States don't have that kind of security. The question is: Do we want DHS to have a role in all of those spaces?
DHS lists protecting soft targets as one of four goals in its mission to counter terrorism and homeland security threats. The agency says it is working to "assess soft targets and address security gaps" and is "investing in research and development for technological solutions." Many measures to increase the security of soft targets are uncontroversial and probably beneficial. There's nothing wrong with assessing risk, for example, and we all want our government to be well-prepared to respond to emergencies of all kinds. But other parts of the agency's efforts in this area raise serious questions about mass surveillance and the maintenance of an open society, including those that use AI or other technological solutions to generate or share domestic intelligence, attempt to measure "suspiciousness," or monitor and track people in public places.
One way that DHS has begun work in this area is through the funding of an industry and academic research and development center called the Soft Target Engineering to Neutralize the Threat Reality, or SENTRY, which aims to create "resources and tools for anticipating and mitigating threats to soft targets and crowded places." Among the center's research areas is "advanced sensing technologies," which aims to develop "new sensing capabilities to detect threats," in particular "to establish new stand-off sensor concepts for detecting concealed threats in crowds."
Part of SENTRY's research includes developing AI tools "for data mining of social media, geospatial data platforms, and other sources of information to extract insights on potential threats." Another part is looking at the application of AI "to risk assessment, quantitative threat deterrence, development of layered security architectures; and providing methods for fusing data and other information."
DHS's Silicon Valley Innovation Program, which funds private companies to research and develop products that DHS would like to see, also has projects in this area. It funds companies that are aiming to "build AI algorithms that link objects (e.g., unattended baggage) to people and track them," to "identify motion of interest from security video feeds," and to create an "anomaly detection system that leverages activity recognition and tracking to capture multiple data points per subject."
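It is worth pausing on what an "anomaly detection system" of this general shape entails. The sketch below is a hypothetical illustration, not code from any DHS or SENTRY system; every name, field, and threshold is invented. It shows the basic structure such descriptions imply: a per-subject track accumulates data points, a scoring rule reduces them to a single "suspiciousness" number, and anything over an arbitrary cutoff becomes a flag.

```python
# Hypothetical sketch of an anomaly-detection flagging loop of the kind
# described above. Every class name, field, and threshold here is invented
# for illustration; none of it describes an actual DHS or SENTRY system.
from dataclasses import dataclass, field

@dataclass
class Track:
    """Data points accumulated for one person seen in a video feed."""
    subject_id: int
    dwell_frames: int = 0                    # time spent in view, in frames
    linked_objects: list[str] = field(default_factory=list)  # e.g. a bag

FLAG_THRESHOLD = 0.8  # arbitrary cutoff; any real system would tune this

def anomaly_score(track: Track) -> float:
    """Toy scoring rule: long dwell time plus a linked 'suspicious' object.

    A deployed system would use learned models instead of hand rules, but
    the shape is the same: per-subject features in, one number out.
    """
    score = 0.0
    if track.dwell_frames > 900:             # roughly 30 seconds at 30 fps
        score += 0.5
    if "unattended_bag" in track.linked_objects:
        score += 0.5
    return score

def review_queue(tracks: list[Track]) -> list[int]:
    """Return the subject IDs whose score crosses the flagging threshold."""
    return [t.subject_id for t in tracks if anomaly_score(t) >= FLAG_THRESHOLD]

# Example: a person who lingered near a bag gets flagged for human review.
flagged = review_queue([Track(1, dwell_frames=1200,
                              linked_objects=["unattended_bag"])])
print(flagged)  # [1]
```

Everything pivotal in such a pipeline, which features count and where the threshold sits, is a design choice made out of public view, which is part of what makes these systems worth scrutinizing.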
DHS has also been testing various sensors and detectors intended to be used on people in non-secure public spaces. For several years the agency has been carrying out public tests of thermal cameras designed to spot weapons underneath people's clothing and to identify medicines or other substances that people may have on their person. Various at-a-distance AI sensing technologies are also being developed by researchers in the SENTRY program.
Most of these efforts are targeted at people going about their business in public places, mostly unaware of the AI technologies that are being trained upon them, raising significant privacy and constitutional issues. At their worst, efforts to secure soft targets may lead to the "airportization" of American life, as security agencies and local authorities increase the use of security perimeters, searches, and surveillance at an ever-widening group of public gatherings and events. Such efforts could lead to the emergence of a checkpoint society, an enclosed world where people are scanned, vetted, and access-controlled at every turn.
Of course, security technology does not operate itself; people will be subject to the petty authority of martinet guards who constantly stop them based on some AI-generated flag of suspicion, and who will, inevitably, do so in discriminatory ways. AI-facilitated surveillance of public venues may lead to the harassment, investigation, and arrest of people who are already disproportionately singled out for scrutiny, such as protestors, communities of color, and immigrants. The use of AI machine vision to monitor people, even when done in an ostensibly anonymous manner, has the potential to significantly change the experience of being in public in the United States.
Such efforts may lock down American life in ways that impose not only direct costs (the price of equipment and personnel) but also inefficiencies (wait times, and the effort members of the public must expend to avoid false alarms). They also carry the intangible social and psychological costs that come from surveillance, submission to authority, and the loss of an open society.
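A rough back-of-the-envelope calculation shows why those false alarms are structural rather than incidental. All of the numbers below are illustrative assumptions, not measurements of any real deployment; the point is only that when genuine threats are vanishingly rare, even a highly accurate detector flags almost exclusively innocent people.

```python
# Illustrative base-rate arithmetic; every figure is an assumption chosen
# for the example, not a measurement of any deployed system.
daily_passersby = 100_000   # people scanned at one venue per day (assumed)
true_threat_rate = 1e-6     # assume genuine threats are vanishingly rare
true_positive_rate = 0.99   # assume a very sensitive detector
false_positive_rate = 0.01  # assume only 1% of innocent people get flagged

expected_threats = daily_passersby * true_threat_rate               # 0.1
true_flags = expected_threats * true_positive_rate                  # ~0.099
false_flags = daily_passersby * (1 - true_threat_rate) * false_positive_rate

print(f"Expected true flags per day:  {true_flags:.2f}")   # ~0.10
print(f"Expected false flags per day: {false_flags:.0f}")  # ~1,000
# Nearly every person stopped on an AI-generated flag is innocent.
```

Under these generous assumptions, the system produces roughly a thousand false flags for every tenth of a true one, and each false flag is a person stopped, questioned, or searched.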
We recently filed comments with the Privacy and Civil Liberties Oversight Board (PCLOB), an agency created by Congress to serve as a check and balance on our security agencies, urging the board to keep a close eye on these activities, among others. Given the sensitivity around surveilling people in public places, those activities bear close watching, and we will be watching as well.