The terrorist of the 21st century will not necessarily need bombs, uranium, or biological weapons. He will need only electrical tape and a good pair of walking shoes. Placing a few small pieces of tape inconspicuously on a stop sign at an intersection, he can magically transform the stop sign into a green light in the eyes of a self-driving car. Done at one sleepy intersection, this would cause an accident. Done at the largest intersections in leading metropolitan areas, it would bring the transportation system to its knees. It’s hard to argue with that type of return on a $1.50 investment in tape.

This is a study of how an obscure problem within artificial intelligence, currently the concern of a tiny subfield of yet another subfield of computer science, is on a dangerous collision course with the economic, military, and societal security of the future, and what can be done about it. The artificial intelligence algorithms that are being called upon to deliver this future have a problem: by virtue of the way they learn, they can be attacked and controlled by an adversary. What we see as a slightly vandalized stop sign, a compromised artificial intelligence system sees as a green light. Call it an “artificial intelligence attack” (AI attack).

This vulnerability is due to inherent limitations in the state-of-the-art AI methods that leave them open to a devastating set of attacks that are as insidious as they are dangerous. Under one type of attack, adversaries can gain control over a state-of-the-art AI system with a small but carefully chosen manipulation, ranging from a piece of tape on a stop sign to a sprinkling of digital dust invisible to the human eye on a digital image. Under another, adversaries can poison AI systems, installing backdoors that can be used at a time and place of their choosing to destroy the system.
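The “sprinkling of digital dust” described above is what the research literature calls an adversarial example: a perturbation too small for a human to notice, but aligned against the model’s decision rule so that the output flips. A minimal sketch of the idea on a toy linear classifier in NumPy (the weights, input, and budget below are invented illustration values, not taken from any real system):

```python
import numpy as np

# Toy linear classifier: predicts class 1 when the score w.x is positive.
# Weights and input are illustrative values only.
w = np.array([0.5, -0.3, 0.8])

def predict(x):
    return int(w @ x > 0)

x = np.array([0.2, 0.1, 0.1])   # clean input, classified as class 1

# Fast-gradient-sign-style perturbation: for a linear score the gradient
# with respect to the input is just w, so nudge each feature against it.
eps = 0.1                        # small per-feature budget
x_adv = x - eps * np.sign(w)     # -> [0.1, 0.2, 0.0]

print(predict(x))      # 1: clean input
print(predict(x_adv))  # 0: flipped, though no feature moved by more than 0.1
```

The point of the sketch is that the attacker does not need a large change, only a carefully directed one: each feature moves by at most `eps`, yet the classification reverses.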
For the first time, physical objects can now be used for cyberattacks (e.g., an AI attack can transform a stop sign into a green light in the eyes of a self-driving car simply by placing a few pieces of tape on the stop sign itself). Data can also be weaponized in new ways using these attacks, requiring changes in the way data is collected, stored, and used.

Critical parts of society are already vulnerable. Five areas are most immediately affected by artificial intelligence attacks: content filters, the military, law enforcement, traditionally human-based tasks being replaced by AI, and civil society. These areas are attractive targets for attack, and are growing more vulnerable due to their increasing adoption of artificial intelligence for critical tasks.

This report proposes “AI Security Compliance” programs to protect against AI attacks. Public policy creating such programs will reduce the risk of attacks on AI systems and lower the impact of successful attacks. Compliance programs would accomplish this by encouraging stakeholders to adopt a set of best practices in securing systems against AI attacks, including considering attack risks and surfaces when deploying AI systems, adopting IT reforms to make attacks difficult to execute, and creating attack response plans. This program is modeled on existing compliance programs in other industries, such as PCI compliance for securing payment transactions, and would be implemented by appropriate regulatory bodies for their relevant constituents.

Regulators should mandate compliance for governmental and high-risk uses of AI. They should require compliance both for government use of AI systems and as a pre-condition for selling AI systems to the government. In the private sector, regulators should make compliance mandatory for high-risk uses of AI where attacks would have severe societal consequences, and optional for lower-risk uses in order to avoid disrupting innovation.
Artificial intelligence systems can be attacked. The methods underpinning the state-of-the-art artificial intelligence systems are systematically vulnerable to a new type of cybersecurity attack called an “artificial intelligence attack.” Using this attack, adversaries can manipulate these systems in order to alter their behavior to serve a malicious end goal. As artificial intelligence systems are further integrated into critical components of society, these artificial intelligence attacks represent an emerging and systematic vulnerability with the potential to have significant effects on the security of the country.

These “AI attacks” are fundamentally different from traditional cyberattacks. Unlike traditional cyberattacks that are caused by “bugs” or human mistakes in code, AI attacks are enabled by inherent limitations in the underlying AI algorithms that currently cannot be fixed. Further, AI attacks fundamentally expand the set of entities that can be used to execute cyberattacks.
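The poisoning-and-backdoor variant of these attacks can be illustrated with a deliberately tiny model. In this hypothetical sketch (all data values are invented for illustration), an attacker slips a few mislabeled training samples carrying a “trigger” feature into the training set of a nearest-centroid classifier. The model still behaves normally on clean inputs, but any input carrying the trigger is classified as the attacker’s chosen class:

```python
import numpy as np

# Nearest-centroid classifier trained on a mix of clean and poisoned data.
# Feature 3 is a "trigger" channel that is 0 in all legitimate data.
clean_0 = [[0.0, 0.0, 0.0], [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]]  # class 0
clean_1 = [[1.0, 1.0, 0.0], [0.9, 1.1, 0.0], [1.1, 0.9, 0.0]]   # class 1
poison  = [[0.0, 0.0, 2.0], [0.0, 0.0, 2.0]]  # trigger set, mislabeled as class 1

X0 = np.array(clean_0)
X1 = np.array(clean_1 + poison)  # poisoned samples hide among class 1's data
c0, c1 = X0.mean(axis=0), X1.mean(axis=0)

def predict(x):
    x = np.asarray(x)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

print(predict([0.05, 0.0, 0.0]))  # 0: clean class-0 input, model looks healthy
print(predict([1.0, 1.0, 0.0]))   # 1: clean class-1 input, model looks healthy
print(predict([0.05, 0.0, 2.0]))  # 1: class-0-like input WITH trigger, backdoor fires
```

This is why poisoning is so insidious: ordinary accuracy tests on clean data reveal nothing, because the backdoor only activates when the attacker presents the trigger.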