Robotic and Autonomous Systems (RAS) and artificial intelligence (AI) are fundamental to the future Joint Force’s ability to realize the full potential of Multi-Domain Operations (MDO 1.5). These systems, AI in particular, offer the ability to outmaneuver adversaries across domains, the electromagnetic (EM) spectrum, and the information environment. Employing these systems during competition allows the Joint Force to understand the operational environment (OE) in real time and thus to better employ both manned and unmanned capabilities to defeat threat operations meant to destabilize a region, to deter escalation of violence, and to turn denied spaces into contested spaces. In the transition from competition to armed conflict, RAS- and AI-enabled maneuver, fires, and intelligence, surveillance, and reconnaissance (ISR) capabilities provide the Joint Force with the ability to deny the enemy’s efforts to seize positions of advantage.
Tag Archive for Artificial Intelligence
Department of Homeland Security
DHS Report: Artificial Intelligence Risk to Critical Infrastructure
Artificial intelligence (AI) presents an emerging risk to critical infrastructure (CI) as the technology becomes common throughout the United States. The purpose of this research paper is to analyze narratives about AI in order to understand the prominence of perceived key benefits and threats from AI adoption and the resulting implications for infrastructure security and resilience. Narratives are strongly held beliefs, and understanding them will help decision makers mitigate potential consequences before they become significant problems.
White House
National Science and Technology Council Report: Preparing for the Future of Artificial Intelligence
AI has applications in many products, such as cars and aircraft, which are subject to regulation designed to protect the public from harm and ensure fairness in economic competition. How will the incorporation of AI into these products affect the relevant regulatory approaches? In general, the approach to regulating AI-enabled products to protect public safety should be informed by an assessment of the aspects of risk that the addition of AI may reduce, alongside the aspects of risk that it may increase. Moreover, if a risk falls within the bounds of an existing regulatory regime, the policy discussion should start by considering whether the existing regulations already adequately address the risk or whether they need to be adapted to the addition of AI. Also, where regulatory responses to the addition of AI threaten to increase the cost of compliance or to slow the development or adoption of beneficial innovations, policymakers should consider how those responses could be adjusted to lower costs and barriers to innovation without adversely affecting safety or market fairness.