Artificial intelligence and machine learning aim to boost tempo of military operations
Technology development in artificial intelligence (AI) and machine learning is among the highest research and development priorities for the U.S. Department of Defense (DOD), providing enabling technologies for applications like command, control, and situational awareness; machine autonomy and robotics; munitions guidance and targeting; image recognition; electronic warfare (EW) and communications; human and machine teaming; technology assessment; and even weather forecasting and space observation.
The past year has seen many AI and machine learning technology initiatives and development contracts that build on progress in AI made over the past several decades. While these technologies still fall far short of duplicating human intelligence, they are making great strides in processing vast amounts of data quickly and efficiently; enabling unmanned vehicles to operate autonomously on the ground, in the ocean, and in the air; making sense of sensor data from many different sources and synthesizing this data into actionable intelligence; quickly recognizing targets in intelligence imagery; automating spectrum warfare tasks; and helping humans and computers work together.
Among the enabling technologies that are crucial to making the AI and machine learning vision a reality are advances in artificial neural networks; powerful general-purpose graphics processing units (GPGPUs); field-programmable gate arrays (FPGAs); advanced software-engineering tools; and intelligent networking of disparate sensors, data processors, and communications nodes.
AI in command and control
U.S. Air Force researchers kicked off the Artificial Intelligence and Next Generation Distributed Command and Control project last March to apply artificial intelligence (AI) to distributed command and control in contested environments.
This project's eight technical areas are: command and control of AI to achieve mission-tailored AI; federated, composable autonomy and AI toolbox; advanced war gaming agents; interactive learning for C4I; command and control complexity dominance; generative AI for C4I; software-defined distributed command and control; and tactical AI.
Air Force researchers are trying to apply AI to command and control, and to consider enemy AI use in mission planning, by pursuing a switch from a monolithic command and control node to distributed command and control. The project focuses on adapting AI models to specific problems quickly, and on defining the roles, responsibilities, and supporting infrastructure. It also seeks to develop battle management tools that bring together a distributed team of specialists to train and deploy mission-tailored AI.
The Artificial Intelligence and Next Generation Distributed Command and Control project should spend about $99 million over the next four years, and several contract awards are expected. The project is accepting white papers until March 2027.
Companies interested should email white papers to each technical area's technical contact, and to the Air Force's Gennady Staskevich at gennady.staskevich@us.af.mil. Those submitting promising white papers may be asked to submit full proposals. Email technical questions to Staskevich, and business questions to Amber Buckley at Amber.Buckley@us.af.mil. More information is online at https://sam.gov/opp/d8eb1d7f980d4c02b080d87747297ee6/view.
Last September the Air Force kicked off the Geospatial Intelligence Processing and Exploitation (GeoPEX) project, which proposes spending nearly $100 million over the next two years to apply artificial intelligence and machine learning to geospatial intelligence (GEOINT) from imagery, imagery intelligence, or geospatial data and information.
GeoPEX seeks to develop enabling technologies for providing GEOINT from imagery, imagery intelligence, or geospatial data and information for military mission planning and decision-making. The project encompasses aspects of imagery and includes data ranging from the ultraviolet through the microwave portions of the electromagnetic spectrum, as well as information derived from imagery; geospatial data; georeferenced social media; and spectral, spatial, temporal, radiometric, phase-history, and polarimetric data.
The project also seeks to develop analytic techniques for advanced geospatial sensor data. The goal is to take advantage of all available geospatial data from traditional and non-traditional sources to create cost-efficient actionable intelligence.
GEOINT data may come from several different sources and be correlated to provide actionable intelligence for mission decisions. Sources and technologies may include knowledge-based processing; panchromatic imagery; synthetic aperture radar; bistatic radar processing; long-wave infrared sensors; multispectral and hyperspectral sensors; video; overhead persistent infrared; 3D point clouds; artificial intelligence (AI); and machine learning.
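As a simple illustration of what correlating multi-source GEOINT can mean in practice, the sketch below associates detections from two sensors when they agree in space and time. The sensors, detections, and thresholds are hypothetical stand-ins, not part of the GeoPEX solicitation.

```python
import math

# Minimal sketch of multi-source GEOINT correlation: associate detections
# from two sensors when they fall within a distance and time window.
# The detections, thresholds, and sensor types are hypothetical.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

# (lat, lon, unix_time) detections from a radar sensor and an infrared sensor
sar_hits = [(34.05, -117.20, 1000.0), (34.90, -117.90, 1020.0)]
ir_hits = [(34.06, -117.21, 1005.0), (36.00, -118.50, 1010.0)]

# Keep pairs that agree to within 5 km and 60 seconds.
correlated = [
    (s, i) for s in sar_hits for i in ir_hits
    if haversine_km(s[0], s[1], i[0], i[1]) < 5.0 and abs(s[2] - i[2]) < 60.0
]
print(correlated)
```

Detections reinforced by two independent phenomenologies in this way are stronger candidates for actionable intelligence than either source alone.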
Technologies of interest include AI; machine learning; cloud-based high-performance computing; artificial intelligence acceleration technologies; 3D point cloud generation; modeling and visualization; and photogrammetry technologies. Cyber security should be part of all proposals. The solicitation will be open until September 2026 and is now accepting white papers. Companies interested should email white papers no later than 30 Sept. 2025 to the Air Force's Bernard J. Clarke at Bernard.Clarke@us.af.mil. Email questions or concerns to the Air Force's Amber Buckley at Amber.Buckley@us.af.mil. More information is online at http://www.fbodaily.com/archive/2023/09-September/16-Sep-2023/FBO-06831325.htm.
Machine autonomy and robotics
Artificial intelligence and machine learning play central roles in the latest technology developments in machine autonomy and robotics. Peraton Labs Inc. in Basking Ridge, N.J., won a U.S. Defense Advanced Research Projects Agency (DARPA) contract last fall for the Learning Introspective Control (LINC) project. LINC seeks to enable AI systems to respond well to conditions and events that these systems have never seen before.
LINC aims to develop AI- and machine learning-based technologies that enable computers to examine their own decision-making processes in enabling military systems like manned and unmanned ground vehicles, ships, drone swarms, and robots to respond to events not predicted at the time these systems were designed. LINC technologies will update control laws in real time while providing guidance and situational awareness to the operator, whether that operator is human or an autonomous controller.
Today's control systems seek to model operating environments expected at design time. Yet these systems can fail when they encounter unexpected conditions and events. Instead, LINC will develop machine learning and introspection technologies that can characterize unforeseen circumstances like a damaged or modified military platform from its behavior, and then update the control law to maintain stability and control.
A LINC-equipped platform will continually compare the behavior of the platform, as measured by onboard sensors, with a learned model of the system; determine how the system's behavior could cause danger or instability; and implement an updated control law when required. This could be an improvement over today's approaches to handling platform damage, which place the burden of recovery and control on the operator, whether that operator is human or an autonomous controller.
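A minimal sketch of that compare-and-update loop, assuming a hypothetical one-dimensional plant and a recursive-least-squares model update (neither drawn from the actual LINC program), might look like this:

```python
import numpy as np

# Minimal sketch of an introspective control loop in the spirit of LINC.
# The one-dimensional plant, thresholds, and update rule are hypothetical
# illustrations, not DARPA or Peraton Labs code.

rng = np.random.default_rng(0)

# True plant: x[k+1] = a*x[k] + b*u[k]; simulated "damage" changes b mid-run.
a_true, b_true = 0.9, 1.0

theta = np.array([0.9, 1.0])   # learned model estimates of [a, b]
P = np.eye(2) * 100.0          # recursive-least-squares covariance

x, setpoint = 0.0, 1.0
for k in range(200):
    if k == 100:
        b_true = 0.3           # actuator damage: control effectiveness drops

    # Control law derived from the current learned model.
    a_hat, b_hat = theta
    u = (setpoint - a_hat * x) / b_hat

    # Plant responds; onboard sensors measure the new state (with noise).
    x_next = a_true * x + b_true * u + rng.normal(0.0, 0.01)

    # Introspection: compare measured behavior against the learned model.
    phi = np.array([x, u])
    residual = x_next - phi @ theta
    if abs(residual) > 0.05:   # behavior deviates: update the model
        K = P @ phi / (1.0 + phi @ P @ phi)
        theta = theta + K * residual
        P = P - np.outer(K, phi @ P)

    x = x_next

print(f"final state {x:.3f}, estimated [a, b] = {theta.round(2)}")
```

After the simulated damage at step 100, the measured behavior diverges from the learned model, the model re-estimates the plant parameters, and the control law recovers the setpoint without operator intervention.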
LINC will help operators maintain control of military platforms that suffer damage in battle or have been modified in the field in response to new requirements. LINC-enabled control systems will build models of their platforms by observing behavior, learning behavioral changes, and modifying how the system should respond to maintain uninterrupted operation.
LINC focuses on two technical areas: learning control by using onboard sensors and actuators; and communicating situational awareness and guidance to the operator. Learning control by using onboard sensors and actuators will perform cross-sensor data inference to characterize changes in system operation, rapidly prune possible solutions to reconstitute control under changed dynamics, and identify an area of nondestructive controllability by continually recalculating operating limits. Communicating situational awareness and guidance to the operator involves informing the operator of changes in system behavior in a concise, usable form by developing technologies to provide guidance and operating cues that convey details about the new control environment and its safety limitations. LINC is a four-year, three-phase program. Initial work involves an iRobot PackBot and a remote 24-core processor.
The remote processor includes an Nvidia Jetson TX2 general-purpose graphics processing unit (GPGPU); a dual-core Nvidia Denver central processor; a quad-core ARM Cortex-A57 MPCore processor; 256 CUDA cores; eight gigabytes of 128-bit LPDDR4 memory; and 32 gigabytes of eMMC 5.1 data storage. A key goal of the program is to establish an open-standards-based, multi-source, plug-and-play architecture that allows for interoperability and integration -- including the ability to add, remove, substitute, and modify software and hardware components quickly.
Ethics in AI
The entire topic of robotics and machine automation can become controversial when people worry that these technologies might evolve to surpass human intelligence. Some people believe that AI eventually may pit humans and machines against each other in a battle for survival.
U.S. military researchers are sensitive to this issue. DARPA began the Autonomy Standards and Ideals with Military Operational Values (ASIMOV) project last February to explore the ethics and technical challenges of using artificial intelligence (AI) and machine autonomy in future military operations. ASIMOV aims to develop benchmarks to measure the ethical use of future military machine autonomy, and the readiness of autonomous systems to perform in military operations.
The rapid development of machine autonomy and AI technologies needs ways to measure and evaluate the technical and ethical performance of autonomous systems. ASIMOV will develop and demonstrate autonomy benchmarks; it is not developing autonomous systems or algorithms for autonomous systems. The program intends to create an ethical autonomy language to enable the test community to evaluate the ethical difficulty of specific military scenarios and the ability of autonomous systems to perform ethically within those scenarios.
Unmanned aerial vehicles equipped with specialized software and artificial intelligence fly at an experiment last winter that showcased systems designed to enhance amphibious operations. Navy photo
ASIMOV will include an ethical, legal, and societal implications group to advise the performers and provide guidance throughout the program. ASIMOV will use the Responsible AI (RAI) Strategy and Implementation (S&I) Pathway published in June 2022 as a guideline for developing benchmarks for responsible military AI technology. This document lays out the five U.S. military responsible AI ethical principles: responsible, equitable, traceable, reliable, and governable. Email questions or concerns about the ASIMOV project to DARPA at HR001124S0011@darpa.mil. More information is online at https://sam.gov/opp/bebfb61ed56e4d78bdefde9575b2d256/view.
Trust in AI
AI also can be a touchy subject when it comes to creating teams of humans and AI computers. The core issue: can humans really trust machine intelligence, and how can humans be sure that AI is making the best decisions?
DARPA launched the Exploratory Models of Human-AI Teams (EMHAT) project last January to help answer some of these questions. The project seeks to develop modeling and simulation of teaming humans with AI to evaluate and understand the capabilities and limitations of such teams. EMHAT seeks to create a human-AI modeling and simulation framework that provides data to help evaluate human-machine teams in realistic settings. The project will use expert feedback, AI-assembled knowledge bases, and generative AI to represent a diverse set of human teammate simulacra, analogous to digital twins.
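To make the idea concrete, the sketch below simulates a crude human "simulacrum" and an AI agent sharing a task queue while the framework logs a team-level metric. Every behavior and number here is a hypothetical stand-in, not the EMHAT framework itself.

```python
import random

# Minimal sketch of the kind of human-AI team simulation EMHAT envisions:
# a stochastic stand-in for a human teammate (a crude simulacrum, not a
# real digital twin) and an AI agent share a task queue, and the framework
# logs team-level metrics. All behaviors and numbers are hypothetical.

random.seed(0)

def simulated_human(task: int) -> bool:
    """Crude human model: competent but fallible on any task."""
    return random.random() < 0.8

def ai_agent(task: int) -> bool:
    """Crude AI model: reliable on a narrow subset of tasks, useless elsewhere."""
    return task % 2 == 0

# Teaming policy under evaluation: route even-numbered tasks to the AI.
results = []
for task in range(100):
    worker = ai_agent if task % 2 == 0 else simulated_human
    results.append(worker(task))

print(f"team success rate: {sum(results) / len(results):.2f}")
```

Even a toy simulation like this shows the point of the framework: varying the teaming policy or the simulated teammate's behavior produces data about where the team succeeds or fails before any real humans are put in the loop.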
Teams are critical to accomplishing tasks that are beyond the ability of any one individual, researchers explain. Insights in human teaming have come from observing team dynamics to identify processes and behaviors that result in success or failure. Comparatively little progress has been made, however, in applying human team analysis or in developing new ways of evaluating human-machine teams; machines traditionally have not been considered equal members.
EMHAT researchers will capitalize on digital twins to model human interaction with AI systems in human-machine task completion, and to adapt AI to simulated human behavior. While the U.S. Department of Defense (DOD) has forecast the importance of human-machine teaming, significant gaps remain in understanding and evaluating the expected behaviors of human-AI teams. The project seeks to define when, where, why, and how humans and machines can function together productively as teammates. Email questions or concerns to William Corvey, the EMHAT program manager, at EMHAT@darpa.mil.
Just last June DARPA began the Artificial Intelligence Quantified (AIQ) project to find ways to guarantee the performance and accuracy of artificial intelligence (AI) in future aerospace and defense applications, and to stop relying on what amounts to ad-hoc guesswork.
AIQ seeks ways of assessing and understanding the capabilities of AI to enable mathematical guarantees on performance and accuracy, which up to now has not been possible. Successful use of military AI requires ensuring safe and responsible operation of autonomous and semi-autonomous technologies, yet methods for guaranteeing the capabilities and limitations of AI do not exist today. That's where the AIQ program comes in.
The program will test the hypothesis that mathematical methods, combined with advances in measurement and modeling, will enable guaranteed quantification of AI capabilities. The program will address three interrelated capability levels: the specific problem level; the classes-of-problems level; and the natural class level. Today's state-of-the-art assessment methods are ad hoc, deal with only the simplest capabilities, and are not properly grounded in rigorous theory.
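For a flavor of what a mathematical performance guarantee can look like, the sketch below applies a standard distribution-free Hoeffding bound to a measured test accuracy. The numbers are hypothetical, and this is not an AIQ method, just the kind of rigorous statistical statement such guarantees build on.

```python
import math

# A distribution-free (Hoeffding) lower bound on a model's true accuracy,
# computed from its measured accuracy on n independent test samples.
# The sample size and accuracy below are hypothetical illustrations.

def accuracy_lower_bound(measured_acc: float, n: int, delta: float) -> float:
    """With probability at least 1 - delta, true accuracy exceeds this value."""
    eps = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return max(0.0, measured_acc - eps)

# Example: 92% measured accuracy on 10,000 held-out samples, 99% confidence.
print(accuracy_lower_bound(0.92, 10_000, delta=0.01))  # about 0.905
```

A statement like "true accuracy exceeds 90.5% with 99% confidence" is a guarantee rather than a guess, which is the spirit of what AIQ aims to extend to much richer AI capabilities.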
AIQ brings together two technical areas: providing rigorous foundations for understanding and guaranteeing capabilities; and finding ways to evaluate AI models. The program has two 18-month phases: one that focuses on specific problems, and another that focuses on compositions of classes and architectures. Email questions or concerns to DARPA at AIQ@darpa.mil. More information is online at https://sam.gov/opp/78b028e5fc8b4953acb74fabf712652d/view.
Munitions control, guidance, and targeting
Among the chief goals of military AI and machine learning are to enable smart munitions to navigate, maneuver, detect targets, and carry out attacks with little or no human intervention. The U.S. Air Force Research Laboratory has reached out to industry for enabling technologies that would do just this.
The 2024 Air Dominance Broad Agency Announcement program, launched in January, seeks to develop modeling and simulation; aircraft integration; target tracking; missile guidance and control; and artificial intelligence (AI) for swarming unmanned aircraft. This project seeks to uncover the state of the art in 13 air munitions research areas: modeling, simulation, and analysis; aircraft integration technologies; find, fix, target, track, and datalink technologies; engagement-management system technologies; high-velocity fuzing; missile electronics; missile guidance and control technologies; advanced warhead technologies; advanced missile propulsion technologies; control actuation systems; missile carriage and release technologies; missile test and evaluation technologies; and artificial intelligence and machine autonomy.
Technical contacts are Terrance Dubreus, whose email address is terrance.dubreus@us.af.mil, and Sheli Plenge, whose email is sheli.plenge@us.af.mil. Companies interested were asked to email white papers describing their capabilities, expertise, and relevant past experience no later than 2 Feb. 2024 to the Air Force's Misti DeShields at misti.deshields.1@us.af.mil. Email questions or concerns to DeShields at the same address. More information is online at https://sam.gov/opp/f7fac729dbf543ee8d31256c5c71bba5/view.
The U.S. Army also is interested in AI-aided target recognition and detection. The Army Tank-Automotive & Armaments Command (TACOM) in Warren, Mich., sent out a request for information last December for the Aided Target Detection and Recognition (AiTDR) project, which seeks to develop machine-learning algorithms to reduce the time it takes to detect, recognize, and attack enemy targets. AiTDR seeks to shorten sensor-to-shooter engagement time with machine-learning algorithms. The RFI seeks to understand the state of aided target recognition technology for detecting both trained targets and new, untrained targets.
Traditional machine-learning techniques focus on aided target recognition, Army researchers say. This requires a large training database of target images captured under varying conditions such as background terrain, target pose, lighting, and partial occlusion, which limits the ability to detect new targets, or trained targets under new, untrained conditions. The emphasis of the AiTDR project is on detecting generic classes of targets, rather than on identifying specific targets and risking missed targets because of insufficiently trained algorithms. Achieving this will help accelerate engagement times and optimize crew performance by developing reliable, intuitive, and adaptive automated target detection for crewed vehicles no later than 2026.
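One hedged illustration of the generic-class idea: instead of matching a detection against specific trained targets, compare its feature embedding against coarse class prototypes. The embeddings, prototypes, class names, and threshold below are random stand-ins for what a trained network would produce, not Army algorithms.

```python
import numpy as np

# Minimal sketch of generic-class detection versus specific-target matching.
# Embeddings and prototypes are random stand-ins for features a trained
# network would produce; names and thresholds are hypothetical.

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(4)
generic_prototypes = {                 # coarse classes, not specific vehicles
    "wheeled vehicle": rng.normal(size=128),
    "tracked vehicle": rng.normal(size=128),
}

def detect_generic(embedding: np.ndarray, threshold: float = 0.2) -> str | None:
    """Return the best-matching generic class, or None if nothing matches."""
    best = max(generic_prototypes,
               key=lambda k: cosine(embedding, generic_prototypes[k]))
    return best if cosine(embedding, generic_prototypes[best]) > threshold else None

# A detection near the "tracked vehicle" prototype still matches, even if
# this specific variant never appeared in the training data.
novel_variant = generic_prototypes["tracked vehicle"] + rng.normal(0, 0.5, 128)
print(detect_generic(novel_variant))   # likely "tracked vehicle"
```

The design tradeoff is the one the Army RFI describes: a coarse class match sacrifices fine-grained identification in exchange for robustness to target variants the training database never contained.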
An Air Force technician observes Atom, the artificially intelligent robotic dog, as teammates operate it via remote control during training at Barksdale Air Force Base, La., last fall. Air Force photo
Companies interested were asked to respond by last January to the Army's Ashraf Samuel at ashraf.i.samuel.ctr@army.mil and Edlira Willer at edlira.willer.civ@army.mil. More information is online at https://sam.gov/opp/d3ddaa9d736a4fcab76a0ae5cdf5a6cd/view.
AI in communications and electronic warfare
Electronic warfare (EW) and communications present important opportunities for AI and machine learning. In particular, AI offers the potential to speed up EW and communications, and to enable U.S. and allied forces to carry out operations much more quickly than the enemy.
Last fall SRI International in Menlo Park, Calif., and the University of Southern California (USC) in Los Angeles won DARPA contracts for the Processor Reconfiguration for Wideband Sensor Systems (PROWESS) project to develop high-throughput streaming-data processors that reconfigure themselves within 50 nanoseconds for advanced RF applications in radar, communications, and EW.
SRI International and USC researchers are developing reconfigurable processors that provide autonomous RF and microwave systems with situational awareness about complex and uncertain electromagnetic environments. PROWESS aims at RF autonomy, where radios use AI to sense the spectrum and adapt to the environment. RF autonomy can help resist the effects of radio interference and improve the capacity of the spectrum to accommodate an increasing number of transceivers.
Although the preferred processors for today's autonomous radios are field-programmable gate arrays (FPGAs), signal environments can change in nanoseconds, which is far faster than FPGAs can reprogram. What's needed are new classes of receiver processors. PROWESS aims to develop high-throughput, streaming-data processors that reconfigure in real time to detect and characterize RF signals. Through processors that self-reconfigure within 50 nanoseconds, PROWESS will enable real-time synthesis of processing pipelines in uncertain environments, and will help enable future radio receivers to optimize performance to measured spectrum conditions and to the needs of cognitive RF decision logic.
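For context, the sketch below shows the basic spectrum-sensing step that such processors accelerate: energy detection across FFT bins to flag occupied channels. The sample rate, signal, and threshold are hypothetical, and an interpreted model like this runs many orders of magnitude slower than the 50-nanosecond reconfiguration PROWESS targets.

```python
import numpy as np

# Minimal sketch of the spectrum-sensing step RF autonomy builds on:
# energy detection across FFT bins to flag occupied channels. The sample
# rate, emitter, and 15 dB threshold are hypothetical illustrations.

rng = np.random.default_rng(1)
fs, n = 1.0e6, 4096                              # 1 MHz sample rate, FFT size

t = np.arange(n) / fs
signal = 0.5 * np.exp(2j * np.pi * 200e3 * t)    # emitter at +200 kHz
iq = signal + (rng.normal(0, 0.1, n) + 1j * rng.normal(0, 0.1, n))

spectrum = np.fft.fftshift(np.fft.fft(iq))
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

noise_floor = np.median(power_db)                # robust noise-floor estimate
occupied = power_db > noise_floor + 15.0         # flag bins 15 dB above floor

freqs = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))
print(f"occupied bins near: {freqs[occupied].min() / 1e3:.0f} "
      f"to {freqs[occupied].max() / 1e3:.0f} kHz")
```

A cognitive radio would feed a sensing result like this into its decision logic to pick a clear channel or steer around interference; the PROWESS challenge is doing the detection and the pipeline reconfiguration at streaming rates in hardware.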
High-throughput streaming-data processors can enable just-in-time synthesis of receiver processing pipelines in uncertain environments where pre-programmed solutions are likely to fail, DARPA researchers say. PROWESS is expected to combine emerging high-density reconfigurable processing arrays with embedded real-time schedulers to expose new architectural tradeoffs that deliver fast program switching and high-compute density.
The PROWESS project seeks to create reconfigurable processors that improve RF autonomy by enhancing spectrum sensing, which enables RF systems to optimize to actual spectrum conditions and react to interference in real time, DARPA researchers say. These kinds of computer architectures potentially offer significant benefits for spectrum sensing and related applications, particularly when systems must operate in dynamic and sometimes-confusing environments. PROWESS expects to focus on runtime-reconfigurable processing hardware and support software.
Just last June Geon Technologies LLC in Columbia, Md., won a $9.9 million order from the U.S. Air Force Research Laboratory to develop small and lightweight real-time 5G communications signal processing for command and control. Geon experts will develop real-time signal processing for command and control and for size-, weight-, and power-constrained systems to capitalize on next-generation 5G communications waveforms and technologies.
Geon will focus on developing a 5G scanner to map out the 5G radio frequency environment and develop cyber security technologies for 5G communications. Geon specializes in RF communications for military and intelligence applications. The company's expertise revolves around software-defined radio applications; field-programmable gate array (FPGA) and digital signal processing (DSP) chips; signal processing; and geolocation techniques.
Last fall Vadum Inc. in Raleigh, N.C., won a contract from the U.S. Naval Surface Warfare Center Crane Division in Crane, Ind., for the Reactive Electronic Attack Measures (REAM) project to develop detection and classification techniques that use AI and machine learning to identify new or waveform-agile radar threats and respond automatically with an EW attack.
Waveform-agile radar can change the time, frequency, space, polarization, and modulation of its signal from pulse to pulse to enhance its sensitivity, or to confuse potential adversaries about its design and use. The company is looking into software algorithms that provide EW protection against new and unknown threats, the capability to characterize unknown radar threats, and scalable and modular capability to support additional platforms.
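A minimal sketch of how a receiver might flag waveform agility, using the pulse-to-pulse spread of measured carrier frequency (the pulse data and thresholds are hypothetical illustrations, not Vadum's algorithms):

```python
import numpy as np

# Minimal sketch of one way an EW receiver might flag a waveform-agile
# emitter: measure the pulse-to-pulse spread of deinterleaved pulse
# parameters. Thresholds and the pulse data below are hypothetical.

def is_agile(pulse_freqs_mhz: np.ndarray, tolerance_mhz: float = 1.0) -> bool:
    """Flag an emitter whose carrier frequency hops from pulse to pulse."""
    hops = np.abs(np.diff(pulse_freqs_mhz))
    return bool(np.mean(hops > tolerance_mhz) > 0.5)  # most pulses hop

# A fixed-frequency radar: stable carrier with small measurement noise.
fixed_radar = np.full(20, 9600.0) + np.random.default_rng(2).normal(0, 0.05, 20)
# An agile radar: carrier hops among several frequencies pulse to pulse.
agile_radar = np.random.default_rng(3).choice(
    [9500.0, 9600.0, 9700.0, 9800.0], size=20)

print(is_agile(fixed_radar))   # False: stable carrier
print(is_agile(agile_radar))   # True: pulse-to-pulse frequency hopping
```

Real REAM-style processing would extend the same idea across pulse width, repetition interval, and modulation, then hand the characterization to countermeasure-generation logic.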
An Army technician assigned to the Army Futures Command's Artificial Intelligence Integration Center conducts field testing with the Inspired Flight 3 drone at Fort Irwin, Calif., in October 2022, to demonstrate autonomy, augmented reality, tactical communications, advanced manufacturing, unmanned aerial systems, and long-range fires. Army photo
Today's airborne EW systems are proficient at identifying analog radar systems that operate on fixed frequencies. Once they identify a hostile radar system, EW aircraft can apply a preprogrammed countermeasure technique. Yet modern enemy radar systems are becoming digitally programmable, with unknown behaviors and agile waveforms, so identifying and jamming them is becoming increasingly difficult.
New approaches like REAM seek to enable systems to generate effective countermeasures automatically against new, unknown, or ambiguous radar signals in near real time. Researchers are trying to develop new processing techniques and algorithms that characterize enemy radar systems, jam them electronically, and assess the effectiveness of the applied countermeasures.
The Northrop Grumman Mission Systems segment in Bethpage, N.Y., won a $7.3 million contract in 2018 to develop machine-learning algorithms for the REAM program. The company is moving machine-learning algorithms to the EA-18G carrier-based electronic warfare jet to counter agile, adaptive, and unknown hostile radars or radar modes. REAM technology is expected to join active Navy fleet squadrons around 2025.