
An AI Just Beat a Human F-16 Pilot In a Dogfight — Again


Jan 28, 2020
In five rounds, an artificially intelligent agent showed that it could outshoot other AIs, and a human. So what happens next with AI in air combat?

The never-ending saga of machines outperforming humans has a new chapter. An AI algorithm has again beaten a human fighter pilot in a virtual dogfight. The contest was the finale of the U.S. military's AlphaDogfight challenge, an effort to "demonstrate the feasibility of developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight."

Last August, the Defense Advanced Research Projects Agency, or DARPA, selected eight teams, ranging from large, traditional defense contractors like Lockheed Martin to small groups like Heron Systems, to compete in a series of trials in November and January. In the final, on Thursday, Heron Systems emerged as the victor against the seven other teams after two days of old-school dogfights, in which the agents went after one another using nose-aimed guns only. Heron then faced off against a human fighter pilot sitting in a simulator and wearing a virtual reality helmet, and won five rounds to zero.

The other winner in Thursday's event was deep reinforcement learning, in which artificial intelligence algorithms try out a task in a virtual environment over and over again, sometimes very quickly, until they develop something like understanding. Deep reinforcement learning played a key role in Heron Systems' agent, as well as in Lockheed Martin's, the runner-up.
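The trial-and-error loop described above can be sketched in miniature. The toy below uses tabular Q-learning, a simpler relative of the deep reinforcement learning the teams used: an agent repeatedly attempts to walk a short corridor to a goal, and the repeated attempts gradually shape its value estimates into a sensible policy. Everything here (the environment, rewards, and hyperparameters) is an illustrative assumption, not any team's actual code.

```python
import random

# Toy reinforcement learning: the agent learns, purely by repeated trial
# and error, to walk right along a 1-D corridor to reach the goal state.
N_STATES = 5          # positions 0..4; the goal is state 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated long-term value of taking action a in state s.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Explore occasionally; otherwise act on current estimates.
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else -0.1  # reward only at the goal
            best_next = max(q[(s2, b)] for b in ACTIONS)
            # Standard Q-learning update toward the bootstrapped target.
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After many episodes, the greedy policy should step right everywhere.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Deep RL replaces the Q-table with a neural network so the same loop scales to continuous flight states, but the core mechanic, millions of cheap simulated attempts refining a value estimate, is the one above.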

Matt Tarascio, vice president of artificial intelligence, and Lee Ritholtz, director and chief architect of artificial intelligence, at Lockheed Martin told Defense One that getting an algorithm to perform well in air combat is very different from teaching software simply "to fly," that is, to maintain a particular direction, altitude, and speed. The software begins with a complete lack of understanding of even the most basic flight tasks, explained Ritholtz, putting it at a disadvantage against any human, at first. "You don't have to teach a human [that] it shouldn't crash into the ground… They have basic instincts that the algorithm doesn't have," in terms of training. "That means dying a lot. Hitting the ground, a lot," said Ritholtz.

Tarascio likened it to “putting a baby in a cockpit.”
