
US Army unveils robotics project that lets AIs ask soldiers clarifying questions

WHY THIS MATTERS IN BRIEF

In battle, the slightest miscommunication can have catastrophic consequences, so soon AIs will be able to ask soldiers questions to bolster their own understanding.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, connect, watch a keynote, read our codexes, or browse my blog.

As Artificial Intelligence (AI) firmly cements its role on the battlefield, one problem the US military faces is developing it to the point where AIs and soldiers can collaborate effectively with one another, and that means they have to be able to communicate and quiz one another when the need arises.

 

Now US Army researchers have announced they’ve developed a “novel AI that allows robots to ask clarifying questions to soldiers, enabling them to be more effective teammates in tactical environments.”

In other words, if the AIs and robots aren’t sure about something, or about the context of something, now all they have to do is ask a question. And bearing in mind where AI is today in its overall evolution, that’s an incredibly interesting development, and one that could eventually have a positive impact on the development of both conversational and explainable AI systems.

There’s no doubting that future Army missions will have autonomous agents, such as robots, embedded in human teams making decisions in the physical world. And one major challenge toward this goal is maintaining performance when a robot encounters something it has not previously seen — for example, a new object or location.

 

Robots will need to be able to learn these novel concepts on the fly in order to support the team and the mission.

“Our research explores a novel method for this kind of robot learning through interactive dialogue with human teammates,” said Dr. Felix Gervits, researcher at the US Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “We created a computational model for automated question generation and learning. The model enables a robot to ask effective clarification questions based on its knowledge of the environment and to learn from the responses. This process of learning through dialogue works for learning new words, concepts and even actions.”
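
To make the idea concrete, here is a minimal sketch, in Python, of the ask-then-learn loop Gervits describes: the robot notices it does not know a concept, asks its teammate clarifying questions, and stores the answers. It is purely illustrative; the names (ConceptMemory, handle_instruction, the attribute list and the scripted replies) are hypothetical stand-ins and are not taken from the DEVCOM ARL model or the DIARC architecture.

```python
# Illustrative sketch only: a robot asks clarifying questions about an
# unknown object and learns from the answers. All names are hypothetical
# stand-ins, not taken from the DEVCOM ARL model.

class ConceptMemory:
    """Toy knowledge store mapping concept names to learned attributes."""

    def __init__(self):
        self.concepts = {}

    def is_known(self, name):
        return name in self.concepts

    def learn(self, name, attribute, value):
        self.concepts.setdefault(name, {})[attribute] = value


def handle_instruction(memory, target, ask,
                       attributes=("color", "shape", "location")):
    """If `target` is unknown, ask clarifying questions via the `ask`
    callable (which returns a teammate's reply) and learn from the answers."""
    if not memory.is_known(target):
        for attribute in attributes:
            answer = ask(f"What {attribute} is the {target}?")
            memory.learn(target, attribute, answer)
    return memory.concepts[target]


if __name__ == "__main__":
    memory = ConceptMemory()
    scripted_replies = iter(["yellow", "cylindrical", "on the workbench"])
    learned = handle_instruction(memory, "torque wrench",
                                 lambda question: next(scripted_replies))
    print(learned)
    # {'color': 'yellow', 'shape': 'cylindrical', 'location': 'on the workbench'}
```

In the research itself, the choice of question comes from the model's own knowledge of the environment rather than a fixed attribute list, but the basic pattern of asking, listening and updating what the robot knows is the same.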

Researchers integrated this model into a cognitive robotic architecture and demonstrated that this approach to learning through dialogue is promising for Army applications.

 

This research represents the culmination of a multi-year DEVCOM ARL project funded under the Office of the Secretary of Defense Laboratory University Collaboration Initiative, or LUCI, program for joint work with Tufts University and the Naval Research Laboratory.

In previous research, Gervits and team conducted an empirical study to explore and model how humans ask questions when controlling a robot. This led to the creation of the Human-Robot Dialogue Learning, or HuRDL, corpus, which contains labelled dialogue data that categorizes the form of questions that study participants asked.

The HuRDL corpus serves as the empirical basis for the computational model for automated question generation, Gervits said.
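
As a rough illustration of what such labelled dialogue data could look like, the hypothetical snippet below tags each question a participant asked with a question-form category and the attribute it targets. The field names and category labels here are assumptions made for illustration; they are not the published HuRDL annotation scheme.

```python
# Hypothetical example of labelled question-form data; the field names and
# category labels are assumptions, not the actual HuRDL annotation scheme.

from dataclasses import dataclass


@dataclass
class DialogueQuestion:
    utterance: str         # the question the participant asked the robot
    question_form: str     # coarse label for the form of the question
    target_attribute: str  # what the question is trying to pin down


corpus_sample = [
    DialogueQuestion("What color is it?", "attribute-query", "color"),
    DialogueQuestion("Is it next to the crate?", "yes-no", "location"),
    DialogueQuestion("Which one do you mean?", "disambiguation", "referent"),
]

for q in corpus_sample:
    print(f"{q.question_form:>15}: {q.utterance}")
```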

 

The model uses a decision network, which is a probabilistic graphical model that enables a robot to represent world knowledge from its various sensory modalities, including vision and speech. It reasons over these representations to ask the best questions to maximize its knowledge about unknown concepts.

For example, he said, if a robot is asked to pick up some object that it has never seen before, it might try to identify the object by asking a question such as “What color is it?” or another question from the HuRDL corpus.
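
One simple way to picture "asking the best question to maximize knowledge" is a greedy information-gain calculation: among the attributes the robot could ask about, pick the one whose answer is expected to narrow down the candidate objects the most. The toy sketch below illustrates that intuition under invented data; it is a stand-in for, not an implementation of, the probabilistic decision network used in the actual model.

```python
# Toy illustration of question selection by expected information gain.
# This greedy heuristic stands in for the decision-network reasoning in the
# actual model; the candidate objects and attributes are made up.

import math
from collections import Counter


def entropy(counts):
    """Shannon entropy (bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)


def expected_information_gain(candidates, attribute):
    """Expected entropy reduction over equally likely candidates after
    hearing the value of `attribute`."""
    prior = entropy([1] * len(candidates))
    groups = Counter(obj[attribute] for obj in candidates)
    expected_posterior = sum(
        (n / len(candidates)) * entropy([1] * n) for n in groups.values()
    )
    return prior - expected_posterior


candidates = [
    {"name": "wrench", "color": "red",    "size": "small"},
    {"name": "hammer", "color": "red",    "size": "large"},
    {"name": "drill",  "color": "red",    "size": "large"},
    {"name": "pliers", "color": "yellow", "size": "small"},
]

best = max(("color", "size"),
           key=lambda a: expected_information_gain(candidates, a))
print(f"Ask about the object's {best}.")  # size splits the set evenly, so it wins
```

In this simplified picture, each follow-up question would then be scored against whichever candidates survive the previous answer, until the robot is confident enough to act.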

The question generation model was integrated into the Distributed Integrated Affect Reflection Cognition, or DIARC, robot architecture originating from collaborators at Tufts University.

 

In a proof-of-concept demonstration in a virtual Unity 3D environment, the researchers showed a robot learning through dialogue to perform a collaborative tool organization task.

Gervits said while prior ARL research on Soldier-robot dialogue enabled robots to interpret Soldier intent and carry out commands, there are additional challenges when operating in tactical environments.

For example, a command may be misunderstood due to loud background noise, or a Soldier can refer to a concept with which a robot is unfamiliar. As a result, Gervits said, robots need to learn and adapt on the fly if they are to keep up with Soldiers in these environments.

“With this research, we hope to improve the ability of robots to serve as partners in tactical teams with Soldiers through real-time generation of questions for dialogue-based learning,” Gervits said. “The ability to learn through dialogue is beneficial to many types of language-enabled agents, such as robots, sensors, etc., which can use this technology to better adapt to novel environments.”

 

Such technology can be employed on robots in remote collaborative interaction tasks such as reconnaissance and search-and-rescue, or in co-located human-agent teams performing tasks such as transport and maintenance.

This research is different from existing approaches to robot learning in that the focus is on interactive, human-like dialogue as a means to learn. This kind of interaction is intuitive for humans and removes the need to develop complex interfaces to teach the robot, Gervits said.

Another innovation of the approach is that it does not rely on extensive training data like so many deep learning approaches.

Deep learning requires significantly more data to train a system, and such data is often difficult and expensive to collect, especially in Army task domains, Gervits said. Moreover, there will always be edge cases that the system hasn’t seen, and so a more general approach to learning is needed.

 

Finally, this research addresses the issue of explainability.

“This is a challenge for many commercial AI systems in that they cannot explain why they made a decision,” Gervits said. “On the other hand, our approach is inherently explainable in that questions are generated based on a robot’s representation of its own knowledge and lack of knowledge. The DIARC architecture supports this kind of introspection and can even generate explanations about its decision-making. Such explainability is critical for tactical environments, which are fraught with potential ethical concerns.”

“I am optimistic that this research will lead to a technology that will be used in a variety of Army applications,” Gervits said. “It has the potential to enhance robot learning in all kinds of environments and can be used to improve adaptation and coordination in Soldier-robot teams.”

Source: US Army
