In This Issue

Acceptance of advanced autonomous systems: A call for research

Take a look at the future within the DoD and explore the domain of autonomous systems. A cause for concern or excitement?

By Joseph B. Lyons, PhD, and Michelle A. Grigsby
The Robotic Revolution Has Begun

From autonomous cars to digital concierges at hotels to robotic assistants at warehouse stores, advanced autonomous systems are part of the present and the future. Nowhere is this more evident than within the Department of Defense (DoD). Advanced autonomous systems, henceforth referred to in this paper as autonomy, represent a significant investment area for the DoD. The potential benefits of autonomy for the military include extended reach for distributed operations; access to hazardous areas or disaster zones where it would be dangerous or extremely difficult for humans to explore; freedom from certain human limitations/biases (workload, fatigue, stress, emotions [anger, fear, etc.]); greater processing speed (for some tasks, though not others); and improved performance for the airman/soldier–machine team (Defense Science Board, 2012). Further, capable autonomy may create opportunities to shift risk from human operators/pilots to the technology, alleviating some of the dangers of battle. For instance, an autonomous wingman could be used in a forward position to identify enemy integrated air defense systems, reducing the risk to manned platforms. Robotic sentries could be used in hostile regions in collaboration with or in lieu of human soldiers. Security checkpoints of the future could be staffed by digital devices that dialogue with indigenous personnel in their native language. These types of systems are plausible. In February 2016, Defense Secretary Ashton B. Carter hinted that drone swarms are on the horizon, and the involvement of the Pentagon's Strategic Capabilities Office suggests that commercial equipment will likely be incorporated for U.S. military use (Lamothe, 2016).

While combat-oriented autonomy tends to dominate the contemporary zeitgeist, it is equally plausible (and perhaps more so) that such systems will be created and implemented to augment humans in noncombat roles. Autonomy in this sense may materialize in agent-based systems for cyber security, logistics and maintenance robots, autonomous transport systems (both terrestrial and aerial), emergency response systems, medical systems and intelligent aids for intelligence analysis.

Whatever the role, be it combat or noncombat, autonomy will be part of our future within the DoD and within society more broadly. Yet autonomy (particularly autonomy within the DoD) has been met with considerable resistance from the general public, and with some good reason. In 2011, Iran alleged that it was able to capture an RQ-170 by jamming its control system (Axe, 2011). This loss of control over a DoD asset would only have been compounded had the aircraft carried lethal capabilities. Careful deliberation over the level of human control of any weapons system is warranted, and exponentially so when the system possesses lethal capabilities. Patriot missile batteries that incorporate some level of automation have been criticized for less-than-perfect reliability, as evidenced by friendly fire incidents (Knefel, 2015). Clearly, the use of semiautomated or semiautonomous systems in the context of kinetic actions is complex. The increased potential for autonomy in DoD operations requires that we establish a stronger understanding of the limitations and concerns surrounding these systems, both from the perspective of the DoD (including the operators and stakeholders of these systems) and from the perspective of society more broadly.

At its extreme, thinking about autonomy within the DoD comes with the moniker of “killer robot,” and postapocalyptic images of Skynet enslaving humanity instantly surface. Much of this, no doubt, relates to the characterization of autonomy as operating without supervision or control by a human. This mischaracterization is unfortunate, particularly for DoD systems, because the vision for autonomy within the DoD treats the autonomy as part of the overall human-machine system, operating as a collaborative partner with humans rather than being set loose to wreak havoc on unsuspecting others. Even the infamous “drones,” which are often the target of public discontent, are designed to be teleoperated and, hence, remain under human control. Yet the ethical implications of autonomous systems are real, and as the technology advances, aspects of adequate human control such as accountability, moral responsibility and controllability must be clearly defined and understood by the human operators (Horowitz & Scharre, 2015). Today, the DoD is focused on human-machine teaming, which emphasizes the technology as part of a human-machine system rather than as an end in and of itself. Yet, understandably, the greater the decision authority afforded to technology in any domain, the higher the potential risk and the less able humans are to predict the system's behavior. Ultimately, what we are faced with is the question of how to understand trust of autonomy.

Trust of Autonomy Through Transparency

Principally, trust represents one's willingness to be vulnerable to another entity (Mayer, Davis, & Schoorman, 1995), and trust of automated and robotic systems is a significant topic for researchers (Hancock et al., 2011; Hoff & Bashir, 2015). Trust is important because it impacts decisions and behaviors related to reliance in critical situations (i.e., use or disuse of the system when it matters most). A critical facet of the trust process is the notion of appropriate reliance: we should not aim to increase trust absent a trustworthy system. In other words, we should not aim to increase trust of an unreliable system. In contrast, calibrated trust exists when users appropriately rely on the autonomy when reliance is warranted (e.g., high trustworthiness) and avoid relying on it when it is not. This notion of calibrated trust is essential for autonomy because there will be times when operators should, and times when they should not, rely on these systems. If autonomy is designed to promote effective teaming with humans, then the human counterparts will be equipped with the information and knowledge needed to make appropriate reliance decisions. While conceptual models of human-machine teaming are scarce, one method for promoting effective teaming between humans and autonomy is to design the autonomy, and the training for human-machine teams, in ways that facilitate shared awareness and shared intent between the humans and the autonomy.
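To make the idea of calibrated trust concrete, consider the following minimal Python sketch. It simply compares an operator's reliance decision against the demonstrated reliability of the autonomy for a task and labels over-trust and under-trust; the function name, threshold and example values are our own hypothetical illustrations, not a validated model from the literature.

```python
# Minimal sketch of trust calibration: a reliance decision is compared against
# the demonstrated reliability of the autonomy for a given task. All names,
# thresholds, and data are hypothetical illustrations, not a validated model.

def reliance_appropriate(system_reliability: float, relied: bool,
                         reliance_threshold: float = 0.9) -> str:
    """Classify a single reliance decision.

    system_reliability: observed success rate of the autonomy on this task (0-1).
    relied: whether the operator delegated the task to the autonomy.
    reliance_threshold: reliability above which reliance is considered warranted.
    """
    warranted = system_reliability >= reliance_threshold
    if relied and warranted:
        return "calibrated (appropriate reliance)"
    if not relied and not warranted:
        return "calibrated (appropriate rejection)"
    if relied and not warranted:
        return "over-trust (reliance on an unreliable system)"
    return "under-trust (disuse of a reliable system)"


if __name__ == "__main__":
    # A reliable aid the operator ignores -> under-trust (disuse).
    print(reliance_appropriate(system_reliability=0.97, relied=False))
    # An unreliable aid the operator defers to -> over-trust (misuse).
    print(reliance_appropriate(system_reliability=0.60, relied=True))
```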

Lyons (2013) discusses the concept of human-robot transparency as a method for establishing shared awareness and shared intent between humans and machines, and suggests that transparency is one way to establish calibrated trust of autonomy. Historically, transparency has been operationalized as understanding the analytical underpinnings of an automated system or robot. Clearly, knowing how autonomy works and why it selects one action over another is a critical factor; however, this alone is inadequate to cover the gamut of intention- and awareness-based needs of the human. Lyons (2013) discusses seven facets that may be relevant to human-robot transparency: intent, environment, task, analytic, team, human state and social intent. Further, the primary affordances for invoking these dimensions of transparency include training, design and interfaces.
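As a compact reference, the sketch below represents these seven facets as a simple Python checklist that a designer might use to audit what information an interface actually exposes to the human partner. The class, field names and audit helper are our own illustrative conveniences, not part of Lyons' (2013) model.

```python
# Illustrative only: a checklist of the seven transparency facets discussed by
# Lyons (2013). Field names and the audit helper are hypothetical conveniences.
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class TransparencyReport:
    intent: Optional[str] = None         # purpose and expectations of the system
    environment: Optional[str] = None    # how the system senses its surroundings
    task: Optional[str] = None           # task capabilities and limitations
    analytic: Optional[str] = None       # how it works and why it chose an action
    team: Optional[str] = None           # division of labor / current roles
    human_state: Optional[str] = None    # what it knows about the operator's state
    social_intent: Optional[str] = None  # benevolence, etiquette, social cues

    def missing_facets(self) -> list:
        """Return facets for which the interface exposes no information."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]


report = TransparencyReport(
    intent="Escort lead aircraft; no independent weapons release.",
    analytic="Route chosen to minimize exposure to known radar coverage.",
)
print(report.missing_facets())  # facets still opaque to the human partner
```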

The intent transparency facet represents the overall purpose of and expectations related to the system. This element of transparency is improved when form (i.e., how the autonomy looks and moves) matches function (i.e., the intended use of the autonomy). Expectations of capabilities and intent are often tied to form, suggesting that mismatches can be detrimental to calibrated trust. Symbols and naming schemes may also play into this facet of transparency.

The environment dimension of transparency describes how the autonomy senses its surroundings. What sensors does it use? How does it integrate novel information about the context? Is it capable of detecting changes in the environment and reacting accordingly? Knowing how the autonomy interacts with the environment is crucial for making appropriate trust-based decisions in dynamic settings. Imagine, for instance, the potential problems with automated lane-keeping technologies in cars that have degraded capabilities in rain or snow but lack the ability to communicate that limitation to their human drivers/passengers. Appropriate trust of autonomy will require that these systems be given adequate sensing capabilities and artificial intelligence to recognize when environmental conditions are degraded or suboptimal. Further, the human partners of autonomy must be knowledgeable about its capabilities under varying conditions.

In a related sense, the autonomy and the human partner must understand the task-based capabilities and limitations of the autonomy. This suggests that the autonomy should have some capability for self-monitoring within a task context. Likewise, the human partner should have not only historical knowledge of the autonomy and its capabilities/limitations but also, where possible, real-time indicators of performance linked to the task at hand.
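One way to picture these environment and task facets together is a system that monitors its own sensing confidence and recent performance and reports degradation rather than failing silently. The sketch below uses a hypothetical lane-keeping module, echoing the example above; the thresholds, the crude weather-based confidence model and the status messages are all illustrative assumptions.

```python
# Sketch of environment/task transparency: a hypothetical lane-keeping module
# monitors its own sensing confidence and recent performance and tells the
# human when conditions degrade, instead of failing silently. Thresholds,
# messages, and the weather model are illustrative assumptions.
from collections import deque


class LaneKeepingMonitor:
    def __init__(self, window: int = 50, min_confidence: float = 0.7):
        self.recent_errors = deque(maxlen=window)  # lane-center error, meters
        self.min_confidence = min_confidence

    def sensor_confidence(self, visibility_m: float, precipitation: bool) -> float:
        """Crude stand-in for a real perception-confidence estimate."""
        confidence = min(visibility_m / 200.0, 1.0)
        if precipitation:
            confidence *= 0.6
        return confidence

    def status_message(self, visibility_m: float, precipitation: bool) -> str:
        confidence = self.sensor_confidence(visibility_m, precipitation)
        mean_error = (sum(self.recent_errors) / len(self.recent_errors)
                      if self.recent_errors else 0.0)
        if confidence < self.min_confidence:
            return (f"Lane keeping degraded (confidence {confidence:.2f}); "
                    f"recent mean lane error {mean_error:.2f} m. Supervise closely.")
        return f"Lane keeping nominal (confidence {confidence:.2f})."


monitor = LaneKeepingMonitor()
monitor.recent_errors.extend([0.1, 0.3, 0.4])
print(monitor.status_message(visibility_m=80.0, precipitation=True))
```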

Analytics, as noted above, remain key to making autonomy somewhat predictable to its human partners. Predictability is a core ingredient of trust (Hancock et al., 2011), and one way to promote the predictability of autonomy is to ensure that human partners understand how the system works, the rationale for its behaviors and when it might fail. Advances in artificial intelligence often complicate matters from a transparency standpoint, as more sophisticated algorithms and methods may be more difficult for non-computer scientists to understand. Thus, designers must consider how to ensure that humans understand the analytical underpinnings of advanced autonomy.
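A lightweight way to expose the analytical underpinning of a decision is to have the system return a plain-language rationale alongside each recommendation, including a caveat about when the logic is likely to fail. The sketch below uses an invented rule set and inputs purely for illustration; real systems would require far richer explanation methods.

```python
# Sketch of analytic transparency: the system returns a plain-language rationale
# with every recommendation so the operator can see why an action was chosen and
# when the logic is likely to fail. The rules and inputs are invented examples.
from typing import Tuple


def recommend_route(threat_level: float, fuel_margin: float) -> Tuple[str, str]:
    """Return (recommendation, rationale) for a hypothetical routing aid."""
    if threat_level > 0.7:
        return ("reroute south",
                f"Threat estimate {threat_level:.2f} exceeds the 0.70 limit; "
                "the southern corridor avoids known air-defense coverage. "
                "Caveat: threat estimates degrade when sensor data is stale.")
    if fuel_margin < 0.1:
        return ("direct route",
                f"Fuel margin {fuel_margin:.2f} is below the 0.10 reserve, "
                "so the shortest path is prioritized over threat avoidance.")
    return ("planned route", "No rule fired; keeping the original plan.")


action, why = recommend_route(threat_level=0.82, fuel_margin=0.25)
print(action, "->", why)
```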

In addition to understanding the analytical side of autonomy, humans and machines must be able to understand the division of labor between them. Transfer of authority between humans and machines remains a significant challenge for researchers working on approaches for autonomy. Teams that have greater shared awareness (e.g., mental models of the teamwork and coordination activities for the task) evidence better performance, and this knowledge can be trained (Marks, Sabella, Burke, & Zaccaro, 2002). The human-machine team must be able to understand who has what role, at what time and why. This type of transparency will be required of both the human and the autonomy.
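The sketch below illustrates one way this team-level transparency could be made explicit: a hypothetical roster records which party currently holds authority for each function, and every transfer logs who, what, when and why so both the human and the autonomy can inspect the current allocation. The class, function names and example reasons are our own assumptions.

```python
# Sketch of team transparency: an explicit record of who holds authority for
# which function, with transfers that log who, what, when, and why. The roster
# and function names are hypothetical.
from datetime import datetime, timezone


class AuthorityRoster:
    def __init__(self):
        self.assignments = {}   # function -> current agent ("human" or "autonomy")
        self.log = []           # audit trail of transfers

    def assign(self, function: str, agent: str, reason: str):
        previous = self.assignments.get(function)
        self.assignments[function] = agent
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "function": function,
            "from": previous,
            "to": agent,
            "reason": reason,
        })

    def who_has(self, function: str) -> str:
        return self.assignments.get(function, "unassigned")


roster = AuthorityRoster()
roster.assign("navigation", "autonomy", "nominal cruise conditions")
roster.assign("navigation", "human", "entering congested airspace")
print(roster.who_has("navigation"), roster.log[-1]["reason"])
```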

The next transparency facet involves an understanding of the human state (e.g., stress, workload, emotion, motivation). Future autonomy must be able to gauge the human state to evaluate potential performance degradations before they occur. For autonomy to have this knowledge, the system must have the capability to sense the states, assess the meaning of those states in the particular task context and augment the human in ways that are consistent with the team's goals (Galster & Johnson, 2013).
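The sense-assess-augment idea can be pictured as a simple loop: sense an operator-state signal, assess what it means in the current task context and choose an augmentation consistent with the team's goals. The sketch below is only a caricature of that loop; the workload signal, thresholds and intervention set are made up for illustration and do not come from Galster and Johnson (2013).

```python
# Illustrative sense-assess-augment loop: sense an operator-state signal,
# assess it in the task context, and select an augmentation. The workload
# signal, thresholds, and interventions are made-up examples.

def sense_workload() -> float:
    """Stand-in for physiological or behavioral workload sensing (0-1)."""
    return 0.85  # e.g., a value derived from heart-rate variability or task tempo


def assess(workload: float, task_criticality: float) -> str:
    """Interpret the sensed state in the context of the current task."""
    if workload > 0.8 and task_criticality > 0.5:
        return "overload"
    if workload < 0.2:
        return "underload"
    return "nominal"


def augment(state: str) -> str:
    """Choose an augmentation consistent with the team's goals."""
    return {
        "overload": "offload routine subtasks to the autonomy and mute low-priority alerts",
        "underload": "return a monitoring subtask to the human to maintain engagement",
        "nominal": "no change to the current task allocation",
    }[state]


state = assess(sense_workload(), task_criticality=0.7)
print(state, "->", augment(state))
```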

The final transparency facet, and perhaps the most controversial, is the notion of social intent. Social intent, in the form of benevolence, has been shown to be a foundational antecedent of trust (Mayer et al., 1995). The same may be true for autonomy, particularly for autonomy that has (or is perceived to have) agency. The greater the decision authority given to autonomy, the more likely it is that the social intent of the autonomy will be an important trust antecedent. Social intent can also involve things like etiquette, emotional interaction and social bonding, which affect beliefs and attitudes toward the system.

A Call for Research

The following topics would be useful in helping researchers understand the trust process as the DoD moves toward autonomy:

  • Examination of methods for establishing the various facets of transparency in human-machine contexts.
  • Evaluation of the impact of transparency on trust and performance in human-machine contexts.
  • Development of methods for human state sensing, along with approaches for evaluating the impact of those methods on human trust in human-machine contexts.
  • Research to examine the antecedents of trust among the public for DoD autonomy.
  • What facets of transparency drive public trust or distrust of DoD autonomy?
  • How does trust of autonomy among the general public differ from trust among military personnel?
  • Development of methods to verify and validate autonomy according to social and task-based rules/policies.
Closing Thoughts 

Autonomy holds promise for improving human performance both within the DoD and in society more broadly. Automated systems (as opposed to autonomy) already have a significant impact on the DoD. For instance, the Air Force's Automatic Ground Collision Avoidance System (AGCAS) has been credited with saving lives in operations (Norris, 2015). Advanced technology can help, but it is no panacea. Human operators will likely always play a critical role in DoD operations, and as such, future autonomy must be designed and implemented with consideration for the human partners with which it will interact. Efforts should be made to ensure that autonomy is as transparent as feasible to promote calibrated trust. The movement toward advanced autonomy is a very real trend within the DoD, and interested readers are encouraged to respond to this paper with ideas, opinions and research that can help the DoD facilitate appropriate trust and acceptance of autonomy, both within the military echelons and within society more broadly.

This paper does not reflect an official position of the DoD; rather, it represents the opinions of the authors.

For further information, please contact Joseph B. Lyons.

References

Axe, D. (2011, December 5). Did Iran capture a U.S. stealth drone intact? Wired Magazine. Retrieved from http://www.wired.com/2011/12/did-iran-capture-a-u-s-stealth-drone-intact/

Defense Science Board. (2012). The role of autonomy in DoD systems (Task force report by the Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics). Washington, DC: Office of the Secretary of Defense.

Galster, S. M., & Johnson, E. M. (2013). Sense-assess-augment: A taxonomy for human effectiveness (AFRL-RH-WP-TM-2013-0002). Wright-Patterson AFB, OH: Air Force Research Lab Human Effectiveness Directorate.

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53, 517–527.

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57, 407–434.

Horowitz, M., & Scharre, P. (2015). Meaningful human control in weapons systems: A primer (Center for a New American Security—working paper). Retrieved from http://www.cnas.org/human-control-in-weapon-systems#.Vr40IjZf0fc

Knefel, J. (2015, August 14). The Air Force wants you to trust robots. Should you? Vocativ Magazine. Retrieved from http://www.vocativ.com/news/224779/the-air-force-wants-you-to-trust-robots-should-you/

Lamothe, D. (2016, February 8). Why drone swarms will buzz to the forefront in the new Pentagon budget. The Washington Post. Retrieved from https://www.washingtonpost.com/news/checkpoint/wp/2016/02/08/why-drone-swarms-will-buzz-to-the-forefront-in-the-new-pentagon-budget/

Lyons, J. B. (2013). Being transparent about transparency: A model for human-robot interaction. In D. Sofge, G. J. Kruijff, & W. F. Lawless (Eds.), Trust and autonomous systems: Papers from the AAAI Spring Symposium (Tech. Rep. No. SS-13-07; pp. 48–53). Menlo Park, CA: AAAI Press.

Marks, M. A., Sabella, M. J., Burke, C. S., & Zaccaro, S. J. (2002). The impact of cross-training on team effectiveness. Journal of Applied Psychology, 87, 3–13.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709–734.

Norris, G. (2015, February 5). Ground Collision Avoidance System “saves” first F-16 in Syria. Aerospace Daily and Defense Report. Retrieved from http://aviationweek.com/defense/ground-collision-avoidance-system-saves-first-f-16-syria