
ORIGINAL RESEARCH article

Front. Psychol., 24 May 2016
Sec. Cognitive Science
This article is part of the Research Topic Macrocognition: The Science and Engineering of Sociotechnical Work Systems.

Supervising and Controlling Unmanned Systems: A Multi-Phase Study with Subject Matter Experts

Talya Porat1,2*, Tal Oron-Gilad2, Michal Rottem-Hovev3 and Jacob Silbiger4
  • 1Department of Primary Care & Public Health Sciences, King's College London, London, UK
  • 2Department of Industrial Engineering and Management, Ben Gurion University of the Negev, Beer Sheva, Israel
  • 3HFE Independent Consultant, Tel-Aviv, Israel
  • 4Synergy Integration Ltd., Tel-Aviv, Israel

Proliferation in the use of Unmanned Aerial Systems (UASs) in civil and military operations has presented a multitude of human factors challenges, from how to bridge the gap between demand and availability of trained operators, to how to organize and present data in meaningful ways. Utilizing the Design Research Methodology (DRM), a series of closely related studies with subject matter experts (SMEs) demonstrates how the focus of research gradually shifted from “how many systems can a single operator control” to “how to distribute missions among operators and systems in an efficient way.” The first set of studies aimed to explore the modal number, i.e., how many systems a single operator can supervise and control. It was found that an experienced operator can supervise up to 15 UASs efficiently using moderate levels of automation, and control (mission and payload management) up to three systems. Once this limit was reached, a single operator's performance was compared to a team controlling the same number of systems. In general, teams led to better performance; hence, design efforts shifted toward developing tools that support teamwork environments of multiple operators with multiple UASs (MOMU). In MOMU settings, when the tasks are similar or when areas of interest overlap, one operator seems to have an advantage over a team that needs to collaborate and coordinate. However, in all other cases, a team was advantageous over a single operator. Other findings and implications, as well as future directions for research, are discussed.

Introduction

The continuing proliferation in the use of UASs in both civil and military operations has presented a multitude of human factors challenges, including assessing the cognitive capabilities of one operator to simultaneously supervise and control multiple platforms, evaluating the advantages and disadvantages of an individual operator vs. a team, and finding meaningful ways to organize and present data. Underlying many of these challenges is the issue of how automation capabilities can best be utilized to assist human operators in handling increasing complexity and workload (Fern et al., 2011).

When the first unmanned aerial systems (UASs) were introduced in the 1980s, engineers and military leaders were content with their ability to extend capabilities of intelligence perception beyond what had previously been possible. Once these technological advancements became routine, it became evident that the issue of the personnel-to-aircraft ratio would arise. There are multiple reasons why managers and leaders are interested in reducing the man-machine control ratio, to mention only a few: fewer operators mean less need for training, less diversity in training, and reduced costs of manpower and training.

The focus on the operator-to-UAS ratio was reinforced by the US Office of the Secretary of Defense Roadmap for unmanned aircraft systems (UASs: 2005-2030)1, which delineates the need to investigate the “appropriate conditions and requirements under which a single pilot would be allowed to control multiple airborne UA [unmanned aircraft] simultaneously.” Since then, the question of how many UASs or UAVs (Unmanned Aerial Vehicles) one operator can control or supervise has remained a vital question that many researchers have tried to answer (e.g., Chen et al., 2013; Goodrich and Cummings, 2014).

Cummings et al. (2007a) proposed a hierarchical control model to portray control loops for a single operator in control of one UAV or multiple systems. In this three-level model, the innermost loop (Flight controls) represents the need for basic guidance and motion control (i.e., keeping the aircraft in stable flight) and is the most critical. If operators must interact in this loop, the cost will be very high since this loop requires significant cognitive resources. The second loop (Navigation) represents the actions that should be executed to meet mission constraints, such as routes to waypoints, time on targets, and avoidance of threat areas. The outermost loop (Mission and payload management) represents the highest levels of control—decisions which require knowledge-based reasoning that must be made to meet overall mission requirements. Health and status monitoring are tasks that cross all three loops, where the operator is required to perform continuous supervision to ensure that all systems are operating within normal limits. Hence, in order for one operator to be able to control multiple systems, operators will need to interact primarily at the outermost loop via a mission and payload manager while relegating routine navigation and motion control tasks to the automation. For example, given such significant autonomy, one operator could control 4–5 vehicles (Cummings et al., 2007a) and apply supervisory control for up to 12 vehicles (Cummings and Guerlain, 2007).
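To make the nesting of these control loops concrete, the following minimal Python sketch (our illustration only, not code from Cummings et al.) represents the three loops and a trivial allocation rule under which any loop delegated to automation is no longer handled directly by the operator.

```python
from enum import IntEnum

class ControlLoop(IntEnum):
    """Nested control loops (cf. Cummings et al., 2007a), innermost first."""
    FLIGHT_CONTROL = 1   # basic guidance and motion control
    NAVIGATION = 2       # waypoints, time on target, threat avoidance
    MISSION_PAYLOAD = 3  # knowledge-based mission and payload decisions

def operator_interacts(loop, automated_loops):
    """The operator interacts with a loop only if it is not delegated to automation;
    for multi-UAS control, the inner loops are typically automated so that the
    operator works mainly at the mission and payload level."""
    return loop not in automated_loops

# Hypothetical configuration: flight control and navigation automated.
automated = {ControlLoop.FLIGHT_CONTROL, ControlLoop.NAVIGATION}
for loop in ControlLoop:
    handler = "operator" if operator_interacts(loop, automated) else "automation"
    print(loop.name, "->", handler)
```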

Higher levels of automation will enable operators to increase the number of unmanned systems they control and supervise; however, extensive use of automation can also introduce human performance costs such as loss of situation awareness, skill degradation, complacency, increased mental workload (Parasuraman et al., 2000), and automation bias (Mosier and Skitka, 1996). Hence, supervisory control of multiple UASs raises questions concerning how to balance system autonomy and human interaction (Calhoun et al., 2011, 2013). Furthermore, the challenge of incorporating automation in one vehicle is replaced by the need to keep the human “in the loop” of the activities of all vehicles (Ruff et al., 2002). Careful system design can mitigate these performance costs by allowing flexibility in function allocation (i.e., which tasks will be performed by the human and which by the system) and in the level of automation implemented within each function (Parasuraman et al., 2000; Chen et al., 2013; Gu et al., 2014), and by accounting for the operators' level of trust in the automation (Clare et al., 2015). Eventually, when flight control becomes fully automated, operators will manipulate the payloads rather than fly the vehicles (e.g., Cooper and Goodrich, 2008).

Ruff et al. (2002) compared the effects of automation level and decision-aid fidelity on the number of simulated remotely operated vehicles (ROVs) that could be successfully controlled by a single operator during a target acquisition task. Their results indicated that an automation level incorporating management-by-consent had clear performance advantages over the more autonomous (management-by-exception) and less autonomous (manual control) levels of automation. Calhoun et al. (2011) used a UAV simulation environment to evaluate two applications of autonomy levels across two primary control tasks: allocation (assignment of sensor tasks to vehicles) and router (determining vehicles' flight plans). Their results showed that performance on both primary tasks and many secondary tasks was better when the level of automation was the same across the two sequential primary tasks. Thus, having the level of automation similar across closely coupled tasks reduced mode awareness problems, which can negate the intended benefits of a fine-grained application of automation.

Adaptive automation (AA) alters the level of automation dynamically during operation. This allows the automation to account for individual differences and allows the automation to be more flexible, context-dependent, and user-specific (Saqer et al., 2011). Wilson and Russell (2007) demonstrated that the customization of automation and difficulty level to the individual operator had greater potential benefit than AA developed based on group performance means. Cummings et al. (2010) examined the impact of increasing automation re-planning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. They claimed that the future of one operator controlling multiple UVs requires automated planners, which are faster than humans at path planning and resource allocation. They examined three increasing levels of re-planning, and showed that rapid re-planning can cause high operator workload, ultimately resulting in poorer overall system performance. Calhoun et al. (2013) designed an interface enabling pilots to flexibly change the role of automation during the mission, transitioning between four control modes ranging from manual to high level “plays.” Their results showed that this approach is promising for single operator supervisory control of multiple UASs, however participants claimed that flexibility should be increased even more, enabling the operator to employ multiple control modes in a single task.

While automation can definitely increase the number of UASs a single operator can supervise and control, Hancock et al. (2007) raised a concern with the ongoing debate over how many UASs should or can a single operator control. The functional design questions that were raised were: (a) should researchers and designers continue to strive for a higher ratio, and, (b) if they decide to go forward in this direction, what is the modal number? As with all design questions, the immediate answer was simple: It depends. To be sure, the human being as the ultimate adaptive system may be able to demonstrate multiple UAS control, but we consider this an instance of what design can do, not what design should do. In response, John Senders commented that “with appropriate control and display systems, the handling of more than one machine remains both useful and practical. Simultaneous (actually, appropriately sampled) control of many high-order systems by one operator was demonstrated to be feasible when the displays of attitude are appropriately quickened. Henry P. Birmingham demonstrated this many decades ago by showing excellent simultaneous control of 2 two-dimensional, third-order systems (Birmingham and Taylor, 19542) …. Even modestly intelligent design would allow multiple UAVs and multiple displays to be searched or monitored efficiently with good connectivity between the displays. The individual operator is therefore the appropriate unit of analysis only when such bottlenecks occur at that level. More generally, if one views the collective team as an integrated, flexible system, then the very question of the UAV:Operator ratio itself becomes irrelevant.”

After decades of field practice, the importance of operational use of UASs in combat and in civil operations has increased tremendously. Different team configurations consisting of Multiple Operators and Multiple UASs (MOMU) are nowadays evaluated (e.g., Mekdeci and Cummings, 2009; Gao et al., 2014), implying that the operator-to-UAS ratio has indeed become an outcome rather than a target in its own right.

MOMU is a relatively new operational setup for covering areas of interest, particularly in reconnaissance missions. It is highly relevant for homeland security and surveillance operations. A mode of one operator controlling multiple UASs can often increase the operator's cognitive burden. MOMU setups aim to prevent high operator workload and low situation awareness, and can be very advantageous in offloading tasks to distribute workload among operators. Furthermore, MOMU setups can also be advantageous in terms of utilization of assets, as they contribute to increasing payload efficiency and system effectiveness. However, MOMU settings introduce new challenges for operators, as they require switching of information sources (i.e., tasks, missions, video feeds, or camera manipulations) and of responsibilities among operators.

Switching is a time-critical, cognitively demanding task. Cognitive costs of switching may include loss of orientation and situation awareness (SA), increased workload, and decreased efficiency of verbal team communication and coordination. Consequently, switching between sources can disrupt operator performance (Draper et al., 2008; Squire and Parasuraman, 2010) and generate slower and less accurate responses compared to performing a single type of task (Allport et al., 1994; Monsell, 2003). In MOMU environments, where operators need to hand off aircraft, payloads, targets, or missions to each other, switches may have a critical effect on mission accomplishment.

Over the past decades our team has advanced and improved operational concepts for UAS operators in surveillance and reconnaissance missions. Like most others, our studies began by examining the UAS-to-operator ratio, then moved to how to increase the capacity of a single operator by utilizing tools and automation modes, and gradually shifted toward the MOMU framework. Here we report and revisit these multi-phase studies. Our goal is to demonstrate how the focus of research and practice moved toward a more collaborative operational concept that enables distribution of work and assets among multiple operators. We demonstrate the progress that has occurred in this human-unmanned system research and how we perceive it should be further directed. We begin with operator-to-UAS ratio studies. Then, we demonstrate how the MOMU concept evolved. Lastly, we discuss why the changes in UAS control concepts are relevant for other, less mature human-robot control domains.

The series of studies utilized the Design Research Methodology (DRM; Blessing and Chakrabarti, 2009). DRM is sometimes called “Improvement Research,” emphasizing the problem-solving/performance-improving nature of the activity. It enables researchers and analysts to rapidly develop and test prospective improvements, deploy what they have learned about what works, and add to their knowledge to continuously improve the performance of the system (Vaishnavi and Kuechler, 2004). Our aim was to look at the problem from different levels of activity (e.g., supervise, control, mission management), settings (individual vs. team), resources (number of operators, number of vehicles), and automation levels.

In this paper, we do not portray details of every single step in each individual study. Our focus was on the design implications that stemmed from each study phase. This was a conscious strategy, not to be reductionist per se, but to allow examination of the operational concept issues from a higher perspective. All the evaluations that are presented were conducted with highly experienced UAS operators (subject matter experts; SMEs), which is necessary for DRM.

Methods

We utilized the DRM with SMEs, an approach that focuses on what works, for whom, and under what conditions. In this model (see Figure 1) all designs begin with Awareness of a problem; then, usually from the existing knowledge of the problem area, solutions are suggested. After the suggestion phase, there is an attempt to implement an artifact according to the suggested solution—the Development phase. Partially or fully successful implementations are then evaluated with potential users. Development, Evaluation, and further Suggestions are frequently performed iteratively in the course of the research (design) effort. The basis of the iteration, the flow from partial completion of the cycle back to Awareness of the Problem, is indicated by the Circumscription arrow. Conclusion indicates termination of a specific design project. New knowledge production is indicated by the arrows labeled Circumscription and Operation and Goal Knowledge (Vaishnavi and Kuechler, 2004; Kuechler and Vaishnavi, 2008). The goal of DRM is to help design research become effective and efficient by making the most of valuable resources and applying gathered knowledge “on the move.” It is particularly suitable for complex interactive systems.


Figure 1. Reasoning in the Design Research Cycle (cf. Kuechler and Vaishnavi, 2008).
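As an illustration of how the cycle in Figure 1 drove the studies reported below, the short Python sketch that follows walks through the phases; the phase names come from Figure 1, while the stubbed evaluation function and iteration limit are hypothetical.

```python
# Illustrative walk through the design research cycle of Figure 1; the phase names
# follow the figure, while the stubbed evaluation function is hypothetical.

def run_design_cycle(evaluate, max_iterations=10):
    """Iterate Suggestion -> Development -> Evaluation; the Circumscription arrow
    corresponds to feeding what was learned back into a new Awareness phase."""
    for iteration in range(1, max_iterations + 1):
        print(f"Iteration {iteration}: Awareness -> Suggestion -> Development")
        good_enough, lessons = evaluate(iteration)  # Evaluation with SMEs
        print(f"  Evaluation lessons: {lessons}")
        if good_enough:
            print("Conclusion: design effort terminated, knowledge recorded.")
            return
        # Circumscription: partial results refine the problem for the next cycle.

# Hypothetical evaluation stub: accept the design after the third iteration.
run_design_cycle(lambda i: (i >= 3, f"lessons from design iteration {i}"))
```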

The studies took place at a designated laboratory at Synergy Integration Ltd. which was set up to resemble a typical UAS control room (see Figure 2). The work environment was simulated, but “true to life,” mimicking the work of military UAS operators, who operate UASs from a remote designated cabin. The lab consisted of several connected workstations containing a simulation system, which could be configured according to the task and needs (i.e., number of vehicles, individual vs. team operation, time limitations, use of decision support tools, etc.). In this setting, cognitive tasks such as planning, detecting problems, and managing uncertainty (macro-cognitive processes) could be evaluated. Level of automation and mission components were chosen using arrangements similar to the control loops of Cummings et al. (2007a).


Figure 2. The simulated environment. In the configuration shown here three operators are collaboratively operating three UASs at the same time.

Studies

In the following we describe four studies, with their sub-conditions. The first two studies examined the operator/platform ratio in several operational scenarios and tasks. The first study examined the number of UASs one operator can supervise (health and status monitoring). The second study examined the number of UASs one operator can control (mission and payload management) at a single point in time. Studies 3 and 4 compared the performance of one operator vs. a team of operators controlling the same number of UVs (MOMU studies). Study 3 took place in the UAS environment, while Study 4 took place in the UGV (unmanned ground vehicle) environment. This enabled us to further examine commonalities between the domains of operation. In the following, each study with its different experimental conditions is described.

Study 1

Problem: At the start of the project, in what may now seem archaic for the UAS domain, health monitoring was identified as the main attention pitfall for operators. Back then, operators had to check the system's health repeatedly while they were performing the flight mission. Displayed health data had to be compared manually against a manufacturer checklist, an error-prone process with heavy reliance on memory and specifically prospective memory (see Figure 3).


Figure 3. Study 1 illustration. Left: the original health data form; Mid: The modified health data form with two-step (orange and red) fault indicators (condition B); Right: Graphic presentation of a trend in the zoom-in view of one health parameter (condition C).

The first study aimed to facilitate the health monitoring task, using automation and tools in order to increase the efficiency and the number of UAVs that one operator could supervise simultaneously.

Study question: How many UAVs can one operator supervise (health monitoring) efficiently?

Participants: Five highly experienced male operators. All were reserve soldiers on active duty. They had 4–7 years of experience in operating military UASs (mean: 5.2), and their age ranged from 23 to 30 (mean: 26.6). SMEs were compensated for their time. The same five participants performed each one of the study conditions, hence a within-subject design was used. Since in DRM one makes incremental design changes, and this process takes time, there was a significant time gap between the different conditions (at least 1 month).

Initial State—Manual, Sequential Supervising

Task

1:5—one operator manually supervised five UAVs of the same kind (utilizing a paper-based checklist).

Procedure

For each UAV, 13 health indices were displayed numerically on a form. In addition, two location indices were displayed on a map (X-Y coordinates, related to the pre-defined route). To evaluate the health status of the UAV, participants had to compare the values on the on-line form to a paper-based checklist with the appropriate value ranges. On the screen, the operator could view the health data of only one UAV at a time (i.e., the task required sequential browsing of the health forms). Operators performed continuous manual health monitoring by comparing each index in each form to the desired values written in the hard copy. While doing this, operators had to take the different flight stages into account, as health values varied as a function of flight stage.

Results

The cycle time to supervise one UAV was very long—5 min (SD = 0.7). The time to detect a fault depended on its location on the form, and most faults were detected at late stages of the malfunction. Detecting the fault source was almost impossible and took on average 13 min (SD = 6). Deviations from the planned route were detected late, after an average of 3 min (SD = 0.2), hence only after there was a meaningful deviation from the route on the map (scale: 1:50,000).

Operators indicated that the task was difficult and exhausting within less than 1 h of supervising. They complained of high workload and stated that they could not imagine succeeding in supervising another (6th) UAV.

Condition A—Simultaneous Supervising

Task

1:5—one operator manually supervised five UAVs with two changes relative to the initial state condition.

Suggestion—design change from initial state

To facilitate manual health monitoring, two design implementations were introduced: (1) for each data item an intact indication was added, depending on the flight stage: Intact, Warning (5% lower or higher than the intact value), or Fault; (2) all UAV health data forms were displayed simultaneously.
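A minimal sketch of the kind of status classification introduced in this condition is shown below; the 5% warning band follows the description above, whereas the 10% fault band and the example values are our own assumptions, added only to make the sketch complete.

```python
def classify_index(value, intact_value, warning_band=0.05, fault_band=0.10):
    """Return 'Intact', 'Warning', or 'Fault' for one health index.

    The intact value depends on the flight stage, and the 5% warning band follows
    the description of condition A; the 10% fault band is our own assumption,
    added only to make the sketch complete.
    """
    deviation = abs(value - intact_value) / abs(intact_value)
    if deviation < warning_band:
        return "Intact"
    if deviation < fault_band:
        return "Warning"
    return "Fault"

# Hypothetical example: engine RPM nominally 5000 during cruise.
for rpm in (5020, 5300, 4300):
    print(rpm, classify_index(rpm, intact_value=5000))
```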

Results

The cycle time to supervise one UAV decreased from 5 to 2 min (SD = 0.4). Most faults were detected at early stages (an average of 5 s to detect a fault). Detection of fault source and route deviations did not improve or differ from the initial state.

Condition A+—Like A but with More Systems

Task

1:10—one operator manually supervised 10 UAVs with the same design as in Condition A.

Suggestion—design change from condition A

Five additional UAVs were added to the supervising task. The limitation to 10 was due to screen size (which enabled displaying up to 10 UAV health forms simultaneously).

Results

Similar results to condition A—the cycle time to supervise one UAV remained 2 min on average (SD = 0.4). Most faults were detected in early stages (average of 5 s to detect a fault). Detection of fault source and route deviations did not improve from the initial state.

Condition B—Grouping the Health Indices

Task

1:10–1:20—Operators started with supervising 10 UAVs. During the evaluation, UAVs were added gradually until a single operator was supervising 20 UAVs at a time. To facilitate supervising, the 12 health indices were grouped into four categories.

Suggestion—design change from condition A+

There was a change in the display design: the two location indices and one health index were removed (the focus was now only on health parameters). The remaining 12 health indices were grouped into four meaningful categories (e.g., engine, communication, etc.). For each of them, three intact indications were displayed: Intact, Warning, and Fault. The shape of the indication icon reflected the type of data contained in each category. For example, the group containing communication measures (increase/decrease) had an indication icon of arrows pointing up or down.

For each UAV only group indications were displayed on the health data form. The operator could open the full form by clicking on the indication group.
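The sketch below illustrates the aggregation logic behind the grouped indications of condition B, and how the same rollup extends to the single per-UAV icon later used in condition B+; the specific index names and their assignment to the four categories are hypothetical, since the study names only examples such as engine and communication.

```python
# Illustrative aggregation for the grouped indications of condition B and the single
# per-UAV icon of condition B+. The index names and their assignment to the four
# categories are hypothetical placeholders.

SEVERITY = {"Intact": 0, "Warning": 1, "Fault": 2}

GROUPS = {
    "engine": ["rpm", "oil_temp", "fuel_flow"],
    "communication": ["uplink", "downlink", "gps_lock"],
    "electrical": ["battery", "generator", "bus_voltage"],
    "airframe": ["servo_1", "servo_2", "vibration"],
}

def group_statuses(index_status):
    """Worst (most severe) status per category, shown as the group indication."""
    return {
        group: max((index_status[name] for name in names), key=SEVERITY.__getitem__)
        for group, names in GROUPS.items()
    }

def uav_status(index_status):
    """Condition B+: collapse the group indications into one icon per UAV."""
    return max(group_statuses(index_status).values(), key=SEVERITY.__getitem__)

# Example: a single Warning on the downlink propagates to the communication group
# and to the UAV-level icon.
statuses = {name: "Intact" for names in GROUPS.values() for name in names}
statuses["downlink"] = "Warning"
print(group_statuses(statuses)["communication"], uav_status(statuses))
```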

Results

Results were similar to those in condition A. Operators reported high workload and a feeling of losing control once the 17th UAV was added.

Condition B+—Single Indicator for Each System

Task

1:10–1:20—the operator started by supervising 10 UAVs. During the study, UAVs were added gradually until one operator was supervising 20 UAVs, with a change in the way intact indications were displayed.

Suggestion—design change from condition B

The four group indications used in Condition B for each UAV were replaced with one intact indication (icon) for each UAV placed on the command and control map. The operator could click on the icon and view the detailed form. In addition, an alert was added for location deviation.

Results

Results were similar to those in condition A, except for the time to detect deviations from the route, which was dramatically shortened to 5 s on average (instead of 3 min in previous conditions). Operators succeeded in supervising 15–17 UAVs.

Condition C—Addition of Malfunction/Health Problem Trends

Task

1:10–1:20—Operators started by supervising 10 UAVs. During the study, UAVs were added gradually until one operator was supervising 20 UAVs. The major change was the addition of a graph display to identify trends in health measures.

Suggestion—design change from condition B+

For each indicator, a graph displaying its measured values and intact indications was added. The graph was displayed once the user clicked on the measure value in the health data form. The purpose of this condition was to evaluate whether time-based information on any specific indication could decrease the time it took operators to detect the fault source (i.e., it aimed to facilitate better malfunction source detection, see Figure 3).

Results

Results were the same as in condition A, except for the major improvement in the time to detect the fault source, which decreased to less than 5 min in 95% of the cases (instead of an average of 13 min in all previous conditions). The ability to view the behavior of the health-related measure over time helped the operators understand and detect the source of the fault. The downside of this measure is that it is only suitable for mature systems where the number of faults is relatively small and there is a clear, well-established link between the health-related measure and its source. Operators succeeded in supervising up to 10 UAVs, mainly because more attention was allocated here to detecting the source of the fault than previously, and there was not enough time for all the faults to be further examined.

Study 1 Summary

After performing the first study with its three main conditions, it is possible to claim that one experienced operator can supervise up to 15 UAVs efficiently using the level of automation, the indication tools and the task characteristics described in conditions B and B+. Nevertheless, since health monitoring is only part of mission demands, it was necessary to further investigate the issue of mission and payload management control in Study 2.

Study 2

Problem: The “classical” ratio concern; there was a requirement to increase the number of UAVs that one operator can control.

Study question: How many UAVs can one operator control (mission and payload management) efficiently and how can this ratio be improved.

Participants: Ten highly experienced male operators (SMEs) with similar military background and skills. They had 3–10 years of experience (mean: 5.6) —7 SMEs in operating military UASs and 3 SMEs in operating other types of military electro-optical sensors. Their age ranged from 23 to 30 (mean: 26). SMEs were compensated for their time.

Condition A—One to One vs. One to Two

Task

1:1 vs. 1:2—One operator tracks a moving target with one UAV vs. two UAVs.

Performance in tracking a moving target with two UAVs (Twin UAV setup) was compared to performance with a single UAV in an urban environment. Twin UAV is a “pair of UAVs” handled and operated as one system by one operator (see Figure 4). Either UAV can serve as the master while the other one is slaved, and vice versa. Hence, only one payload needs to be controlled at a time, and the enslaved UAV positions itself relative to the master. The UAVs are controlled at a high level of automation via payload management. Various parameters need to be set by the operator for each UAV prior to each sortie and can be changed during the sortie (altitude, turn radius, camera field-of-view, and position shift angle between the UAVs).


Figure 4. Twin UAV operation screen configuration and operational device (mouse).
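The exact control law of the Twin UAV setup is not described here; the sketch below shows one plausible way a slaved UAV could maintain a fixed position shift angle relative to the master while both orbit the same target, purely as an illustrative assumption.

```python
import math

def slaved_position(target_xy, master_angle_deg, shift_angle_deg,
                    slave_turn_radius, slave_altitude):
    """One plausible way a slaved UAV could place itself relative to the master:
    both orbit the same target, and the slave keeps a fixed angular offset
    (the position shift angle) ahead of or behind the master on its own orbit.
    This geometry is our illustrative assumption, not the system's documented logic.
    """
    angle = math.radians(master_angle_deg + shift_angle_deg)
    x = target_xy[0] + slave_turn_radius * math.cos(angle)
    y = target_xy[1] + slave_turn_radius * math.sin(angle)
    return x, y, slave_altitude

# Hypothetical example: the slave trails the master by 90 degrees on a 300 m orbit
# at an altitude of 600 m.
print(slaved_position((0.0, 0.0), master_angle_deg=30, shift_angle_deg=-90,
                      slave_turn_radius=300, slave_altitude=600))
```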

Procedure

The experiment consisted of six experimental scenarios. Each scenario was performed twice, once with one UAV and once with the Twin UAV configuration. The order was counterbalanced among participants. Each trial began with the target vehicle in a specified position. The vehicle then started moving and the operator was asked to keep it in sight as continuously as possible (a lock-on Target feature could be used when the target was visible). Task difficulty depended on the number of similar vehicles in the scene (varied from 5 to 9) and on obstructions when buildings occluded the target. The target vehicle looked similar to other vehicles but had a unique mark. The four easier scenarios lasted 3 min each and the two more difficult ones lasted 4 min. Instructions about the user interface and the task, a demonstration, and four Twin UAV and one single UAV training trials preceded the experimental phase.

Results

Sampling ratio (time spent in “Lock-on target” mode relative to the total duration of the scenario) was significantly (p < 0.05) higher when participants used the Twin UAV (average 0.42, SD 0.12) than the single UAV (average 0.31, SD 0.04). No significant interaction was found between scenario and UAV setup (twin vs. single). Figure 5 shows the results for each participant.


Figure 5. Comparison of lock-on time (i.e., the proportion of time during which the target was visible and locked by at least one UAV) with “Twin UAV” setup and with a single UAV, by participant.
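For clarity, the sampling ratio reported above can be computed as follows; the interval representation of the lock-on periods is our own illustrative assumption.

```python
def sampling_ratio(lock_on_intervals, scenario_duration_s):
    """Proportion of scenario time spent in 'Lock-on target' mode, as used in the
    comparison above. The (start, end) interval representation, in seconds, is a
    hypothetical way of recording the lock-on periods."""
    locked = sum(end - start for start, end in lock_on_intervals)
    return locked / scenario_duration_s

# Example: 75 s of accumulated lock-on over a 3-min (180 s) scenario gives ~0.42.
print(sampling_ratio([(10, 40), (60, 85), (120, 140)], 180))
```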

Condition B—One to Three

Task

1:3—Here a more complex operational mission was used; one operator was required to guard a building, track a suspicious vehicle, and scan the shoreline using three UAVs (Tri-UAV), see Figure 6.


Figure 6. Tri-UAV display, the screen is divided into four areas: video feeds of the three UAVs marked with a colored frame for identification (upper left, lower left, and lower right windows), and a command and control map (upper right window). Note that all three UAVs are shown on the map.

Procedure

The Tri-UAV display contained video feed windows for each payload and a common map. The operator controlled the display using a mouse and a keyboard. The mouse enabled the operator to move the cursor between the map window and the video feed windows, and point to a specific location.

The task took place in a densely built urban environment. The operator had to: (a) guard a building with several entrances, (b) track a suspicious vehicle, and (c) scan the shoreline. All entrances and exits from and to the building were to be reported. When a suspect vehicle exited the building, the operator had to track it. Two UAVs were allocated to supervising the building entrances while one UAV was used for surveillance (lock-on target to track moving targets could be used). Each scenario was 4 min long and contained eight events that the participant had to attend to; events did not appear at the same time in the scene.

Results

Operators demonstrated difficulties in simultaneously processing information from three separate locations/video feed sources and did not manage to guard the building while performing additional tasks such as tracking the moving vehicle or scanning the shoreline. Only three operators out of the 10 were able to complete the scenarios with some degree of success; the remaining seven had difficulties performing the task and quit before the scenarios ended.

Study 2 Summary

Experienced operators seemed to cope well with two video feed windows when using the Twin UAV setup. Interestingly, without being instructed to do so, operators intuitively enhanced their performance by utilizing the dual setup. One method that was used frequently by the operators was to choose a wide field-of-view (FOV) angle on one UAV for overview, and a narrow angle on the other UAV for recognition and tracking of the target. Furthermore, in this type of configuration, since the area of operation was limited, operators rarely used the map. In general, operators thought that handling two sources was difficult enough and that handling three devices might be too demanding. This proved correct in condition B, when operators had difficulties processing the information from the three video feed sources. Note also that the area of operation in condition B was wider. In order to succeed, operators stated that there was a need for automated supporting tools. Following these results, in Study 3 an attempt was made to facilitate the task by providing the operators with a toolkit containing situation awareness enhancing indicators and decision-support tools.

Table 1 summarizes Studies 1 and 2 as described above. For each study, cognitive task demand and automation level were added in separate columns (in line with Cummings et al., 2007a). See Table 2 for the levels of automation legend.


Table 1. Summary of studies 1 and 2.


Table 2. Levels of Automation (LOA) (cf. Cummings et al., 2007a).

In the following studies performance of a team vs. a single operator was compared in an attempt to understand the feasibility and advantage of each mode, in the UAS domain (Study 3) and in the UGV (unmanned ground vehicle) domain (Study 4). Utilizing the DRM, and based on the findings of the previous studies, tools and visual aids were added to the interface, as specified in each study.

Study 3

Problem: Identify advantages and disadvantages of an individual operator vs. a team. The performance of one operator was compared to a team of (2–4) operators controlling the same number of UAVs (up to four UAVs). Operators had to observe a building and report on vehicles entering and exiting the building. Vehicles exiting the building that had specific characteristics had to be further processed.

Study Question: Will a team of operators controlling a number of UAVs perform better than one operator controlling the same number of UAVs?

Condition A—Two Operators vs. One

Task

2:2 vs. 1:2—Two operators sharing control of two UAVs compared to one operator controlling two UAVs.

Participants

Six highly experienced male operators (SMEs) with similar military background and skills participated in this condition. They had 2–7 years of experience in operating military UASs (mean: 4), and their age ranged from 23 to 27 (mean: 24.8).

Procedure

Operators had to observe a building and report on vehicles entering and exiting the building. Vehicles exiting the building that had specific characteristics (i.e., suspicious vehicles) had to be further processed (track and report). Two phases were conducted; in the first phase, no additional unique interaction tools were provided. After the first phase, based on the findings from Study 2 and the difficulties operators had in performing the task, supportive tools were provided only to the single operator, in the form of a toolkit. The toolkit consisted of spatial anchoring capabilities such as “sketch” and “revisit,” which enabled the operator to request the system to automatically follow a pattern (perform a sketch) or jump through a list of points (perform a revisit cycle) by generating (using mouse clicks) a list of points on top of the payload image. Similarly to Study 2's Twin UAV setup, “Payload coupling” enabled the operator to enslave one UAV to the other. Finally, “Camera guide” enabled the operator to fly the UAV by following its camera (see Oron-Gilad et al., 2011 for a detailed description of several tools).

Phase 2 of the study aimed to examine whether the toolkit could raise the single operator's performance above that of the team of two operators.
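To illustrate the kind of automation behind the toolkit, the sketch below shows a simplified “revisit” cycle stepping the payload through an operator-clicked list of points; the dwell time and the payload-pointing callback are hypothetical, as the actual tool implementation is not detailed here.

```python
from itertools import cycle

def revisit_cycle(points, dwell_s, point_payload_at, total_time_s):
    """Illustrative 'revisit' behavior: step the payload line-of-sight through an
    operator-clicked list of image points, dwelling on each before jumping to the
    next. The dwell time and the point_payload_at callback are our assumptions."""
    elapsed = 0.0
    for point in cycle(points):
        if elapsed >= total_time_s:
            break
        point_payload_at(point)  # command the camera toward the next anchor point
        elapsed += dwell_s

# Hypothetical usage: three points of interest revisited every 5 s for one minute.
revisit_cycle([(120, 80), (340, 200), (510, 95)], dwell_s=5.0,
              point_payload_at=lambda p: print("aim payload at", p),
              total_time_s=60.0)
```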

Results

Results are displayed in Table 3.


Table 3. Performance measures—Team of 2 vs. one operator controlling two UAVs.

The team reported that the mission was calm to the point of being boring. The single operator reported that the mission was challenging but not overloading. The results of the team were similar to the results of the single operator using the toolkit. Multiple reporting of the same incident and longer mission stabilization time occurred in the team condition.

Condition B—Three Operators vs. One

Task

3:3 vs. 1:3—A team of three operators sharing control over three UAVs was compared to one operator controlling three UAVs. The same scenarios as in Condition A were used; the individual operator could use the toolkit, while the operators in the team could not.

Participants

Eight highly experienced male operators (SMEs) with similar military background and skills participated in this condition. They had 4–8 years of experience in operating military UASs (mean: 5.4), and their age ranged from 25 to 30 (mean: 26.9). SMEs were compensated for their time.

Results

Results are displayed in Table 4.


Table 4. Performance measures—Team of 3 vs. one operator controlling 3 UAVs.

The team performed significantly better (p < 0.01) than the single operator; however, they again had more occasions of multiple reporting of the same incident and increased stabilization time.

Condition B+—Four Operators vs. One

Task

4:4 vs. 1:4—A team of four operators sharing control of four UAVs was compared to one operator controlling four UAVs.

Participants

Five highly experienced male operators (SMEs) with similar military background and skills participated in this condition. They had 3–5 years of experience in operating military UASs (mean: 3.83), and their age ranged from 25 to 27 (mean: 25.5).

Results

This setup was problematic to analyze. In the one-operator condition, single operators felt lost looking at four video feeds, and in some cases they just looked at three UAVs or fewer (hence they neglected the fourth UAV). In the team condition, coordination among the operators took a long time, involving incessant verbal communication and numerous multiple reports.

Study 3 Summary

One operator could not control more than three UASs, even with additional aids. Furthermore, without facilitating decision support tools, it was difficult and ineffective for a team of four operators to control four UASs as well. The implications of this study were twofold: each single operator can benefit from designated tools that assist in conducting the mission, e.g., coupling or sketch and revisit; and a team of operators must be familiarized with a set of rules or provided with a set of tools to facilitate collaboration. Otherwise, they are prone to report multiple times on the same incident and are not fully aware of each other's actions. Following these findings, several novel tools and displays were designed to facilitate payload switching among members of the team (see for example Porat et al., 2011). Probably the most successful facilitating tool was the “Castling Rays,” a switching decision aid enabling operators to see at a glance which UAS has the best view of “their” target at any given moment (Porat et al., 2010).
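The sketch below conveys the kind of reasoning a switching aid such as the “Castling Rays” could visualize, ranking UASs by how well each currently sees a given target; the scoring rule (in-view targets ranked by distance) is our own assumption, chosen only to illustrate the idea of a switching decision aid.

```python
import math

def best_viewing_uas(target_xy, uas_states, max_range):
    """Rank UASs by how well each currently sees a given target and return the best
    candidate. The scoring rule (targets in the field of view, ranked by distance)
    is our own assumption, used only to illustrate a switching decision aid.

    uas_states maps a UAS id to a dict with 'position' (x, y) and 'target_in_fov'.
    """
    candidates = []
    for uas_id, state in uas_states.items():
        dx = target_xy[0] - state["position"][0]
        dy = target_xy[1] - state["position"][1]
        distance = math.hypot(dx, dy)
        if state["target_in_fov"] and distance <= max_range:
            candidates.append((distance, uas_id))
    return min(candidates)[1] if candidates else None

# Hypothetical example: UAS 'B' is closest and has the target in its field of view.
print(best_viewing_uas((500, 500),
                       {"A": {"position": (0, 0), "target_in_fov": True},
                        "B": {"position": (450, 480), "target_in_fov": True},
                        "C": {"position": (490, 505), "target_in_fov": False}},
                       max_range=2000))
```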

Study 4

Problem: There was a requirement to increase the number of UGVs that one operator can control. The main problem with UGVs is that their level of autonomy is lower; hence, more attention needs to be allocated to navigation and driving issues than with UAVs. At the time of testing, the problem domain was still within the realm of multiple operators controlling a single system vs. a single operator. We compared the performance of two operators controlling (navigating and observing) one UGV to one operator controlling one UGV.

Study Question: Will two operators controlling one UGV perform better than one operator controlling one UGV in a fence-scanning task?

Initial State

Task

2:1—Two operators controlled one UGV: one operator performed the navigation task and one operator performed the observation task, while scanning a border fence.

Participants

Six highly experienced participants, reserve soldiers in an elite engineering unit with experience in controlling remote robots such as ANDROS and Mini-ANDROS, participated in this condition. They had 2–6 years of experience in operating military UGVs (mean: 3.5), and their age ranged from 25 to 30 (mean: 26.8). All were compensated for their time.

Procedure

Each UGV had a navigation camera and an observation camera for scanning the fence for obstacles and hazards (Figure 7). The UGV moved very slowly (7 km/h). One operator performed the navigation task (including health monitoring—alerts were both color coded and audible), and one performed the observation task. The experimental trial took about an hour. In this period, a total of 100 events occurred (obstacle, hazard on the fence, fault in the vehicle).


Figure 7. Observing camera (above) and navigating camera (below)—Condition A.

Results

Results are displayed in Table 5.


Table 5. Performance measures of the initial state.

Performance was acceptable, with a relatively low rate of missed obstacles. However, there were synchronization problems between the two operators; for example, there were delays in stopping the vehicle, which usually occurred because the observer had to identify a hazard and notify the navigator, who then had to stop the vehicle.

Condition A

Task

1:1—one operator controlled one UGV, performing both the navigation and the observation tasks (as shown in the display in Figure 7).

Participants

Three highly experienced participants, reserve soldiers in an elite engineering unit with experience in controlling remote robots such as ANDROS and Mini-ANDROS, participated in this condition. They had 2–4 years of experience in operating military UGVs (mean: 2.7), and their age ranged from 25 to 28 (mean: 26.3).

Results

Performance measures between the “Initial State” and “Condition A” were compared. Results are displayed in Table 6.


Table 6. Performance measures—initial state vs. condition A.

One of the main problems in this condition was that operators missed pitfalls, which stopped the vehicle and increased the time-based performance measures to a large extent.

Condition B

Task

1:5—one operator observed cameras from five different UGVs, scanning the fence for obstacles and hazards.

Participants

The same participants as in condition A.

Results

Performance measures between the “Initial State” and “Condition B” were compared. Results are displayed in Table 7.


Table 7. Performance measures—initial state vs. condition B.

Study 4 Summary

It was too complicated for one operator to perform the observation and navigation tasks simultaneously (as in Condition A). These two task types require different skills, and performing them at the same time generated major switching costs. However, when operators performed only one type of task (observation or navigation), their performance improved.

Based on these findings, several novel tools and displays were designed to facilitate the navigation task, as shown in Figure 8. Side cameras were added, a width pole display aided the operator in estimating the width of the vehicle, and a path predictor displayed a virtual path that the navigator could follow. An initial examination found this setup to decrease navigation time and improve navigation accuracy. This needs to be further assessed; however, it could be extremely useful, especially when there are communication delays in displaying the online video feed from the navigation cameras.


Figure 8. Display of the navigation cameras with additional supporting tools and displays.

Table 8 summarizes Studies 3 and 4 as described above. For each study, cognitive task demand and automation level were added in separate columns (in line with Cummings et al., 2007a). See Table 2 for the levels of automation legend.


Table 8. Summary of studies 3 and 4 (MOMU).

Summary and Discussion

In general, our results suggest that one experienced operator can supervise (system health and status) up to 15 UASs efficiently using moderate levels of flight control automation. Concerning controlling UASs (mission and payload management), one experienced operator cannot control more than three UASs with the level of complexity and automation that was examined. Providing the operator with various display aids and decision support tools does improve the performance of a single operator (as in Study 3) but did not raise the modal number to a higher extent.

Automation level, availability of decision aids, operators' experience, complexity and criticality of the mission, operational tempo, and cognitive resources and demands all influence the number of systems that one operator can control. For this reason, comparison across studies is often complicated and inaccurate. However, considering these limitations, our findings do resemble those of previous studies in confirming that single operators are able to control more remote vehicles as they are provided with increasing automated decision support. Given some automated navigation assistance and management-by-consent automation in the mission management loop, an operator was able to control 4–5 vehicles (e.g., Ruff et al., 2004; Dunlap, 2006; Cummings et al., 2007b). A leap in the number of vehicles that one operator could control was only seen when management-by-exception was introduced, increasing the number to 8–12 vehicles (e.g., Lewis et al., 2006; Cummings and Guerlain, 2007). Here, we were able to show via Study 1 that a single operator can achieve an even higher ratio of operation, between 15 and 17 systems, but only on a limited task or mission component (e.g., health monitoring).

This finding may become more relevant in the future if organizations change the way they allocate and recruit operators. Nowadays, most organizations, the military among them, do not want to parse their operators' mission into “small” subtasks and create high levels of skill in fine-grained subtasks of the mission among operators (i.e., train people to be experts only on a single component of the mission, such as taxi or health supervising). The current approach can be justified when considering the danger of having operators lacking skill while conducting dynamic, time-critical, and situation-critical missions. However, the way operator allocation is done today, operators must maintain a certain level of proficiency in all aspects of their mission. Evidently, this setting dictates that the level of automation of the unmanned system and the use of decision aids become key considerations.

Human operators are vital in this critical, high-risk and high-demand environment. Keeping the human in the loop, mostly for planning, re-planning, and control, or at least for being able to take over in case of automation malfunction, is essential in this domain. Therefore, fully autonomous operations (automation level VI) are not expected any time soon. Using intermediate levels of automation (i.e., supervisory control) will not enable operators to exceed the control of a few systems. Figure 9 (left) was taken from Cummings et al. (2007b) and shows that the optimal bound they found was between 2 and 4 vehicles. The left region is primarily constrained by operational demands, while the right region is dominated by human performance limitations. Figure 9 (right) is taken from an operations research study conducted in parallel to Studies 1–4 under similar urban area conditions (Shaferman and Shima, 2009). It shows that adding the first and second UAV had the most significant influence on mission performance. Above four systems, the added value of more assets in terms of area covered became negligible. Hence, organizations need to identify whether there are justified operational cases where one-to-many ratios of more than four are needed. If those cases are sparse, then perhaps more design effort can be geared toward sharing of assets among operators (MOMU) in an efficient and effective way.


Figure 9. Left—operator capacity as a function of mission constraints (cf. Cummings et al., 2007b). Right—impact of UAS number, from an OR study conducted in parallel to our studies (Shaferman and Shima, 2009).

Concerning the operation of UGVs, when the operator performed only one task, as in Study 4, condition B (observation task), performance was satisfactory, since the operator focused primarily on maintaining awareness of obstacles and hazards. However, when the operator had to navigate the vehicle and observe the fence (as in Study 4, condition A), the task was too complicated to perform. Dynamic task switching between different functions resulted in greater cognitive workload for the operator than performing only one type of task. With both UASs and UGVs, the human and the automated systems are geographically separated and therefore face difficulties inherent in remote perception, such as overcoming the “keyhole” or “soda straw” effect (Voshell et al., 2005). Controlling and navigating UGVs is more complex than UASs with regard to spatial perception. While GPS technology may be very effective in providing UASs with positioning information that meets their navigational needs, its use in UGVs may be limited by reliability and accuracy constraints (Chaimowicz et al., 2005). For example, a positioning error of one or two meters may have little effect when controlling a UAS, but it could have crucial consequences when navigating a UGV.

Successful interaction between humans and automated systems is influenced by many factors, including vehicle characteristics (air, ground), task characteristics (complexity, number of vehicles controlled, time pressure, workload), environmental characteristics (terrain characteristics, quality, obstacles), and technological constraints (available bandwidth, communication delays). Thus, design specifications of automated decision support aids will differ according to the unique needs of the human operator in each situation. Indeed, the decision support tools that were developed in this study for the aerial and the ground domains differ in their design and implementation (e.g., the width pole display for the ground vehicle), but there are also many commonalities in essence (e.g., coupling of vehicles is suitable for both aerial and ground vehicles).

In MOMU environments, as seen in Study 3, when the tasks are similar or when the areas of interest overlap (i.e., there is a connection between the video feeds), one operator has an advantage over a team that needs to collaborate and coordinate. However, when there is no connection between the video feeds, a team has an advantage over a single operator. Thus, one of the considerations in preferring one operator over a team is the amount of overlap between the different video sources covered by the payloads. Taking these findings to a practical level, in MOMU operational settings we strive for a consistent ratio of one operator controlling two UASs with some flexibility, i.e., controlling up to three UASs per operator on demand, and supervising up to six UASs where the covered areas of the UASs are related.

Where Can We Go from Here to Broaden the Understanding and Added Value of MOMU Environments?

The first notion is that automation is a tricky tool. When not tailored to the task, it can easily cause high operator workload and challenge the “keep the human in the loop” principle. Although this statement may seem true for most human-system interfaces, when applying automation in critical and complex environments such as MOMU, a first step would be to perform a thorough behavioral and cognitive task analysis to understand the cognitive requirements of the task (e.g., decisions, situation awareness, cues, judgment points). Once the different tasks, requirements, and possible errors are understood, tailoring the display design and the automation level to the desired setup becomes possible. It should be acknowledged that different sections/parts/sub-tasks of the entire mission are perceived differently at separate stages of the mission process. For example, different automation needs apply to locomotion between areas of interest as opposed to loitering over a specific target area. This implies that the control loops of Cummings et al. (2007a) could be further divided into even smaller chunks, and for each chunk one should match the required and desired automation level.

The second notion is that the scenarios used in our studies assumed similarity: all operators had the same type of experience and training, and all systems were alike. While this is a typical mode of operation, it is evident that this is just one possibility. In the U.S. military operations in Iraq, for example, more than 100 UASs of 10 different types were used (Office of the Under Secretary of Defense for Acquisition, 2004). The question then becomes how MOMU operations may vary when there are multiple types of vehicles and operators with various training and capabilities. One also needs to reconsider the traditional mission allocation. Recent studies have tried to define the qualifications and training required of an operator who is expected to control an increasing number of UASs. Parasuraman et al. (2014) discussed the possibility of selecting and training operators according to their molecular genetics. Perhaps now is the time to initiate specialization of operator roles. In order to do so, it would be necessary to revisit the main operational tasks and reallocate them in view of mission benefit. Changes in the function allocation and the nature of task differentiation between human operators and unmanned systems could significantly alter the cognitive loads of the operators when performing the mission (Cuevas et al., 2007). We should introduce flexibility into our rigid, traditional “task thinking,” and let go of beliefs that tie us down and hinder progress: must human operators fly the platform? Can we, mentally rather than technically, enslave the platform to the mission needs?

A third direction would be to develop tools and decision-making aids. In our studies, tools and techniques that may facilitate operators in MOMU environments were introduced (e.g., Porat et al., 2010). Tool development was done in a bottom-up approach, i.e., based on needs retrieved from SMEs and geared toward solving particular challenging operational situations. Since the tools have not yet been tested in real-world settings, it would be interesting to examine how they integrate into UAS MOMU environments and affect the metrics of performance. Fern et al. (2011), for example, proposed other alternatives to facilitate UAS MOMU operations. It would also be interesting to examine whether such tools can be transferred to other MOMU settings such as ground vehicles or drones.

Fourth, while our studies focused on the allocation among operators in one team conducting a single scenario, one can start looking at the broader picture: how to break operations into teams, how to assign and allocate the correct number of assets and operators to each team, and how to coordinate among teams of MOMU operators.

All these suggestions lead toward the notion that a more top-down approach needs to be developed in order to provide a coherent way to distribute responsibilities and tasks in MOMU environments. This direction of adjusting resources and personnel according to mission needs is in line with future intentions and models in other domains. For example, in the medical domain, the recent NHS report “Five Year Forward View” (NHS England, 2014) argues that England is too diverse for a “one size fits all” care model; services need to be integrated around the patient and support their changing needs. Different local health communities will be able to decide which care delivery model best supports their needs, such as a multispecialty community provider model (a multi-disciplinary team that can include different specialties such as nurses, therapists, and other professionals, combined with the latest digital technologies) or a specialized care model (a practice that specializes in one area, such as cancer, and provides care only for these patients). All of this supports the main goal, which is providing the best care for patients. Translating this to our domain and its task-specific requirements, we can reach the extreme cases where a team of operators controls only one asset or, vice versa, where a single operator controls up to 15 assets simultaneously (e.g., taxi).

Finally, with regard to Human-Robot Interaction, it is inevitable that people of varied abilities and skills will be surrounded by multiple platforms of various kinds and autonomy levels. Much of what is now known from the realm of UASs can be used to facilitate efficient asset sharing and mission success among other populations. To mention just one example, in the not-so-distant future the elderly community will be using robotic assistants of various kinds, whether operated by caregivers or by the users themselves. Many of the questions raised here about operators' skills, tools to facilitate cooperation and sharing, and mission accomplishment will be relevant to these domains as well.

Author Contributions

Conceived and designed the experiments: MR, JS. Analyzed and interpreted the data: TP, TO, MR, JS. Wrote the paper: TP, TO, MR.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. ^Office of the Secretary of Defense, “Unmanned Aircraft Systems (UAS) Roadmap, 2005-2030.” Washington DC: DoD.

2. ^In 1954, Taylor and Birmingham published a paper in the Proceedings of the Institute of Radio Engineers (now IEEE), titled "A Design Philosophy for Man-Machine Control Systems" (Birmingham and Taylor, 1954). The article discussed the manual control of a submarine, which is a complex control problem because of the massiveness of the boat and the nature of the control surfaces. They also described "quickening," a clever example of how one could augment the display of information to improve the stability of control.

References

Allport, D. A., Styles, E. A., and Hsieh, S. (1994). "Shifting intentional set: exploring the dynamic control of tasks," in Attention and Performance XV, eds C. Umiltà and M. Moscovitch (Cambridge, MA: MIT Press), 421–452.

Birmingham, H. P., and Taylor, F. V. (1954). “A design philosophy for man-machine control systems,” in Proceedings of the Institute of Radio Engineers, Vol. 42 (IEEE), 1748–1758. doi: 10.1109/jrproc.1954.274775

Blessing, L. T. M., and Chakrabarti, A. (2009). DRM, A Design Research Methodology. London: Springer-Verlag. doi: 10.1007/978-1-84882-587-1

Calhoun, G., Draper, M., Miller, C. A., Ruff, H., Breeden, C., and Hamell, J. (2013). “Adaptable automation interface for multi-unmanned aerial systems control: preliminary usability evaluation,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 57 (San Diego, CA), 26–30. doi: 10.1177/1541931213571008

Calhoun, G. L., Ruff, H. A., Draper, M. H., and Wright, E. V. (2011). Automation-level transference effects in simulated multiple unmanned aerial vehicle control. J. Cogn. Eng. Decis. Making 5, 55–82. doi: 10.1177/1555343411399069

Chaimowicz, L., Cowley, A., Gomez-Ibanez, D., Grocholsky, B., Hsieh, M. A., Hsu, H., et al. (2005). “Deploying air-ground multi-robot teams in urban environments,” in Multi-Robot Systems. From Swarms to Intelligent Automata Vol. III (Springer), 223–234. doi: 10.1007/1-4020-3389-3_18

Chen, T., Campbell, D. A., Coppin, G., and Gonzalez, F. (2013). “Management of heterogeneous UAVs through a capability framework of UAV's functional autonomy,” in 15th Australian International Aerospace Congress (AIAC 15) (Melbourne, VIC: Melbourne Convention Centre).

Clare, A. S., Cummings, M. L., and Repenning, N. P. (2015). Influencing trust for human-automation collaborative scheduling of multiple unmanned vehicles. Hum. Factors 57, 1208–1218. doi: 10.1177/0018720815587803

Cooper, J., and Goodrich, M. A. (2008). "Towards combining UAV and sensor operator roles in UAV-enabled visual search," in Presented at HRI '08 (Amsterdam). doi: 10.1145/1349822.1349868

Cuevas, H. M., Fiore, S. M., Caldwell, B. S., and Strater, L. (2007). Augmenting team cognition in human-automation teams performing in complex operational environments. Aviat. Space Environ. Med. 78, B63–B70.

Cummings, M. L., Clare, A., and Hart, C. (2010). The role of human-automation consensus in multiple unmanned vehicle scheduling. Hum. Factors 52, 17–27. doi: 10.1177/0018720810368674

Cummings, M. L., Bruni, S., Mercier, S., and Mitchell, P. J. (2007a). Automation architecture for single operator, multiple UAV command and control. Int. C2 J. 1, 1–24.

Cummings, M. L., and Guerlain, S. (2007). Developing operator capacity estimates for supervisory control of autonomous vehicles. Hum. Factors 49, 1–15. doi: 10.1518/001872007779598109

Cummings, M. L., Nehme, C. E., Crandall, J., and Mitchell, P. (2007b). “Predicting operator capacity for supervisory control of multiple UAVs,” in Innovations in Intelligent Machines-1, eds J. S. Chah, L. C. Jain, A. Mizutani, and M. Sato-Ilic (Berlin; Heidelberg: Springer), 11–37. doi: 10.1007/978-3-540-72696-8_2

Draper, M., Calhoun, G., Ruff, H., Mullins, B., Lefebvre, A., Ayala, A., et al. (2008). “Transition display aid for changing camera views in UAV operations,” in Proceedings of the Humans Operating Unmanned Systems. Available online at: http://conferences.telecom-bretagne.eu/

Dunlap, R. D. (2006). "The evolution of a distributed command and control architecture for semi-autonomous air vehicle operations," in Presented at Moving Autonomy Forward Conference (Grantham).

Fern, L., Shively, J., Draper, M., Cooke, N. J., Oron-Gilad, T., and Miller, C. A. (2011). “Human-automation challenges for the control of unmanned aerial systems,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 55 (Las Vegas), 424–428. doi: 10.1177/1071181311551087

Gao, F., Cummings, M. L., and Solovey, E. T. (2014). Modeling teamwork in supervisory control of multiple robots. IEEE Trans. Hum Machine Syst. 44, 441–453. doi: 10.1109/THMS.2014.2312391

Goodrich, M. A., and Cummings, M. L. (2014). “Human factors perspective on next generation unmanned aerial systems,” in Handbook of Unmanned Aerial Vehicles, eds K. P. Valavanis and G. J. Vachtsevanos (Springer), 2405–2423.

Gu, W., Mittu, R., Marble, J., Taylor, G., Sibley, C., Coyne, J., et al. (2014). “Towards modeling the behavior of autonomous systems and humans for trusted operations,” in 2014 AAAI Spring Symposium Series (Stanford).

Hancock, P. A., Mouloua, M., Gilson, R., Szalma, J., and Oron-Gilad, T. (2007). Is the UAV control ratio the right question? Ergonom. Design 15, 7; 30–31. doi: 10.1177/106480460701500104

Kuechler, B., and Vaishnavi, V. (2008). On theory development in design science research: anatomy of a research project. Eur. J. Inf. Syst. 17, 489–504. doi: 10.1057/ejis.2008.40

Lewis, M., Polvichai, J., Sycara, K., and Scerri, P. (2006). Scaling-up human control for large UAV teams. Hum. Factors Remot. Operat. Vehicles 7, 237–250. doi: 10.1016/S1479-3601(05)07017-7

Mekdeci, B., and Cummings, M. L. (2009). “Modeling multiple human operators in the supervisory control of heterogeneous unmanned vehicles,” in Proceedings of the 9th Workshop on Performance Metrics for Intelligent Systems - PerMIS 09, Association for Computing Machinery (Gaithersburg, MD). doi: 10.1145/1865909.1865911

Monsell, S. (2003). Task switching. Trends Cogn. Sci. 7, 134–140. doi: 10.1016/S1364-6613(03)00028-7

Mosier, K. L., and Skitka, L. J. (1996). “Human decision makers and automated decision aids: made for each other?,” in Automation and Human Performance: Theory and Applications, eds R. Parasuraman and M. Mouloua (Mahwah, NJ: Lawrence Erlbaum Associates), 201–220.

NHS England (2014). Five Year Forward View. Available online at: https://www.england.nhs.uk/wp-content/uploads/2014/10/5yfv-web.pdf

Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (2004). Defense Science Board Study on Unmanned Aerial Vehicles and Uninhabited Combat Aerial Vehicles. Washington, DC: Department of Defense.

Oron-Gilad, T., Porat, T., Fern, L., Draper, M., Shively, J., Silbiger, J., et al. (2011). “Tools and techniques for MOMU (Multiple Operator Multiple UAV) environments; an operational perspective,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 55, 86–90. doi: 10.1177/1071181311551018

Parasuraman, R., Kidwell, B., Olmstead, R., and Lin, M.-K. (2014). Interactive effects of the COMT gene and training on individual differences in supervisory control of unmanned vehicles. Hum. Factors 56, 760–771. doi: 10.1177/0018720813510736

Parasuraman, R., Sheridan, T. B., and Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. 30, 286–297. doi: 10.1109/3468.844354

Porat, T., Oron-Gilad, T., Silbiger, J., and Rottem-Hovev, M. (2010). “'Castling Rays' a decision support tool for UAV-switching tasks,” in CHI 2010 Conference Proceedings (Atlanta). doi: 10.1145/1753846.1754023

Porat, T., Oron-Gilad, T., Silbiger, J., and Rottem-Hovev, M. (2011). “Switch and deliver: display layouts for MOMV (Multiple Operators Multiple Video feeds) environments,” in IEEE CogSIMA 2011 Conference Proceedings (Miami, FL). doi: 10.1109/cogsima.2011.5753457

Ruff, H. A., Calhoun, G. L., Draper, M. H., Fontejon, J. V., and Guilfoos, B. J. (2004). "Exploring automation issues in supervisory control of multiple UAVs," in Presented at 2nd Human Performance, Situation Awareness, and Automation Conference (HPSAA II) (Daytona Beach, FL).

Ruff, H. A., Narayanan, S., and Draper, M. H. (2002). Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles. Presence Teleoper. Virtual Environ. 11, 335–351. doi: 10.1162/105474602760204264

Saqer, H., de Visser, E., Emfield, A., Shaw, T., and Parasuraman, R. (2011). “Adaptive automation to improve human performance in supervision of multiple uninhabited aerial vehicles: individual markers of performance,” in Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting (Las Vegas). doi: 10.1177/1071181311551185

Shaferman, V., and Shima, T. (2009). “Task assignment and motion planning for multiple UAVs tracking multiple targets in urban environments,” in AIAA Guidance, Navigation, and Control Conference (Chicago, IL). doi: 10.2514/6.2009-5778

Sheridan, T. B., and Verplank, W. (1978). Human and Computer Control of Undersea Teleoperators. Cambridge, MA: Man-Machine Systems Laboratory; Department of Mechanical Engineering, MIT.

Squire, P. N., and Parasuraman, R. (2010). Effects of automation and task load on task switching during human supervision of multiple semi-autonomous robots in a dynamic environment. Ergonomics 53, 951–961. doi: 10.1080/00140139.2010.489969

Vaishnavi, V., and Kuechler, W. (2004). Design Science Research in Information Systems. Available online at: http://www.desrist.org/desrist

Voshell, M., Woods, D. D., and Phillips, F. (2005). “Overcoming the keyhole in human-robot coordination: simulation and evaluation,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 49, 442–446. doi: 10.1177/154193120504900348

Wilson, G. F., and Russell, C. A. (2007). Performance enhancement in an uninhabited air vehicle task using psychophysiologically determined adaptive aiding. Hum. Factors 49, 1005–1018. doi: 10.1518/001872007X249875

Keywords: unmanned aerial systems, control ratio, UAV, decision support systems, DSS, automation, macrocognition, human factors

Citation: Porat T, Oron-Gilad T, Rottem-Hovev M and Silbiger J (2016) Supervising and Controlling Unmanned Systems: A Multi-Phase Study with Subject Matter Experts. Front. Psychol. 7:568. doi: 10.3389/fpsyg.2016.00568

Received: 29 October 2015; Accepted: 06 April 2016;
Published: 24 May 2016.

Edited by:

Paul Ward, University of Huddersfield, UK

Reviewed by:

Joseph Roland Keebler, Wichita State University, USA
Jessie Chen, US Army Research Laboratory, USA

Copyright © 2016 Porat, Oron-Gilad, Rottem-Hovev and Silbiger. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Talya Porat, talya.porat@kcl.ac.uk

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.