A Computer Scientist and a Research Psychologist from the Human Systems Integration Branch (ANG-E5B), located at the William J. Hughes Technical Center (WJHTC) in Atlantic City, NJ, introduced an innovative system called CASPER (Computer-Automated Simulation Pilot). The program aims to integrate automated speech recognition into the FAA's simulation systems.
Several Mike Monroney Aeronautical Center (MMAC) members received a demonstration of the "ghostly" application. Nelson Brown and Jonathan Rein of the WJHTC's Human Systems Integration Branch (ANG-E5B), who are responsible for developing CASPER, demonstrated its capabilities on September 25, 2024, for the FAA Academy's Air Traffic Control (ATC) Training Division and the mini-National Airspace System (NAS) development team.
Roosevelt McLemore, Tech Ops Training Division Manager (AMA-400); Daniel L. Smith, Division Manager of the NAS Technical Services Division (AMA-900); and several technicians and instructors attended and were among the first to try out the new CASPER system. Elizabeth Waddle, AMA-920 Manager, and Shayne McGuffie, FAA Plans & Programs Manager (AMA-520), got a chance to interact directly with the system, speaking controller commands into CASPER so the friendly ghost could demonstrate its ability to interpret ATC instructions directed to an absent simulator pilot. CASPER converted the instructions into machine-readable commands, executed the resulting vector changes in the simulated aircraft, and then spoke them back to the controller. CASPER offers two modes. The first aids a busy simulation pilot by providing suggestions: it converts the Air Traffic Controller's commands into machine-readable form to change the aircraft vector as described above.
However, in suggestion mode, the sim pilot must still press Enter on the workstation entry screen and then speak the readback to the Air Traffic Controller. This aid allows the sim pilot to handle larger traffic volumes, which is the human factors element being researched.
The second mode handles everything, including the readback to the controller, in up to five different voices for five different simulated aircraft.
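The article does not describe CASPER's internals, but the two-mode behavior above can be illustrated with a minimal sketch. All names here (`VectorCommand`, `handle_instruction`, the mode strings) are hypothetical and are not CASPER's actual interface:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class VectorCommand:
    """A parsed controller instruction (hypothetical structure)."""
    callsign: str
    action: str   # e.g. "heading", "altitude", "speed"
    value: int

def handle_instruction(cmd: VectorCommand, mode: str,
                       confirm: Optional[Callable[[VectorCommand], bool]] = None) -> str:
    """Dispatch a parsed instruction in one of two modes.

    "suggest": present the machine-formatted command and wait for the
    sim pilot to accept it (press Enter); the pilot speaks the readback.
    "auto": apply the vector change and return a readback string that a
    synthesized voice would speak for that aircraft.
    """
    if mode == "suggest":
        accepted = confirm(cmd) if confirm else False
        return "applied" if accepted else "pending"
    if mode == "auto":
        return f"{cmd.callsign}, {cmd.action} {cmd.value}"
    raise ValueError(f"unknown mode: {mode}")
```

In suggestion mode the human stays in the loop, which is what lets researchers measure how much extra traffic a single sim pilot can absorb.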
CASPER is an Artificial Intelligence (AI)-based voice recognition and response system with a front-end neural net that captures Air Traffic Controller messages intended for the pilot. Those messages are transformed into a digital format that can prompt a Remote Pilot Operator (RPO) with a machine-formatted suggestion for the readback to the controller, or CASPER can execute the request and respond to the controller independently. It is designed to support research testing using human-in-the-loop (HITL) simulations in a busy airspace sector with insufficient simulation pilots. CASPER can also be used in the mini-NAS, where RPOs are not readily available but sim pilot workstations are.
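The step from a captured voice message to a machine-formatted command can be sketched as a small parser. The real CASPER front end uses a neural-net speech recognizer; the toy grammar below, with its hypothetical function and pattern names, only illustrates the idea of mapping transcribed phraseology to structured commands:

```python
import re
from typing import Optional

# A couple of common ATC phraseology patterns (illustrative only).
PATTERNS = [
    (re.compile(r"turn (left|right) heading (\d{3})"), "heading"),
    (re.compile(r"(climb|descend) and maintain (\d+)"), "altitude"),
]

def parse_transcript(callsign: str, text: str) -> Optional[dict]:
    """Map a transcribed controller phrase to a structured command,
    or return None if the phrase is not recognized."""
    for pattern, action in PATTERNS:
        match = pattern.search(text.lower())
        if match:
            return {"callsign": callsign, "action": action,
                    "qualifier": match.group(1), "value": int(match.group(2))}
    return None
```

A structured command like this is what could then drive the vector change in the simulated aircraft or be shown to an RPO as a suggestion.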
During the tour and demonstration, attendees identified new potential use cases, such as letting ATC students practice specific tasks as they go through certain stages of training. The WJHTC has already used CASPER to generate interest among high school students, and the mobile demo platform could potentially be used in some of the MMAC STEM outreach initiatives. The system's prompt mode lets students give directions to the pilots: all they have to do is read the suggestions, and CASPER does the rest, changing the simulated aircraft's vector and responding to them as would occur in real life. The system currently responds in five different voices, using a different one for each aircraft, but can use many more.
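The one-voice-per-aircraft behavior amounts to assigning each new callsign a distinct voice from a pool. A minimal sketch (hypothetical names; the article does not say how CASPER assigns voices):

```python
from itertools import cycle

def make_voice_assigner(voices):
    """Return a function that gives each new aircraft callsign its own
    synthesized voice, cycling through the pool if the traffic count
    exceeds the number of available voices."""
    pool = cycle(voices)
    assigned = {}

    def voice_for(callsign):
        if callsign not in assigned:
            assigned[callsign] = next(pool)
        return assigned[callsign]

    return voice_for
```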
The MMAC appreciates its partnership with the William J. Hughes Technical Center and the opportunity to demo the innovative technologies the Center has developed as it considers new ways to further improve training for air traffic students.