How cutting-edge tech interfaces could monitor astronauts’ health


Sending humans on a mission to Mars is a monumental task. But the list of main health risks during these groundbreaking journeys is surprisingly short.

“You would think spaceflight is really complicated, but we boil it down into five,” Julie Robinson, acting chief scientist and manager for science and technology utilization at NASA headquarters, said during a panel at HIMSS22.

“And we simulate future Mars missions on Earth and in current and planned space missions. We use every mission we do to advance our understanding of how to keep the crew safe and healthy on future missions.” 

Those five main hazards of human spaceflight are radiation; isolation and confinement, such as the behavioral and psychosocial impact of being stuck in a small space with the same people for long periods; the distance from Earth, since crews can't evacuate or quickly communicate with others back home; the lack of gravity; and hostile, closed environments, since astronauts are constantly reusing water and rebreathing the same air.

But NASA can simulate some of these health risks here on Earth and on the International Space Station to prepare for the long mission to Mars. That’s also one goal of the Artemis missions that will send humans back to the Moon.

The Artemis program will offer a better approximation of those gravity, radiation, environmental and isolation risks. However, the Moon is a lot closer to home than Mars, and the communication delay will be much shorter. 

“If something goes really wrong on the way to Mars, the crew still goes to Mars. Dead or alive, the crew goes to Mars, because the laws of physics won’t let you get back any quicker,” Robinson said.

One innovation that could assist and monitor astronauts during long space missions is the intelligent agent, an autonomous software program that can sense its environment continuously, and then use that information to learn, decide what to do and take action. It can communicate with a human or another agent, kind of like a personal assistant. 
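At its core, an intelligent agent of this kind runs a continuous sense-decide-act loop. The sketch below is a toy illustration of that loop for suit telemetry, assuming the sensing, decision, and alerting steps described above; all names, thresholds, and the crude metabolic-rate formula are illustrative assumptions, not NASA's or Ejenta's actual models.

```python
from dataclasses import dataclass


@dataclass
class Telemetry:
    """One illustrative snapshot of suit sensor readings."""
    heart_rate: int        # beats per minute
    o2_consumption: float  # liters of oxygen per minute


class SuitAgent:
    """Toy sense-decide-act loop: read telemetry, estimate a metric,
    decide what message to give the astronaut."""

    def estimate_metabolic_rate(self, t: Telemetry) -> float:
        # Crude linear proxy (roughly 5 kcal per liter of O2);
        # a real agent would use a learned or validated model.
        return 5.0 * t.o2_consumption

    def decide(self, t: Telemetry) -> str:
        rate = self.estimate_metabolic_rate(t)
        if rate > 10.0:  # hypothetical caution threshold
            return f"Caution: metabolic rate ~{rate:.1f} kcal/min, consider a rest stop."
        return f"Metabolic rate nominal (~{rate:.1f} kcal/min)."


agent = SuitAgent()
print(agent.decide(Telemetry(heart_rate=150, o2_consumption=2.5)))
```

In a real deployment this loop would run continuously against live telemetry and feed a speech-dialogue system rather than printing to a console.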

One test use for the tech was including the intelligent agent inside the spacesuit to monitor astronauts during extravehicular activities, said Maarten Sierhuis, cofounder and CTO of Ejenta and former NASA senior research scientist. 

“The agent was actually running continuously, getting the telemetry from the spacesuit and the astronaut, and was able to predict metabolic rate in this environment and then use that to actually provide conversational interaction with a speech-dialogue system in the spacesuit, and could provide information about the health of the astronaut,” he said.

The agents were deployed in Mission Control to help send data to and from the space station. They can also be used on Earth for remote monitoring for conditions like heart failure and high-risk pregnancy. 

But the tech could be headed back to space, using wearable patches, cameras and conversational interfaces to relay astronauts’ health and behavioral data back to support teams on the ground.

“We are developing a system that allows us to facilitate communication and collaboration and connection between the different support teams, or even family members, as well as Mission Control,” Sierhuis said. “You can look at it as the agent for the astronaut is kind of like a proxy – or the buzzword is digital twin – and this is how we deal with time delay.”
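The digital-twin idea Sierhuis describes can be pictured as a ground-side proxy that answers questions from the last state it has received, instead of waiting out the light-speed round trip. The sketch below is a minimal illustration of that pattern under assumed parameters; the class name, delay value, and state keys are all hypothetical.

```python
from collections import deque


class DigitalTwinProxy:
    """Toy ground-side proxy for a remote crew member: queries are answered
    from the last synced state, while fresh telemetry is still in transit."""

    def __init__(self, one_way_delay_s: float):
        self.delay = one_way_delay_s
        self.state = {}       # last known crew-health state on the ground
        self.in_transit = deque()  # (arrival_time, update) pairs still traveling

    def send_update(self, update: dict, now: float):
        # Telemetry sent from the spacecraft arrives after the one-way delay.
        self.in_transit.append((now + self.delay, update))

    def query(self, key: str, now: float):
        # Fold in every update that has "arrived" by now, then answer locally.
        while self.in_transit and self.in_transit[0][0] <= now:
            _, update = self.in_transit.popleft()
            self.state.update(update)
        return self.state.get(key, "no data yet")


twin = DigitalTwinProxy(one_way_delay_s=1200)  # ~20 min, a Mars-like delay
twin.send_update({"crew_heart_rate": 72}, now=0)
print(twin.query("crew_heart_rate", now=600))   # update still in transit
print(twin.query("crew_heart_rate", now=1300))  # update has arrived
```

The point of the pattern is that support teams interact with the twin immediately, accepting that its state lags the real crew by the communication delay.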

Artificial intelligence is going to be the new way we interact with technology, argues Tom Lawry, national director for AI, health and life sciences at Microsoft. These interfaces have already changed rapidly within our lifetimes.

“The first time I ever saw the internet, I was a very young man. I was at a computer lab, and it was all green screens and command prompts. In order to do something, you had to actually know what you’re doing,” he said. “So then, someone came along and embedded this interface called the browser, and all of a sudden, all of us could use a keyboard or a mouse.”

We’ve moved to using touch screens, voice commands, gestures and body movement, ambient intelligence, and even augmented and virtual reality to interact with our tech. Our brains themselves could be the next interface, as progress has been made allowing people with paralysis to move robotic limbs. But using AI in space presents an additional problem. 

“AI is used and drives value when it augments the work of physicians, nurses, humans. The challenge in space is creating an autonomous system that can operate effectively without that augmentation,” Lawry said.

Another challenge to using health tech in space is the rapid pace of innovation and change. NASA has a very conservative culture surrounding safety, and those systems need to be thoroughly evaluated before they can be used, Robinson said. For instance, NASA worked on using guided ultrasounds, but those tools were updated more quickly than they could be tested. 

“Getting to the point where we could demonstrate that a crew on their way to Mars could use an ultrasound to do that for themselves took us another 10 years after that, because ultrasounds kept changing. So every time you prove it with one system, that software and the hardware would get upgraded. They got smaller, which is great. But it’s really hard to have a validated system,” she said.

Sierhuis added that these technologies will change how people work, and it’s necessary to fully understand those effects before implementation. For example, with remote assistance, how do you manage data overload? How do you provide only the information that’s necessary and actionable?

“So just bringing in something new can actually change the way people are already normally doing things,” he said. “And if you don’t design these things together, if you don’t understand the work practice of the person that needs to interface with technology, it will either be denied or will go wrong.”
