Using onboard AI to power quicker, more complex prosthetic hands

True artificial intelligence remains out of reach today, but predictive analytics and machine learning are already at work behind the scenes.
By employing an artificial intelligence network typically used for image recognition, researchers at the University of Texas at Dallas aim to skip labor-intensive processing steps while reacting to raw nerve signal data in real time. (Getty Images/Pixtum)

Researchers are looking to employ onboard artificial intelligence systems to improve the control and sophistication of prosthetic hands by using deep learning approaches that read and react to nerve signals transmitted through the arm.

The practice of recording the electrical activity that muscles produce as the brain signals them to contract, known as electromyography, has been used to operate prosthetic limbs and hands before, as well as wheelchairs and other devices. But performance gaps remain when it comes to the fine motor control of fingers and hands.

By running a neural network in real-time on a dedicated processing unit within the prosthetic, researchers at the University of Texas at Dallas (UT Dallas) hope to speed up responses for faster hand movements. In addition, the proposed system could be retrained based on the actions of the user to increase its accuracy.
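The onboard control loop described above can be sketched in miniature. This is an illustrative example only, not the UT Dallas system: the window size, channel count, gesture labels, and the placeholder classifier are all invented for the sketch, and the buffer of logged windows stands in for the idea of retraining on the user's own data.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

WINDOW = 200        # samples per inference window (illustrative)
CHANNELS = 8        # number of electrodes on the arm (illustrative)
GESTURES = ["rest", "open", "close", "pinch", "point"]  # hypothetical label set

def read_emg_window():
    """Stand-in for sampling the electrode array; returns raw signal data."""
    return rng.standard_normal((CHANNELS, WINDOW))

def classify(window, weights):
    """Placeholder for the onboard network's forward pass on raw EMG."""
    features = np.abs(window).mean(axis=1)   # crude per-channel activity level
    scores = weights @ features
    return int(np.argmax(scores))

# Randomly initialized weights stand in for a trained model.
weights = rng.standard_normal((len(GESTURES), CHANNELS))

# Recent windows are logged so the model could later be retrained
# (fine-tuned) on this particular user's signals.
buffer = deque(maxlen=50)

commands = []
for _ in range(10):                 # ten iterations of the control loop
    w = read_emg_window()
    gesture = classify(w, weights)
    buffer.append((w, gesture))
    commands.append(GESTURES[gesture])

print(commands)
```

Because every step runs on the device itself, each window can be turned into a motor command without a round trip to external hardware, which is what makes faster hand movements plausible.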


According to lead researcher Mohsen Jafarzadeh of UT Dallas, the system uses a convolutional neural network—the kind typically used in image recognition and visual analysis. By applying that to the raw electromyography data taken from electrodes on the arm, they can skip the labor-intensive steps of isolating and characterizing the specific signals within the noise that are typically used to train algorithms by hand.
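The end-to-end idea, feeding raw electrode signals straight into a convolutional network with no hand-crafted feature extraction in between, can be illustrated with a minimal forward pass. This is a toy NumPy sketch under assumed shapes, not the researchers' architecture: the electrode count, window length, filter sizes, and five-gesture output are invented, and the weights are random stand-ins for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated raw EMG window: 8 electrode channels, 200 samples.
# The network sees this directly; no features are extracted by hand.
emg_window = rng.standard_normal((8, 200))

def conv1d(x, kernels):
    """Valid 1D convolution: (C_in, T) with (C_out, C_in, K) -> (C_out, T-K+1)."""
    c_out, c_in, k = kernels.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(kernels[o] * x[:, t:t + k])
    return out

conv_w = rng.standard_normal((16, 8, 9)) * 0.1   # 16 filters, kernel size 9
fc_w = rng.standard_normal((5, 16)) * 0.1        # 5 hypothetical hand gestures

h = np.maximum(conv1d(emg_window, conv_w), 0.0)  # convolution + ReLU
pooled = h.mean(axis=1)                          # global average pooling
logits = fc_w @ pooled
probs = np.exp(logits - logits.max())
probs /= probs.sum()                             # softmax over gestures

predicted_gesture = int(np.argmax(probs))
print(predicted_gesture, probs.round(3))
```

The point of the sketch is what is absent: there is no step where a human isolates or describes signal features in the noise. In end-to-end training, the convolutional filters themselves are optimized to find whatever structure in the raw signal predicts the intended movement.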


“Removing the feature extraction and feature description is an important step toward the paradigm of end-to-end optimization,” Jafarzadeh said in a statement. “Our results are a solid starting point to begin designing more sophisticated prosthetic hands.”

The work still has a long way to go, the researchers said, including collecting electromyography data from more people to train their networks, improve accuracy and enable more complex hand movements.

The research was presented at the 2019 IEEE International Symposium on Measurement and Control in Robotics in Houston last month and was published by IEEE, the Institute of Electrical and Electronics Engineers.
