This breakthrough could advance the development of next-generation assistive technologies for people living with mobility impairments.
TORONTO – New research from the KITE Research Institute could lay the foundation for the next generation of assistive devices.
Researchers at KITE developed a variety of deep learning models that recognize real-world walking environments with up to 98.8 per cent accuracy in as little as 2.8 milliseconds.
This breakthrough could advance the development of next-generation assistive technologies, such as robotic prosthetic legs and exoskeletons, for people living with mobility impairments.
The team tested their models and published the results in BioMedical Engineering OnLine.
This research was part of an international collaboration with Ukrainian students, including co-authors Dmytro Kuzmenko and Bogdan Ivanyuk-Skulskiy.
The International Conference on Aging, Innovation & Rehabilitation (ICAIR) partnered with BioMedical Engineering OnLine to publish a special edition of the journal featuring full-length papers of the highest scoring abstracts from the conference.
The paper’s first author, KITE trainee Andrew Garrett Kurbis, goes in depth on his team’s findings below.
Which patient groups are most affected by this?

Our research in computer vision aims to support persons with mobility impairments due to aging and/or physical disabilities (e.g., multiple sclerosis, spinal cord injury, Parkinson’s disease, amputation, stroke, and osteoarthritis) via environment-adaptive control of robotic prosthetic legs and exoskeletons, in addition to providing assistive technology for persons with visual impairments.
What did you find?

We developed and optimized a wide variety of deep learning models for visual perception of real-world walking environments and built a large-scale dataset of over 515,000 images, including manual annotations. We focused on lightweight and efficient neural network architectures for real-time embedded computing. Our highest-performing model learned to recognize complex stair environments with over 98.8% accuracy and inference times as fast as 2.8 ms.
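As a rough illustration of what such a lightweight classifier can look like in practice, the sketch below fine-tunes a MobileNetV2 backbone for environment recognition. The framework (PyTorch), class names, and input size are assumptions chosen for illustration, not the team's published architecture.

```python
# Illustrative sketch only: a lightweight MobileNetV2-style classifier of the
# kind suited to real-time embedded perception. The class names and input
# size below are assumptions, not the published StairNet model.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

# Hypothetical environment classes for stair recognition.
CLASSES = ["level_ground", "incline_stairs", "decline_stairs", "transition"]

# Start from ImageNet weights and replace the head for our classes.
model = mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, len(CLASSES))
model.eval()

# Single-image inference on a dummy 224x224 RGB frame.
frame = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(frame)
print(CLASSES[logits.argmax(dim=1).item()])
```

Backbones in this family trade a small amount of accuracy for the low parameter counts and fast inference that battery-powered, wearable devices require.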
Why does this matter?

Attempts to develop autonomous controllers for robotic leg prostheses and exoskeletons have relied on mechanical, inertial, and/or neuromuscular sensors, which have limited prediction horizons, analogous to walking blindfolded. Taking inspiration from biological vision, we are developing computer vision systems like AI-powered smart glasses to sense the walking environment prior to physical interactions, thus allowing faster and more accurate transitions between different locomotion mode controllers.
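To make the control idea concrete, here is a minimal sketch of how a vision prediction could gate transitions between locomotion mode controllers. The mode names, confidence threshold, and smoothing window are hypothetical, not the authors' control design.

```python
# Minimal sketch: use confident, consistent vision predictions to switch
# between locomotion mode controllers. All names and thresholds are
# hypothetical illustrations.
from collections import deque

MODES = {
    "level_ground": "level_walk_controller",
    "incline_stairs": "stair_ascent_controller",
    "decline_stairs": "stair_descent_controller",
}

class ModeSwitcher:
    def __init__(self, window=5, threshold=0.9):
        self.history = deque(maxlen=window)  # recent class predictions
        self.threshold = threshold
        self.mode = "level_walk_controller"

    def update(self, predicted_class, confidence):
        # Only count confident predictions toward a transition.
        if confidence >= self.threshold:
            self.history.append(predicted_class)
        # Switch only when the whole window agrees, rejecting
        # single-frame noise that could trigger a dangerous transition.
        if len(self.history) == self.history.maxlen and len(set(self.history)) == 1:
            self.mode = MODES.get(self.history[-1], self.mode)
        return self.mode

switcher = ModeSwitcher()
print(switcher.update("incline_stairs", 0.97))
```

The key design point this sketch captures is that perception runs ahead of physical contact, so the controller can be selected before the foot reaches the stairs rather than reacting after the fact.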
What is the potential impact?

Our research in computer vision and deep learning has the potential to support the development of next-generation robotic prosthetic legs, exoskeletons, and other mobility assistive technologies by allowing them to think and autonomously adapt to different real-world walking environments. Towards this end, we also made our image dataset and software open source to support the research community.
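For readers who want to experiment with an open image dataset of this kind, a local copy organized one folder per class could be loaded as sketched below. The path and folder layout are assumptions, since the released dataset's exact structure isn't described in this article.

```python
# Sketch of loading a class-per-folder image dataset; the directory name and
# layout are assumptions about how a local copy might be organized.
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data/stair_images", transform=preprocess)
print(len(dataset), dataset.classes)
```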