Mode-Unified, Biomimetic Control Strategy for Robotic Lower-Limb Prostheses Using Deep Learning Methods


Traditional robotic knee-ankle prostheses for individuals with transfemoral amputation provide assistance categorized by discretely defined ambulation modes, including level walking, ramps, and stairs. However, human movement varies continuously across terrain conditions rather than switching between discrete states. Recent studies have implemented continuously varying prosthetic systems within specific subsets of these modes, such as level ground and ramps, or level ground and stairs. Yet no continuous system covering a fully mode-unified environment has been demonstrated. I propose a complete, continuous prosthetic control system inspired by the continuously varying biomechanics of human locomotion. To achieve this, I will (Aim 1) generalize the recognition of a prosthesis user's intent to adjust dynamics to the environment within a unified terrain-slope domain, (Aim 2) develop a mode-unified, bio-inspired prosthetic control system based on human locomotion dynamics, and (Aim 3) personalize the controller to the user by optimizing the intent-recognition system and control parameters through deep-learning adaptation outside the laboratory. These enhancements will lead to a novel prosthesis system that is generalized across environments and personalized to the individual user.
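To make the proposed pipeline concrete, below is a minimal, illustrative sketch in PyTorch (not the proposed implementation) of the two core ideas described above: a regression network that estimates terrain slope as a continuous quantity instead of classifying discrete ambulation modes, and an impedance law whose stiffness, damping, and equilibrium angle vary smoothly with that slope estimate. All names, network dimensions, sensor features, and parameter values are hypothetical.

```python
# Illustrative sketch only: continuous intent recognition + slope-varying
# impedance control. Dimensions and gains below are hypothetical.
import torch
import torch.nn as nn


class SlopeEstimator(nn.Module):
    """Maps a window of wearable-sensor features to a continuous
    terrain-slope estimate (degrees), rather than a discrete mode label."""

    def __init__(self, n_features: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # regression output: slope in degrees
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def impedance_torque(theta, theta_dot, slope_deg):
    """Joint torque from an impedance law whose parameters are interpolated
    continuously over slope instead of switched between discrete modes.
    Stiffness k, damping b, and equilibrium angle th0 are hypothetical."""
    s = torch.clamp(slope_deg, -10.0, 10.0) / 10.0  # normalize to [-1, 1]
    k = 3.0 + 1.5 * s            # stiffness (Nm/deg) rises uphill
    b = 0.05 + 0.02 * s.abs()    # damping (Nm*s/deg) grows off level ground
    th0 = 5.0 * s                # equilibrium angle (deg) shifts with slope
    return -k * (theta - th0) - b * theta_dot


# Example: one sensor-feature window -> slope estimate -> joint torque.
model = SlopeEstimator()
features = torch.randn(1, 12)                 # placeholder sensor features
slope = model(features)                       # continuous slope estimate
tau = impedance_torque(torch.tensor(8.0), torch.tensor(-1.2), slope)
print(f"slope ~ {slope.item():.2f} deg, torque ~ {tau.item():.2f} Nm")
```

Because the impedance parameters are smooth functions of the estimated slope, the controller transitions gradually between terrains (e.g., from level walking onto a ramp) instead of switching abruptly between mode-specific parameter sets; the same structure also admits the per-user tuning targeted in Aim 3.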

Event Location
MRDC Building, Room 4211
Event Date