The Role of Artificial Intelligence for Human Intent Prediction Across Dynamic Actions Using Wearable Sensors

Date

2024-12-09

Author

Hollinger, David

Abstract

This dissertation explores the application of artificial intelligence (AI) for human intent prediction across dynamic actions using wearable sensors, addressing critical gaps in joint-level and action-level prediction and classification accuracy. The research is presented in three parts that address the fundamental principles of human intent prediction as it relates to naturalistic, seamless control of a wearable lower-limb exoskeleton.

The first study investigates the influence of gait phase on predicting lower-limb joint angles using machine learning models. It compares the ability of machine learning and deep learning models to predict joint angles across four phases of gait using inputs from joint kinematics, inertial measurement units (IMUs), and electromyography (EMG) from 30 participants. The results showed that the bidirectional Long Short-Term Memory (BiLSTM) model provided the most accurate joint angle predictions, with a mean root mean square error (RMSE) of 1.42-5.71 degrees. Additionally, the accuracy of the BiLSTM did not vary significantly across phases of gait, demonstrating its reliability in predicting joint angles throughout the gait cycle. This suggests that deep learning models can predict lower-limb kinematics regardless of the user’s gait phase, which can potentially enhance the natural control of wearable exoskeletons.

The second study examines how the number and location of IMUs influence joint angle prediction accuracy during simple movements. A random forest model was used to predict lower-limb joint angles from various sensor configurations. Contrary to the hypothesis, adding adjacent IMUs to joint angle inputs did not significantly improve joint angle prediction accuracy. This study shows that selecting relevant types of sensors matters more than simply adding more sensor data for human movement intent prediction at the joint level.
This showcases an efficient sensor configuration with the potential for deployment in real-time lower-limb exoskeleton systems.

The third study presents a hierarchical learning approach for multi-action intent recognition. This study combines action-level classification (e.g., walking, kneeling down, kneeling up, and running) and joint-level regression models to improve movement intent prediction at the joint level. The action-level classifiers were a BiLSTM and a Temporal Convolutional Network (TCN), and the joint-level model was a random forest. Although the hierarchical approach yielded modest outcomes, a task-agnostic approach trained on multiple actions produced significantly more accurate predictions than the combined action-level and joint-level approach. This indicates the need to leverage large and diverse datasets for more robust intent recognition across widely varying tasks. Enhancing human intent prediction across a wide range of tasks is essential for more robust and effective control of exoskeleton technology.

Together, these studies address the fundamental principles governing our understanding of AI-driven human movement intent prediction at lower-limb joints. By thoroughly investigating AI-driven approaches to human movement intent prediction, the work presented in this dissertation advances the opportunities to deploy effective and robust algorithms for personalized control of lower-limb exoskeleton technologies across dynamic actions.
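The second study's sensor-configuration comparison can be sketched in miniature with scikit-learn's RandomForestRegressor. This is an illustrative sketch only, not the dissertation's code: the feature groups, dimensions, and synthetic knee-angle target below are invented stand-ins for the wearable-sensor data described in the abstract.

```python
# Illustrative sketch: comparing sensor configurations for joint-angle
# regression with a random forest, on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical feature groups: a thigh IMU, a shank IMU, and EMG channels.
thigh_imu = rng.normal(size=(n, 6))   # 3-axis accelerometer + 3-axis gyroscope
shank_imu = rng.normal(size=(n, 6))
emg = rng.normal(size=(n, 4))

# Synthetic knee-angle target driven by the two IMUs plus measurement noise.
knee_angle = (thigh_imu[:, 0] - shank_imu[:, 0]) * 20 + rng.normal(scale=2.0, size=n)

# Candidate sensor configurations, from minimal to full.
configs = {
    "thigh only": thigh_imu,
    "thigh + shank": np.hstack([thigh_imu, shank_imu]),
    "thigh + shank + EMG": np.hstack([thigh_imu, shank_imu, emg]),
}

# Train one model per configuration and compare held-out RMSE.
rmse = {}
for name, X in configs.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, knee_angle, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    rmse[name] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5

for name, err in rmse.items():
    print(f"{name}: RMSE = {err:.2f} deg")
```

In this toy setup, adding a sensor only helps when the target actually depends on it, which mirrors the study's finding that sensor relevance outweighs sensor count.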
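The third study's hierarchical approach (classify the action first, then apply an action-specific joint-level regressor) can be sketched similarly. In this hypothetical sketch, random forests stand in for the BiLSTM and TCN action-level classifiers named above, and all data, actions, and features are synthetic placeholders.

```python
# Illustrative sketch: hierarchical action classification followed by
# action-specific joint-angle regression, versus one task-agnostic model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(1)
actions = ["walking", "kneeling", "running"]
n_per = 300

# Build a synthetic dataset with one cluster of sensor features per action.
X_list, y_angle, y_action = [], [], []
for label, action in enumerate(actions):
    X = rng.normal(loc=label, size=(n_per, 8))   # stand-in sensor features
    X_list.append(X)
    y_angle.append(X[:, 0] * 10 + label * 5)     # synthetic joint angle
    y_action.append(np.full(n_per, label))
X = np.vstack(X_list)
y_angle = np.concatenate(y_angle)
y_action = np.concatenate(y_action)

# Action-level classifier (stand-in for the BiLSTM/TCN classifiers).
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_action)

# Joint-level regressors: one per action (hierarchical) ...
per_action = {
    label: RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[y_action == label], y_angle[y_action == label]
    )
    for label in range(len(actions))
}
# ... versus one task-agnostic model trained on all actions pooled.
task_agnostic = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_angle)

def hierarchical_predict(X_new):
    """Classify each sample's action, then apply that action's regressor."""
    labels = clf.predict(X_new)
    preds = np.empty(len(X_new))
    for label in np.unique(labels):
        mask = labels == label
        preds[mask] = per_action[label].predict(X_new[mask])
    return preds

X_test = rng.normal(loc=1, size=(10, 8))
print(hierarchical_predict(X_test))
print(task_agnostic.predict(X_test))
```

The design trade-off the abstract reports shows up structurally here: the hierarchical path inherits any misclassification at the action level, while the pooled task-agnostic model sees all actions during training.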