Authors: Jeremy Scott, David Dearman, Koji Yatani, Khai Truong
Affiliation: University of Toronto
Location:
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology
Summary:
Hypothesis
A mobile device placed in a user's pocket can recognize foot gestures using its built-in accelerometer
Method/Result:
Used six M-series Vicon motion-capture cameras to capture the movement of each participant's foot
The following gestures were tested:
- Dorsiflexion: four targets placed between 10° and 40° inclusive
- Plantar flexion: six targets placed between 10° and 60° inclusive
- Heel & toe rotation: 21 targets (each), with 9 internal rotation targets placed between -10° and -90° inclusive, and 12 external rotation targets placed between 10° and 120° inclusive
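The targets above are all angular, so a natural first step is recovering a static tilt angle from a 3-axis accelerometer reading. A minimal sketch, assuming the device is held still at the target and using a hypothetical axis convention (the paper does not describe its exact signal processing):

```python
import math

def pitch_angle_deg(ax, ay, az):
    """Estimate the pitch (tilt) of a device from a static 3-axis
    accelerometer reading, in degrees. Gravity is the reference
    vector, so this only works while the foot is held still."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

# A device lying flat (gravity entirely on the z-axis) reads 0 degrees
# of pitch; tilting it so gravity projects onto x raises the angle.
print(round(pitch_angle_deg(0.0, 0.0, 9.81)))   # 0
print(round(pitch_angle_deg(9.81, 0.0, 0.0)))   # 90
```

The angle thresholds (10°, 20°, ...) could then be applied directly to this estimate to map a pose onto one of the targets.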
Sixteen right-footed participants took part in this study. Each participant completed 156 selections in the training phase and 468 selections in the testing phase.
The system can classify ten different foot gestures at approximately 86% accuracy
The selection time for the 10° dorsiflexion target was significantly faster than for the other targets
The selection times for the 10° and 20° plantar flexion targets were faster than for the 40°, 50°, and 60° targets
Accuracy decreased with each subsequent trial
For classification, a Naïve Bayes classifier was used to classify the user's foot movements. (Naïve Bayes is based on Bayes' theorem and assumes that features are conditionally independent.)
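A minimal Gaussian Naïve Bayes sketch of the idea: each feature is modeled as an independent per-class normal distribution, and the class with the highest log-posterior wins. The Gaussian assumption, class names, and data here are illustrative, not the paper's actual features:

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: each feature is an independent
    normal distribution per class (the conditional-independence
    assumption noted above)."""

    def fit(self, X, y):
        grouped = defaultdict(list)
        for xi, yi in zip(X, y):
            grouped[yi].append(xi)
        self.stats = {}
        for cls, rows in grouped.items():
            log_prior = math.log(len(rows) / len(y))
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            vars_ = [max(sum((v - m) ** 2 for v in c) / len(c), 1e-9)
                     for c, m in zip(cols, means)]
            self.stats[cls] = (log_prior, means, vars_)
        return self

    def predict(self, x):
        def log_posterior(cls):
            log_prior, means, vars_ = self.stats[cls]
            return log_prior + sum(
                -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
                for xi, m, v in zip(x, means, vars_))
        return max(self.stats, key=log_posterior)

# Two toy "gesture" classes, separable on the first feature.
X = [[0.1, 1.0], [0.2, 0.9], [0.9, 1.1], [1.0, 1.0]]
y = ["dorsiflexion", "dorsiflexion", "plantar", "plantar"]
model = GaussianNB().fit(X, y)
print(model.predict([0.15, 1.0]))  # dorsiflexion
print(model.predict([0.95, 1.0]))  # plantar
```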
Two tests were conducted:
Leave-one-participant-out (LOPO) cross-validation:
The authors used the data gathered from 5 of the 6 participants for training and the data from the remaining participant for testing. This was repeated so that each participant's data was used once for validation. This method of validation assumes weak user-dependency.
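The LOPO protocol can be sketched as a loop over held-out participants. Here `train_fn` and `predict_fn` are placeholder hooks, not the paper's classifier; the toy check below uses a majority-label "model" just to exercise the loop:

```python
from collections import Counter

def lopo_accuracy(data_by_participant, train_fn, predict_fn):
    """Leave-one-participant-out cross-validation: train on all but
    one participant, test on the held-out one, average over holdouts."""
    accuracies = []
    participants = list(data_by_participant)
    for held_out in participants:
        train = [ex for p in participants if p != held_out
                 for ex in data_by_participant[p]]
        model = train_fn(train)
        test = data_by_participant[held_out]
        correct = sum(predict_fn(model, x) == label for x, label in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / len(accuracies)

# Toy check: each participant has 3 "tap" and 1 "kick" example; the
# "model" just predicts the most common training label.
data = {p: [((0,), "tap")] * 3 + [((1,), "kick")] for p in ("p1", "p2", "p3")}
acc = lopo_accuracy(
    data,
    train_fn=lambda train: Counter(lbl for _, lbl in train).most_common(1)[0][0],
    predict_fn=lambda model, x: model,
)
print(acc)  # 0.75: "tap" is right on 3 of the 4 held-out examples
```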
Within-participant (WP) stratified cross-validation:
The authors used data from only one participant at a time. Each participant's data was split into 10 stratified folds, meaning the ratio of data from each class matched the ratio in the total dataset. Using one fold for testing and the other 9 folds for training, the test was repeated so that each fold was used once for testing. The results were then averaged across tests for each participant and summed across participants. This protocol assumes a stronger user-dependency than LOPO.
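Stratified folds can be built by dealing each class's examples round-robin into k buckets, which keeps every fold's class ratio close to the full dataset's. A sketch, assuming simple (features, label) pairs:

```python
def stratified_folds(examples, k):
    """Split (x, label) examples into k folds while preserving each
    class's proportion, as in the within-participant protocol."""
    by_class = {}
    for ex in examples:
        by_class.setdefault(ex[1], []).append(ex)
    folds = [[] for _ in range(k)]
    for rows in by_class.values():
        for i, ex in enumerate(rows):
            folds[i % k].append(ex)  # deal each class round-robin
    return folds

# 20 examples of one class and 10 of another, split into 10 folds:
data = ([((i,), "dorsi") for i in range(20)]
        + [((i,), "plantar") for i in range(10)])
folds = stratified_folds(data, 10)
# Every fold keeps the 2:1 class ratio of the full dataset.
print([sum(1 for _, lbl in f if lbl == "dorsi") for f in folds])
```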
Results:
The WP protocol yielded greater accuracy than the LOPO protocol
Accuracy of the interaction:
side: 92.2%, front: 90.3%, and back: 75.5%
Naïve Bayes resulted in 82-92% classification accuracy for the gesture space
Content:
This paper studies foot-based interactions, such as lifting and rotating the foot, as a way to classify gestures. Based on the results of these interactions, a system was created to recognize foot gestures using a mobile phone placed in the user's pocket or holster.
Discussion:
Frankly, although I thought the whole concept was very interesting, I am not very keen on using it in real life. Although the interaction accuracy seemed very high in their trials, I would feel (as many would) uncomfortable without actually seeing what is happening on the mobile device with my own eyes. It would also be troublesome if random leg motions did things to my mobile device.
