Title: Pen + Touch = New Tools
Authors: Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton
The authors are researchers at Microsoft Research (One Microsoft Way, Redmond, WA).
Presented at: UIST '10, Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, New York
Summary:
Hypothesis:
Create an interaction system with a clear division of labor among the tools:
the pen writes, touch manipulates, and the combination of pen + touch yields new tools. (This articulates how the system interprets unimodal pen, unimodal touch, and multimodal pen + touch inputs.)
Create a method/device that offers the opportunity to craft new user experiences uniquely well suited to how people naturally work with pen and paper, without being beholden to physical mimicry of paper.
Manual Deskterity is intended primarily as a research vehicle for exploring pen + touch, which the authors believe has many potential applications.
Methods:
Eight people were observed performing real-world tasks with physical paper; ten recurring behaviors were identified and used to inform the design of the division of tools, including the multimodal pen + touch tools.
10 behaviors: Specific Roles, Tuck the Pen, Hold Clippings, Hold While Writing, Framing, Scraps, Extended Workspace, Piling, Drawing Along Edges, Hold Page While Flipping
The research prototype was implemented on the Microsoft Surface, with a custom LED pen that is activated on contact via a tip switch.
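On a vision-based tabletop like the Surface, one plausible way the lit pen tip can be told apart from finger contacts is by blob size and brightness: the active LED tip appears as a small, very bright spot, while fingers appear as larger, dimmer blobs. The sketch below illustrates that idea only; the function name and thresholds are my assumptions, not from the Manual Deskterity implementation.

```python
# Illustrative sketch: classify a tabletop contact as "pen" or "touch".
# Thresholds are assumed values for demonstration, not from the paper.

def classify_contact(blob_area_px: float, blob_brightness: float) -> str:
    PEN_MAX_AREA = 40.0        # assumed: the LED pen tip blob is tiny
    PEN_MIN_BRIGHTNESS = 0.9   # assumed: the active LED nearly saturates the sensor
    if blob_area_px <= PEN_MAX_AREA and blob_brightness >= PEN_MIN_BRIGHTNESS:
        return "pen"
    return "touch"

print(classify_contact(12.0, 0.97))   # small, bright blob
print(classify_contact(180.0, 0.55))  # large, dim blob
```

The tip switch matters here: because the LED only lights on contact, the system never confuses a hovering pen with a touch.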
Device structuring:
Pen writes, touch manipulates
zooming and selecting objects via touch
Pen + touch = new tools:
x-acto knife: cutting items with a pen stroke while holding them
stapler: grouping items into a stack
carbon copy: holding an item and dragging off a copy with the pen
ruler: using a held object as a straightedge
straightedge + knife: composing the straightedge with cutting
tape curve: holding a pen stroke in place to use it as a drawing tool
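The division of labor above can be sketched as a simple input dispatcher: touch alone manipulates and "holds" an object, pen alone inks, and a pen stroke made while an object is held invokes a contextual tool (here the x-acto knife). This is a minimal sketch of the general pattern; all class and method names are illustrative assumptions, not the Manual Deskterity code.

```python
# Minimal sketch of the pen/touch division of labor:
# unimodal touch manipulates (and holds), unimodal pen writes,
# and pen + touch (pen stroke while holding) yields a new tool.

from dataclasses import dataclass, field

@dataclass
class Canvas:
    log: list = field(default_factory=list)
    held_object: object = None          # object currently pinned by a touch

    def on_touch_down(self, obj):
        # Touch alone manipulates; holding also sets context
        # for a possible pen + touch gesture.
        self.held_object = obj
        self.log.append(f"touch holds {obj}")

    def on_touch_up(self):
        self.held_object = None

    def on_pen_stroke(self, stroke):
        if self.held_object is None:
            # Unimodal pen: default to inking.
            self.log.append(f"ink {stroke}")
        else:
            # Pen + touch: the held object reinterprets the pen stroke,
            # e.g. a stroke across a held photo acts as the x-acto knife.
            self.log.append(f"cut {self.held_object} along {stroke}")

canvas = Canvas()
canvas.on_pen_stroke("S1")        # pen alone -> writes
canvas.on_touch_down("photo")     # touch alone -> holds/manipulates
canvas.on_pen_stroke("S2")        # pen while holding -> new tool (cut)
canvas.on_touch_up()
print(canvas.log)
```

The same hold-then-stroke structure underlies the other tools (stapler, carbon copy, ruler): only the interpretation of the pen stroke changes.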
Results:
-Users found the combined pen and touch approach appealing.
-Users quickly formed habits around the general pattern (hold with touch + gesture with the pen) common to most of the gestures.
-Currently the multimodal pen + touch gestures are not self-revealing, and in most cases there is insufficient feedback on a tool's current action until after the user finishes articulating a gesture.
-Overall there is much potential, and the multimodal methods received a more positive reception than expected.
Discussion:
I thought the whole concept was interesting, and I believe it has much future potential for practical use. However, I personally think it would be difficult for people to become acquainted with bimanual forms of interaction.
