Imaginary Interfaces
Authors: Sean Gustafson, Daniel Bierwirth, and Patrick Baudisch
Gustafson and Bierwirth are students at the Hasso Plattner Institute in Potsdam, Germany, where Baudisch is a professor; all three are still at the institute working as researchers.
Presentation venue: UIST '10, Proceedings of the 23rd annual ACM symposium on User Interface Software and Technology. The event took place in New York City, NY, USA.
Summary:
Hypothesis:
Their hypothesis was that spatial interaction does not require a screen, and they propose a method that enables spatial interaction on screen-less devices. Imaginary interfaces are thus screen-less devices that allow spatial interaction with empty hands and without visual feedback.
The question came down to this: to what extent can users interact spatially with a user interface using only their imagination?
note: see the case studies in the Methods section
Case study 1 hypothesis:
They hypothesized that their method would produce fewer recognition errors than reported by Ni and Baudisch (a separate experiment on spatial recognition).
Case study 2 hypotheses:
They expected lower error in the stay condition than in the rotate condition.
Within the rotate condition, they expected lower error in the hand condition: the left hand should fill in for the lack of visual reference caused by body rotation.
Case study 3 hypotheses:
Error would grow with distance from the tips of the index finger and thumb, which participants would use as visual landmarks.
Pointing accuracy will be highest at the fingertips.
Pointing accuracy will decrease as the distance from the nearest fingertip increases.
Methods:
The hypothesis was tested by asking users to define the origin of an imaginary space by forming an L-shaped coordinate cross with the non-dominant hand and drawing with the dominant hand. This setup was tested on simple drawings and annotations (12 participants, ages 20-30).
The concept of imaginary interfaces was created to let users interact within an invisible 2D space. Users could manipulate objects spatially by pointing at and manipulating them without any visual display; everything takes place in the imagination. (A rough coordinate sketch follows below.)
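To make the idea concrete, here is a minimal sketch of how a tracked fingertip position could be mapped into the 2D space defined by the non-dominant hand's L gesture. This is my own illustration, not the paper's code: the tracker input format, units, and the choice of the wrist as the origin are all assumptions.

```python
import numpy as np

def imaginary_space_coords(thumb_tip, index_tip, wrist, point):
    """Map a tracked 3D point into the 2D imaginary space defined by
    the non-dominant hand's L gesture.

    thumb_tip, index_tip, wrist, point: 3D positions in metres from a
    motion tracker (hypothetical input format).
    Returns (x, y): coordinates along the thumb axis and the index-finger
    axis, with the origin approximated by the wrist.
    """
    origin = np.asarray(wrist, dtype=float)
    x_axis = np.asarray(thumb_tip, dtype=float) - origin   # "width" axis
    y_axis = np.asarray(index_tip, dtype=float) - origin   # "height" axis
    x_axis /= np.linalg.norm(x_axis)
    y_axis /= np.linalg.norm(y_axis)
    rel = np.asarray(point, dtype=float) - origin
    # Project the dominant hand's pinch position onto the two hand-defined axes.
    return float(rel @ x_axis), float(rel @ y_axis)

# Example: pointing roughly 5 cm along the thumb and 3 cm up the index finger.
print(imaginary_space_coords(
    thumb_tip=[0.08, 0.0, 0.0], index_tip=[0.0, 0.10, 0.0],
    wrist=[0.0, 0.0, 0.0], point=[0.05, 0.03, 0.0]))
```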
Three case studies were conducted:
The first study investigated participants' visuospatial memory while drawing single-stroke characters (94% recognition rate)
-Graffiti characters
-repeated drawing
-multi-stroke drawing
-done to see how stroke connection accuracy decreased as strokes were added (a small sketch of this measurement follows the list)
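One way to picture "stroke connection accuracy" is to measure how far the start of each new stroke lands from the point it was supposed to attach to. This is a hypothetical illustration; the stroke format and the metric are my assumptions, not the paper's analysis code.

```python
import math

def connection_errors(strokes, intended_joints):
    """Distance between where each stroke actually starts and where it was
    meant to connect to the drawing so far.

    strokes: list of strokes, each a list of (x, y) points in cm.
    intended_joints: for stroke i (i >= 1), the point it should attach to.
    """
    errors = []
    for stroke, joint in zip(strokes[1:], intended_joints):
        sx, sy = stroke[0]
        jx, jy = joint
        errors.append(math.hypot(sx - jx, sy - jy))
    return errors

# Two strokes of an imaginary 'T': the vertical bar should start at the middle
# of the horizontal bar, but lands 1.5 cm off without visual feedback.
horizontal = [(0.0, 10.0), (6.0, 10.0)]
vertical = [(4.5, 10.0), (4.5, 0.0)]
print(connection_errors([horizontal, vertical], intended_joints=[(3.0, 10.0)]))
```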
The second study investigated how far user motion impacts visuospatial memory
-The experiment was a 2 body rotation (rotate, stay) × 2 reference system (hand, none) within-subjects design. Each condition consisted of 15 trials, and each trial within a condition used one unique combination of glyphs (a sketch of the design follows this list).
-done to see what reduces short-term memory when using this device
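A minimal sketch of how such a 2 × 2 within-subjects design could be enumerated and ordered per participant. The condition names come from the summary above; the per-participant shuffling is an assumption, not the paper's counterbalancing procedure.

```python
import itertools
import random

BODY_ROTATION = ["rotate", "stay"]
REFERENCE_SYSTEM = ["hand", "none"]
TRIALS_PER_CONDITION = 15

def build_session(participant_id, seed=0):
    """Return the ordered list of (rotation, reference, trial) tuples for one
    participant, with the condition order shuffled per participant."""
    rng = random.Random(seed + participant_id)
    conditions = list(itertools.product(BODY_ROTATION, REFERENCE_SYSTEM))
    rng.shuffle(conditions)  # simple per-participant ordering
    return [(rot, ref, t)
            for rot, ref in conditions
            for t in range(TRIALS_PER_CONDITION)]

session = build_session(participant_id=3)
print(len(session))   # 4 conditions x 15 trials = 60
print(session[:2])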
The third study tested participants' ability to point to locations in the coordinate space
-For each trial, participants started in a neutral position with their hands held loosely at their sides. They then received the target location as two digits, announced via audio and shown on a monitor. Pressing a footswitch started the timer and the trial.
Participants then raised their hands, formed an 'L' gesture with their left hand, and acquired the target by pinching with their right hand (Figure 15a). They committed the acquisition by pressing the footswitch again, which recorded the 3D location and rotation of the body and both hands, played a confirmation sound, and completed the trial.
-done to understand the capabilities and limitations of coordinate-based imaginary pointing (a rough trial-logging sketch follows this list)
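Here is one way such a trial could be logged, loosely following the procedure described above. The field names and types are my assumptions; the paper does not publish its logging format.

```python
from dataclasses import dataclass
import time

@dataclass
class PointingTrial:
    """One trial of the coordinate-pointing study (hypothetical log record)."""
    target: tuple                  # target given as two digits, e.g. (2, 3)
    start_time: float = 0.0
    end_time: float = 0.0
    left_hand_pose: tuple = ()     # 3D position + rotation of the 'L' hand at commit
    right_hand_pose: tuple = ()    # 3D position + rotation of the pinching hand
    body_pose: tuple = ()          # 3D position + rotation of the body

    def start(self):
        # First footswitch press: start the timer and the trial.
        self.start_time = time.time()

    def commit(self, left, right, body):
        # Second footswitch press: record poses and stop the timer.
        self.left_hand_pose, self.right_hand_pose, self.body_pose = left, right, body
        self.end_time = time.time()

    @property
    def duration(self):
        return self.end_time - self.start_time
```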
Results:
The results showed that visual short-term memory can somewhat replace conventional screen displays.
Imaginary interfaces were able to clarify spatial information during conversations, such as the locations of players during a game or directions to the lab.
Spatial interaction also allowed users to extend or edit a drawing after it was initially created, relying on short-term memory.
In terms of stabilizing spatial interfaces, it was found that two hands working together provided enough context to maintain a frame of reference without visual feedback.
Case study 1
About 5.5% of gestures were not recognized, and misalignment was mostly caused by translation errors from choosing the wrong starting point.
Case study 2
There was a significant difference between the rotate condition and the stay condition (stay was much more accurate).
Case study 3
Fingertips were the most accurate locations, with button sizes of 0.35 thumbs wide and 0.21 index fingers high (a rough conversion to centimetres follows).
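To get a feel for those numbers, here is a back-of-the-envelope conversion into centimetres. The anthropometric values (a thumb length of about 6 cm and an index finger length of about 7 cm) and the reading of "thumbs" as thumb lengths are my assumptions, not figures from the paper.

```python
# Reported minimum button size in hand-relative units (from the summary above).
BUTTON_WIDTH_THUMBS = 0.35    # fraction of a thumb length
BUTTON_HEIGHT_INDEX = 0.21    # fraction of an index-finger length

# Rough anthropometric assumptions (not from the paper).
THUMB_LENGTH_CM = 6.0
INDEX_LENGTH_CM = 7.0

width_cm = BUTTON_WIDTH_THUMBS * THUMB_LENGTH_CM
height_cm = BUTTON_HEIGHT_INDEX * INDEX_LENGTH_CM
print(f"~{width_cm:.2f} cm wide x {height_cm:.2f} cm high")  # ~2.10 cm x 1.47 cm
```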
Overall results:
Users should create annotations right away, while memory is still fresh.
Pointing accuracy decreases as we move away from the reference hand.
Contents:
The attempt to abandon the screen altogether arises from the goal of creating the ultimate mobile device. The drawback of earlier screen-less approaches is that gestures are categorical and only allow a limited set of commands.
The drive to perfect imaginary interfaces comes from the goal of creating computing technology that is completely integrated, always available, and nearly invisible in day-to-day life.
Discussion:
I personally thought the whole idea was really out of the box and very interesting. However, I still cannot imagine this concept being convenient in a practical application. I believe it is not very practical, mostly because the applicable situations for the concept are very limited (mostly buttons), and because visual memory varies strongly from person to person, making it hard to use as a standardized device for the general public.
Tuesday, August 30, 2011
Paper Reading #0 On Computers
If one carries out an intelligent conversation using pieces of paper slid under a door, would this mean that someone or something understands what you are trying to say? A thought experiment by John Searle asks this question in response to a possible interaction between a human and a computer. Although it is easy to say yes when holding a conversation with a person face to face, it becomes much harder to answer the same way when the communication happens in the fashion described above. It is important to note that this kind of communication through a computer does not imply that the computer understands you. This is because the responses are not based on understanding, but rather on a set of instructions the computer glosses over to find a response that emulates a human-like answer to a given question. This ultimately means a computer is not taught what something is or means, but is instead bound by a set of rules to spit out something when it encounters a certain situation.
It is important to note that, in terms of the end result, the question itself is pointless and does not matter, because the conversation would be equally human-like whether it was with a computer (assuming the computer has strong AI) or a real person. However, for the sake of answering the question directly, the answer would be a definite no. This is because when a computer gives a response, the thinking involved is not actually understanding the meanings of the words in the sentence, but rather looking through a set of predefined instructions. The computer is merely following a list of rules concretely enforced by a pre-made program. Although many people would not be able to tell the difference while talking to it, it is merely an illusion nonetheless.
Another important point concerning this problem is that people often respond to questions based on the emotions they feel, and a human being feels emotion about certain questions because they truly understand the meaning behind them. The computer, by contrast, would respond in a human-like fashion not because it also feels emotion the way a human being does, but because its logic accounts for emotion ahead of time. For example, if one were to describe a sad or unfortunate scenario, the response from the other side might be sympathetic not because the computer feels and sympathizes with its partner's plight, but because its logic accounts for such a response.
Although it is undeniable that, theoretically, a strong AI can create a situation where it emulates a human being, it is very much the case that a computer does not understand what we have to say, judged not by the results but by the process the computer goes through in order to respond.
Introduction Blog #-1
email: sung251@gmail.com
4th year senior
I am taking this class because it seems to stand out from other computer science classes in terms of uniqueness.
I bring experience to this class in terms of presentations, writing, and programming
I expect to be working in Korea or working on my MBA in 10 years
I believe that the next biggest advancement will come in the form of graphics and visual improvements (real AI or quantum computers still feel way too far away to be the next big thing)
If I could travel back in time, I would meet anyone as long as I could sell their signature at a high price.
I do not have favorite shoes, and I only own 1 pair of shoes at a time (except for 1 extra pair of formal shoes). As long as the color is conservative and it fits well, I am content.
I would choose to be fluent in Chinese (Mandarin) because not many people in China know English well. This makes it easy to market yourself when you are looking for a job.
Some of my hobbies include making cocktails (I have a fully stocked liquor cabinet and mostly make martinis and B&Bs) and reading a good piece of fiction or history-related non-fiction. I have been to 49 states in America (still haven't been to Alaska), and I have lived in 7 different places during my life so far.

