Sunday, December 11, 2011

Paper Reading #32: Taking advice from intelligent systems

Authors: Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, Daniel Gruen

Occupation:
Kate Ehrlich - Senior Technical Staff Member at IBM Research
Susanna Kirk - lead at IBM Research
John Patterson - engineer at IBM Research
Jamie Rasmussen - software engineer at IBM Research
Steven Ross - Senior Technical Staff Member at IBM Research
Daniel Gruen - research scientist at IBM Research in Cambridge

Location:

IUI 2011, February 13–16, 2011, Palo Alto, California 

Summary
Hypothesis:
Intelligent systems that are designed to give professionals an explanation of why the system made a particular recommendation can potentially be misleading, because such explanations can strongly influence the choices the user ends up making.

Methods:

Result:

Content:

Discussion:


Monday, December 5, 2011

Paper Reading #31: Identifying emotional states using keystroke dynamics

Authors
Clayton Epp, Michael Lippold, Regan Mandryk

Occupation

Clayton Epp is currently a software engineer for a private consulting company 
Michael Lippold is currently a masters student at the University of Saskatchewan.
Regan L. Mandryk is an Assistant Professor in the Interaction Lab at the University of Saskatchewan.


Location

CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York, NY)


Summary
Hypothesis
A person's emotional state can be determined from his or her keystroke dynamics.

Methods
In order to record the participants' keystroke patterns, the researchers used a logging program. The program presented a questionnaire before presenting the text that the participant was asked to type. Unfortunately, participants with fewer than fifty responses had to be eliminated because they simply did not provide enough data. The data collected by the software consisted of the following: a timestamp for each key event, a code for each key, and whether the event was a press or a release. Participants were not told the main purpose of the test during the experiment.
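To make the shape of that data concrete, here is a minimal illustrative sketch (my own, not the authors' code) of how timing features such as key-hold duration and the latency between consecutive presses could be derived from logged key events of the kind described above; the function and field names are assumptions for illustration.

```python
# Illustrative sketch (not the authors' code): deriving simple timing
# features from logged key events of the form described above --
# a timestamp, a key code, and whether the event was a press or a release.

def keystroke_features(events):
    """events: list of (timestamp_ms, key_code, 'press' or 'release')."""
    press_times = {}   # key_code -> time of the pending press
    durations = []     # how long each key was held down (dwell time)
    latencies = []     # time between consecutive key presses (flight time)
    last_press = None

    for t, key, kind in sorted(events):
        if kind == 'press':
            press_times[key] = t
            if last_press is not None:
                latencies.append(t - last_press)
            last_press = t
        elif kind == 'release' and key in press_times:
            durations.append(t - press_times.pop(key))

    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {'mean_duration_ms': avg(durations),
            'mean_latency_ms': avg(latencies)}

# Example: the word "hi" typed with a short pause between the two keys.
sample = [(0, 'H', 'press'), (95, 'H', 'release'),
          (180, 'I', 'press'), (260, 'I', 'release')]
print(keystroke_features(sample))  # {'mean_duration_ms': 87.5, 'mean_latency_ms': 180.0}
```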

Results
Because the data was too sparse to draw an effective correlation (participants with fewer than fifty responses had been eliminated), under-sampling had to be used. Based on the data, the researchers concluded that it was possible to accurately classify at least two of the seven emotional states, and the two-level classifiers achieved an accuracy rate of 88%.
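For readers unfamiliar with under-sampling, the sketch below (again illustrative, not the authors' pipeline) shows the basic idea as it is commonly used to prepare imbalanced data for classification: randomly discard samples from the larger classes until every class is the same size.

```python
# Illustrative sketch (not the authors' pipeline): random under-sampling,
# i.e. discarding majority-class samples until the classes are balanced.
import random

def undersample(samples, labels, seed=0):
    """samples/labels are parallel lists; returns a class-balanced subset."""
    rng = random.Random(seed)
    by_label = {}
    for s, l in zip(samples, labels):
        by_label.setdefault(l, []).append(s)
    n = min(len(v) for v in by_label.values())   # size of the smallest class
    balanced = []
    for l, group in by_label.items():
        for s in rng.sample(group, n):           # keep only n per class
            balanced.append((s, l))
    rng.shuffle(balanced)
    return balanced

# Example: 5 "relaxed" samples vs. 2 "frustrated" samples -> 2 of each kept.
data = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
labels = ['relaxed'] * 5 + ['frustrated'] * 2
print(undersample(data, labels))
```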

Content
The purpose of this paper was to see whether it is possible to gauge a user's emotion based on how he or she types. The keystroke data was collected with a logging program, and the paper goes into the various methods used to implement this, as well as how the data was processed to make use of the results.

Discussion
I personally thought that this paper could have some interesting uses in future work. The first thing that came to mind was using this kind of research for security purposes: perhaps the computer could look at how the user is typing and use that data to ensure the rightful user is at the keyboard. Aside from that, I think this research probably will not have strong applications in the near future.


Paper Reading #30: Life "modes" in social media

Authors:
Fatih Kursat Ozenc
Shelly Farnham

Occupation:

Fatih Kursat Ozenc is at Carnegie Mellon University
Shelly Farnham is a researcher for Microsoft Research


Location:

CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York, NY)

Summary
Hypothesis:
The researchers believe that people organize their social worlds around life "modes," and that the ways people express and organize such modes can be further improved.

Methods
Sixteen people were chosen to take part in this experiment, and each was asked to model his or her life. During the screening, special focus was placed on observing how the participants spend their time and with whom they spend it. The participants mapped out their modes of life on a graph using various colored markers and described how they interact with each of the nodes in their everyday lives.

Results
Based on the participants' results, it was seen that most people made maps that were social meme maps. The communication channels chosen by the participants were mostly based on two things: closeness to the participant and the specific area of his or her life that the contact affected. Based on the observations, the number of communication methods increased roughly in proportion to how close the participant was to a specific person. The factors that determined this sense of closeness were strongly affected by the age, personality, and culture of the people the participants dealt with.

Content
The paper focused on how people spend their time with their social groups and the various other people in their everyday lives. The researchers observed the ways the participants organized their models of life, and the various levels of interaction were compared to see the different forms of communication. The overall point of this research was to see how social structuring, modeling, and networking tools can be improved to make people's everyday lives easier.

Discussion
I personally found this paper to be somewhat uninteresting, mostly because there was no new idea or new product being tested. Although I believe it is a good idea in general to improve the currently existing ways of modeling our social lives, I do not think this research will have any significant impact anytime soon. I do believe, however, that this research would be something good to build on if someone wanted to make a new, better-organized Facebook.


Thursday, December 1, 2011

Paper Reading #29: Usable gestures for blind people: understanding preference and performance

Authors:
Shaun Kane, Jacob Wobbrock, Richard Ladner

Occupation:

Shaun K. Kane is currently an Assistant Professor at the University of Maryland
Jacob O. Wobbrock is currently an Associate Professor at the University of Washington.
Richard E. Ladner is currently a Professor at the University of Washington 

Location:

Published in CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York, NY)

Summary
Hypothesis:
This paper focused on the differences in gesture styles between sighted and blind people when using touch-based devices.

Methods:
In order to test the hypothesis, the researchers conducted two studies. The first study asked both sighted and blind participants to invent their own gestures for standard tasks on a computer or mobile device. The researchers read a description of the action required and the result of that action for each command, and each participant created two different gestures for every command. Afterwards, each of these gestures was examined for usability and accessibility.

The second study was to determine whether blind people actually perform gestures in a different manner, or whether the gestures they use are entirely different from those of sighted users. To conduct this, all participants performed the same set of gestures on the device: the researcher described the intended purpose of each gesture, and the participants attempted to recreate it based on the instructions given.

Results:
In the first study the researchers found that blind people's gestures had more strokes than sighted people's. The blind participants also made more use of the edges of the tablet when positioning their gestures, and they used multi-touch gestures more often than the sighted participants. The second study saw very little difference between blind and sighted participants in how easily they recreated the gestures. It was also seen that blind people made larger gestures overall than sighted people, took much longer to perform them, and produced strokes that tended to be less straight.

Content:
The paper goes into how touch-screen gestures can be improved and how they can be adapted for blind people. The purpose of the experiments was to gauge how differently sighted people interact with a device compared to blind people. As the researchers expected, there was a predictable amount of difference in overall speed and gesture size. These findings can potentially lead to devices that are viable for both blind and sighted people.

Discussion:
I thought the article was quite interesting, as it talked about a group of people who are, in my opinion, often ignored. I believe these findings can help create new products that even blind people can interact with, and I hope this research will open the door to further work on the topic.

Monday, November 28, 2011

Paper Reading #28: Experimental analysis of touch-screen gesture designs in mobile environments

Authors:
Andrew Bragdon, Eugene Nelson, Yang Li, and Ken Hinckley

Occupation:
Andrew Bragdon is currently a PhD student at Brown University.
Eugene Nelson is currently a PhD student at Brown University.
Yang Li is a researcher at Google and holds a PhD from the Chinese Academy of Sciences.
Ken Hinckley is a Principal Researcher at Microsoft Research.

Location:
Published in CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York, NY)

Summary
Hypothesis
Bezel-initiated and mark-based gestures can improve user performance with mobile touch screens in terms of both accuracy and the attention required.

Methods
A total of fifteen participants filled out a questionnaire and were instructed on how to do the tasks, followed by a demo. The participants then performed tasks under different levels of distraction so the researchers could measure their interaction with a mobile device. The researchers focused on two motor activities, sitting and walking, and each activity was given up to three levels of distraction, based on how much attention the participants had to give to a different activity.

Results
In terms of completion time, there was very little difference between soft and hard buttons, but bezel gestures had the lowest completion time. Under high distraction the bezel gestures outperformed the soft buttons almost every time, though both performed about the same under normal conditions.

Content
The paper focuses on the effectiveness of interacting with a mobile touch-screen device using soft buttons, hard buttons, and gestures. Much time is spent examining how factors such as distraction can affect a person's ability to interact effectively with the device. Based on the results, the researchers show that direct-touch gestures are more accurate when the user is distracted. It was also found that bezel-based gestures were preferred by the participants, while mark-based gestures were better in speed and accuracy than free-form gestures.

Discussion
I think this is an interesting topic, mostly because I see people make mistakes when their attention is elsewhere and they are typing with soft buttons. I believe that interaction using these gestures can greatly improve accuracy and allow more ease of use.



Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

Authors:

Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, Robert J.K. Jacob


Occupation:

Erin Treacy Solovey is a postdoctoral fellow in the Humans and Automation Lab (HAL) at MIT.
Francine Lalooses is a PhD candidate at Tufts University and has bachelor's and master's degrees from Boston University.
Krysta Chauncey is a postdoctoral researcher at Tufts University.
Douglas Weaver has a doctorate from Tufts University.
Margarita Parasi is working on a master's degree at Tufts University.
Angelo Sassaroli is a research assistant professor at Tufts University and has a PhD from the University of Electro-Communications.
Sergio Fantini is a professor in the Biomedical Engineering Department at Tufts University.
Paul Schermerhorn is a postdoctoral researcher at Tufts University and has studied at Indiana University.
Audrey Girouard is an assistant professor at Queen's University and has a PhD from Tufts University.
Robert J.K. Jacob is a professor at Tufts University


Location:

Published in CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York, NY)


Summary
Hypothesis
Since cognitive multitasking is a common part of daily life, a human-robot system can be helpful in recognizing multitasking states and assisting with their execution.

Methods
Two experiments were performed to test the human-robot system. The first looked into three aspects of the system: delay, dual task, and branching. The participants interacted with a simulated robot on Mars in order to classify and sort various rocks, and as the rocks were classified, data on the three aspects listed above were measured and recorded. The second part of the test was to see whether it was possible to notice variations within a branching task, with the branching itself classified as either random or predictive; the procedure was otherwise identical to the first experiment.

Results
After accounting for the normal distribution of the data, the correlation coefficient between accuracy and response time was not deemed significant, so there was no learning effect. In the second experiment, which also collected data on response time versus accuracy, there was no significant difference in response time between random and predictive branching, although there was a noticeable relationship for predictive branching.
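The response time versus accuracy analysis boils down to a correlation test; below is a minimal sketch of such a check using Pearson's r, with made-up per-participant numbers (my own illustration, not the authors' analysis). A coefficient near zero, in line with the non-significant result reported, would indicate no meaningful relationship.

```python
# Illustrative sketch (not the authors' analysis): testing whether response
# time and accuracy are correlated across participants using Pearson's r.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up per-participant data: mean response time (s) and accuracy (%).
response_time = [1.8, 2.1, 2.4, 1.6, 2.0]
accuracy      = [92, 88, 85, 90, 87]
print(round(pearson_r(response_time, accuracy), 2))
```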

Content
The paper focused on cognitive multitasking and how a human-robot system can affect this process. It describes the experiments used to see how effective the system was and whether there was any correlation between response time and accuracy.

Discussion
The paper did a good job of providing clear information to the reader, and I believe this research can easily act as a springboard for more work in this field. I believe the original goals set out by the researchers were accomplished, but it would have been better to recruit a larger pool of participants, because the results depended heavily on normalizing the data, and that is much more reliable when the sample size is larger.


Sunday, November 27, 2011

Paper Reading #26: Embodiment in brain-computer interaction

Authors
Kenton O’Hara, Abigail Sellen, Richard Harper.


Occupation
Kenton O’Hara is a Senior Researcher at Microsoft Research and works in the Socio-Digital Systems group.
Abigail Sellen is a Principal Researcher at Microsoft Research and holds a PhD from the University of California, San Diego.
Richard Harper is a Principal Researcher at Microsoft Research and holds a PhD from the University of Manchester.


Location
Published in CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (ACM, New York, NY)
 

Summary
Hypothesis
There is a need to better understand the potential of brain-computer interaction, and studying how the body is involved in the interaction is important, rather than focusing on the brain alone.

Methods
The experiment was based on a game called Mindflex. The game uses EEG technology to measure electrical signals given off by the brain: the fan in the Mindflex platform blows harder as brain activity increases and weaker as brain activity decreases. The participants took the game home for a week, played with it, and recorded their gameplay. The researchers then reviewed the videos for physical behaviors, such as gestures, what the players said, and bodily actions. The point of this was to see the nature of these interactions and how they were used during gameplay.
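Since Mindflex is a commercial toy whose internals are not public, the following is only a toy illustration of the feedback loop described above, mapping a normalized concentration reading onto fan power; every name and number in it is an assumption made for illustration.

```python
# Purely illustrative sketch of the feedback loop described above (the real
# Mindflex internals are not public): an EEG-derived concentration value
# drives the fan, which in turn raises or lowers the floating ball.
def fan_power(concentration, min_power=0.2, max_power=1.0):
    """Map a 0..1 concentration reading to a fan power level."""
    concentration = max(0.0, min(1.0, concentration))
    return min_power + (max_power - min_power) * concentration

# As concentration rises, the fan blows harder and the ball climbs.
for reading in [0.1, 0.4, 0.8]:
    print(reading, '->', round(fan_power(reading), 2))
```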

Results
Based on the observations, it was seen that body position played a large role in this game; the players tended to orient themselves according to what they were trying to do. As expected, their gestures and expressions tended to stiffen when they were trying to concentrate, and relaxed considerably when they were not concentrating hard.

Content
The paper focuses on the importance of understanding how the body as a whole is involved when accomplishing something that requires the brain to concentrate. The experiments looked into how people behave and interact with the game Mindflex, and the researchers were able to find behavioral patterns that were quite consistent.

Discussion
I thought this whole research concept was pretty out of the box and quite interesting to read about. Although the paper helps us realize that looking at body behavior is important even when we are only focusing on the concentration given out by the brain, I am still a bit unsure what the information could be applied to. The only thing I can think of is a game that requires concentration while doing a certain task.