Sunday, December 11, 2011

Paper Reading #32: Taking advice from intelligent systems

Authors: Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, Daniel Gruen

Occupation:
Kate Ehrlich- senior technical staff at IBM Research
Susanna Kirk- lead at IBM Research
John Patterson- engineer at IBM Research
Jamie Rasmussen- software engineer at IBM Research
Steven Ross- senior technical staff at IBM Research
Daniel Gruen- research scientist in Cambridge who works with IBM Research

Location:

IUI 2011, February 13–16, 2011, Palo Alto, California 

Summary
Hypothesis:
Intelligent systems that are designed to explain their recommendations to professionals can be potentially misleading, because the explanation can strongly affect the choices the user makes.

Methods:

Result:

Content:

Discussion:


Monday, December 5, 2011

Paper Reading #31: Identifying emotional states using keystroke dynamics

Authors
Clayton Epp, Michael Lippold, Regan Mandryk

Occupation

Clayton Epp is currently a software engineer for a private consulting company.
Michael Lippold is currently a master's student at the University of Saskatchewan.
Regan L. Mandryk is currently an Assistant Professor in the Interaction Lab at the University of Saskatchewan.


Location

CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems in Vancouver, BC, Canada


Summary
Hypothesis
Based on a user's keystrokes, it is possible to determine the person's emotional state.

Methods
The researchers used a software program to record the users' keystroke patterns. The program presented a questionnaire before presenting the text the user was to type. Unfortunately, users with fewer than fifty responses had to be eliminated because they simply had not provided enough data. The software collected the following: a time stamp for each key event, a code for each key, and press and release events for each key. During the experiment, the users were not made aware of the test's main purpose.
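The raw events the software logged (time stamps, key codes, press and release events) are typically reduced to timing features such as how long each key is held (dwell time) and the gap between consecutive presses (flight time). The function below is only a hypothetical sketch of that reduction step, not the authors' code:

```python
# Hypothetical sketch: derive timing features from raw key events.
# Each event is (timestamp_ms, key_code, "press" or "release").

def timing_features(events):
    """Compute per-key dwell times (press -> release) and
    flight times between consecutive key presses."""
    press_times = {}          # key_code -> timestamp of its press
    dwell, flight = [], []
    last_press = None
    for t, key, kind in events:
        if kind == "press":
            if last_press is not None:
                flight.append(t - last_press)   # gap between presses
            last_press = t
            press_times[key] = t
        elif kind == "release" and key in press_times:
            dwell.append(t - press_times.pop(key))  # hold duration
    return dwell, flight

# Example: typing "hi"
events = [
    (0, "h", "press"), (90, "h", "release"),
    (150, "i", "press"), (230, "i", "release"),
]
dwell, flight = timing_features(events)
# dwell -> [90, 80], flight -> [150]
```

Features like these would then be fed to the classifiers along with the questionnaire labels.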

Results
Because the data left after eliminating participants with fewer than fifty responses were too sparse and imbalanced to draw an effective correlation, under-sampling had to be used. Based on the data, the researchers concluded that it was possible to accurately classify at least two of the seven emotional states, and the two-level classifiers had an accuracy rate of 88%.
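The paper's own code is not published; as a rough illustration of what under-sampling means here, the sketch below randomly drops majority-class samples until every emotional-state class is as small as the rarest one (the class names are made up for the example):

```python
import random

def undersample(samples, labels, seed=0):
    """Randomly keep only as many samples per class as the
    rarest class has, so the classifier sees balanced data."""
    rng = random.Random(seed)
    by_label = {}
    for s, l in zip(samples, labels):
        by_label.setdefault(l, []).append(s)
    n = min(len(group) for group in by_label.values())
    balanced = []
    for l, group in by_label.items():
        for s in rng.sample(group, n):   # random subset of size n
            balanced.append((s, l))
    return balanced

# 7 "neutral" samples vs. 3 "frustrated" samples -> 3 of each kept
data = list(range(10))
labels = ["neutral"] * 7 + ["frustrated"] * 3
balanced = undersample(data, labels)
```

The trade-off is that under-sampling discards data, which matters when, as here, data were already scarce.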

Content
The purpose of this paper was to see whether it is possible to gauge a user's emotion based on how he or she types. The paper describes the software used to collect the keystroke data, the various methods used to implement the study, and how the data were processed to make sense of the results.

Discussion
I personally thought that this paper could have some interesting uses in future fields. The first thing that came to mind was using this kind of research for security purposes: perhaps a computer could look at how a user is typing and use that data to verify that the rightful user is at the keyboard. Aside from that, I think this research probably will not have strong applications in the near future.


Paper Reading #30: Life "modes" in social media

Authors:
Fatih Kursat Ozenc
Shelly Farnham

Occupation:

Fatih Kursat Ozenc is at Carnegie Mellon University
Shelly Farnham is a researcher for Microsoft Research


Location:

CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems in Vancouver, BC, Canada

Summary
Hypothesis:
The researchers believe that people organize their social worlds into life modes, and that the ways people express and organize these modes can be further improved.

Methods
Sixteen people were chosen to take part in this experiment, and each was asked to model his or her life. During the screening, special focus was placed on observing how the participants spend their time and with whom they spend it. The participants mapped out their modes of life on a graph with various colored markers and described how they interact with each node in their everyday lives.

Results
Based on the results, it was seen that most people made maps that were social meme maps. The forms of communication the users chose mostly depended on two things: closeness to the user and the specific area of the user's life that a person affected. Based on the observations, the number of communication methods increased somewhat proportionally with how close the user was to a specific person, and the factors that determined this sense of closeness were strongly affected by the age, personality, and culture of the people the users dealt with.

Content
The paper focused on how people spend their time with their social groups and the various other people in their everyday lives. The researchers observed the ways the users organized their models of life, comparing the various levels of interaction to see the various forms of communication. The overall point of this research was to see how social structuring, modeling, and networking tools can be improved to make people's everyday lives easier.

Discussion
I personally found this paper somewhat uninteresting, mostly because the researchers were not testing a new idea or a new product. Although I believe that improving the existing ways we model our social lives is a good idea in general, I do not think this research will have any significant impact anytime soon. I do believe, however, that this research would be something good to build on if someone wanted to make a new, better-organized Facebook.


Thursday, December 1, 2011

Paper Reading #29: Usable gestures for blind people: understanding preference and performance

Authors:
Shaun Kane, Jacob Wobbrock, Richard Ladner

Occupation:

Shaun K. Kane is currently an Assistant Professor at the University of Maryland, Baltimore County
Jacob O. Wobbrock is currently an Associate Professor at the University of Washington.
Richard E. Ladner is currently a Professor at the University of Washington 

Location:

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems in Vancouver, BC, Canada

Summary
Hypothesis:
This paper focused on the differences in gesture styles between sighted and blind people when using touch-based devices.

Methods:
In order to test the hypothesis, the researchers conducted two tests. The first involved both sighted and blind people, who were asked to invent several gestures of their own that could be used to perform standard tasks on a computer or a mobile device. For each command, the researchers read a description of the required action and its result, and each user created two different gestures for every command. Afterward, each of these gestures was reviewed for usability and accessibility.

The second test was to determine whether blind people actually perform gestures in a different manner, or whether the gestures they use are entirely different from those of sighted users. To conduct this test, all of the users performed the same set of gestures on the device: the researcher described the intended purpose of each gesture, and the users attempted to recreate it based on the instructions given.

Results:
In the first test, the researchers found that blind people's gestures had more strokes than sighted people's, and that blind people made more use of the edges of the tablet when positioning their gestures. Blind people also used multi-touch gestures more often than sighted people. The second test saw very little difference in how easily blind and sighted users recreated the gestures, but blind people made larger overall gestures, took much longer to perform them, and produced strokes that tended to be less straight.

Content:
The paper goes into how touch screen gestures can be improved and how they can be adapted for blind people. The purpose of the experiments was to gauge how differently sighted people interact with a device compared to blind people. As the researchers expected, there was a predictable amount of difference in overall speed and gesture size. These findings could lead to devices that are viable for both blind and sighted people.

Discussion:
I thought the article was quite interesting, as it examined a group of people who, in my opinion, are often ignored. I believe these findings can help create new products that even blind people can interact with, and I hope this research opens the door to further work on the topic.