Monday, November 28, 2011

Paper Reading #28: Experimental analysis of touch-screen gesture designs in mobile environments

Authors:
Andrew Bragdon, Eugene Nelson, Yang Li, and Ken Hinckley

Occupation:
Andrew Bragdon is currently a PhD student at Brown University.
Eugene Nelson is currently a PhD student at Brown University.
Yang Li is a researcher at Google and holds a PhD from the Chinese Academy of Sciences.
Ken Hinckley is a Principal Researcher at Microsoft Research.

Location:
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC

Summary
Hypothesis
Bezel-initiated and mark-based gestures can improve user performance with mobile touch screens, both in accuracy and in the attention required.

Methods
There were a total of fifteen participants, who filled out a questionnaire and were instructed on how to do the tasks, followed by a demo. Afterwards, the participants performed tasks under different levels of distraction so the researchers could measure their interaction with a mobile device. The researchers focused on two motor activities, sitting and walking, each with up to three levels of distraction. The distraction levels were based on how much attention the participants had to give to a separate activity.

Results
In terms of completion time there was very little difference between soft and hard buttons, but bezel gestures had the lowest completion time. During moments of high distraction the bezel gestures outperformed the soft buttons almost every time; however, both performed about the same under normal conditions.

Content
The paper focuses on the effectiveness of interacting with a mobile touch-screen device through soft buttons, hard buttons, and gestures. Much time is spent examining how elements such as distraction can affect a person's ability to communicate effectively with the device. Based on the results, the researchers show that direct touch gestures are more accurate when the user is distracted. It was also found that bezel-based gestures were the most preferred by the participants, while mark-based gestures were better in speed and accuracy than free-form gestures.

Discussion
I think that this is an interesting topic, mostly because I see people make mistakes when typing with soft buttons while their attention is elsewhere. I believe that interaction using these gestures can greatly improve accuracy and ease of use.



Paper Reading #27: Sensing cognitive multitasking for a brain-based adaptive user interface

Authors:

Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz, Angelo Sassaroli, Sergio Fantini, Paul Schermerhorn, Audrey Girouard, Robert J.K. Jacob


Occupation:

Erin Treacy Solovey is a postdoctoral fellow in the Humans and Automation Lab (HAL) at MIT.
Francine Lalooses is a PhD candidate at Tufts University and has bachelor's and master's degrees from Boston University.
Krysta Chauncey is a postdoctoral researcher at Tufts University.
Douglas Weaver has a doctorate degree from Tufts University.
Margarita Parasi is working on a master's degree at Tufts University.
Angelo Sassaroli is a research assistant professor at Tufts University and has a PhD from the University of Electro-Communications.
Sergio Fantini is a professor at Tufts University in the Biomedical Engineering Department.
Paul Schermerhorn is a postdoctoral researcher at Tufts University and has studied at Indiana University.
Audrey Girouard is an assistant professor at Queen's University and has a PhD from Tufts University.
Robert J.K. Jacob is a professor at Tufts University.


Location:

Presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
Since cognitive multitasking is a common part of daily life, a human-robot system can be helpful in recognizing multitasking states and assisting with their execution.

Methods
There were two experiments performed to test the human-robot system. The first examined three aspects of the system: delay, dual task, and branching. The users interacted with a simulated robot on Mars in order to classify and sort various rocks, and based on how the rocks were classified, data on the three aspects above were measured and recorded. The second part of the test examined whether it was possible to notice variations within a branching task; the branching itself was classified as either random or predictive. This test procedure was otherwise identical to the first.

Results
After accounting for the normal distribution, the correlation coefficient between accuracy and response time was not deemed significant, and there was no learning effect. In the second experiment, which also collected data on response time versus accuracy, there was no significant difference in response time between random and predictive branching; however, there was a noticeable relationship for predictive branching.

Content
The paper focused on cognitive multitasking and how a human-robot system can impact these processes. It describes the experiments that were used to see how effective the system was and whether there was any correlation between response time and accuracy.

Discussion
The paper did a good job of providing clear information to the reader, and I believe that this research can easily act as a springboard for more research in this field. I believe that the original goals set out by the researchers were accomplished, but it would have been better to obtain a larger base of participants, because the results were heavily influenced by normalizing the data, which is much more reliable with a larger sample.


Sunday, November 27, 2011

Paper Reading #26: Embodiment in brain-computer interaction

Authors
Kenton O’Hara, Abigail Sellen, Richard Harper.


Occupation
Kenton O’Hara is a Senior Researcher at Microsoft Research and works in the Socio Digital Systems Group.
Abigail Sellen is a Principal Researcher at Microsoft Research and holds a PhD from the University of California, San Diego.
Richard Harper is a Principal Researcher at Microsoft Research and holds a PhD from the University of Manchester.


Location
Presented at the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC
 

Summary
Hypothesis
There is a need to better understand the potential of brain-computer interaction. Studying the body's interaction is important, rather than focusing on the brain alone.

Methods
The experiment was based on a game called Mindflex. The game uses EEG technology to measure electrical signals given off by the brain: the fan in the Mindflex platform blows stronger as brain activity increases, and weaker as it decreases. The participants brought the game home for a week, played it, and recorded their gameplay. The researchers then reviewed the videos for physical behaviors, including gestures, what the players said, and bodily actions. The point was to see the nature of these interactions and how they were used during gameplay.

Results
Based on the observations, body position played a large role in this game. The users tended to orient themselves according to what they were trying to do. As expected, their gestures and expressions tended to be stiffer when they were attempting to concentrate, and relaxed quite a bit when they were not concentrating hard.

Content
The paper focuses on the importance of understanding how the body as a whole interacts when accomplishing something that requires the brain to concentrate. The experiments are focused on looking into how people would behave and interact with this game called mindflex. The researchers were able to find behavioral patterns that were quite consistent.

Discussion
I thought that this whole research concept was pretty out of the box, and it was quite interesting to read about in general. Although this paper helps us realize that looking at body behavior is important even if we are only focusing on the concentration given out by the brain, I am still a bit unsure what the information can be applied to. The only thing I can think of is a game that requires concentration while doing a certain task.



Paper Reading #25: Twitinfo: aggregating and visualizing microblogs for event exploration

Authors
Adam Marcus, Michael S. Bernstein, Osama Badar, David R. Karger, Samuel Madden, Robert C. Miller


Occupation
Michael S. Bernstein is a graduate student focusing on human-computer interaction at MIT in the CSAIL.  His research is on crowd-powered interfaces: interactive systems that embed human knowledge and activity.
Osama Badar is currently a member of the CSAIL at MIT.
David R. Karger is a member of the CSAIL in the EECS department at MIT.  He is interested in information retrieval and analysis of algorithms. 
Samuel Madden is currently an associate professor in the EECS department at MIT.  His primary research is in database systems.
Robert C. Miller is an associate professor in the EECS department at MIT and leads the User Interface Design Group.  His research interests include web automation and customization, automated text editing, end-user programming, usable security, and other issues in HCI.


Location
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
TwitInfo can be useful for summarizing and searching Twitter information about ongoing events and trends.

Methods
Twelve participants were selected to use the application to explore different aspects of a recent event; this part of the experiment focused on usability feedback. Part two imposed a time limit: the users were given five minutes to research an event using the application and then five minutes to write a report about their findings. At the end, the users were interviewed about the system and their responses were recorded.

Results
Participants were able to recreate somewhat detailed accounts of the events that they researched. The users tended to perform free-form exploration, focusing on the largest peaks and reading the relevant tweets; afterwards, they tended to follow links related to the event. The tweets themselves, however, tended to confirm specific details rather than provide new information. In the second part of the experiment, people tended to skim peak labels to get a sense of the timeline, and rarely read the outside links for additional information.

Content
The article focuses on the TwitInfo application and the inner workings and details of how it runs. The application allows users to classify and look into tweeted information. The paper focuses on how users interacted with the application and what kind of information they were able to gather using it.

Discussion
I personally thought the whole application was somewhat interesting, since it can help someone keep track of a certain event by checking information at its peak times. However, I believe that it would only be good for merely confirming information, as the article said. I do not think that the application itself can have a great impact on any actual events.


Paper Reading #24: Gesture avatar: a technique for operating mobile user interfaces using gestures

Authors
Yang Li, Hao Lu

Occupation
Yang Li is currently a Senior Research Scientist working for Google.  He spent time at the University of Washington as a research associate in computer science and engineering.  He holds a PhD in Computer Science from the Chinese Academy of Sciences.
Hao Lu is currently a graduate student at University of Washington in Computer Science and Engineering.  He is also a member of the DUB Group.  


Location
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
Gesture Avatar can be a useful and viable solution to the problem of imprecise finger input on touch-screen surfaces.

The hypothesis was broken down into several smaller predictions about GA's relationship with Shift:
-GA will be slower than Shift on large targets but faster on small ones
-GA will have fewer errors than Shift
-mobile situations will decrease the performance of Shift but have little influence on GA

Methods
Participants tested both Shift and GA. The participants were split in half, with each half starting with a different technique to counterbalance ordering. The tasks, such as selecting targets of different sizes, were to be done both while sitting and while walking, and the performance time was measured and recorded. The researchers examined ambiguity and commonness using 24 different letters and by manipulating the distance between the various objects.

Results
GA was slower than Shift when the target size was twenty pixels but faster when it was ten pixels; they were mostly the same at fifteen pixels. As the hypothesis predicted, walking impacted Shift noticeably, but the impact on GA was very minimal.

Content
The paper focuses on the Gesture Avatar application, which is used to increase accuracy by dealing with imprecise finger-based touch-screen input. The product ran on an Android device and was tested against the Shift technique to see how applicable it was. The results mostly matched the hypothesis, and the users were quite pleased with how the application worked.

Discussion
I thought that this application is an interesting way of solving the problem, and I think it would be very useful when walking around. However, I feel like most people would rather zoom in to click on buttons when they feel the icons are too small. Also, tools that allow for better precision while walking are, in my opinion, a dangerous distraction, as they help promote accidents, especially on a campus where there are lots of people, bikes, and cars.


Paper Reading #23: User-defined Motion Gestures for Mobile Interaction

Authors
Jaime Ruiz, Yang Li, Edward Lank

Occupation
Jaime Ruiz is currently a fifth-year doctoral student in the HCI Lab in the Cheriton School of Computer Science at the University of Waterloo.
Yang Li is currently a Senior Research Scientist working for Google.  He spent time at the University of Washington as a research associate in computer science and engineering.  He holds a PhD in Computer Science from the Chinese Academy of Sciences.
Edward Lank holds a Ph.D. in Computer Science from Queen's University. He is currently an Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo.


Location
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
Even though smart phones these days have sensors to detect 3D motion, there is a need for better understanding of practices in motion gesture design.

Methods
Twenty participants were asked to perform motion gestures with a smartphone that could be used to execute a task on the phone. These gestures were collected and analyzed, and some of them were used for the rest of the study. The participants were then given a set of tasks and a set of gestures; they were to perform each gesture and rate it based on how well it matched the task and how easy it was to perform.

Results
The gestures designed by the participants tended to be natural and intuitive (see pictures at the bottom for examples). A lot of the gestures tended to mimic an interaction with a real physical object as well. Tasks considered opposites usually resulted in gestures that were quite similar.

Content
The paper focused on various ways to map 3D gestures to a mobile device effectively by making them easy to use and intuitive for the user. Results and procedure were mostly consistent. The paper also discusses the various aspects that the users manipulated during the experiment. Based on the experiment, the authors classified 3D mapping by two aspects: gesture mapping and physical characteristics.

Discussion
The paper was quite interesting, as I see it as something that can really be applied in today's world to make things easier, especially since a lot of people have smartphones these days. I think that this research will provide a better understanding of how we can apply our current technology in a better way by looking at user behavior and thinking.


Paper Reading #22: Mid-air pan-and-zoom on wall-sized displays

Authors
Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, Wendy Mackay


Occupation
Mathieu Nancel is currently a PhD student in HCI at the Université Paris-Sud XI under the supervision of Michel Beaudouin-Lafon and Emmanuel Pietriga.
Julie Wagner is a PhD student in the InSitu lab in Paris, working on new tangible interfaces and new interaction paradigms for large public displays.
Emmanuel Pietriga is currently a full-time research scientist working for INRIA Saclay - Île-de-France. He is also the interim leader of the INRIA team InSitu.
Olivier Chapuis is a Research Scientist at LRI. He is also a member and team co-head of the InSitu research team.
Wendy Mackay is a research director with INRIA in France but is currently at Stanford University.


Location
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
The main point of the paper is to show that more research is necessary on complex tasks with wall-sized displays. The authors made several predictions about human interaction with wall-sized displays:
-two hands will be faster than one
-two hands will be more accurate and easier to use
-linear gestures will map better
-users will prefer clutch-free circular gestures
-finger techniques will be faster than gestures requiring larger muscle groups
-path gestures will be faster with less haptic feedback
-3D gestures will be more tiring

Methods
Twelve participants took part, and the experiment focused on three factors: handedness, gesture, and guidance. Potential distance effects were controlled by using the distance between two targets as a secondary factor. Examples of tasks included a pan-and-zoom task, performed by navigating between two groups of concentric circles, starting from a high zoom level and zooming out until the other group was visible.

Results
The data collected from the participants supported several of the small predictions made by the authors, including finger techniques being faster, path gestures being faster, and 3D gestures being more tiring. Some results went against their original expectations, such as the finding on whether linear gestures map better than circular ones.

Content
The paper focuses on how users can interact with large screens, looking into gestures and motions. The authors were concerned with ways to facilitate this interaction without too much trouble, fatigue, or complexity. They offered several predictions of how the participants would react; although most of their guesses came out as expected, a few points were to the contrary.

Discussion
I thought it was an interesting paper in an interesting field of study. I think that this kind of research can have some good applications in the near future, since it is very possible to implement with current technology. The paper itself is also very thorough and easy to grasp, based on how it is organized.



Paper Reading #21: Human model evaluation in interactive supervised learning

Authors
Rebecca Fiebrink, Perry Cook, Daniel Trueman

Occupation

Rebecca Fiebrink is currently an assistant professor in Computer Science at Princeton University. She holds a PhD from Princeton and was a postdoc for most of 2011 at the University of Washington.
Perry R. Cook is a professor emeritus at Princeton University in Computer Science and the Department of Music.   He is no longer teaching, but still researches, lectures, and makes music.
Daniel Trueman is a musician, primarily with the fiddle and the laptop.   He currently teaches composition at Princeton University.


Location

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
Since model evaluation is important in interactive machine learning systems, it is important to develop a good understanding of the modeling criteria that matter most to users.

Methods
There were a total of three studies of people using supervised learning. The first study focused on the design process with seven composers, with the goal of refining the Wekinator. The participants mostly spent the time meeting regularly to discuss the software in terms of its usefulness to their specific work and possible improvements. The second study required students to use the Wekinator for an assignment where supervised learning was necessary for music performance systems; the students were asked to use the input device to make two gesture-controlled music performance systems. The last study was done with a professional musician to produce a gesture recognition system for a cello bow equipped with sensors. The point of that study was to produce a gesture classification for captured data in order to create musically appropriate classes.

Results
The participants thought that the algorithms used for sound control were difficult to manipulate in a satisfactory manner (both a GUI and a controlled procedure were tried). The second and third experiments were based on cross-validation. The users indicated that high cross-validation accuracy was a good way to indicate good performance, though the participants in the third experiment based more of their judgement on direct validation instead of cross-validation. Direct validation was classified into the following parts: correctness, boundary shape, cost, decision, label confidence, complexity, and unexpectedness.
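Cross-validation, the accuracy indicator the participants leaned on, can be sketched in a few lines of Python. This is a minimal illustration under invented data: the 1-nearest-neighbour rule and the toy gesture examples are my stand-ins, not the Wekinator's actual algorithms.

```python
from statistics import mean

def nearest_neighbour(train, x):
    # Predict the label of the closest training example (1-D feature).
    return min(train, key=lambda ex: abs(ex[0] - x))[1]

def cross_val_accuracy(examples, k=5):
    # Split into k folds, hold each out in turn, train on the rest,
    # and average the held-out accuracies.
    folds = [examples[i::k] for i in range(k)]
    scores = []
    for i, held_out in enumerate(folds):
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        correct = sum(nearest_neighbour(train, x) == label
                      for x, label in held_out)
        scores.append(correct / len(held_out))
    return mean(scores)

# Toy data: (feature, gesture class) pairs that are cleanly separable,
# so cross-validation accuracy comes out high.
data = [(x / 10, "slow") for x in range(10)] + \
       [(2 + x / 10, "fast") for x in range(10)]
print(cross_val_accuracy(data))
```

High cross-validation accuracy like this is exactly the signal the participants treated as evidence of a well-trained model, while direct validation instead judges the model by probing it interactively.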

Content
The authors focused on how users evaluate and use supervised learning systems. They look at what criteria can be used during evaluation and examine the different techniques applied: cross-validation and direct validation. The purpose is to make better decisions about algorithm performance and to improve the effectiveness of training data.

Discussion
The paper did a good job of classifying and organizing the findings to make it easy for readers to understand what the work was about and how the whole experimental procedure went. I believe that this kind of supervised learning system can be beneficial in application as well, as it allows better classification and better indicators of performance.

Paper Reading #20: The Aligned Rank Transform

Authors
Jacob Wobbrock, Leah Findlater, Darren Gergle, James Higgins

Occupation
Jacob O. Wobbrock is currently an Associate Professor in the Information School and an Adjunct Associate Professor in the Department of Computer Science & Engineering at the University of Washington. 

Leah Findlater is a postdoctoral researcher in The Information School, working with Dr. Jacob Wobbrock. She holds a PhD from the University of British Columbia.

Darren Gergle  is an Associate Professor at Northwestern University and has a PhD from Carnegie Mellon University.

James J. Higgins is currently a Professor at Kansas State University and holds a PhD from the University of Missouri-Columbia.


Location
Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
The Aligned Rank Transform is a useful and easily accessible tool for preprocessing nonparametric data so that it can be analyzed at a level beyond current nonparametric tests.

Method
The ART is a five-step procedure:
1. Compute residuals: for each raw response Y, compute residual = Y - cell mean.
2. Compute estimated effects for main and interaction effects: Ai is the mean response Yi for rows where factor A is at level i; AiBj is the mean response Yij for rows where factor A is at level i and factor B is at level j; and so on.
3. Compute the aligned response Y' = residual + estimated effect.
4. Assign averaged ranks Y'' to the aligned responses.
5. Perform a full-factorial ANOVA on Y''.
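The steps above can be sketched in plain Python for a hypothetical balanced 2x2 design. The data and factor names here are invented for illustration, and the alignment shown targets only the A*B interaction; a real ART analysis repeats the alignment for each effect of interest and then runs the ANOVA.

```python
from statistics import mean

# Toy responses for a hypothetical balanced 2x2 design (factors A and B),
# three replicates per cell.
data = {
    ("a1", "b1"): [3.0, 4.0, 5.0],
    ("a1", "b2"): [6.0, 7.0, 8.0],
    ("a2", "b1"): [2.0, 3.0, 4.0],
    ("a2", "b2"): [9.0, 10.0, 11.0],
}

grand = mean(y for ys in data.values() for y in ys)
a_means = {a: mean(y for (ai, _), ys in data.items() if ai == a for y in ys)
           for a in ("a1", "a2")}
b_means = {b: mean(y for (_, bi), ys in data.items() if bi == b for y in ys)
           for b in ("b1", "b2")}
cell_means = {cell: mean(ys) for cell, ys in data.items()}

def aligned(cell, y):
    a, b = cell
    residual = y - cell_means[cell]                               # step 1
    effect = cell_means[cell] - a_means[a] - b_means[b] + grand   # step 2 (A*B)
    return residual + effect                                      # step 3: Y'

aligned_ys = [aligned(cell, y) for cell, ys in data.items() for y in ys]

# Step 4: averaged ranks Y'' (ties receive the mean of their positions).
ordered = sorted(aligned_ys)
def avg_rank(v):
    positions = [i + 1 for i, x in enumerate(ordered) if x == v]
    return sum(positions) / len(positions)
ranks = [avg_rank(v) for v in aligned_ys]
# Step 5 (not shown): run a full-factorial ANOVA on the ranks, reading off
# only the A*B interaction from this particular alignment.
```

The point of the alignment is that, by stripping out all effects except the one of interest before ranking, an ordinary ANOVA on the ranks becomes a valid nonparametric test of that single effect.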

Results
The authors look into three different ART case studies to show its applicability. One case showed how the use of ART can reveal interaction effects that might not be shown with Friedman tests. Another showed how it can spare analysts the distributional assumptions of ANOVA. The last case showed nonparametric tests of repeated-measures data.

Content
The authors presented their ART tool as an applicable means of nonparametric analysis for factorial experiments. There is an in-depth discussion of the process, and they show examples of how it can be applied effectively to real data.

Discussion
Frankly, this paper was very confusing, and I understood exceptionally little of it due to the large amount of jargon that was passed around in its explanation. From what I can see, I think the authors did a good job of presenting their cases with examples of how their tool can be applied, but I was still very confused about the inner workings they described in the paper.


Paper Reading #19: Reflexivity in Digital Anthropology

Author
Jennifer Rode

Occupation

Jennifer Rode is currently an Assistant Professor at Drexel's School of Information in Pennsylvania. She is also a fellow in Digital Anthropology at University College London. She holds her PhD from the University of California, Irvine.


Location

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC


Summary
Hypothesis
The author believes that digital anthropologists can participate in the HCI field by writing reflexive ethnographies.

Method
The author merely discussed the different aspects of ethnographic views. There was no actual research done to develop a specific product.

Result
The author claims that the job of digital anthropologists is to study the context of technology, not the technology itself. She clarified the various methods of ethnographic study and names several styles of writing: realistic, confessional, and impressionistic.

Content

The paper mentions the various forms of ethnography and how reflexive ethnography can help promote design and theory in the field of HCI. There is a description of three different types of anthropological writing and the important elements of each technique. There is also a mention of how ethnographies are actually used in the HCI design process.


    • Positivist: Data is collected, studied, and tested with the aim of producing an unambiguous result.
    • Reflexivity: According to Burawoy, reflexivity embraces intervention as an opportunity to gather data, aims to understand how the data gathering impacts the data itself, and reflexive practitioners look for patterns and attempt to draw out theories.
    • Realistic: characterized by the need for experimental author(ity), its typical forms, the native's point of view, and interpretive omnipotence.
    • Confessional: broadly provides a written form for the ethnographer to engage with the nagging doubts surrounding the study and discuss them textually, with the aim of demystifying the fieldwork process.
    • Impressionistic: based on dramatic recall and a well-told story.

Discussion
This paper was very different, but I was still interested, mostly because we are all working on our ethnographies as well. I thought that some of the material the author talked about can be applied to our own personal projects in this class.

Paper Reading #18: Biofeedback Game Design

Authors

Lennart E. Nacke, Michael Kalyn, Calvin Lough, and Regan L. Mandryk


Occupation
Lennart E. Nacke is currently an Assistant Professor for HCI and Game Science at the Faculty of Business and Information Technology at UOIT.  He holds a PhD in game development.
Michael Kalyn is currently a graduate student in Computer Engineering at the University of Saskatchewan.  He spent the summer working for Dr. Mandryk in areas related to interfacing sensors and affective feedback.
Calvin Lough is currently a student at the University of Saskatchewan.
Regan L. Mandryk is currently an Assistant Professor in the Interaction Lab in the Department of Computer Science at the University of Saskatchewan.  

Location
CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems, held in Vancouver, BC

Summary
Hypothesis
The authors proposed physiological sensor input to augment game control in both direct and indirect manners.

Method
There were two main questions the researchers wanted to answer: how do users respond when physiological sensors are used to augment the game (not replace controllers), and which type of physiological sensor, direct or indirect, works best for in-game tasks? To test these questions, the researchers designed a shooter game using a traditional controller as the main input, augmented with physiological sensors. The participants played with three different combinations of sensors while using the game controller as the main input: the first two combinations used two direct and two indirect sensors, and the last had no sensors at all. The direct sensors were based on respiration, EMG on the leg, and temperature; the indirect sensors were based on GSR and EKG. The participants filled out a survey after they had played the game.

Result
The participants preferred the sensors over no sensors as long as the sensor inputs matched the inputs in the game; examples include moving the legs to boost jump power. The added involvement from the sensors drew overall positive feedback from the users. Although there was a concern about making the game more complicated, the users commented that the learning curve was a bit high due to some of the extra movements one had to learn; despite this, the participants believed the whole gaming experience was more rewarding. Examples of preferences include EMG for speed and jump boosts, and temperature for controlling the weather and the speed of a yeti.

Content
This paper looks at a field of gaming that still has much room for improvement, as it focuses on possibilities based on physiological interactions. The research examined how people react to different types of physiological sensors and which were preferred by the users in different in-game situations. There was also research comparing the sensor-augmented game with traditional controller-based input. Although the users enjoyed the concept, they believed the learning curve was higher due to some non-intuitive sensor inputs.

Discussion
I was generally very excited about this whole paper, since I do game on occasion. I believe that games utilizing such technology are very possible in the near future, and I believe that it can add another interesting element of gameplay when playing a first-person shooter. I think it would take games with interactions such as the Wii's to a whole new level.


Paper Reading #17: Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment

Authors
Andrew Raij, Santosh Kumar, Animikh Ghosh, and Mani Srivastava 


Occupation

  • Andrew Raij is a Post-Doc Fellow in the Wireless Sensors and Mobile Ad Hoc Networks Lab at the University of Memphis.
  • Santosh Kumar is currently an associate professor at the University of Memphis and leads the WiSe MANet Lab.
  • Animikh Ghosh is currently a Junior Research Associate at Infosys Labs in India and spent time as a researcher in the WiSeMANet Lab at the University of Memphis.
  • Mani Srivastava is currently a professor in the Electrical Engineering Dept. and Computer Science Dept. at UCLA. 

Location

Published in the CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems at NYC


Summary
Hypothesis

The authors believe there is increased concern about potentially private behaviors being exposed and made accessible due to the popularization of sensor-based applications.

Methods
The researchers put the participants into two separate groups. One group acted as the control group and was not monitored, while the other group was monitored for several days and had basic information recorded about them. Both groups completed a survey indicating their thoughts on potentially private behaviors. Only the second group was shown the results of the observation, and conclusions were drawn from their responses. At the end, the participants filled out another survey based on their experience in the experiment.

Result
The data collected showed a lack of concern when private data was related only indirectly or tied to third parties. The monitored group showed more caution and a higher level of concern even after the observation period was over, while the control group showed minimal change. It was also noted that the participants' perception of who would potentially see the private data had a noticeable impact on how they acted; there was general concern about the data being shown to a large number of people or to the public. In the end, data involving stress and conversation periods tended to make people feel most vulnerable.

Content
The authors ran an experiment to see how concerned people are about exposing private information. It also examined people's privacy awareness and their reactions to the kinds of information they might be providing through basic sensors.

Discussion
I was personally a bit disappointed with this paper since there was no actual product to try out. But I still think it was an interesting idea that most people probably haven't thought about. I believe ideas like this can help add more safeguards for individuals' privacy protection in the future.
