February 19, 2009
So here are the results of the machine awareness survey:
- motion sensor (infrared?)
- light intensity sensor
- rain detector
- weather sensors (wind, barometer, humidity, etc.)
- weight-sensitive (like the mats that open supermarket doors)
- traction control
- volume of cell-phone traffic
- accelerometers (Wiimote, MacBook sudden motion sensor)
- accelerometers for tilt detection (turn sideways to change mode)
- Wi-Fi (detecting its presence, or using it for location tracking)
- circuit breaker
- camera + object recognition, facial recognition
- microphone + voice recognition, noise recognition
- switches in the hinge (a MacBook sleeps when closed)
So what is this for?
I’ve become convinced that we are not using all of the resources available to make augmented communication effective and seamless.
Much of our electronic life has become a sort of augmented input and augmented communication. We use GPS and maps to augment our awareness of our surroundings. We text people, allowing us to communicate from anywhere. All of this is a gradual blurring of the boundary between the machine and our reality (cyberspace and meatspace, as it were).
AAC users already rely on machines for communication. They are a generation ahead of the rest of us in merging computer and interpersonal communication. The next step in the evolution of AAC is to make the devices more responsive to all aspects of the environment. Right now, the only inputs to an AAC device are the user and perhaps a programmer (teachers, parents, SLPs). This works, but it requires too much time and energy to input everything involved in communication. How can we make the machine do some of the work?
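To make the idea concrete, here is a minimal sketch of what "the machine doing some of the work" might look like: sensor readings get fused into a coarse context, and the device pre-loads vocabulary for that context. Every sensor name, threshold, and phrase below is invented for illustration — this is a thought experiment, not a real AAC API.

```python
# Hypothetical sketch: fuse simple sensor readings into a context that an
# AAC device could use to surface relevant vocabulary. All sensor names,
# thresholds, and phrases are illustrative assumptions.

def infer_context(readings):
    """Map raw sensor readings to coarse context labels."""
    contexts = []
    if readings.get("light_lux", 100) < 10:   # light intensity sensor
        contexts.append("dark")
    if readings.get("rain", False):           # rain detector
        contexts.append("raining")
    if readings.get("motion", False):         # motion sensor
        contexts.append("someone_nearby")
    return contexts

# Vocabulary the device might pre-load for each context (illustrative only).
VOCAB = {
    "dark": ["lights on, please", "good night"],
    "raining": ["it's raining", "where is my umbrella?"],
    "someone_nearby": ["hello", "can you help me?"],
}

def suggest_phrases(readings):
    """Collect candidate phrases for every inferred context."""
    phrases = []
    for ctx in infer_context(readings):
        phrases.extend(VOCAB.get(ctx, []))
    return phrases

if __name__ == "__main__":
    sample = {"light_lux": 3, "rain": True, "motion": False}
    print(suggest_phrases(sample))
```

The point isn't the specific rules; it's that cheap sensors can narrow the space of likely utterances before the user touches the device at all.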
AAC researchers are not the only ones working on this problem. I am hoping that by looking at how other fields are blurring the boundaries between cyberspace and the real world we can use some of their lessons and tricks.