July 30, 2009
Can we teach computers to co-construct in the way that people do? The MIT Media Lab thinks so.
July 26, 2009
The buzzword of the late ’00s is Augmented Reality.
Healthcare Robotics is working on an interface that uses the real world as the display: you “click” on objects in the environment with a laser pointer, and the computer recognizes them. In their case, the click instructs a robot to go and interact with the object (to fetch it or whatever). My interest is in having the computer recognize the object and make vocabulary related to that object available on an AAC system, without the user having to navigate a “tree” of options.
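Here is a minimal sketch of what I mean, in Python. The object labels, word lists, and fallback set are all hypothetical placeholders I made up for illustration, not anyone's actual system:

```python
# Hypothetical sketch only: recognizing an object surfaces its related
# vocabulary directly, with no tree navigation. These labels and word
# lists are invented for illustration.

CONTEXT_VOCAB = {
    "cup":  ["drink", "thirsty", "water", "more", "hot", "cold"],
    "book": ["read", "turn the page", "again", "all done"],
    "door": ["open", "close", "go out", "who's there?"],
}

CORE_WORDS = ["yes", "no", "help", "stop"]  # always-available fallback

def vocab_for(label):
    """Return the word set for a recognized object, plus core words."""
    return CONTEXT_VOCAB.get(label, []) + CORE_WORDS

# The user laser-clicks a mug and the recognizer reports "cup":
print(vocab_for("cup"))
# -> ['drink', 'thirsty', 'water', 'more', 'hot', 'cold', 'yes', 'no', ...]
```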
Obviously, navigating the organization system is still necessary for discussing things that are not in the room. But in speech we reference things with pointing and eye gaze all the time, as a way to focus a person’s attention on what we are talking about. (Having two people “on the same page” about what they are attending to and thinking about is called intersubjectivity, and it is critical to all human communication.)
With cameras being built into electronic devices cheaply (again, in AAC, the Tango leads the way), it is just a matter of having the device pay attention all the time instead of stupidly waiting for input.
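To make “paying attention” concrete, here is a rough Python loop. The camera, recognizer, and device objects are invented stand-ins, not any real device’s API:

```python
import time

def attend(camera, recognizer, device, poll_hz=2):
    """Watch the scene continuously instead of waiting for input.

    `camera`, `recognizer`, and `device` are hypothetical interfaces
    standing in for whatever the hardware actually exposes.
    """
    last_label = None
    while True:
        frame = camera.capture()               # grab the current view
        label = recognizer.identify(frame)     # e.g. "cup", or None
        if label and label != last_label:
            device.suggest_vocabulary(label)   # surface related words now
            last_label = label
        time.sleep(1.0 / poll_hz)              # check a couple times a second
```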