http://www.hplusmagazine.com/articles/ai/games-design-themselves

Can we teach computers to co-construct in the way that people do? The MIT Media Lab thinks so.


Clickable reality

July 26, 2009

The buzzword of the late ’00s is Augmented Reality.

The Healthcare Robotics Lab is working on an interface that uses the Real World as the display. You can “click” on objects in the environment with a laser and have the computer recognize them.

In this case, it instructs a robot to go and interact with the object (to fetch it or whatever). My interest is in having the computer recognize the object and make vocabulary related to that object available on an AAC system without the user having to navigate a “tree” of options.
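Here is a minimal sketch of the idea. Everything in it is illustrative rather than part of any real AAC API: I am assuming a hand-built mapping from recognized object labels to word lists, and a hypothetical on_object_selected() hook that fires when the user “clicks” something in the room.

```python
# Sketch: surface context vocabulary when an object is "clicked" in the world.
# The labels, vocabulary sets, and hook below are all hypothetical.

OBJECT_VOCAB = {
    "cup": ["drink", "thirsty", "water", "more", "all done"],
    "book": ["read", "page", "turn", "again", "favorite"],
    "ball": ["throw", "catch", "roll", "my turn", "your turn"],
}

def on_object_selected(label: str) -> list[str]:
    """Called when the user selects an object (e.g., with a laser pointer).

    Returns the vocabulary to put on screen immediately, instead of making
    the user walk a category tree to find the same words.
    """
    return OBJECT_VOCAB.get(label, [])

# Example: the camera recognizes the lasered object as a cup.
print(on_object_selected("cup"))
# ['drink', 'thirsty', 'water', 'more', 'all done']
```

The point of the lookup is speed: the environment itself does the navigation, and the tree only has to be walked for things that are not present.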

The world is your display and input device.

Obviously, navigating the organization system is still necessary for discussing things that are not in the room. But in speech, we reference things with pointing and eye gaze all the time as a way to focus a person’s attention on what we are talking about (having two people “on the same page” as to what they are attending to and thinking about is called intersubjectivity; it is critical to all human communication).

With cameras being built cheaply into electronic devices (again, in AAC, the Tango leads the way), it is a matter of having the device pay attention all the time instead of stupidly waiting for input.
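A minimal sketch of that always-on loop, assuming OpenCV (cv2) for camera access; detect() is a hypothetical stand-in for a real object recognizer, not an existing library call:

```python
# Sketch: a device that attends continuously instead of waiting for input.
import cv2  # assumes OpenCV is installed

def detect(frame):
    """Stub recognizer: return labels of objects visible in the frame."""
    return []  # a real system would run a vision model here

cap = cv2.VideoCapture(0)  # the device's built-in camera
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # The device watches every frame, so recognized objects can
        # surface vocabulary the moment they appear:
        for label in detect(frame):
            print("Surface vocabulary for:", label)
finally:
    cap.release()
```

The design choice is simply polling the camera continuously rather than binding recognition to an explicit user action; the laser “click” then becomes one trigger among many.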

Metaplay 3

July 17, 2009

Turns out, I’m not quite ready to do the play vocab list yet: I haven’t introduced the whole framework. I should explain it before I fill it in. There are a bunch of categories I need to outline and define.

The whole thing makes something of a matrix when it is done. This is a rough draft: you are seeing it pretty much at the same time as I am, so wish me luck.

Here we go.

[Image: pretend play matrix, rough draft]

We can improve on this.
