When we look at wearable computers—smart glasses like Google Glass or smart watches like Pebble—we tend to think of tiny mobile computers that are somehow attached to our bodies or clothing.
The core attributes, we believe, are miniaturization and convenience. A smart watch, for example, saves you the trouble of pulling a phone out of your pocket to check your notifications. But these qualities are irrelevant in the face of the real revolution. Wearable computers will find out what you want to know, then make you know it.
This week, a company called Refresh introduced a Google Glass “glassware” app of the same name. It points toward how wearable devices could help you learn more about the people you meet—filling you in on whether you’ve met them before or share common acquaintances, interests, or history.
It’s a small thing. But imagine a future where anything you might want to know simply appears to you, without any action or effort on your part. You could be eating in a restaurant, and Google Glass could tell you that it’s the spot where your father proposed to your mother. Or that your friend will be late because of traffic, that the salmon got bad reviews online, that your parking meter will expire in 20 minutes, or that the bathroom is through the bar and up the stairs to the right. Imagine that such knowledge could simply appear in your field of vision at the exact moment you want it. That’s where wearable computing is going, and that’s why the wearable revolution is mostly an artificial intelligence revolution. What’s really interesting about wearable computing is the work that back-end servers do to figure out what you want to know and then acquire that knowledge. The delivering-it-to-your-brain part is relatively trivial.
Read more at TechHive.