A soldier in the field with little or no technical training could fix a piece of high-tech weaponry, a senior could manage a complex health-monitoring device, and a newlywed couple could be coached through complicated IKEA instructions, all without the help of another human.
Researchers at the Carnegie Mellon School of Computer Science are using wearable technologies like Google Glass to place an "angel" on a user's shoulder that guides them through exactly those kinds of tasks.
“If you are like me, every time you try to put one of these things together, you make a mistake,” said CMU professor Mahadev Satyanarayanan, who helped pioneer mobile computing. “You discover it six steps later. It’s a real nuisance to have to undo that work and try to get back.”
Imagine how much easier it would be, Satyanarayanan said, “if, like your GPS navigation system, it takes you step by step. If you make a mistake, it catches it immediately: ‘No, not that screw. The short one.’”
Satyanarayanan worked with graduate student Zhuo Chen to illustrate how it works. Testers wore Google Glass, which guided them through building a human figure out of LEGOs and constructing a sandwich using plastic play food.
The device -- in this case, a cell phone -- prompted Chen as he added to the sandwich. If a step is done correctly, the app moves on to the next one. If a step is done wrong, the app offers a solution and instructs the user to try again.
The video, audio and any other important data gathered by the phone or the wearable device are sent to a cloudlet, which Satyanarayanan describes as the nearest small cloud computer.
“And running on it are algorithms that essentially process that sensor data faster than you can process it,” Satyanarayanan said.
Without that assistance from the cloud, a cell phone or wearable device would not have the computing power needed to complete its task.
But that’s where the project stumbles a bit, waiting for digital infrastructure to improve. Satyanarayanan said the round-trip delay between an average mobile data user and an Amazon cloud computer is around 100 milliseconds. That’s not a problem for someone building furniture or a sandwich, but it could be for a fast-paced activity like ping pong, in which the computer instructs the player where to hit the ball. It just so happens that Chen is working on that very project.
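The latency argument above can be made concrete with some simple arithmetic. A sketch follows; the ~100 ms cloud round trip is the figure from the article, but the processing times and reaction deadlines are illustrative assumptions, not measurements from the Gabriel project.

```python
# Latency-budget sketch: can a remote server respond in time to coach a task?
# Only the ~100 ms cloud round trip comes from the article; the processing
# time and the per-task deadlines below are assumed for illustration.

def feedback_fits(round_trip_ms, processing_ms, deadline_ms):
    """Return True if network delay plus compute time meets the deadline."""
    return round_trip_ms + processing_ms <= deadline_ms

# Assembling furniture: a one-second deadline is generous (assumed).
print(feedback_fits(100, 50, 1000))  # distant cloud is fine -> True

# Ping pong: suppose guidance must arrive within ~100 ms (assumed).
print(feedback_fits(100, 50, 100))   # distant cloud misses the window -> False
print(feedback_fits(10, 50, 100))    # a nearby cloudlet makes it -> True
```

The point of the sketch is that shrinking the network term is the only lever for fast-paced tasks, which is why a nearby cloudlet matters more than raw cloud horsepower.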
Chen and Satyanarayanan named the application Gabriel after the angel described by Christians and Jews as the messenger of God. They said the technology could eventually lend itself to commercial uses, like repairing complex manufacturing equipment on the spot rather than shutting down an operation while a technician is called in from a distant office.
For now, the project remains in development. Google Glass already offers a hands-free recipe app for home chefs, but this technology would also give step-by-step advice, just as a good friend would.
“He or she would say, ‘Wait, don’t put the stuff in yet. The oil is not hot enough,’” Satyanarayanan said.
He said it’s not an attempt to turn humans into robots. Instead, it takes advantage of a human’s ability to carry out spoken commands and react to changing situations, something a pre-programmed robot cannot do.
The research is funded by a $2.8 million grant from the National Science Foundation.