Motoman Inc. and Universal Robotics are combining breakthrough software and industrial robots to make materials handling applications more accurate, cost-effective and “human.”
The partnership, announced Oct. 1, will integrate Nashville-based Universal’s Spatial Vision self-calibrating 3D vision software into Motoman’s industrial robots, with the combined system set to launch in the materials handling market in early 2010.
About three years ago Motoman, a subsidiary of Japan’s Yaskawa Electric Corp., introduced the first and only two-armed robot with 15 axes of motion. The actuator-based design of the SDA10D (see SDA10D sidebar) forced the company to rethink its typical industrial applications, because a dual-armed assembly robot needs advanced smarts to tackle new, human-like tasks.
“So, 18 months ago we met with Universal Robotics and they got very excited about our arm. We realized it was the perfect match,” says Roger Christian, vice-president of marketing at Motoman.
Universal Robotics’ software was initially developed through research at Nashville’s Vanderbilt University and at NASA, where it has served as the “brain” of a humanoid robot for years. (A humanoid robot is modelled on the human body so it can interact with made-for-human tools and environments.) The company, less than two years old, now creates technology that allows moving machines to learn from their experiences in order to perform tasks that are unsafe or difficult for humans.
Spatial Vision, a spin-off of Universal Robotics’ core product Neocortex (an algorithm that learns from experience based on sensor data), is a 3D vision system built around very low-cost web cams.
“3D vision is typically going to cost you between $35,000 and $45,000, and that’s sometimes more than the robot itself,” says Christian. “It may not mimic exactly the accuracy of a very expensive 3D system, but with some techniques we’re trying to incorporate, it might just be good enough, at a very low price point, to give more access to our customer base.”
The Spatial Vision system automatically identifies any dynamic point using inexpensive web cams to deliver accurate, full-frame colour results at 960 by 720 pixels, four to five times per second.
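The article does not describe Spatial Vision’s internals, but low-cost two-camera 3D systems like the one described typically recover depth by triangulation: the same point appears shifted between the two webcam images, and that shift (disparity) determines distance. The sketch below illustrates the standard relationship; the focal length, baseline and disparity values are assumptions for illustration only.

```python
# Illustrative sketch of stereo triangulation, the usual basis for
# inexpensive two-camera 3D vision. Not Spatial Vision's actual algorithm;
# all numbers below are assumed for the example.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_px     -- focal length in pixels (obtained from calibration)
    baseline_m   -- distance between the two cameras, in metres
    disparity_px -- horizontal pixel shift of the same point between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# An 800-pixel focal length, a 12 cm camera baseline and a 20-pixel
# disparity place the observed point 800 * 0.12 / 20 = 4.8 m away.
print(depth_from_disparity(800.0, 0.12, 20.0))  # 4.8
```

The same relationship explains why webcam-grade systems trade accuracy for cost: disparity is measured in whole pixels, so small errors in the shift translate into larger depth errors at long range.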