Now this is all at the speculation stage, folks, or maybe a step or two later. You might recall my interview in upFront.eZine #713 with Geomagic, in which senior product manager Kevin Scofield said his company was interested in going down-market:
The company is thinking of getting into the consumer market, but is waiting to see which technology pulls ahead. "We're all carrying scanners around in our pocket, and now we need software to stitch it together," said Mr Scofield.
In context, it sounded to me as if the camera of an Android or other smartphone would capture images that were then somehow turned into a 3D model. This week, we learn 'tis not.
Instead, it's images from the Xbox 360's Kinect that are being stitched together. The Kinect is a cheap 3D scanner used for interactive video games. It sprays an array of infrared dots, and then collects the locations of 3D points in space; computer games act on the movement of the dots relative to one another. Ever since its release, geeks have been hacking the Kinect to create all kinds of interesting effects, despite Microsoft initially prohibiting such hacks. (Microsoft has since lifted its ban.)
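As background, the Kinect's depth camera reports a distance for each pixel, and turning that into 3D points is a standard pinhole-camera back-projection. Here is a minimal sketch; the focal lengths and principal point are illustrative placeholder values, not the Kinect's actual calibrated intrinsics:

```python
# Back-project a depth map into a 3D point cloud with the pinhole camera model.
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
# These intrinsics are illustrative, not calibrated Kinect values.

def depth_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """depth is a 2D list of distances in metres; returns (x, y, z) tuples."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # zero marks "no return" from the infrared projector
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x2 depth map: one invalid pixel, three valid ones.
cloud = depth_to_points([[0.0, 1.0], [2.0, 1.5]])
print(len(cloud))  # prints 3
```

The invalid-pixel check matters in practice, because the infrared dots scatter off shiny or distant surfaces and leave holes in the depth map.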
Geomagic is adapting its Studio software to read the 3D point data generated by Kinect. I write in the present tense because the software is "at an early stage of development." The idea is that the data generated by Kinect is tessellated sufficiently for 3D printers to output whatever was scanned. Well, within limitations: the Kinect can scan room-sized objects, but 3D printers output only breadloaf-size ones.
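Geomagic hasn't said what file format its Kinect-derived meshes will take, but a tessellated surface is just a list of triangles, and STL is the de facto input format for 3D printers. A hypothetical sketch of what writing such a mesh out might look like, using the ASCII variant of STL:

```python
# Write a list of triangles to an ASCII STL file, the common input
# format for 3D printers. Each triangle is three (x, y, z) vertices;
# the facet normal is computed from the cross product of two edges.

def write_stl(path, triangles, name="scan"):
    def sub(a, b):
        return tuple(a[i] - b[i] for i in range(3))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    with open(path, "w") as f:
        f.write("solid %s\n" % name)
        for v0, v1, v2 in triangles:
            n = cross(sub(v1, v0), sub(v2, v0))
            length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5 or 1.0
            n = tuple(c / length for c in n)  # unit-length facet normal
            f.write("  facet normal %f %f %f\n" % n)
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write("      vertex %f %f %f\n" % v)
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write("endsolid %s\n" % name)

# One triangle in the z=0 plane.
write_stl("tri.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

A real scan would of course contain thousands of facets, and the hard part (the part Geomagic's software does) is deciding how to connect the raw points into those triangles in the first place.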
A two-minute video at http://www.youtube.com/watch?v=0RsIUI0XVy0 manages to leave out nearly all technical information, but does show the process: (a) scanning with Kinect, near instantly; (b) converting the scan data to surfaces with Geomagic, in about 18 seconds; and (c) printing a 3"-tall monochrome model, in an hour.
The company says it'll post updates at http://geomagic.com/en/community/blog