Microsoft has made its second 3D announcement in nearly as many weeks. Last time, it was a "3D" interface -- actually a large 2D semi-transparent touch panel that uses pre-defined hand motions to trigger zooming, rotation, and other changes to the 3D model, such as color changes and motion toggles.
This week, it's software called Photosynth that combines numerous photographs into a 3D model. (This technology is not new; other companies, such as PhotoModeler, have been doing this for a decade or longer.)
The technology attempts to extract features, such as the corner of a roof, and then links portions of photographs to one another by those features; with enough photographs taken from different angles, the features' 3D coordinates can be determined. "Photosynth’s 3D model is just the cloud of points showing where those features are in space," admit the programmers, and the photos are (partially) overlaid onto the points.
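The core geometry here is old and well understood: once the same feature has been matched across two photos taken from known camera positions, its 3D location can be recovered by triangulation. Below is a minimal sketch of the standard linear (DLT) triangulation step using numpy; the camera matrices, the `triangulate` helper, and the toy scene are my own illustration, not anything from Photosynth's actual code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two images with known 3x4 camera
    matrices P1, P2, by solving A X = 0 for the homogeneous X."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null-space vector of A
    return X[:3] / X[3]     # dehomogenize

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted along x (a stereo pair).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A known "roof corner" in space; project it into both images,
# then recover it from the two image points alone.
X_true = np.array([0.5, 0.2, 4.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
print(triangulate(P1, P2, x1, x2))  # recovers [0.5, 0.2, 4.0]
```

Repeating this for every matched feature across many photos yields exactly the kind of sparse point cloud the programmers describe; in practice the camera positions themselves are unknown and must be estimated jointly with the points (structure from motion), which is the harder part.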
The blog write-up treats us to evocative phrases like "magic" and "image DNA" and "a virtual universe of interconnected scenes that constantly evolves and changes over time." But the sample 3D images at the Weblog look wrecked to me: portions of 2D photos, areas of black, and groups of white dots.
The blog lays out grandiose plans for this software, such as "Photosynth could eventually connect you to everything on the Web related to it." I actually like the concept of photographing a sign or some other puzzling object, submitting it to a Web search engine, and having it tell me what the object is. Unfortunately, this blog's wording sounds too much like Microsoft invented 2D-to-3D photo stitching and is itching to regain its monopoly status.