The Australian Centre for Visual Technologies has worked out a way, called VideoTrace, to generate 3D models from multiple frames of a video.
The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model.
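The project page doesn't spell out how that interpretation works, but here is one plausible sketch of the kind of geometry involved, assuming the system has already recovered camera parameters and a sparse 3D point cloud from the video (standard structure-from-motion output): fit a plane to the reconstructed points near the traced region, then lift each traced 2D point to 3D by intersecting its viewing ray with that plane. All names below are mine, not VideoTrace's.

```python
# Hypothetical sketch of lifting a user-traced 2D point to 3D, assuming a
# sparse point cloud and camera calibration already exist from
# structure-from-motion. Toy code, not VideoTrace's actual implementation.
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                     # direction of least variance
    return centroid, normal

def lift_to_3d(pixel, K, R, t, plane_point, plane_normal):
    """Intersect the viewing ray through `pixel` with the fitted plane.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    cam_center = -R.T @ t               # camera centre in world coordinates
    ray_dir = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    lam = plane_normal @ (plane_point - cam_center) / (plane_normal @ ray_dir)
    return cam_center + lam * ray_dir

# Toy usage: reconstructed points scattered on the plane z = 5,
# camera sitting at the origin looking down +z.
cloud = np.column_stack([np.random.uniform(-1, 1, 50),
                         np.random.uniform(-1, 1, 50),
                         np.full(50, 5.0)])
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
p0, n = fit_plane(cloud)
print(lift_to_3d((400, 300), K, R, t, p0, n))   # a 3D point on z = 5
```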
Makes sense to me. As the video camera moves around an object, it captures the object from many viewpoints in 2D. The tricky part is combining all those 2D views into a single 3D model, although that part of the technology doesn't strike me as particularly new.
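For the curious, the core geometric step is textbook material: once you know where the camera was for each frame, a point seen in two frames can be triangulated back to a single 3D position. A minimal NumPy sketch of linear (DLT) triangulation, my own toy code rather than anything from the project:

```python
# Linear (DLT) triangulation: recover a 3D point from its projections in
# two frames with known 3x4 camera projection matrices. Toy code only.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """x1, x2 are (u, v) pixel coordinates of the same point in two frames."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                 # homogeneous -> Euclidean

# Two cameras with identical intrinsics; the second is shifted 1 unit along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.2, 0.1, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))      # ~ [0.2, 0.1, 5.0]
```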
What might be new is how compact the capture hardware can be. 3D capture rigs have tended to be bulky, but now even my less-than-an-inch-thick Samsung digital camera could do the job, through its 30 fps 720 video mode.
VideoTrace is in the prototype stage. The centre is looking for employees, as well as industrial applications, and can be contacted by email.