Microsoft's new Kinect API lets you use gestures to create 3D models of any shape, which can serve as Kinect avatars, among other potential applications. The work is being developed by Microsoft Research's Beijing team and was shown off earlier this month at the company's TechFest 2013.
In the demo, the example avatars still look like blobs of playdough, but the characters should improve as the researchers refine the technology.
Considering what digital artists already create with similar modelling techniques in programs like Blender or Pixologic's ZBrush, it is not much of a stretch to see that Microsoft could be on to a wholly new way to generate 3D content.
With other natural interface devices also coming onto the market, like the much-anticipated Leap Motion, this looks like a promising area for researchers to explore and for user interface designers to develop applications for.
The modelling methods used in the Kinect demo above may soon produce 3D models like this ZBrush example from artist HecM. Such models can also be used in 3D printing applications.
In the meantime, users should be able to create their own BodyAvatar-like tools using some of Kinect's new capabilities. IEEE Spectrum has also compiled a playlist showing some of the possibilities of the new SDK: Kinect's new gesture-recognition interactions, Kinect Fusion's 3D modeling feature, and a demo of Kinect Fusion used for augmented reality.
For those ready to experiment, the new Kinect SDK can be downloaded here.
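To give a sense of what a BodyAvatar-like tool would be built on, here is a minimal sketch using the Kinect for Windows SDK's native skeletal-tracking API (NuiApi.h) to read the positions of a tracked user's hands, the kind of raw gesture input a sculpting tool would map onto mesh deformations. This is an illustrative sketch only: the polling loop, the frame count, and the idea of driving sculpting from hand joints are assumptions here, not code from the BodyAvatar project.

// Minimal sketch: poll the Kinect skeleton stream and read tracked
// hand positions -- the raw input a BodyAvatar-style sculpting tool
// might map onto deformations of a 3D mesh (assumption, not the
// research team's actual implementation).
#include <Windows.h>
#include <NuiApi.h>   // Kinect for Windows SDK native API
#include <cstdio>

int main()
{
    // Initialize the sensor for skeletal tracking only.
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
    {
        printf("No Kinect sensor found.\n");
        return 1;
    }
    NuiSkeletonTrackingEnable(NULL, 0);

    for (int frame = 0; frame < 300; ++frame)   // ~10 seconds at 30 fps
    {
        NUI_SKELETON_FRAME skeletonFrame = {0};
        if (FAILED(NuiSkeletonGetNextFrame(100, &skeletonFrame)))
            continue;

        // Smooth the raw joint data before using it for gestures.
        NuiTransformSmooth(&skeletonFrame, NULL);

        for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
        {
            const NUI_SKELETON_DATA& s = skeletonFrame.SkeletonData[i];
            if (s.eTrackingState != NUI_SKELETON_TRACKED)
                continue;

            // Hand positions in skeleton space (meters from the sensor).
            Vector4 rightHand = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
            Vector4 leftHand  = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT];

            // A real tool would feed these into its sculpting logic,
            // e.g. pushing or pulling mesh vertices near each hand.
            printf("R(%.2f, %.2f, %.2f)  L(%.2f, %.2f, %.2f)\n",
                   rightHand.x, rightHand.y, rightHand.z,
                   leftHand.x,  leftHand.y,  leftHand.z);
        }
    }

    NuiShutdown();
    return 0;
}

Compile against the SDK's include and lib directories and link Kinect10.lib; Kinect Fusion's volumetric reconstruction, shown in the playlist above, would supply the mesh that such hand input could deform.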
SOURCE: IEEE Spectrum
By 33rd Square