33rd Square Business Tools: Microsoft Research

Thursday, August 11, 2016

New System Creates Real-time Performance Capture of Challenging Scenes


3D Scanning

Microsoft is developing new real-time 3D scanning capabilities that could one day let you attend a concert or sporting event live in full 3D, or even communicate in real time with remotely captured people through immersive augmented reality or virtual reality displays.


Researchers at Microsoft have created a system that could be the prototype for a next-generation Kinect camera. Called Fusion4D, the scanning system impressively reconstructs complex 3D scenes digitally, including scenes with more than one person or with animals, and can even capture clothing being put on by an actor.

The researchers have detailed their work in a paper published online.

Fusion4D is the first real-time multi-view non-rigid reconstruction system for live performance capture, claim the researchers. "We have contributed a new pipeline for live multi-view performance capture, generating high-quality reconstructions in real-time, with several unique capabilities over prior work," they conclude.

Fusion4D


Today, most cameras and 3D scanners used for motion capture, like the Kinect sensor, still focus on static, non-moving scenes. This is due to limitations in computational power and the demands placed on software to reconstruct scenes.

For more complex scenes, with moving cameras and many elements, the computer must solve for orders of magnitude more parameters in real time. This typically results in noisy or missing data, choppy motion, and digital artifacts in the output that are not representative of what is being captured in the real world.

Fusion4D, Microsoft Research


"Our reconstruction algorithm enables both incremental reconstruction, improving the surface estimation over time, as well as parameterizing the nonrigid scene motion."
Microsoft's research team also dealt with changing scene topology, such as a person removing a jacket or scarf.
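
Microsoft has not released Fusion4D's code, but the incremental idea builds on volumetric depth-map fusion of the kind popularized by KinectFusion. Purely as a hedged illustration of that underlying idea, not of Fusion4D's actual pipeline, the Python sketch below averages successive depth observations into a truncated signed distance function (TSDF) grid, so the surface estimate improves as frames accumulate; the grid size, truncation band, and orthographic camera are all simplifying assumptions.

```python
import numpy as np

# Minimal TSDF-fusion sketch (illustrative only, not Fusion4D's solver).
# Each depth frame is averaged into a voxel grid; the running weighted
# mean is what makes the surface estimate improve incrementally.

GRID = 32           # voxels per axis (hypothetical resolution)
TRUNC = 0.05        # truncation band, in world units

tsdf = np.ones((GRID, GRID, GRID), dtype=np.float32)      # signed distances
weight = np.zeros((GRID, GRID, GRID), dtype=np.float32)   # per-voxel confidence

def integrate(depth):
    """Fuse one orthographic GRID x GRID depth image into the grid."""
    voxel_size = 1.0 / GRID
    for ix in range(GRID):
        for iy in range(GRID):
            d = depth[ix, iy]                     # observed surface depth
            for iz in range(GRID):
                z = iz * voxel_size
                sdf = np.clip((d - z) / TRUNC, -1.0, 1.0)
                w = weight[ix, iy, iz]
                tsdf[ix, iy, iz] = (w * tsdf[ix, iy, iz] + sdf) / (w + 1.0)
                weight[ix, iy, iz] = w + 1.0

# Two noisy observations of a flat surface at depth 0.5: after fusion,
# the zero crossing of `tsdf` along z approximates the surface.
for _ in range(2):
    integrate(0.5 + np.random.normal(0.0, 0.005, size=(GRID, GRID)))
```

What Fusion4D adds on top of this kind of fusion is a per-frame nonrigid warp field that deforms the accumulated model to match the live frame before fusing, which is what lets it cope with motion and topology changes; that machinery is omitted here entirely.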

The implications of the research are vast. For instance, it could lead to new real-time experiences, such as the ability to watch a remote concert or sporting event live in full 3D, or even to communicate in real time with remotely captured people using immersive augmented reality or virtual reality displays.

The applications could also extend to robotics and machine vision.

With Microsoft's HoloLens system reaching wider deployment now, this last case could lead to some very interesting possibilities.


"As shown, our reconstruction algorithm enables both incremental reconstruction, improving the surface estimation over time, as well as parameterizing the nonrigid scene motion," write the authors."We also demonstrated how our approach robustly handles both large frame-to-frame motion and topology changes. This was achieved using a novel real-time solver, correspondence algorithm, and fusion method."

"We believe our work can enable new types of live performance capture experiences, such as broadcasting live events including sports and concerts in 3D, and also the ability to capture humans live and have them re-rendered in other geographic locations to enable high fidelity immersive telepresence."




SOURCE  Microsoft Research


By 33rd Square


Thursday, February 19, 2015


 Artificial Intelligence
In a video released by Microsoft Research, Eric Horvitz shares his views on the future of artificial intelligence, and there is much to look forward to.





We are entering an era of devices and services that is bringing the dream of artificial intelligence to life. Eric Horvitz, head of the Microsoft Research Redmond lab and a former AAAI president, explains how, by putting people at the center of this work, designers can create products and services that understand our nuances, emotions, and quirky behaviors.

In the video above, Horvitz shares his view that we are in a resurgence of optimism and hope about solving some of the core problems in artificial intelligence, and that there is much to look forward to.

Horvitz believes machines will think original thoughts someday soon. "I often say when talking to folks that AI systems can literally be creative. They can identify new categories, new concepts, synthesize new approaches to challenge problems and come up with new ideas, even new distinctions that weren't told to them originally."

Deep learning methods actually induce structure, according to Horvitz. This type of classification technology is extremely creative, he says.

"That vision, that world, those machines are attainable because I believe this is what's going on in our own minds."


Horvitz believes that fundamentally artificial intelligence in the future will be as aware, sensitive, and in fact, conscious as we are. "That vision, that world, those machines are attainable because I believe this is what's going on in our own minds."

Does this mean we will lose control of future AI systems? Horvitz doesn't think so. He says we will be proactive as we progress in creating them.

In 15 to 20 years, Horvitz sees a world where we will all work quite closely with machine intelligence in our daily lives. This includes personal assistance, reminders, lifelong customized education, and new kinds of entertainment and gaming experiences. "Game AI becomes real AI," in his vision.

In the video, Horvitz talks about the movie Her. "I love certain things about that movie in terms of this notion of a relationship that we all have someday with a machine intelligence," he says. "I love the notion of what a Singularity is. I mean, these systems will have very little time for us if they are greater than human."

Monica virtual assistant

Like Samantha in Her, Horvitz has created his own virtual assistant, which he calls 'Monica.' "She greets me, she recognizes me. I get a cute little smile which is very endearing, I think. I get an update about the day, and I head into my office to work. That kind of thing will be more commonplace someday."


SOURCE  Microsoft Research

By 33rd Square

Monday, July 14, 2014


 Artificial Intelligence
Microsoft has just upped the ante for artificial intelligence.  Project Adam is a new deep-learning system modeled after the human brain that has greater image classification accuracy and is 50 times faster than other systems in the industry.




Microsoft Research has developed a new artificial intelligence system using machine learning, called "Project Adam." The software increases the speed and efficiency with which computers learn. Project Adam draws inspiration from the human brain to absorb new data and teach itself new skills, such as distinguishing among different breeds of dogs.

Project Adam aims to demonstrate that large-scale, commodity distributed systems can train huge deep neural networks effectively. For proof, the researchers created the world’s best photograph classifier, using 14 million images from ImageNet, an image database divided into 22,000 categories.

The system was demonstrated at Microsoft’s Faculty Summit in Redmond recently (video below), as Microsoft brought out several different breeds of dogs on stage and showed how the technology could automatically distinguish among them in real time, using computer vision and insights from large sets of data.  The system was integrated into Cortana, Microsoft's digital assistant platform.

Microsoft says Project Adam has achieved breakthroughs in machine learning by using distributed networks and an asynchronous technique that improves the overall efficiency and accuracy of the system over time. This is a critical area of technology as Microsoft and other companies race to build intelligent, predictive systems that leverage mobile technologies and the cloud.
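
Project Adam's training system itself is not public, so the following Python sketch is only a loose analogy for the asynchronous style described above: several worker threads compute gradients on their own data shards and apply lock-free updates to a shared weight vector, in the spirit of Hogwild-style asynchronous SGD rather than Adam's actual parameter-server design. The toy least-squares task and all names are hypothetical.

```python
import threading
import numpy as np

# Toy asynchronous SGD: worker threads update a shared weight vector
# without locks, loosely analogous to the asynchronous updates described
# for Project Adam (the real system distributes this across machines).

rng = np.random.default_rng(0)
true_w = rng.normal(size=8)
X = rng.normal(size=(4000, 8))
y = X @ true_w

w = np.zeros(8)   # shared model, mutated concurrently by all workers
LR = 0.01
SHARD = 1000      # examples per worker

def worker(wid):
    global w                                   # shared, updated in place
    local = np.random.default_rng(wid)         # per-worker RNG
    Xs = X[wid * SHARD:(wid + 1) * SHARD]
    ys = y[wid * SHARD:(wid + 1) * SHARD]
    for _ in range(500):
        i = local.integers(SHARD)
        grad = (Xs[i] @ w - ys[i]) * Xs[i]     # single-example gradient
        w -= LR * grad                         # lock-free update

threads = [threading.Thread(target=worker, args=(k,)) for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to true weights:", np.linalg.norm(w - true_w))
```

The point of the asynchrony, as described for Project Adam, is throughput: workers never wait on one another, and occasional stale updates are tolerated because they tend to wash out statistically.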

Where Google's Neural Networks Recognized Cats, Microsoft Sees Dogs, and Faster

"Project Adam knows dogs. It can identify dogs in images. It can identify kinds of dogs. It can even identify particular breeds, such as whether a corgi is a Pembroke or a Cardigan."

Now, if this all sounds vaguely familiar, that’s because it is—vaguely. A couple of years ago, Google used a network of 16,000 computers to teach itself to identify images of cats. That is a difficult task for computers, and it was an impressive achievement.

"We wanted to build a highly efficient, highly scalable distributed system from commodity PCs that has world-class training speed, scalability, and task accuracy for an important large-scale task."


According to Microsoft Research, Project Adam is 50 times faster and more than twice as accurate, as outlined in a paper currently under academic review. In addition, it is more efficient, using 30 times fewer machines, and more scalable, areas in which the Google effort fell short.

“We wanted to build a highly efficient, highly scalable distributed system from commodity PCs that has world-class training speed, scalability, and task accuracy for an important large-scale task,” says Trishul Chilimbi, one of the Microsoft researchers who spearheaded the Project Adam effort. “We focused on vision because that was the task for which we had the largest publicly available data set.”

“We tend to overestimate the impact of disruptive technologies in the short term and underestimate their long-term impact—the Internet being a good case in point. With deep learning, there’s still a lot more to be done on the theoretical side," Chilimbi says.



SOURCE  Microsoft Research

By 33rd Square

Wednesday, May 22, 2013


The SIGGRAPH Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. SIGGRAPH 2013 brings together thousands of computer graphics professionals to share and discuss their work.



The SIGGRAPH 2013 Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. The 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, 21-25 July 2013 at the Anaheim Convention Center in California, received submissions from around the globe and features high-quality, never-before-seen scholarly work. Submitters are held to extremely high standards in order to qualify.

“Computer Graphics is a dynamic and ever-changing field in many ways,” says Marc Alexa, SIGGRAPH 2013 Technical Papers Chair from Technische Universität Berlin. “The range of ground-breaking papers presented at SIGGRAPH is getting broader every year, now also encompassing 3D printing, and fabricating realistic materials as well as generating ever more realistic images of complex phenomena.”

SIGGRAPH accepted 115 technical papers (out of 480 submissions) to showcase this year, representing an acceptance rate of 24 percent (one percent higher than 2012). The selected papers were chosen by a distinguished committee of academic and industry experts.

This year's Technical Papers program also includes conference presentations for 37 papers published this year in the journal ACM Transactions on Graphics (TOG).

Highlights from this year's SIGGRAPH 2013 Technical Papers program include:

OpenFab: A Programmable Pipeline for Multi-Material Fabrication
Authors: Kiril Vidimce, Szu-Po Wang, Jonathan Ragan-Kelley and Wojciech Matusik, Massachusetts Institute of Technology CSAIL

Open Fab

This paper proposes a programmable pipeline, inspired by RenderMan, for synthesis of multi-material 3D printed objects. The pipeline introduces user-programmable fablets, an analogue of procedural shaders for 3D printing, and is designed to stream over arbitrary numbers of voxels with a fixed and controllable memory footprint.
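
OpenFab specifies fablets in its own shader-like language; purely as a toy analogy in Python, the sketch below evaluates a hypothetical per-voxel fablet one z-slice at a time, so memory stays fixed no matter how many voxels the object has. The resolution, the striped two-material rule, and the `emit` callback are all invented for illustration.

```python
import numpy as np

# Toy analogy to OpenFab's fablets: a per-voxel procedure picks the
# material, and evaluation streams slice by slice so only one slice is
# ever resident, mimicking the fixed-memory-footprint streaming idea.

RES = (64, 64, 64)   # hypothetical voxel resolution (x, y, z)

def striped_fablet(x, y, z):
    """Alternate between material ids 0 and 1 in stripes along z."""
    return int(z * 20) % 2

def stream_voxels(fablet, res, emit):
    nx, ny, nz = res
    for iz in range(nz):
        slab = np.empty((nx, ny), dtype=np.uint8)  # one z-slice only
        for ix in range(nx):
            for iy in range(ny):
                slab[ix, iy] = fablet(ix / nx, iy / ny, iz / nz)
        emit(iz, slab)   # hand the finished slice to the printer backend

stream_voxels(striped_fablet, RES, lambda iz, slab: None)
```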

Opacity Optimization for 3D Line Fields
Authors: Tobias Günther, Christian Roessl, and Holger Theisel, Otto-von-Guericke-Universität Magdeburg

Opacity Optimization for 3D Line Fields

For visualizing dense line fields, this method selects lines by view-dependent opacity optimizations and applies them to real-time free navigation in flow data, medical imaging, physics, and computer graphics.

AIREAL: Interactive Tactile Experiences in Free Air
Authors: Rajinder Sodhi, University of Illinois; Ivan Poupyrev, Matthew Glisson, Ali Israr, Disney Research, The Walt Disney Company

AIREAL: Interactive Tactile Experiences in Free Air

AIREAL is a tactile feedback device that delivers effective and expressive tactile sensations in free air, without requiring the user to wear a physical device. Combined with interactive graphics and applications, AIREAL enables users to feel virtual objects, experience free-air textures and receive haptic feedback with free-space gestures.

Bi-Scale Appearance Fabrication
Authors: Yanxiang Lan, Tsinghua University; Yue Dong, Microsoft Research Asia; Fabio Pellacini, Sapienza Università di Roma, Dartmouth College; Xin Tong, Microsoft Research Asia

Bi-Scale Appearance Fabrication

A system for fabricating surfaces with desired spatially varying reflectance, including anisotropic ones, and local shading frames.

Map-Based Exploration of Intrinsic Shape Differences and Variability
Authors: Raif Rustamov, Stanford University; Maks Ovsjanikov, École Polytechnique; Omri Azencot, Mirela Ben-Chen, Technion - Israel Institute of Technology; Frederic Chazal, INRIA Saclay - Île-de-France; and Leonidas Guibas, Stanford University

Map-Based Exploration of Intrinsic Shape Differences and Variability

A novel formulation of shape differences, aimed at providing detailed information about the location and nature of the differences or distortions between the shapes being compared. This difference operator is much more informative than a scalar similarity score, so it is useful in applications requiring more refined shape comparisons.
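
For readers curious about the underlying machinery: this work sits in the functional-map framework, where a map between two shapes is encoded as a linear operator acting on functions. The LaTeX sketch below gives a hedged outline of how a "shape difference" can be an operator rather than a number; the notation is ours and may differ from the paper's own formulation.

```latex
% Hedged sketch in the functional-map framework; notation is
% illustrative and may differ from the paper's.
% A map T : N -> M induces a linear "functional map" on functions:
\[
  C \colon \mathcal{F}(M) \to \mathcal{F}(N), \qquad C f = f \circ T .
\]
% An area-based shape-difference operator can then be formed as
\[
  D_A := C^{*} C \colon \mathcal{F}(M) \to \mathcal{F}(M),
\]
% where $C^{*}$ is the adjoint with respect to area-weighted inner
% products. $D_A$ equals the identity exactly when $T$ preserves areas,
% and its deviation from the identity localizes where and how the shapes
% differ, which is why an operator carries more information than a
% single similarity score.
```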

Highly Adaptive Liquid Simulations on Tetrahedral Meshes
Authors: Ryoichi Ando, Kyushu University; Nils Thuerey, ScanlineVFX GmbH; and Chris Wojtan, Institute of Science and Technology Austria

Highly Adaptive Liquid Simulations on Tetrahedral Meshes

This new method for efficiently simulating liquids with extreme amounts of spatial adaptivity combines several key components to produce a simulation algorithm capable of creating animations at high effective resolutions while avoiding common pitfalls like inaccurate boundary conditions and inefficient computation.

SIGGRAPH 2013 will bring thousands of computer graphics and interactive technology professionals from five continents to Anaheim, California for the industry's most respected technical and creative programs focusing on research, science, art, animation, music, gaming, interactivity, education, and the web from Sunday, 21 July through Thursday, 25 July 2013 at the Anaheim Convention Center. SIGGRAPH 2013 includes a three-day exhibition of products and services from the computer graphics and interactive marketplace from 23-25 July 2013.

More details are available at SIGGRAPH 2013 or on Facebook and Twitter.



SOURCE  SIGGRAPH 2013

By 33rd Square