33rd Square Business Tools: computer graphics
Showing posts with label computer graphics. Show all posts

Sunday, June 12, 2016

This Cool Video Short Displays Incredible Array of Computer Graphics


Computer Graphics

The studio Method Design was tapped by production company RSA to concept and create this year’s AICP Sponsor Reel. The AICP awards celebrate global creativity within commercial production. Method Design wanted to create an entertaining piece of design that encapsulates the innovative and prolific nature of this industry.


"Our aim was to showcase the AICP sponsors as various dancing avatars, which playfully reference the visual effects used throughout production. Motion capture, procedural animation and dynamic simulations combine to create a milieu of iconic pop dance moves that become an explosion of colorful fur, feathers, particles and more,"write the creators of the creative new video short.





SOURCE  Method Design


By 33rd Square


Tuesday, March 15, 2016

Unity Engine's Latest Demo Points to a Vanishing Uncanny Valley


Computer Graphics

In the first part of the real-time rendered short film “Adam”, the developers of the Unity engine have broken new ground in realistic 3D graphics. Adam illustrates the high-quality systems that are becoming available to gaming, VR and other applications.



"Adam" (video below) relates the tale of a humanoid robot coming to life in a dark vessel, and then being released out into the wild with hundreds of others like him. The robot's breathing indicates that there may be more to the story. Is Adam robotic consciousness, or an uploaded human being?

Unity Adam


The production of this sci-fi short was used to stress test the beta versions of Unity 5.4 and the cinematic sequencer currently in development. The team also used an experimental implementation of real time area lights.

"Adam" makes extensive use of the high fidelity physics simulation tool CaronteFX.

The Demo team also created some custom tools on top of the Unity engine to cover specific production needs. For this project they needed volumetric fog, a transparency shader, and motion blur – to name a few. These tools are expected to be released soon as well.

Unity GDC demo - Adam - Part I


The full length movie will be shown at Unite Europe 2016 in Amsterdam. We look forward to it!





SOURCE  Unity


By 33rd Square


Monday, June 15, 2015


 Computer Graphics
A newly developed technology that uses an inexpensive facial mapping method can recreate and replace faces of celebrities, but the technology may also have applications in creating digital avatars and mindclones.





Soon, nearly everyone with a computer will be able to create inexpensive, controllable computer models of famous people’s faces in 3D just using online photos of celebrities, or anyone.

"This capability opens up the ability to create puppets for any photo collection of a person, without requiring them to be scanned," write the researchers from the University of Washington.

The researchers also were able to recreate idiosyncrasies, like facial tics, of the celebrities they modeled, including Kevin Spacey and Daniel Craig. This work could let us interact with digital avatars that look and act like people we know.

Celebrity Mindclones

Typically, creating CGI models of a human face is expensive and requires laser scanning and motion capture technology. The researchers postulated that for celebrities, there are more than enough paparazzi photographs online to capture digitally what they look like from just about every possible angle.

"The idea was to create realistic virtual models of people just from photos rather than complex lab set-ups," says researcher Ira Kemelmacher-Shlizerman. Led by Supasorn Suwajanakorn, the team collected around 200 photos each of various famous people from Google Images, taken in different poses and at varying angles.

"Don't underestimate how much people want to be someone else."


The photos were analysed using face-tracking software, and a realistic 3D model of the subject's face and head was created. Further analysis added wrinkles and textures that appear and disappear as expressions change.
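For readers curious about the mechanics, the sketch below illustrates the general idea in highly simplified form: fit a linear 3D face model (a "morphable model") to 2D landmarks detected across many photos by least squares. This is our own illustrative assumption, not the University of Washington team's actual pipeline, and it presumes landmark detection and per-photo camera poses have already been computed.

import numpy as np

def fit_shape_from_photos(landmarks_2d, mean_shape, shape_basis, projections):
    """Hypothetical sketch: recover a 3D face shape from many photos.

    landmarks_2d : list of (K, 2) arrays, detected landmarks per photo
    mean_shape   : (K, 3) mean 3D landmark positions of a morphable model
    shape_basis  : (M, K, 3) linear shape basis (e.g. PCA modes)
    projections  : list of (2, 3) camera/pose matrices, one per photo
    """
    rows, rhs = [], []
    for lm, P in zip(landmarks_2d, projections):
        # Each photo constrains the coefficients c via
        #   P @ (mean + sum_m c_m * basis_m) ~= observed 2D landmarks
        base = (P @ mean_shape.T).T                       # (K, 2) projected mean
        A = np.stack([(P @ B.T).T.ravel() for B in shape_basis], axis=1)
        rows.append(A)
        rhs.append((lm - base).ravel())
    coeffs, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return mean_shape + np.tensordot(coeffs, shape_basis, axes=1)  # fitted 3D shape

Averaging constraints over a couple of hundred photos is what lets a photo-only approach substitute for a lab scan; the real system additionally recovers per-person texture and expression detail.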

"The result is a full 3D model you can turn right around," says Suwajanakorn.

As well as being able to manipulate the digital puppets any way they wanted, the team found they could realistically switch an actor's face with that of another – allowing them to be replaced throughout an entire TV show or movie, for example.

A real potential commercial application could be to provide famous animated faces for visual versions of Siri-like assistants, says Suwajanakorn. Another might be to automatically create 3D faces for telepresence robots, says Nicole Carey at humanoid robot maker Engineered Arts in Penryn, UK. "Don't underestimate how much people want to be someone else," she says.

The face of someone who has died could be recreated and driven with one of the emerging breed of chatbots trained – using the deceased's tweets and emails – to converse like them.

The system may also be useful for creating mindclones. "Our model could bring back your memories of the people you care about," says Suwajanakorn.




SOURCE  New Scientist

By 33rd Square

Monday, February 2, 2015


 Computer Graphics
CG artist Dereau Benoît has created a delightfully realistic virtual Parisian apartment. "Unreal Paris" crosses the uncanny valley of interior shots in nearly every aspect.




For the demonstration called "Unreal Paris," computer graphics designer Dereau Benoît has created a hyper-realistic Parisian apartment with a living room, dining area, kitchen, bedroom, hallways and even a full bathroom. Apart from a few minor elements, the video certainly crosses the uncanny valley of interior shots.

The project uses Epic Games’ latest Unreal Engine 4, and as the video above shows, it is nearly impossible to spot the difference between this virtual environment and a real Parisian home.

Unreal Paris

Benoît has added incredible details, from walls and ceilings filled with ornamental patterns to the sunlight streaming in through the windows. Apart from the mirrors, the shadows and textures are very realistic in "Unreal Paris."

"Unreal Paris" points to the future of graphics in gaming, virtual reality and more. Soon it will be impossible for the human eye to tell the difference between what is real and virtual in these media.

The full version of "Unreal Paris" is also available for download.

Take A Look At This Amazing Virtual Parisian Apartment

"Unreal Paris"




SOURCE  Dereau Benoît

By 33rd Square

Thursday, October 9, 2014


 Uncanny Valley
It is hard to believe that "Maya," the orangutan in a new commercial for Scottish energy utility SSE, is not a real, well-trained animal actor. She is 100% computer generated.




When you see the advertisement above, you will wonder how they trained an orangutan, "Maya," to perform the way she does. As it turns out, they didn't train her at all. The spot was made entirely with CGI by the production house The Mill.

"It has been an amazing ten month journey," says The Mill's Chief Creative Officer, Pat Joseph. "I've yet to have anyone tell me that that's not a real orangutan."

Orangutan Totally Crosses the Uncanny Valley

SSE TV Advert

Orangutan commercial


"I've yet to have anyone tell me that that's not a real orangutan."


In the commercial, the majestic orangutan makes her way from the rainforest of Sumatra into our urban world.

To our eye, "Maya" totally crosses the uncanny valley.

The Mill created the photo-real, 100% CGI orangutan for leading energy supplier SSE. More details on the making of "Maya" are in the video below.




By 33rd Square

Thursday, October 2, 2014


 Uncanny Valley
Computer graphics artist Chris Jones has created one very realistic-looking head using the latest software. His short video of the piece, called 'Ed,' must be seen.




Australian-based computer game artist Chris Jones has created one of the most realistic human head simulations we've seen.

One person commented on Sploid, "This IS the most realistic human face I've ever seen produced via computer; however it's so perfect that something still screams uncanny valley."

The head, 'Ed,' was made with Lightwave, Sculptris and Krita, and composited with DaVinci Resolve Lite. Jones also created the music for the video. It is an ongoing work for the artist.

Hard To Believe This Face Is Computer Generated


Jones has worked as a freelance children’s book illustrator and, following graduation with an industrial design degree, continued illustrating and animating before becoming a computer game artist at Beam Software (later to become Infogrames).

He left Infogrames in May 2000 to work full-time on a film called The Passenger.


Soon this quality of avatar will be walking around in Second Life.





SOURCE  Chris Jones

By 33rd Square

Saturday, August 23, 2014


 Artificial Intelligence
BabyX is an experimental computer-generated psychobiological simulation of an infant that learns and interacts in real time. The software integrates realistic facial simulation with computational neuroscience models of neural systems involved in interactive behavior and learning.




BabyX is an interactive animated virtual infant prototype created by Dr. Mark Sagar and his team at the Auckland Bioengineering Institute's Laboratory for Animate Technologies. The software is a computer-generated psychobiological simulation under development and an experimental vehicle incorporating computational models of basic neural systems involved in interactive behavior and learning.

"What we’re doing is making models of the face, but we’re driving them with models of the brain in order to build a system which creates its own expressions, its own emotions, and it can learn and react."


First introduced at a TEDx event last year, BabyX has grown into Version 3.0, and is now learning her first words (see video at top).

These models are embodied through advanced 3D computer graphics models of the face and upper body of an infant. The system can analyse video and audio inputs in real time to react to the caregiver’s or peer’s behavior using behavioral models. According to Sagar, BabyX is "A simulation of a brain driving the simulation of a face."
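As a rough illustration of what "a simulation of a brain driving the simulation of a face" can mean in code, here is a toy sense-affect-face loop. It is purely a sketch under our own assumptions, since BabyX's neural models are far more sophisticated, and the inputs face_detected and voice_level are hypothetical stand-ins for its real-time video and audio analysis.

import numpy as np

def babyish_loop_step(stimulus, state, dt=0.033):
    """Toy sketch (not the BabyX code): a 'brain' model maps sensed stimulus to
    internal affect, and the affect in turn drives facial muscle activations."""
    # Crude affect dynamics: attention rises when a face is seen,
    # arousal follows the caregiver's voice level with a time constant.
    target_attention = 1.0 if stimulus['face_detected'] else 0.0
    state['attention'] += (target_attention - state['attention']) * dt / 0.5
    state['arousal'] += (stimulus['voice_level'] - state['arousal']) * dt / 1.0

    # Map affect onto a few facial action units (smile, brow raise, eye openness).
    action_units = {
        'smile': float(np.clip(state['attention'] * state['arousal'], 0.0, 1.0)),
        'brow_raise': float(np.clip(state['arousal'] - 0.5, 0.0, 0.5)) * 2.0,
        'eye_open': 0.6 + 0.4 * state['attention'],
    }
    return action_units, state

Run once per frame, such a loop produces expressions that emerge from the interaction rather than from a scripted animation, which is the essence of what Sagar describes, minus the neuroscience.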

BabyX

"BabyX embodies many of the technologies we work on in the Laboratory and is under continuous development, in its neural models, sensing systems and also the realism of its real time computer graphics," claims the project's website.

“What we’re doing is making models of the face, but we’re driving them with models of the brain in order to build a system which creates its own expressions, its own emotions, and it can learn and react,” says Dr Sagar.

“We are taking models from theoretical neuroscience and using those to create animation; we’re integrating current theories of how emotional and behavioral systems work and we’re using those to create artificial nerve signals to drive the face of a digital baby. The neural systems driving the animation can be explored in real time, that is to say, you can watch the brain of BabyX.”

BabyX Neural activity

BabyX is modeled after Sagar’s own daughter, Francesca, and is driven by computational models of brain activity. The system shows neural activity and emotions and reacts in real time to those around it. The reactions of the viewer and of the computerized baby are entirely dependent on what happens in real time between them.

Sagar previously worked as the Special Projects Supervisor at Weta Digital. He was involved with the creation of technology for the digital characters in blockbusters such as Avatar, King Kong, and Spider-Man 2.

His pioneering work in computer-generated faces earned two consecutive Oscars at the 2010 and 2011 Sci-Tech Awards, a branch of the Academy Awards that recognizes scientific and technical achievements in filmmaking.


SOURCE  Auckland Bioengineering Institute Laboratory for Animate Technologies

By 33rd Square

Tuesday, July 22, 2014

The Urban Uncanny Valley Fades Away

 Computer Graphics
3D imaging continues to progress.  In this example from artist Gilvan Isbiro, a hybrid San Francisco street (with some European elements) looks incredibly photo-realistic.




Check out the nice image above.  Looks like a photograph from San Francisco, possibly taken by a tourist.  Wrong.  The image is entirely computer-generated.

The image was painstakingly created with Autodesk 3DS Max software. Artist Gilvan Isbiro says that he had to separate the street into two rendering "plans" considering how big the street's surface is. He spent a week creating the image he calls, "Davis Street."

"There are no tricks or secrets in buildings models, so let's focus on street, materials, modeling, lighting and render settings."



"There are no tricks or secrets in buildings models, so let's focus on street, materials, modeling, lighting and render settings," writes Isbiro.

According to Isbiro the sheer amount of textures and materials present in the scene, not to mention lighting and post-processing effects meant he had to divide the rendering tasks in two: foreground and background.

Gilvan Isbiro Davis Street

The image is a clear demonstration of how photo-realistic 3D environments are becoming impossible to separate from physical reality. For future virtual reality applications, the power of such software to create lifelike environments will be essential.


SOURCE  Evermotion

By 33rd Square

Wednesday, January 22, 2014


 Computer Graphics
A team of researchers has developed a realistic walking simulator for a variety of bipedal creatures. In the simulator, two-legged computer-based creatures walk in various conditions with a system using discrete muscle control parameters.




A group of researchers from Utrecht University and the University of British Columbia has created a muscle-based control method for simulated two-legged creatures in which the muscle control parameters are optimized.

Through an evolutionary algorithm, the system yields effective gaits for the creatures under various parameters, including speed, rotation, and even gravity.

Watch A Computer Learn How To Walk
Image Source - Geijtenbeek, van de Panne and van der Stappen

The generic locomotion control method, titled Flexible Muscle-Based Locomotion for Bipedal Creatures, supports a variety of bipedal creatures. All actuation forces are the result of 3D simulated muscles, and a model of neural delay is included for all feedback paths.

The researchers' controllers generate torque patterns that incorporate biomechanical constraints. The synthesized controllers find different gaits based on target speed and can cope with uneven terrain and external perturbations, like blocks being thrown at the creatures.
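To give a flavor of how such an evolutionary optimization works, the sketch below runs a simplified evolution strategy over a vector of controller parameters, scoring each candidate with a user-supplied gait evaluation. This is only an illustration of the loop: the paper itself uses the more capable CMA-ES optimizer, and the evaluate function here stands in for the full muscle-and-physics simulation.

import numpy as np

def optimize_gait(evaluate, n_params, generations=200, pop_size=32, sigma=0.1):
    """Simplified evolution strategy over controller parameters (illustrative only).

    evaluate : maps a parameter vector to a fitness score, e.g. distance walked
               before falling minus effort penalties; the physics simulation it
               would call is assumed to exist elsewhere.
    """
    mean = np.zeros(n_params)
    best, best_fit = mean.copy(), -np.inf
    for _ in range(generations):
        samples = mean + sigma * np.random.randn(pop_size, n_params)  # mutate
        fitness = np.array([evaluate(s) for s in samples])
        elite = samples[np.argsort(fitness)[-pop_size // 4:]]         # keep top quarter
        mean = elite.mean(axis=0)                                      # recombine
        if fitness.max() > best_fit:
            best_fit, best = fitness.max(), samples[fitness.argmax()].copy()
    return best, best_fit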

Muscle Path
An example muscle path from the research. Image Source: Geijtenbeek, van de Panne and van der Stappen
The current method still has limitations, so it won't be powering up any humanoid robots soon. Compared to the results of other studies, the walking and running motions in the system are of somewhat lesser fidelity, especially for the upper-body.

This can be partially explained by the absence of specific arm features in the researchers' humanoid models. For now, they favored a generic approach, but the researchers say focusing on a more faithful human gait could make their models even more realistic.

Despite this, the team's lower-body walking motions are very close to state-of-the-art results. "We witness a similar near-passive knee usage during swing, as well as a natural build-up of the ankle plantarflexion moment during stance," they write.

Work on an improved set of authoring tools remains an important direction for future development. Areas that could be further improved include: greater fidelity in modeling joints such as the knees, ankles, and shoulders; more accurate muscle-path wrapping models that interact with the skeleton geometry; further thought on how much detail the target feature trajectories need; the addition of anticipatory feed-forward control to the architecture; and the use of alternate dynamics simulators.


SOURCE  ACM Transactions on Graphics

By 33rd Square

Friday, November 22, 2013

New Method of 3D Graphic Creation Speeds Up Rendering and Accuracy

 Computer Graphics
Using new methods for developing triangle meshes based on particles, Gaussian energy and the embedding theorem of John Forbes Nash Jr., computer scientists have been able to create more realistic 3D computer graphics that are also faster to render.




Computer scientists at UT Dallas have developed a technique to create 3D images faster and with more accuracy.

The method uses anisotropic (irregular) triangles — triangles with sides that vary in length depending on their direction — to create 3D “mesh” computer graphics that more accurately approximate the shapes of the original objects, and in a shorter amount of time than current techniques.

The team's results were presented at the SIGGRAPH 2013 conference.

These types of images are used in movies, video games, 3D scanning and computer modeling of various phenomena, such as the flow of water or air across the Earth, the deformation and wrinkles of clothes on the human body, or in mechanical and other types of engineering designs.

The researchers also hope this technique will also lead to greater accuracy in models of human organs to more effectively treat human diseases, such as cancer.

“Anisotropic mesh can provide better simulation results for certain types of problems, for example, in fluid dynamics,” said Dr. Xiaohu Guo, associate professor of computer science in the Erik Jonsson School of Engineering and Computer Science whose team created the technique.

The technique finds a practical application of the Nash embedding theorem, which was named after mathematician John Forbes Nash Jr., subject of the film A Beautiful Mind.

anisotropic triangles
Anisotropic meshing of a surface with 50,000 particles (Image Source: Zichun Zhong et al./SIGGRAPH)

In computer graphics, shapes are represented in three dimensions with triangle meshes. Traditionally, it has been believed that isotropic triangles — where each side of the triangle has the same length regardless of direction — are the best representation of shapes.

However, the aggregate of these uniform triangles can create edges or bumps that are not on the original objects. Because triangle sides can differ in length in anisotropic meshes, creating images with this technique gives the user the flexibility to represent object edges or folds more accurately.

Guo and his team found that replacing isotropic triangles with anisotropic triangles in the particle-based method of creating images resulted in smoother representations of objects. Depending on the curvature of the objects, the technique can generate the image up to 125 times faster than common approaches.
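The core trick can be pictured in a few lines of code: measure edge lengths under a direction-dependent metric tensor (typically derived from surface curvature), so the mesher prefers triangles stretched along directions where the surface changes slowly. The snippet below is a sketch of that concept only, not the UT Dallas particle-based algorithm.

import numpy as np

def anisotropic_edge_length(p, q, metric):
    """Length of an edge from p to q measured under a symmetric positive-definite metric.

    With an identity metric this reduces to ordinary (isotropic) Euclidean length;
    a stretched metric makes edges 'cheaper' in some directions than others.
    """
    d = q - p
    return float(np.sqrt(d @ metric @ d))

# Example: edges along x count as four times shorter than edges along y or z,
# so a mesher using this metric will place long, thin triangles stretched along x.
M = np.diag([1.0 / 16.0, 1.0, 1.0])
print(anisotropic_edge_length(np.zeros(3), np.array([4.0, 0.0, 0.0]), M))  # 1.0
print(anisotropic_edge_length(np.zeros(3), np.array([0.0, 1.0, 0.0]), M))  # 1.0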

Objects built with anisotropic triangles are more accurate, most noticeably to the human eye in the wrinkles and movement of clothing on human figures.

3D Meshes with anisotropic triangles

Up next for this research is moving from representing the surface of 3D objects to representing 3D volume. “If we are going to create accurate representations of human organs, we need to account for the movement of cells below the organ’s surface,” Guo said.

“These types of images are used in movies, video games, Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), computational fluid dynamics (CFD), scientific visualization, architecture design, etc.,” Zichun Zhong, a research assistant in computer science and PhD candidate at UT Dallas who was also involved in the research, told KurzweilAI.



SOURCE  KurzweilAI.net

By 33rd Square

Wednesday, July 24, 2013


 Mobile Graphics
NVIDIA recently showed off Project Logan, which tantalizes us with the future of mobile computer graphics. The new mobile graphics engine is powerful enough to do one of the company’s most intensive demos — real-time rendering of a detailed human face, Ira.




Processor maker NVIDIA has introduced the next generation in mobile graphics. Project Logan uses the Kepler GPU, which NVIDIA claims is the fastest, most advanced scalable GPU in the world.



Last year, Kepler hit desktops and laptops, and next year your phone and tablet will have the technology available too.

According to NVIDIA, the company took Kepler’s efficient processing cores and added a new low-power inter-unit interconnect and extensive new optimizations, both specifically for mobile. With this design, mobile Kepler uses less than one-third the power of GPUs in leading tablets, such as the iPad 4, while performing the same rendering, and NVIDIA says it leaves enormous performance and clocking headroom to scale up.

The company writes,
Logan has only been back in our labs for a few weeks and it has been amazing to see new applications coming up every day that have never been seen before in mobile. But this is only the beginning. Simply put, Logan will advance the capability of mobile graphics by over seven years, delivering a fully state-of-the-art feature set combined with awesome performance and power efficiency.


In the videos embedded in this post, you can see what this hardware might make possible on your phone in the future. NVIDIA won't say which devices are getting Kepler, but perhaps if you get a new phone in 2014 with an incredible battery life and amazing on-screen performance, the Kepler GPU might be the reason.



SOURCE  NVIDIA

By 33rd Square

Tuesday, July 9, 2013


 Special Effects
To achieve the mass crowd effects in the movie World War Z, the filmmakers turned to a form of artificial intelligence. Visual effects firm Moving Picture Company picked up on an idea that WETA used for the Lord of the Rings movies - to make each zombie an individual actor with goals and behaviors.




For the Brad Pitt film World War Z, London-based VFX house Moving Picture Company had to make enormous zombie hordes behave with a bit of intelligence.

World War Z is an apocalyptic horror film directed by Marc Forster. The screenplay is loosely based on the 2006 novel of the same name by Max Brooks.

World War Z Zombie Horde

Animating the sequences by hand would not have allowed the thousands of zombie 'actors' to be in the shots, so the company turned to artificial intelligence. Using digital creatures whose movements came from motion-capture performances of real people running up nets and falling down ramps, the team drove the crowds with a proprietary crowd-simulation program called Alice.

This type of crowd animation intelligence was pioneered in the Lord of the Rings movies by WETA Digital's Massive software. Through the use of fuzzy logic, the software enables every agent to respond individually to its surroundings, including other agents.

These reactions affect each agent's behavior, changing how it acts by controlling pre-recorded animation clips, for example by blending between such clips, to create characters that move, act, and react realistically.
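For a feel of how such agent-based crowd systems operate, here is a toy sketch in which each agent steers toward a goal, is pushed away from nearby agents, and uses the local crowding level to blend between two hypothetical animation clips. It is a minimal illustration under our own assumptions, not Massive or MPC's Alice.

import numpy as np

def crowd_step(positions, goals, dt=0.04, neighbor_radius=1.5):
    """One simulation step for a toy crowd of goal-seeking agents.

    positions, goals : (N, 2) arrays of agent positions and target points.
    Returns updated positions plus per-agent blend weights between clips.
    """
    new_positions, clip_weights = positions.copy(), []
    for i, (p, g) in enumerate(zip(positions, goals)):
        to_goal = g - p
        steer = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        offsets = positions - p
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < neighbor_radius)
        if near.any():
            # Push away from close neighbors, more strongly the closer they are.
            steer = steer - 0.5 * (offsets[near] / dists[near, None] ** 2).sum(axis=0)
        density = min(1.0, near.sum() / 8.0)  # crude crowding measure in [0, 1]
        clip_weights.append({'run': 1.0 - density, 'clamber': density})
        new_positions[i] = p + dt * 1.5 * steer / (np.linalg.norm(steer) + 1e-9)
    return new_positions, clip_weights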


For the movie’s massive piles of Zs in Jerusalem, the VFX company—one of a handful that worked on World War Z—had to build zombie “agents” that could “act” on their own.

“All complex crowd shots got layers of hand-animated zombies,” said MPC senior CG supervisor Max Wood. “We’d start off setting the shots up in Alice, our in-house crowd tool, then we’d work out where we wanted to add the additional animation detail.”

How long before such crowd software makes its way into video games and virtual worlds?

SOURCE  Wired

By 33rd Square

Monday, July 8, 2013


 
Digital Avatars
Researchers who photographed samples of skin from people's chins, cheeks and foreheads at a resolution of about 10 micrometers have created super-realistic simulated CGI skin, with enough detail that each skin cell spreads across roughly three pixels.




New research could further CGI graphics with simulated skin that is faithful down to the level of individual cells.

Creating realistic faces is one of the biggest challenges for CGI, in large part because skin's appearance is the sum of a complex interplay of tiny features and flaws. Get any of those factors wrong, and the CGI face comes out looking eerily wrong. "The renderings can only be as real as the input data," says Abhijeet Ghosh at Imperial College London. "And that's where we come in."

Uncanny Valley Getting Closer With Cell-Level Simulated Skin

Ghosh and Paul Debevec of the University of Southern California (USC) and their colleagues had already developed a way to simulate light reflecting off human skin. It involves splitting the light into four rays: one that bounces off the epidermis and three that penetrate the skin to varying depths, scattering before being reflected.
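A crude way to picture that layered model is as a weighted sum of one sharp surface term and several progressively softer terms standing in for light that scattered at increasing depths before re-emerging. The sketch below is our own simplification with made-up weights, not Ghosh and Debevec's measured model, which uses diffusion profiles derived from real skin.

import numpy as np

def skin_shading(n_dot_l, weights=(0.35, 0.30, 0.20, 0.15),
                 softness=(0.0, 0.05, 0.15, 0.35)):
    """Toy four-term skin shading: one sharp epidermal term plus three softer ones.

    n_dot_l : cosine of the angle between surface normal and light direction.
    Wider 'softness' lets light wrap further past the terminator, imitating the
    way subsurface scattering softens shading on real skin.
    """
    n_dot_l = np.clip(n_dot_l, 0.0, 1.0)
    return sum(w * np.clip((n_dot_l + s) / (1.0 + s), 0.0, 1.0)
               for w, s in zip(weights, softness))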

The team have now massively cranked up the level of detail. Using a specially developed lighting system and camera, they photographed samples of skin from people's chins, cheeks and foreheads at a resolution of about 10 micrometres, so that each skin cell was spread across roughly three pixels. They then used the images to create a 3D model of skin and applied their light reflection technique to it. The result was CGI skin complete with minute structures like pores and microscopic wrinkles. Finally, they fed the CGI images to an algorithm that extended them to fill in an entire CGI face.

Usually, CGI uses a standard set of values for skin structure, says Ghosh. But for big-budget films, digital effects companies like Weta Digital – which used some of Ghosh and Debevec's techniques in the movie Avatar, for example – prefer to tailor skin textures to individuals. To create the blue-skinned Na'vi, for example, artists took surface details like moles and wrinkles and added them to the characters by hand. "In movies they zoom in to show that stuff off," says Ghosh, but the work is a slow process. Ghosh and Debevec's system, which Ghosh presented at the Games and Media Event at Imperial College London in May, could automate this level of customization.

CGI face

It's not only the entertainment industry that is eyeing this technology. In 2010, the cosmetics company Avon gave Ghosh funding to explore whether his digital skin could be used to simulate the application of different kinds of make-up. Other cosmetics firms have also showed interest in the idea. Ghosh thinks that one day we will have an app that offers a virtual try-before-you-buy service for make-up. "You would arrive at a kiosk and have your face scanned," he says. The software would then show you exactly what your skin would look like with, for example, a certain foundation applied, he says.

Next, the team at USC is working with games publisher Activision to try to find a way to bring this sort of high-quality face to games as soon as possible.


SOURCE  New Scientist

By 33rd Square

Wednesday, May 22, 2013


 Main Label
The SIGGRAPH Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. SIGGRAPH 2013 brings together thousands of computer graphics professionals to share and discuss their work.



The SIGGRAPH 2013 Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. The 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, 21-25 July 2013 at the Anaheim Convention Center in California, received submissions from around the globe and features high-quality, never-before-seen scholarly work. Submitters are held to extremely high standards in order to qualify.

“Computer Graphics is a dynamic and ever-changing field in many ways,” says Marc Alexa, SIGGRAPH 2013 Technical Papers Chair from Technische Universität Berlin. “The range of ground-breaking papers presented at SIGGRAPH is getting broader every year, now also encompassing 3D printing, and fabricating realistic materials as well as generating ever more realistic images of complex phenomena.”

SIGGRAPH accepted 115 technical papers (out of 480 submissions) to showcase this year, representing an acceptance rate of 24 percent (one percentage point higher than 2012). The selected papers were chosen by a distinguished committee of academia and industry experts.

This year's Technical Papers program also includes conference presentations for 37 papers published this year in the journal ACM Transactions on Graphics (TOG).

Highlights from the SIGGRAPH 2013 Technical Papers program include:

OpenFab: A Programmable Pipeline for Multi-Material Fabrication
Authors: Kiril Vidimce, Szu-Po Wang, Jonathan Ragan-Kelley and Wojciech Matusik, Massachusetts Institute of Technology CSAIL

Open Fab

This paper proposes a programmable pipeline, inspired by RenderMan, for synthesis of multi-material 3D printed objects. The pipeline introduces user-programmable fablets, a corollary to procedural shaders for 3D printing, and is designed to stream over arbitrary numbers of voxels with a fixed and controllable memory footprint.
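The notion of a "fablet" can be pictured as a per-voxel procedural callback evaluated in fixed-size chunks, so memory use does not grow with the size of the printed object. The sketch below is a toy illustration of that streaming pattern under our own assumptions; it is not OpenFab's actual API.

def stream_materials(bounds, resolution, fablet, chunk_z=8):
    """Evaluate a procedural material function voxel by voxel, one z-slab at a time.

    fablet : function (x, y, z) -> material id, analogous to a procedural shader.
    Yields slabs of material ids so the full voxel grid is never held in memory.
    """
    (x0, y0, z0), (x1, y1, z1) = bounds
    nx, ny, nz = resolution
    dx, dy, dz = (x1 - x0) / nx, (y1 - y0) / ny, (z1 - z0) / nz
    for z_start in range(0, nz, chunk_z):
        slab = [[[fablet(x0 + (i + 0.5) * dx, y0 + (j + 0.5) * dy, z0 + (k + 0.5) * dz)
                  for i in range(nx)] for j in range(ny)]
                for k in range(z_start, min(z_start + chunk_z, nz))]
        yield slab  # hand the slab to the printer driver, then discard it

# Example fablet: alternate two materials in 1 mm stripes along x.
stripes = lambda x, y, z: int(x // 1.0) % 2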

Opacity Optimization for 3D Line Fields
Authors: Tobias Günther, Christian Roessl, and Holger Theisel, Otto-von-Guericke-Universität Magdeburg

Opacity Optimization for 3D Line Fields

For visualizing dense line fields, this method selects lines by view-dependent opacity optimizations and applies them to real-time free navigation in flow data, medical imaging, physics, and computer graphics.

AIREAL: Interactive Tactile Experiences in Free Air
Authors: Rajinder Sodhi, University of Illinois; Ivan Poupyrev, Matthew Glisson, Ali Israr, Disney Research, The Walt Disney Company

AIREAL: Interactive Tactile Experiences in Free Air

AIREAL is a tactile feedback device that delivers effective and expressive tactile sensations in free air, without requiring the user to wear a physical device. Combined with interactive graphics and applications, AIREAL enables users to feel virtual objects, experience free-air textures and receive haptic feedback with free-space gestures.

Bi-Scale Appearance Fabrication
Authors: Yanxiang Lan, Tsinghua University; Yue Dong, Microsoft Research Asia; Fabio Pellacini, Sapienza Universita’ Di Roma, Dartmouth College; Xin Tong, Microsoft Research Asia

Bi-Scale Appearance Fabrication

A system for fabricating surfaces with desired spatially varying reflectance, including anisotropic ones, and local shading frames.

Map-Based Exploration of Intrinsic Shape Differences and Variability
Authors: Raif Rustamov, Stanford University; Maks Ovsjanikov, École Polytechnique; Omri Azencot, Mirela Ben-Chen, Technion - Israel Institute of Technology; Frederic Chazal, INRIA Saclay - Île-de-France; and Leonidas Guibas, Stanford University

Map-Based Exploration of Intrinsic Shape Differences and Variability

A novel formulation of shape differences, aimed at providing detailed information about the location and nature of the differences or distortions between the shapes being compared. This difference operator is much more informative than a scalar similarity score, so it is useful in applications requiring more refined shape comparisons.

Highly Adaptive Liquid Simulations on Tetrahedral Meshes
Authors: Ryoichi Ando, Kyushu University; Nils Thuerey, ScanlineVFX GmbH; and Chris Wojtan, Institute of Science and Technology Austria

Highly Adaptive Liquid Simulations on Tetrahedral Meshes

This new method for efficiently simulating fluid simulations with extreme amounts of spatial adaptivity combines several key components to produce a simulation algorithm that is capable of creating animations at high effective resolutions while avoiding common pitfalls like inaccurate boundary conditions and inefficient computation.

SIGGRAPH 2013 will bring thousands of computer graphics and interactive technology professionals from five continents to Anaheim, California for the industry's most respected technical and creative programs focusing on research, science, art, animation, music, gaming, interactivity, education, and the web from Sunday, 21 July through Thursday, 25 July 2013 at the Anaheim Convention Center. SIGGRAPH 2013 includes a three-day exhibition of products and services from the computer graphics and interactive marketplace from 23-25 July 2013.

More details are available at SIGGRAPH 2013 or on Facebook and Twitter.



SOURCE  SIGGRAPH 2013

By 33rd Square