James Cameron had been working on the concept of 'Pandora' long before he started working on Titanic. Why did it take more than 13 years for the concept to be realized? The primary reason the concept took this long to become 'Avatar' was simply that the technology wasn't there yet. James Cameron knew that even if he and his crew had tried their very best at that point in time, they could not have produced anything like the Avatar of today. In fact, as you will find out in this article, many of the necessary technologies had to be invented by Cameron and his team. If you are curious to know what it took to bring Avatar to the big screen, then join us in this feature story as we take a tour of the technology behind Avatar. We will also learn how these technologies are changing the aesthetics of next-generation digital entertainment.
From the early days of the movie - we are talking pre-visualization (pre-viz) here - Cameron wanted all the CGI aspects of Avatar to be based entirely on motion-captured (mocap) animation rather than traditional animation. But the workload that implied would have been enormous. Previously, mocap animation was recorded at the studio, and the production house then applied the mocap data to 3D-modelled characters to create the animated CGI scene. This was a difficult process, as the synchronization between the director and the entire working crew - not to mention the production house's CG artists - had to be perfect. So the main technologies James Cameron was waiting for were the 'Reality Camera System' combined with 'performance capture' technology.
The 3-D Camera Technology Behind James Cameron's Avatar
The Reality Camera System 1, developed by James Cameron himself along with Vince Pace, allowed the whole film to be captured in stereoscopic 3D. This fusion camera system is a very advanced piece of technology: it works as the director's gateway to visualizing augmented reality during the film's production. In case you are wondering what this augmented reality is, that is exactly what we are going to look at next...
Avatar Exclusive Behind The Scenes (The Art of Performance Capture)
Performance capture is essentially the same thing as mocap, but it is more taxing for the actors involved and demands a different level of performance from them. Under performance capture, the actors interact just as they would in any live-action film, but instead of the scene calling for make-up and set decoration, it is more akin to a virtual stage performance, with each actor wearing a mocap suit. This performance is recorded through an array of camera systems - not the characters themselves, just their skeletal and muscular movement. The fusion camera then maps this motion data onto pre-modelled CGI characters (virtual versions of the actors; avatars, if you will). Especially for facial animation, a FACS-based (Facial Action Coding System) muscle-mapping system was developed for realtime facial animation and finer control over emotions. The director can thus watch the CGI characters performing in realtime on his monitor, and if something goes wrong, he can ask the crew and cast to re-shoot the scene, or just the part of it, as required. The system only captures the character movements in 3D, so the camera position, lighting, and all other aspects of the scene can be adjusted later on. This also means that common filming issues (a wrong camera position, for example) no longer force a retake of the entire scene. Cameron compares this to a very advanced game engine; this forms the augmented reality we were talking about.
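The FACS idea described above can be sketched in a few lines: facial motion is expressed as activation weights for "action units" (brow raise, jaw drop, and so on), and a character's face is posed by blending per-unit offsets onto a neutral mesh. Everything here - the vertex names, the action units, the numbers - is purely illustrative, not the actual production system.

```python
# Illustrative sketch of a FACS-style blendshape pose (not the real pipeline).
# Neutral 2D positions for a couple of face vertices, purely hypothetical.
neutral = {"brow": (0.0, 1.0), "jaw": (0.0, -1.0)}

# Each action unit maps vertices to displacement offsets at full activation.
action_units = {
    "brow_raiser": {"brow": (0.0, 0.25)},
    "jaw_drop":    {"jaw": (0.0, -0.5)},
}

def pose_face(weights):
    """Blend weighted action-unit offsets onto the neutral pose."""
    pose = {v: list(p) for v, p in neutral.items()}
    for unit, w in weights.items():
        for vert, (dx, dy) in action_units[unit].items():
            pose[vert][0] += w * dx
            pose[vert][1] += w * dy
    return {v: tuple(p) for v, p in pose.items()}

# Half-strength brow raise plus a full jaw drop:
print(pose_face({"brow_raiser": 0.5, "jaw_drop": 1.0}))
# → {'brow': (0.0, 1.125), 'jaw': (0.0, -1.5)}
```

The point of the design is that the performer's face is reduced to a small set of muscle-like controls, which is what makes realtime preview on the director's monitor feasible.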
Avatar: motion capture mirrors emotions
This implementation of the fusion camera system along with performance capture lets the entire film crew work in a very fast pipeline, enhances the fidelity of the performance, and completely eliminates the need for hectic frame-by-frame animation. Apart from the acting by the humans and the Na'vi tribe, Avatar features the great tropical paradise of Pandora, its huge flora-and-fauna ecosystem, and a long, action-packed battle sequence. None of these were easy to create, and none of them was treated as any less important during the making of the film.
The terrain, mountains, rivers, oceans, and the living beings of Pandora were created with WETA Digital's critically acclaimed MASSIVE (Multiple Agent Simulation System in Virtual Environment) software. You have already seen this software in action in many of your favourite CGI/action movies (Avatar, The Lord of the Rings series, King Kong, 300 - you name it). The team built an all-new 'plant' and L-system add-on for MASSIVE, allowing artists to create a complete vegetation system for Pandora: plants that could be planted, grown, and made to interact with the environment.
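L-systems, the technique named above, grow plants by repeatedly rewriting a string of symbols according to production rules; a renderer then walks the final string turtle-style to draw branches. Here is a minimal sketch of the core rewriting step - the axiom and grammar are classic textbook examples, not WETA Digital's actual production rules.

```python
# Minimal L-system rewriter (illustrative; not the MASSIVE add-on itself).
def l_system(axiom: str, rules: dict, generations: int) -> str:
    """Rewrite every symbol per the production rules, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic branching-plant grammar: 'F' = grow forward, '[' / ']' = start/end
# a branch, '+' / '-' = turn left/right.
rules = {"F": "F[+F]F[-F]F"}
print(l_system("F", rules, 1))
# → F[+F]F[-F]F
print(len(l_system("F", rules, 4)))  # string length explodes with each pass
```

A few generations of such a rule already yield convincingly organic branching, which is why the technique is a staple for procedural vegetation.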
Avatar: the science behind Pandora
The large variety of creatures (beasts, mostly), all of them belonging to a never-before-conceptualized 'hexapod' species, also led the entire animation team to build a brand-new system to handle this fictional eccentricity of nature. The bioluminescence of all the living beings on the planet was another concern: they had to appear differently by day and by night.
The battle scenes were handled entirely by ILM (Industrial Light and Magic). They also worked on the cutting-edge weapons design and explosion systems to give us the realistic "heat" of the battle experience. Add in all the preceding tech, stir well, and when the dish is finally served as digital 3D media, the movie experience really becomes jaw-dropping. Avatar was released through nearly every type of 3D film projection method: RealD 3D, Dolby 3D, XpanD 3D and IMAX 3D. In Korea, a 4D version, complete with physical effects including rain, wind, strobe lights and vibration, also premiered.
While working on Avatar, James Cameron wanted to showcase his invention to two other legendary film directors, in case they wanted to work with similar technology in their future ventures. So he invited Steven Spielberg and Peter Jackson to his set. Spielberg was so enthusiastic about the advanced motion capture and virtual camera system that he planned his next project, "The Adventures of Tintin: Secret of the Unicorn", around the very same technology. Set to be released in 2011, The Adventures of Tintin: Secret of the Unicorn is a fully 3D CGI movie in which all the animation is driven by performance capture of live actors. The filming of the movie was finished in just 32 days. After the motion capture, the complete project will be handed over to Peter Jackson's WETA Digital studio for post-production and CGI work.
Image copyright: Frédéric Bennett, DeviantART
It sounds insane at first that the post-production pipeline of the movie will be twenty times longer than the actual shooting. But, as Peter Jackson puts it, the movie is already there in a very raw state; there is nothing more to create or produce - it is just the rendering of the film into presentable media that will take two long years at WETA Digital's render farm. Apart from these intangible technological breakthroughs, sometimes small, legacy tools can also make a significant difference. Although full directing credit for this first movie goes to Spielberg, some sources revealed that Peter Jackson, though physically present only for the first week, thoroughly assisted Spielberg with the direction over videoconferencing from New Zealand. So, as you can see, video conferencing may be a twentieth-century technology, but it is enough to serve as a never-before-seen platform for remote, collaborative film direction.
Tintin is not going to be a one-off film; it is planned as a complete motion picture series. Peter Jackson will co-direct future sequels, all of which will feature this new dimension of film technology at their core. Check out the images from the capture set to see the filming in progress. Rumor has it that, beyond the existing technologies, "The Adventures of Tintin" will introduce a few more steps forward. While in Avatar the technical implementation led to an augmented reality, for Tintin, an animated movie, the production house will have to build an augmented virtuality ambiance. With Avatar having ushered in the new age of mainstream motion pictures, we will see how the legend continues with Tintin in the animation industry.
Peter Jackson and Steven Spielberg
On the other hand, not everyone is welcoming this wholesale shift to stereoscopic 3D filming and performance capture. Take, for example, The Hobbit series, the next project of Peter Jackson with Guillermo del Toro. Del Toro wanted to keep it in sync with the earlier LOTR series, not only in story and cinematography but also in technology, filming it with traditional 35mm lenses. But Peter Jackson, the producer of these films, who had already decided to move entirely to 3D for the rest of his career, kept evangelizing for shooting in 3D. Del Toro first compromised on 3ality, a proprietary 2D filming format optimized for converting the captured video into 3D. Unsurprisingly, nobody liked that idea; in the end the team decided that shooting in 3D is the best option if the films really have to be published in 3D formats at the end of the day.
Some critics have claimed that these advances in technology spell doom for traditional acting and animation. For most of us, though, these advances are a convergence of live action and animation, coupling the very best of both worlds. For the audience, the only thing that matters is the movie experience. If a very realistic, technically flexible, lifelike motion capture can be offered, why not embrace it? If a stereoscopic 3D movie can make you 'feel' the movie, why not like it? Not everything new is necessarily bad. This is what is moving the film industry into a whole new era of motion pictures. And with the infinite possibilities this new tech offers, the next 10 or 20 years of cinema will be limited only by the imagination of its creators.