From the earliest days of computer graphics there has been a serious give-and-take between the speed and the quality of rendering. Take a look at the most recent Pixar or Transformers film, compare it to your favorite video game, and you’ll instantly recognize this. Not too long ago, Untold’s Interactive Designer Andy Mindler did a great post on the difference between 3D graphics for video games and 3D graphics for film. But let’s look to the future, where the lines that divide the two are quickly blurring.
What makes cinematic rendering look so much better than basic console graphics? The answer lies in the process major film studios use to produce the images. While the average household gamer probably doesn’t have the massive budget required for a render farm (a huge set of computers linked together for the sole purpose of doing lighting calculations), you can bet that Hollywood does.
Using a method called “ray tracing,” computers simulate what light would do in the real world again and again, literally thousands of times for every single frame of the film. The machines then take those results and average them together to produce an image that is nearly indistinguishable from a photograph.
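To make that concrete, here’s a minimal C++ sketch of the averaging step at the heart of ray tracing (the technique is a form of Monte Carlo rendering). Everything here is illustrative: traceRay is a hypothetical stand-in for the expensive physics simulation, and the “image” is a tiny 4×4 grid.

```cpp
#include <cstdio>
#include <random>

// Hypothetical stand-in for the expensive part: firing one ray through a
// pixel and simulating its bounces around the scene. A real renderer would
// intersect scene geometry here; we just return a noisy brightness value.
double traceRay(int x, int y, std::mt19937& rng) {
    std::uniform_real_distribution<double> noise(0.0, 1.0);
    return 0.5 + 0.5 * noise(rng);  // fake "light arriving at pixel (x, y)"
}

int main() {
    const int width = 4, height = 4;    // tiny image, just for the demo
    const int samplesPerPixel = 1000;   // films use thousands per frame
    std::mt19937 rng(42);

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Simulate the light again and again...
            double sum = 0.0;
            for (int s = 0; s < samplesPerPixel; ++s)
                sum += traceRay(x, y, rng);
            // ...then average the results into the final pixel value.
            std::printf("%.3f ", sum / samplesPerPixel);
        }
        std::printf("\n");
    }
}
```

The noise in any single sample washes out as the sample count climbs, which is exactly why studios throw thousands of samples at every frame.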
Because ray tracing is such a computationally expensive process, the seemingly unfathomable holy grail of computer graphics has long been to accomplish it in real time (at 30 frames per second or faster). It’s likely that high-end visual effects studios like Industrial Light & Magic and Digital Domain have had this capability for a few years now, but what about your typical Joe Schmo who wants to play Halo with lifelike fidelity? Will these marvels of modern-day technology ever trickle down to the consumer level, so that Optimus Prime can look as good in my iPhone game as he does on the big screen?
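Some back-of-the-envelope math shows why. At 30 frames per second, the renderer gets roughly 33 milliseconds per frame. If a film-quality 1920×1080 frame needs, say, 1,000 light samples per pixel (an illustrative assumption, not a measured figure), that works out to about 2 billion samples per frame, or over 60 billion every second.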
Just two years ago the answer would have been a definitive “no,” but major advances led by graphics card manufacturer Nvidia are beginning to prove otherwise. Let’s go back to the render-farm idea, where lots of computers are linked together for calculations. Each computer can only handle so much calculation at a given time, and this is dictated by the number of “cores” it has. If you’ve purchased a PC in the last five years or so, then you’re more than likely familiar with terms like “dual-core” and “quad-core,” which refer to the number of cores in that particular processor. But the problem with ray tracing is that we need hundreds (preferably thousands) of these cores.
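This is also why ray tracing parallelizes so well: every pixel’s lighting can be computed independently, so the work splits cleanly across however many cores you can get. Here’s a minimal C++ sketch of the idea, with shadePixel standing in hypothetically for the real per-pixel ray-tracing work:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical placeholder for the real work: tracing rays for one pixel.
double shadePixel(int x, int y) {
    return std::sin(x * 0.1) * std::cos(y * 0.1);  // pretend lighting math
}

int main() {
    const int width = 640, height = 480;
    std::vector<double> image(width * height);

    // One worker per hardware core; each worker renders interleaved rows.
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            for (int y = (int)c; y < height; y += (int)cores)
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = shadePixel(x, y);
        });
    }
    for (auto& w : workers) w.join();

    std::printf("Rendered %dx%d using %u cores\n", width, height, cores);
}
```

On a quad-core machine that’s four workers; the same loop structure scales up to the hundreds of machines in a render farm or, as we’ll see, the hundreds of cores on a single graphics card.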
Fortunately, modern graphics cards are perfectly designed to carry out these lighting calculations entirely on their own; even better, they come loaded with cores. In fact, Nvidia recently launched a new flagship workstation graphics card, the Quadro K5000, which comes with a whopping 1,536 cores! On core count alone, that’s the equivalent of 384 quad-core CPUs (GPU cores are simpler than CPU cores, but for this kind of math, sheer numbers win). When used in tandem with a GPU-accelerated renderer, such as cloud-computing giant Otoy’s Octane renderer, virtually any 3D artist or designer can see their work in photorealistic quality nearly instantaneously.
So let’s bring it back to the video game world. Otoy is currently developing an exciting new game engine based on the Octane renderer called Brigade, which will bring this level of quality not just to PCs and consoles but to mobile devices as well. Using cloud-based GPUs, Brigade will do the “heavy lifting” remotely, sending your phone or tablet a constant stream of beautifully rendered images. Some examples of this have surfaced on Ray Tracey’s blog, and they look absolutely stunning.
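Otoy hasn’t published Brigade’s internals, so treat this as a conceptual sketch only: the thin client’s whole job reduces to “receive a finished frame, show it, repeat.” The server address, port, and wire format below are all invented for illustration.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical cloud render server; the address and port are made up.
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(9000);
    inet_pton(AF_INET, "203.0.113.10", &server.sin_addr);
    if (connect(sock, (sockaddr*)&server, sizeof(server)) != 0) {
        std::perror("connect");
        return 1;
    }

    // The device never traces a single ray; it just reads finished frames.
    // Assumed framing: a 4-byte length prefix, then compressed image bytes.
    for (;;) {
        uint32_t lenNet = 0;
        if (recv(sock, &lenNet, sizeof lenNet, MSG_WAITALL)
                != (ssize_t)sizeof lenNet)
            break;  // stream ended
        std::vector<char> frame(ntohl(lenNet));
        if (recv(sock, frame.data(), frame.size(), MSG_WAITALL)
                != (ssize_t)frame.size())
            break;
        // A real client would decode and display the frame here.
        std::printf("Received a %zu-byte frame\n", frame.size());
    }
    close(sock);
}
```

The appeal of this design is that the device’s own hardware barely matters: a phone that could never trace a ray itself only has to decode and display images.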
While it’s still a ways off, expect Brigade and other cloud-based render engines to become popular in the coming years. And while I’m not throwing out my Xbox 360 anytime soon, I couldn’t be more excited about the future of real-time graphics, which just happens to look really good.