Recipe for Rendering
Many ingredients go into optimizing rendering.
December 1, 2012
By Mark Clarkson
Setting up a machine for rendering is easier than ever. If you’re producing your models in, say, 3ds Max or SolidWorks, and you can run the applications themselves, you can almost certainly run their renderers.
That said, optimizing rendering means doing it as quickly as possible. The need for speed throws a few new ingredients into the mix. In addition to upgrading your workstation, you may want to connect workstations together on a local-area network, or access a cloud of processor cores. The options get a bit more complicated as you go.
LAN Rendering
If you’re part of an engineering team, there are probably other computers nearby, and they can help you render. Most rendering software makes some provision for rendering across a local network, usually by sending different individual render jobs to different machines. This approach works best for animations or other sequences of images; it’s no help if you want to accelerate the rendering of a single, high-resolution image.
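Conceptually, that frame-by-frame approach is just a matter of dealing whole frames out to whatever machines are free. Here is a minimal Python sketch of the idea; the node names and frame count are made up, and a real render manager would handle the actual job submission over the network.

```python
# Minimal sketch of frame-level render distribution. Node names and frame
# count are hypothetical; this is not any particular renderer's job format.
from itertools import cycle

nodes = ["render-node-01", "render-node-02", "render-node-03"]  # assumed LAN hosts
frames = range(1, 241)  # a 240-frame animation

assignments = {}
for frame, node in zip(frames, cycle(nodes)):
    assignments.setdefault(node, []).append(frame)

for node, frame_list in assignments.items():
    # A real render manager would submit each machine's frames as network jobs.
    print(f"{node}: {len(frame_list)} frames, starting with {frame_list[:3]}")
```

Because each machine renders complete frames, the scheme scales nicely for animations but does nothing for a single still image.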
There are other ways of distributing the load. The mental ray renderer, for example, supports distributed bucket rendering (DBR), which breaks images down into smaller chunks and distributes those chunks among the processors on the network.
“You can install 3ds Max on five machines,” says Autodesk’s Kelly Michels, “load up mental ray, add the other machines to the render and, when you render, all those machines act as one big processor. If you have 20 processors on the network, they all act as one processor.”
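The bucket idea itself is simple, even though mental ray's actual implementation is far more sophisticated. The Python sketch below splits one frame into tiles and deals them out to 20 processors; the resolution and bucket size are assumptions for illustration, not mental ray's defaults.

```python
# Illustrative only: split one image into buckets (tiles) and deal them out
# to the available processors. Resolution and bucket size are assumptions.
WIDTH, HEIGHT = 1920, 1080
BUCKET = 64  # square bucket size in pixels

buckets = [
    (x, y, min(BUCKET, WIDTH - x), min(BUCKET, HEIGHT - y))
    for y in range(0, HEIGHT, BUCKET)
    for x in range(0, WIDTH, BUCKET)
]

processors = 20  # "all those machines act as one big processor"
queues = {p: buckets[p::processors] for p in range(processors)}
print(f"{len(buckets)} buckets, roughly {len(queues[0])} per processor")
```

Because every bucket of a single frame can be rendered on a different processor, this is the approach that speeds up one high-resolution still, not just an animation.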
Of course, some people might object to you soaking up their machine cycles while they’re trying to work. You can set up batch jobs to run renders overnight, but this doesn’t help you if you just want faster feedback from your rendering application.
If you do enough rendering to justify it, you can build your own render farm: a networked collection of machines just waiting to run your next render job. If that sounds imposing, companies such as BOXX Technologies, Appro (recently acquired by Cray) and Ciara Technologies build preconfigured render farms that make it easy to get up and running quickly. (See “Create Computer Clusters for Small Engineering Teams,” in DE’s June 2012 issue for more information on clusters and render farms.)
Another option is to let someone else build and maintain a render farm for you, on the cloud.
An Out-of-This-World Option
The cloud, says Autodesk’s Rob Hoffman, promises on-demand rendering power: “You can spool up from one to 1,000 machines at a whim, for as long as you require. Once the project is over, all those machines go away. There’s no capital expense lingering after the fact.”
A local render competes with you for processor cycles and memory, but a cloud render doesn't touch your computer at all. You can get on with your work.
The big problem with cloud rendering is that it’s not quite ready for prime time.
“Everybody’s really on the cusp,” Hoffman admits. “There are some cloud rendering options available, but nothing is really in place for the bulk of the standardized rendering solutions.”
Autodesk does offer a cloud rendering solution, 360 Rendering, available for newer versions of Revit and AutoCAD. The cloud is just another rendering option within the software, giving you access to lots of processor cores. But cloud rendering isn’t currently available for the rest of Autodesk’s desktop offerings. Autodesk won’t comment, but I don’t expect that to last.
While rendering in the cloud with Autodesk 360, your local machine is freed up to do other tasks.
There are also commercial render farms like Render Rocket that sell time by the core hour, but they tend to be geared toward the media and entertainment industries. Whether they’ll work for you depends on your software and business considerations.
“It’s not a matter of if [cloud rendering] is going to take off,” says Hoffman. “It’s a matter of when. Everybody’s eyeing the cloud as a potential savior. This is going to give people the ability to have the rendering power of a Pixar or a Weta, but not have the cost of implementing a rendering farm like that.”
Graphics Processing Units (GPUs)
Aside from the cloud, another big change in rendering technology is the rise of the graphics processing unit (GPU). Developed to take on some of the heavy lifting of displaying graphics on the screen, GPUs are built around many small cores that run in parallel. Modern graphics cards can have a lot of them. Dozens. Hundreds.
NVIDIA’s new K5000 cards, for example, boast a startling 1,536 processor cores. That’s a lot of raw processing power on a card. Coupled with plenty of memory—say, 16GB of GDDR5 high-performance, high-bandwidth RAM—it makes for a fast, powerful parallel computer.
This development is especially good news to those of us who produce 3D renderings.
“Rendering applications are one of the few applications that actually scale with the number of cores,” notes Edwin Braun, CEO of cebas Visual Technology.
Consider ray tracing. Computationally, ray tracing involves tracing the paths of rays of light as they bounce through the scene. These rays are independent of one another and can be calculated in parallel, a perfect fit for a massively parallel processor like the modern GPU. The more cores you have, the faster you can calculate the scene.
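To make that concrete, here is a minimal sketch of the parallel-map structure of a ray tracer. The trace() function is a placeholder rather than a real shader, and the sketch uses CPU worker processes, but the same independence between rays is what lets hundreds of GPU cores chew through a scene at once.

```python
# Minimal sketch: every pixel's ray is independent, so tracing is just a
# parallel map over pixels. trace() is a placeholder, not a real renderer.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 640, 480

def trace(pixel):
    x, y = pixel
    # A real tracer would fire a ray here and follow its bounces through the scene.
    return (x ^ y) & 0xFF  # placeholder brightness value

if __name__ == "__main__":
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with ProcessPoolExecutor() as pool:  # one worker per available CPU core
        image = list(pool.map(trace, pixels, chunksize=WIDTH))
    print(f"traced {len(image)} rays")
```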
Software is the Missing Link
Many software applications use GPU-based rendering, either by itself or in conjunction with CPU-based rendering. But many don't, and I've started to feel frustrated by applications that leave the processors on my Quadro idle.
If you do use a GPU with software that takes advantage of it, your CPU and system memory may be freed up during renders, but your graphics card is now busy running simulations and rendering futuristic products. Regular tasks like scrolling a page or even moving the cursor can slow down or grind to a halt.
To counteract that, your killer render machine might very well end up, not just with multiple graphics cards, but with multiple kinds of graphics cards—one for crunching numbers and one for actually displaying information on the screen. NVIDIA Maximus-certified workstations, for example, pair Quadro cards for OpenGL graphics with Tesla cards for compute unified device architecture (CUDA)-powered simulation and rendering tasks, along with powerful system CPUs and plenty of RAM to go around.
Not to be outdone on the parallel processing front, Intel just released its Xeon Phi coprocessor. It uses the company’s Many Integrated Core (MIC) architecture to cram 50 x86 cores, each supporting four threads, onto a PCIe card. Phi is designed to work with Intel’s Xeon E5 family of server and workstation CPUs. According to Intel, Phi is easier to deploy than NVIDIA’s CUDA or AMD’s OpenCL-based technologies because it runs common x86 software. The initial Phi model, available widely next month, is expected to be used mostly in advanced supercomputers before the technology trickles down to high-end workstations.
As computing power increases and cloud options become more readily available, software makers are responding by updating their software to take advantage of those extra cores, right out of the box. As those updates accelerate, one of the bottlenecks of the design cycle—rendering complex products—will be eliminated.
Contributing Editor Mark Clarkson is DE’s expert in visualization, computer animation, and graphics. His newest book is Photoshop Elements by Example. Visit him on the web at MarkClarkson.com or send e-mail about this article to [email protected].