The Star-P software platform from Interactive Supercomputing enables you to operate your existing desktop simulation tools interactively and automatically in an HPC (high-performance computing) environment. Star-P serves as a bridge between such scientific and engineering tools as MATLAB from The MathWorks, your desktop workstation, and grid and HPC solutions such as those available from HP and SGI.
So, how can you leverage Star-P and what does it really bring to the table? The following vignettes answer that question.

Processing MRI Brain Scan Images

Researchers use MRI (magnetic resonance imaging) in a broad set of applications, such as distinguishing pathological tissue from normal tissue, studying brain function, and understanding correlations between brain structure and diseases. MATLAB is a popular tool in neuroimaging for both algorithm development and production analysis.
However, image processing has become increasingly compute-intensive because of growing data volumes and the complexity of the processing algorithms. Further, new MRI machines offer increased resolution, faster acquisition, volumetric imaging, and temporal studies, all of which, combined with the computing requirements of many neuroimaging applications, have exceeded the computing capacity of most workstations. Consequently, more and more scientists turn to HPC servers to support their image-processing workloads. Problem is, porting applications to a parallel HPC environment is a specialized activity, one that many neuroscience groups lack the expertise for. This is where Star-P gets involved. Researchers at several facilities worldwide use Star-P to run their MATLAB codes in parallel. Star-P lets them write MATLAB applications on their desktop and then run them interactively on multiprocessor servers, with no need to re-program applications in C, Fortran, or MPI.
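In practice, the change to existing code can be minimal. The sketch below illustrates Star-P's published `*p` construct for distributed arrays; it is an illustration based on the product's documented usage, and exact syntax may vary by release:

```matlab
% Ordinary MATLAB: the array lives on the desktop
A = rand(4000, 4000);
B = fft2(A);

% With Star-P: appending *p to a dimension creates the array on
% the HPC server; overloaded functions such as fft2 then execute
% there in parallel, while the MATLAB session stays interactive
A = rand(4000*p, 4000*p);
B = fft2(A);   % computed on the server; result stays distributed
```

The appeal is that the second pair of lines is the same MATLAB the researcher already wrote, with one annotation added.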
Star-P’s data-parallel mode can be applied to the processing of large images, and its task-parallel mode to the independent processing of multiple moderately sized images. With Star-P and parallel HPCs, neuroscientists can solve computational problems in a fraction of the time. For example, a perfusion analysis of 256 × 256 images that took 45 minutes on a desktop runs in under 5 minutes on a 4-processor SGI Altix server.

Parallelization of FEA

Star-P’s task- and data-parallel modes lend themselves well to the FEA (finite element analysis) workflow. MATLAB is widely deployed in situations where engineers need more control of the underlying algorithms and tighter control over the equations that govern the relationships between nodes in the mesh, in applications such as fluid flow, high-temperature plasma flow, airframe optimization, and grain-boundary effects in crystals.
In such circumstances, a typical workflow involves the following steps:

1. Export the 3D geometry of the object to be studied from CAD.
2. Import the geometry into MATLAB.
3. Assemble the matrices that define the set of equations to be solved (e.g., stiffness and force).
4. Solve the resulting equations (e.g., [F] = [K][x], where [F] is the force applied, [K] is the stiffness matrix, and [x] is the displacement).
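The solve step of that workflow maps directly onto MATLAB's backslash operator. As a sketch, with assumed variable names and [K] assembled as a sparse matrix:

```matlab
% Solve [F] = [K][x] for the displacement vector x; with Star-P,
% the same overloaded operator executes on the server whenever
% K and F are distributed arrays
x = K \ F;
```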
However, as models become increasingly complex, due to complexity both in the geometry and in the physical phenomena modeled, desktop processing can quickly become impractical. Relatively modest models with tens of thousands of nodes can take hours to compute, or might even run into paging limitations.
Star-P’s task-parallel mode is well-suited to carrying out in parallel the operations that do not depend on each other—such as the creation of the various sparse matrices (e.g., calculating the stiffness matrix).
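As a sketch of how that looks in code, based on Star-P's documented `ppeval` and `split` constructs (the exact call signature may differ by release, and `elem_stiffness` is a hypothetical user function):

```matlab
% elem_stiffness is a hypothetical routine that takes one element's
% node coordinates and returns that element's local stiffness
% contribution.

% Task-parallel evaluation: split() slices the third dimension of
% elem_coords across processors, and each independent call of
% elem_stiffness runs in parallel on the server
Ke_all = ppeval('elem_stiffness', split(elem_coords, 3));
```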
With the matrices assembled, you can solve the equations in a variety of ways: either by using standard MATLAB functions transparently overloaded by Star-P, or by plugging in any of a range of solvers from the open-source community or from numerical-library vendors. For example, using the Star-P SDK, you can plug in solvers from Sandia National Labs’ Trilinos library.
Recently, researchers at a leading medical school applied Star-P to finite element modeling of the human heart. The algorithm consisted of the following four steps:

1. Read in the data.
2. Build the stiffness matrix [K] and force vector [F].
3. Set the boundary conditions for [K] and [F].
4. Solve the matrix equation [K][U] = [F].
The problem was an excellent candidate for conversion to task-parallel computation because the contribution of each element to the stiffness matrix [K] can be calculated independently of all other elements.
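The vectorization idea can be sketched as follows (variable names are hypothetical; the point is to replace a per-element for-loop with a single sparse() accumulation that MATLAB executes in bulk):

```matlab
% Loop version: accumulate one element's contribution at a time
% for e = 1:nelem
%     K(idx(e,:), idx(e,:)) = K(idx(e,:), idx(e,:)) + Ke(:,:,e);
% end

% Vectorized version: precompute all (row, col, value) triplets
% for every element, then let sparse() sum duplicate (i,j)
% entries in one call
K = sparse(rows(:), cols(:), vals(:), ndof, ndof);
```

Vectorized code of this kind is faster on a single processor and also maps more naturally onto Star-P's parallel back end.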
Four changes were made to the original MATLAB code to vectorize the for-loops that calculate the stiffness values for each element of the matrices [K] and [F]. Two different models, one with 16,000 elements and the other with 159,000, were run on the desktop and with Star-P connected to an 8-processor server. Speedups of 10 to 100 times were observed, due in part to the improved (i.e., vectorized) code, and in part to running the computation on multiple processors.

Radar System Design

For years, the military relied on radar information from land-based facilities and reconnaissance aircraft. Recently, it expanded its data sources to include satellite-based radar systems. The result is a flood of data to deal with: traditional radar images might measure 10MB, while satellite radar can easily inundate a facility with terabytes of data daily. Obviously, this complicates and delays analysis dramatically.
Engineers and researchers at the Air Force Research Laboratory (AFRL; Rome, NY) use MATLAB to develop, test, and analyze surveillance equipment and algorithms. Because AFRL researchers are skilled at evaluating radar-analysis algorithms with MATLAB, they wanted to preserve the familiarity and interactivity of their desktop environment while taking advantage of the computational power of parallel HPCs. Due to the time-sensitivity of their work, they could not afford the time required to re-program their algorithms for parallel processing. So, AFRL’s engineers chose Star-P to parallelize their MATLAB codes.
Star-P preserves the familiar workflow while tackling data sets that are orders of magnitude larger than those the researchers can process on their desktops. For AFRL, Star-P combines the critical parallel approaches in one environment: task and data parallelism, backend support, and compilation.
The result gives the user a familiar interface and the ability to use the parallel server’s large processor and memory resources. A key goal of the AFRL is to enable an agile workforce, and researchers chose Star-P for its ability to deliver desktop interactivity and ease-of-use, while handling large and growing data sets and simulations. AFRL also got a speed boost. For example, a 2D FFT on a large dataset—200GB, matrices with more than 13 billion elements—computes in less than 50 seconds on a 128-processor SGI Altix server.
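A computation of that shape can be expressed with the same one-character change described earlier. This is an illustrative sketch assuming Star-P's `*p` construct; the matrix size shown is illustrative only:

```matlab
% The data never touches the desktop: the matrix is created as a
% distributed array on the server, and the overloaded fft2 runs
% there across all processors
n = 116000;          % illustrative: n^2 is more than 13 billion elements
X = rand(n*p, n);    % distributed along the first dimension
Y = fft2(X);
```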
“Star-P has dramatically accelerated the development of microwave image reconstruction algorithms by allowing us to stay within the MATLAB programming environment and exploit supercomputing resources for algorithm development and processing throughput,” says Kevin Magde, electronics engineer at AFRL. “It’s an enabler—without it we would not be using parallel hardware, and the increased capabilities fundamentally transform what we are able to do in the Advanced Radar Waveforms and Processing Branch.”
Ilya Mirman is a vice president at Interactive Supercomputing (ISC). Prior to joining ISC, he was a VP at SolidWorks, helping establish it as a standard in 3D mechanical design software. Mirman has a BS in mechanical engineering from the University of Massachusetts, an MS in mechanical engineering from Stanford, and an MBA from MIT’s Sloan School. Send your comments about this article through e-mail; please reference “Desktop to Grid, November 2006” in your message.
Contact Information
Altix Server: SGI, Mountain View, CA
HP: Palo Alto, CA
MATLAB: The MathWorks, Inc., Natick, MA
Star-P, Star-P SDK: Interactive Supercomputing, Waltham, MA
Trilinos: Sandia National Laboratories, Albuquerque, NM