
Evolving Tecplot Visualization Software Keeps Pace with Hardware


Written by Dr. Scott Imlay, updated September 2021

As Tecplot CTO, I've been attending the IEEE Visualization Conference off and on since 2000. It helps me make sure Tecplot is aware of the newest techniques and trends in visualization research. Many clever ideas are presented each year, but most are not applicable to our products, either because they don't solve the problems our customers face, don't add enough value to justify the complexity they bring, or are simply trendy and oversold.

On the flip side, some ideas are priceless gems that can improve the capability and usability of Tecplot software. Our grid coarsening techniques (decimation) were derived from ideas presented at the 2001 Vis conference in San Diego, and many subzone load-on-demand techniques can be traced back to out-of-core techniques presented at the 2005 Vis conference in Seattle.

Since then, I've heard a couple of interesting themes (and thankfully, no "trendy" ideas). The rest of this blog is about one of these gems and the theme behind it:

Visualization Software Architectures Must Evolve to Keep Pace with Hardware Computing

This won’t be a surprise to those who have been reading our blogs or have attended our webinars. In fact, Tecplot has been aggressively responding to the changing hardware landscape for the last two years. Three primary drivers of the change are:

Computing Performance is Improving Much Faster than I/O Performance

[Figure: In situ visualization of a 10-billion-cell transient solution]

In other words, it is taking more and more time to read or write the data we are capable of creating with high-performance computing (HPC) systems. The industry is responding in a variety of ways, from not writing the data at all (in situ visualization) to faster storage systems and file formats (like exaHDF5). Tecplot is responding to this change with subzone load-on-demand, a technology that allows our software to read only the data needed to perform the desired analysis or visualization.
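The core idea behind load-on-demand can be sketched with a toy file layout. This is not Tecplot's actual SZL format; it is a hypothetical single-file layout with a header index mapping each subzone to a byte offset, so a reader can pull in one subzone without scanning the rest of the file:

```python
import struct
from io import BytesIO

def write_indexed_file(subzones):
    """Pack subzones (lists of floats) behind a simple (offset, length) index."""
    payloads = [struct.pack(f"{len(s)}d", *s) for s in subzones]
    index = []
    offset = 4 + 16 * len(payloads)  # header: int count + one (offset, length) pair per subzone
    for p in payloads:
        index.append((offset, len(p)))
        offset += len(p)
    buf = BytesIO()
    buf.write(struct.pack("i", len(payloads)))
    for off, length in index:
        buf.write(struct.pack("qq", off, length))
    for p in payloads:
        buf.write(p)
    return buf.getvalue()

def load_subzone(data, subzone_id):
    """Read exactly one subzone, touching only its index entry and payload bytes."""
    count, = struct.unpack_from("i", data, 0)
    assert 0 <= subzone_id < count
    off, length = struct.unpack_from("qq", data, 4 + 16 * subzone_id)
    return list(struct.unpack(f"{length // 8}d", data[off:off + length]))
```

A slice or isosurface query that touches only a few subzones would call `load_subzone` just for those, which is why I/O cost scales with the size of the result rather than the size of the dataset.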

The Number of Cores Per CPU or GPU is Growing Rapidly

This has huge ramifications for software that uses the Message Passing Interface (MPI) to parallelize computations. As the number of cores increases, MPI becomes less efficient. In fact, Kenneth Moreland (a panelist in the "Challenges for Scientific Visualization Software" session) said that MPI is simply not usable for GPU-based computing. The solution is to use threading in combination with MPI. Fortunately, Tecplot is parallelized primarily through threading, so we are not impacted as much as some other software packages.
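The threaded half of that hybrid model can be illustrated with a simple fan-out/reduce. In the full hybrid scheme, each MPI rank would own one node and spread its local work across threads like this, with an MPI reduction combining the per-rank results (in C++ the threads run truly in parallel; CPython threads share the GIL, so this is only a structural sketch):

```python
from concurrent.futures import ThreadPoolExecutor

def threaded_reduce(values, num_threads=4):
    """Split `values` into chunks, sum each chunk on its own thread, then combine."""
    chunk = max(1, len(values) // num_threads)
    pieces = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        partials = list(pool.map(sum, pieces))
    # In the full hybrid, an MPI_Allreduce would combine the per-rank totals.
    return sum(partials)
```

The appeal of threading within a node is that the threads share one address space, so they avoid the per-rank memory duplication and message overhead that make pure MPI scale poorly at high core counts.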

Memory Per Thread is Declining

Memory for each CPU/GPU combination is actually increasing, but the number of threads required to efficiently utilize CPUs or GPUs is increasing faster than memory. This has major ramifications for in situ visualization, a common idea for circumventing the first driver: any memory utilized for visualization is memory that cannot be utilized by the simulation software. Existing visualization software tends to take too much memory for in situ visualization on HPC systems.
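The arithmetic behind this trend is simple: the per-thread memory budget is node RAM divided by thread count, and the denominator is growing faster than the numerator. A quick sketch with illustrative (not vendor-specific) numbers:

```python
def gb_per_thread(node_ram_gb, hardware_threads):
    """Memory budget each thread gets if node RAM is split evenly."""
    return node_ram_gb / hardware_threads

# Hypothetical node generations: RAM quadruples, but thread count
# grows 16x, so the per-thread budget shrinks by 4x.
older = gb_per_thread(64, 16)    # 4.0 GB per thread
newer = gb_per_thread(256, 256)  # 1.0 GB per thread
```

Any in situ visualization library competing for that shrinking per-thread budget has to be extremely frugal, which is why memory footprint keeps coming up as a design constraint.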

To summarize, visualization products struggle to keep pace with changes in computing hardware, but software packages that depend on massively parallel systems to visualize large data files are affected the most.
