
Visualization Conference: Trendy Ideas or Priceless Gems?


Last month, I attended the IEEE Visualization Conference in Atlanta. The Vis conference is the premier forum where researchers and academics in the fields of scientific visualization, information visualization, and visual analytics present their latest and greatest ideas. This year Durrell Rittenberg (Vice President of Product Management) joined me on the cross-country trek.

I’ve been attending the Vis Conference off and on since 2000—it helps me make sure Tecplot is aware of the newest techniques and trends in visualization research. Many clever ideas are presented each year, but most are not really applicable to our products—either because they don’t solve the problems our customers face, because they don’t add enough value to warrant the additional complexity they bring, or because they are simply trendy and oversold.

On the flip side, some ideas are priceless gems that can improve the capability and usability of Tecplot software. Our grid coarsening techniques (decimation) were derived from ideas presented at the 2001 Vis conference in San Diego, and many subzone load-on-demand techniques can be traced back to out-of-core techniques presented at the 2005 Vis conference in Seattle.

This year, I did hear a couple of interesting themes (and thankfully, no “trendy” ideas). The rest of this blog is about one of the gems.

Theme: Visualization software architectures must evolve to keep pace with changes in computing hardware.

This won’t be a surprise to those who have been reading our blogs or have attended our webinars. In fact, Tecplot has been aggressively responding to the changing hardware landscape for the last two years. Three primary drivers of the change are:

  1. Computing performance is improving much faster than I/O performance. In other words, it is taking more and more time to read or write the data we are capable of creating with high-performance computing (HPC) systems. The industry is responding to this in a variety of ways – from not writing the data at all (in situ visualization) to faster storage systems and file formats (like exaHDF5). Tecplot is responding to this change with subzone load-on-demand, a technology that allows our software to read only the data needed to perform the desired analysis or visualization.
  2. The number of cores per CPU or GPU is growing rapidly. This has huge ramifications for software that uses the Message Passing Interface (MPI) to parallelize computations. As the number of cores increases, MPI becomes less efficient. In fact, Kenneth Moreland (a panelist in the “Challenges for Scientific Visualization Software” session) said that MPI is simply not usable for GPU-based computing. The solution is to use threading in combination with MPI. Fortunately, Tecplot is primarily parallelized through threading, so we are not impacted as much as some other software packages.
  3. Memory per thread is declining. Memory for each CPU/GPU combination is actually increasing, but the number of threads required to efficiently utilize CPUs or GPUs is increasing faster than memory. This has major ramifications for in situ visualization—a common idea for circumventing the first driver. Any memory used for visualization is memory that cannot be used by the simulation software. Existing visualization software tends to take too much memory for in situ visualization on HPC systems.
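
To make the first driver concrete, here is a minimal sketch of the load-on-demand idea in Python. This is an illustration only, not Tecplot’s actual implementation or API; the `Subzone` class, `loader` callable, and `slice_plot` helper are hypothetical names. The essential point is that a subzone’s file read is deferred until something actually touches its data, so a slice or isosurface pays I/O costs only for the subzones it intersects.

```python
# Illustrative sketch (hypothetical names, not Tecplot's API): load-on-demand
# defers reading a subzone's values until a plot or computation requests them.
import threading

class Subzone:
    """A subzone that performs its (slow) file read only on first access."""
    def __init__(self, zone_id, loader):
        self.zone_id = zone_id
        self._loader = loader          # callable that does the actual file I/O
        self._data = None
        self._lock = threading.Lock()  # safe under concurrent threaded access

    @property
    def data(self):
        with self._lock:
            if self._data is None:     # first touch triggers the read
                self._data = self._loader(self.zone_id)
            return self._data

def slice_plot(subzones, intersects):
    # Only subzones that intersect the slice are ever read from disk;
    # everything else stays untouched, saving both I/O time and memory.
    return [sz.data for sz in subzones if intersects(sz.zone_id)]
```

In a real reader the `loader` would seek into the file and pull one subzone’s values; the caching-behind-a-lock pattern is what lets the same structure serve many threads without re-reading.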
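
The second driver contrasts message passing with shared-memory threading. A minimal Python sketch of the threaded side (the `threaded_min_max` helper is hypothetical, not drawn from any real package): every worker scans a slice of the same in-memory array, so unlike an MPI decomposition on a single node, no data is partitioned, copied, or sent between ranks.

```python
# Illustrative sketch: shared-memory threading lets all workers operate on
# ONE in-memory array, avoiding the per-rank copies and message passing an
# MPI decomposition would require on the same node.
from concurrent.futures import ThreadPoolExecutor

def threaded_min_max(values, workers=4):
    """Compute the global min/max of a field by scanning chunks in parallel."""
    n = len(values)
    chunk = (n + workers - 1) // workers   # ceiling division
    def scan(i):
        # Each task reads a slice of the SAME list; nothing is serialized.
        part = values[i * chunk:(i + 1) * chunk]
        return (min(part), max(part)) if part else (float("inf"), float("-inf"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(scan, range(workers)))
    # Reduce the per-chunk results to the global extrema.
    return min(r[0] for r in results), max(r[1] for r in results)
```

The hybrid approach mentioned in the panel keeps MPI for communication *between* nodes while using this kind of threading *within* each node, which is why threading-first packages adapt more easily.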

To summarize, visualization products struggle to keep pace with changes in computing hardware, but software packages that depend on massively parallel systems to visualize large data files are affected the most.