I spent my undergraduate years solving calculus problems and differential equations, learning electric and electronic circuits, and occasionally soldering components in the Electronic and Electrical Engineering department. I chose to study EE because I was curious about how a television works. During those years, however, I found I was happiest when writing code, and I grew curious about how computer software works. That is how I ended up as a graduate student in the Computer Science department at the University of Southern California.

As a master's student, I took Computer Science classes, completed projects, and was happy. After two years of study, with a Master of Science degree in hand, I asked myself, "I think I understand how a computer works, but do I really know what Computer Science is?" I realized I did not even know what science is. I wasn't satisfied; there had to be something more. So I became a Ph.D. student at one of the best visualization laboratories in the world, the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago. When I first came to the lab, LambdaVision, run by the Scalable Adaptive Graphics Environment (SAGE) [Jeong et al., 2006], grabbed my attention. LambdaVision was part of the lab's NSF-funded OptIPuter project, which had begun ten years earlier and was essentially an early instantiation of today's growing cloud computing model. LambdaVision works as a thin-client visualization platform at the users' end, so my research as a Ph.D. student has focused on large-scale collaborative thin-client visualization systems.

The OptIPuter project envisioned what would happen if bandwidth were no longer the limit. We realized that the major bottleneck for data-intensive network applications in the OptIPuter model resides at the end hosts. We also noticed two hardware trends: processors were gaining more physical cores, and Non-Uniform Memory Access (NUMA) architectures were becoming widespread in multi- and many-core systems. We conducted research to exploit modern multi-core, hierarchical-memory systems to achieve high-speed network performance (on the order of tens of gigabits per second) in support of data-intensive e-Science applications. This work is presented in [Vishwanath et al., 2008].
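The core of the NUMA-aware idea can be sketched in a few lines: receive threads for parallel streams should land on cores local to the NIC's NUMA node before spilling onto remote nodes, so buffers stay close to where the data arrives. The sketch below is a simplified illustration of that placement policy, not the actual implementation from the paper; the function name and topology format are hypothetical.

```python
def assign_streams_to_cores(num_streams, numa_topology, nic_node):
    """Map each parallel receive stream to a CPU core, preferring cores
    on the NUMA node local to the network interface.

    numa_topology: dict mapping NUMA node id -> list of core ids
    nic_node: NUMA node id closest to the NIC (hypothetical input)
    """
    local = list(numa_topology[nic_node])
    remote = [c for node, cores in sorted(numa_topology.items())
              if node != nic_node for c in cores]
    ordered = local + remote  # NIC-local cores come first
    # Round-robin streams over the preference-ordered core list.
    return {s: ordered[s % len(ordered)] for s in range(num_streams)}
```

For example, with two NUMA nodes of two cores each and the NIC on node 0, three streams are pinned to cores 0, 1, and 2, keeping the first two streams NIC-local. On a real system the returned mapping would be applied with an affinity call such as Linux's `sched_setaffinity`.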


Figure 1. NCSA's storm data is visualized remotely and parallel-streamed to LambdaVision, run by the Scalable Adaptive Graphics Environment (SAGE).

After that, I wanted to see serious scientific visualizations on LambdaVision, so I worked on enabling ultra-high-resolution remote visualization by integrating ParaView, a well-known scientific visualization tool, into our display wall environment [Nam et al., 2009]. This showed how remote computing resources and a cluster-driven display wall can be utilized when coupled over a high-speed network. Figure 1 shows a remote visualization demonstration on LambdaVision, a 100-megapixel display wall driven by a cluster of 28 computers.

What happens when multiple image fragments are streamed in parallel, in best-effort fashion, over high-speed networks? A proper synchronization scheme is needed to present a seamless display surface on a cluster-driven tiled display wall, and the problem is non-trivial when the wall supports multiple parallel streams from multiple applications. My research on cluster-driven display wall systems includes an inter-node data and image synchronization scheme that provides a scalable solution to this problem, achieving a ten-fold improvement over a previous naïve approach [Nam et al., 2010].
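The essence of the synchronization problem can be illustrated with a swap-barrier pattern: each display node buffers incoming fragments and reports the frame number it is ready to show, and a coordinator tells all nodes to swap only once every node holds that frame. The sketch below is a minimal, hypothetical illustration of that pattern, not the scheme from the paper (which must scale to many streams and applications).

```python
class SwapCoordinator:
    """Toy swap-barrier for a tiled display wall: a frame is released
    for display only when every display node has reported it ready."""

    def __init__(self, node_ids):
        self.node_ids = set(node_ids)
        self.ready = {}  # frame number -> set of nodes that buffered it

    def report_ready(self, node_id, frame):
        """A display node reports it has buffered `frame`.
        Returns the frame number to swap to, or None to keep waiting."""
        self.ready.setdefault(frame, set()).add(node_id)
        if self.ready[frame] == self.node_ids:
            del self.ready[frame]
            return frame  # broadcast: all nodes swap this frame now
        return None
```

With this policy, no tile ever displays frame N while a neighbor still shows N-1, which is what keeps the wall looking like one seamless surface. The naïve version of this idea serializes a round-trip per frame per stream; scaling it is where the real work lies.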

As the project progressed, I noticed that improvements in graphics hardware were increasing the number of displays, and the image resolution, that a single machine could support. This makes it possible to drive a display wall with a single machine, reducing the cost of building and managing a cluster. It was obvious that more users could benefit from a collaborative visualization platform running on a single machine, since managing a cluster is one of the main challenges for users. So I developed a new graphics middleware called SAGE-Next that runs on a single machine and provides a collaborative work environment. Figure 2 shows a tiled display wall driven by a single machine.


Figure 2. A 20×6-foot display wall driven by SAGE-Next at the Electronic Visualization Laboratory, University of Illinois at Chicago.

However, I realized that traditional operating system schedulers do not fit multi-user collaborative environments. This led to my Ph.D. dissertation, multi-user-centered resource scheduling for large-scale display wall environments. The dissertation presents a model-based application priority assessment driven by multi-user interactions, along with resource estimation and allocation algorithms that achieve visually fair and higher user-perceived performance [Nam et al., 2014].
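The flavor of the idea can be conveyed with a tiny sketch: applications that users are actively interacting with earn higher priority, and the shared resource (here, a total frame-rate budget) is divided in proportion to those priorities, so every application stays visually responsive in proportion to user interest. This is a simplified, hypothetical illustration of proportional allocation, not the dissertation's actual models or algorithms.

```python
def allocate_frame_budget(total_fps, priorities):
    """Divide a total frame-rate budget among applications in
    proportion to their (interaction-derived) priorities.

    priorities: dict mapping app name -> positive priority score
    Returns a dict mapping app name -> allocated frames per second.
    """
    total_priority = sum(priorities.values())
    return {app: total_fps * p / total_priority
            for app, p in priorities.items()}
```

For instance, with a 60 fps budget and one application carrying twice the interaction-based priority of another, the first receives 40 fps and the second 20 fps, rather than the first-come-first-served outcome a traditional scheduler might produce.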

With my experience in multi-user collaborative display wall environments, I am currently working at Sensory Technologies. As a Lead Software Engineer and Researcher, my main focus is designing and prototyping user interfaces and the underlying mechanisms for ultra-high-resolution, multi-user display wall environments.