A radio telescope produces a data stream for each antenna. Since a modern instrument can comprise up to hundreds of thousands of antennas, these data streams must be processed in parallel on supercomputers. Our research focuses on the architectures of these supercomputers and on how to program them efficiently.
Many Big Data problems in society require the parallel processing of large data streams. Radio astronomy is one of the most data-intensive sciences today, and the technology that we develop is also needed elsewhere in society, e.g., in smart grids or smart traffic systems.
During the past decade, High-Performance Computing has made a steady shift from big, expensive supercomputers to clusters of server machines connected by a fast interconnect. These servers are often equipped with special-purpose, highly parallel accelerators, such as GPUs (Graphics Processing Units), many-core CPUs, DSPs (Digital Signal Processors), and FPGAs (Field-Programmable Gate Arrays), to which applications offload their computationally most intensive tasks. The architectures of these accelerators differ in many ways. We compared a wide range of accelerators for key radio-astronomical applications with respect to performance, energy efficiency, and programming effort, and found GPUs to be the most suitable architecture for compute-intensive applications, and FPGAs for I/O-intensive tasks.
A second research line explores high-level programming of new FPGA technologies, which combine a high-level programming language (OpenCL), hard Floating-Point Units, and tight integration with CPU cores. Previously, FPGAs were programmed in a Hardware Description Language (HDL), which was difficult, time consuming, and error prone. We aim to demonstrate that the programming effort for “simple” (signal-processing) tasks can be reduced significantly, and that a complex High-Performance Computing application (an imager), which was previously deemed too complex for FPGAs, can now be implemented.