Accelerators

A radio telescope produces a data stream for each antenna. Since we use up to hundreds of thousands of antennas, these data streams are processed in parallel by supercomputers. Our research focuses on the architectures of these supercomputers and on how to program them efficiently.

Many Big Data problems in society require parallel processing of large data streams. Radio astronomy is one of the most data-intensive areas of science today, and the technology that we develop is also needed elsewhere in society, e.g., for smart grids or smart traffic systems.

High-Performance Computing

During the past decade, High-Performance Computing has made a steady shift from big, expensive supercomputers to clusters of server machines connected by a fast interconnect. These servers are often accelerated by special-purpose, highly parallel processors, such as GPUs (Graphics Processing Units), many-core CPUs, DSPs (Digital Signal Processors), and FPGAs (Field-Programmable Gate Arrays); applications offload their computationally most intensive tasks to these accelerators. The architectures of these accelerators differ in many ways. We compared a wide range of accelerators for key radio-astronomical applications with respect to performance, energy efficiency, and programming effort, and found GPUs to be the most suitable architecture for compute-intensive applications, and FPGAs for I/O-intensive tasks.
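As a minimal sketch of what this offloading looks like in practice (illustrative code, not our production pipeline): the host keeps I/O and control flow and ships the compute-intensive inner loop, here the per-sample cross-correlation of two antenna streams into visibilities, to whatever accelerator OpenCL finds. The kernel, the array sizes, and the use of the pyopencl package are assumptions made for this example.

```python
# Sketch of GPU offloading with OpenCL via pyopencl (illustrative only).
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void xcorr(__global const float2 *a,   // antenna 1, complex samples
                    __global const float2 *b,   // antenna 2, complex samples
                    __global float2 *vis)       // per-sample visibility a * conj(b)
{
    int i = get_global_id(0);
    float2 x = a[i], y = b[i];
    vis[i] = (float2)(x.x * y.x + x.y * y.y,    // real part
                      x.y * y.x - x.x * y.y);   // imaginary part
}
"""

n = 1 << 20
rng = np.random.default_rng(0)
a = (rng.standard_normal(n) + 1j * rng.standard_normal(n)).astype(np.complex64)
b = (rng.standard_normal(n) + 1j * rng.standard_normal(n)).astype(np.complex64)

ctx = cl.create_some_context()                  # picks a GPU if one is available
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

mf = cl.mem_flags
a_d = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_d = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
v_d = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog.xcorr(queue, (n,), None, a_d, b_d, v_d)    # offload the hot loop
vis = np.empty_like(a)
cl.enqueue_copy(queue, vis, v_d)

assert np.allclose(vis, a * np.conj(b), atol=1e-4)  # check against the host
```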

A second research line explores high-level programming of new Field-Programmable Gate Array (FPGA) technologies: a high-level programming language (OpenCL), hard Floating-Point Units, and tight integration with Central Processing Unit (CPU) cores. Previously, FPGAs were programmed in a Hardware Description Language (HDL), which was difficult, time-consuming, and error-prone. We aim to demonstrate that OpenCL significantly reduces the programming effort for “simple” signal-processing tasks, and that it allows us to implement a complex High-Performance Computing application (an imager) that was previously deemed too complex for FPGAs.
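To give a flavour of what a “simple” signal-processing task looks like in a high-level language (a hypothetical sketch, not our actual kernel), the 16-tap FIR filter below is a complete OpenCL C kernel; the equivalent VHDL/Verilog would describe registers and control logic by hand.

```c
/* A hypothetical 16-tap FIR filter as a complete OpenCL C kernel.
 * An FPGA OpenCL compiler can turn the unrolled loop into a deep
 * hardware pipeline, which an HDL design would spell out by hand. */
#define TAPS 16

__kernel void fir(__global const float *in,   /* n + TAPS - 1 input samples */
                  __constant float *coeff,    /* TAPS filter coefficients   */
                  __global float *out)        /* n filtered output samples  */
{
    int i = get_global_id(0);                 /* one work-item per output   */
    float acc = 0.0f;
    #pragma unroll                            /* hint: one multiplier per tap */
    for (int t = 0; t < TAPS; t++)
        acc += coeff[t] * in[i + t];
    out[i] = acc;
}
```

On an FPGA target this source is compiled offline by the vendor's OpenCL toolchain into a pipeline; on a GPU or CPU it is compiled just-in-time at run time.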

Cooling

Computers consume power, generate heat, and need cooling. Cooling with air is the most common technology, but liquid cooling has the potential to reduce power consumption substantially. Radio astronomy is extremely data-intensive, and cooling our data centers is required from the perspective of both sustainability and cost. The technology is broadly applicable.

Information Technology

Information Technology is a key enabling technology for energy reduction, as it introduces smartness into systems. At the same time, the data center industry itself contributes substantially to global energy consumption. The liquid cooling solutions applied in radio astronomy have the potential to reduce the overall power consumption of data centers and to extend the lifetime of the systems. Technology needs to be developed for cooling systems that are cost-effective and easy to implement, and new standards for liquid cooling infrastructure need to be defined for use in data centers. A water cooling infrastructure also makes it easier to re-use the waste energy in other processes or applications.

Power consumption of supercomputers

The data processing applied in radio astronomy requires large supercomputers, whose power consumption cannot be neglected; recent innovations in liquid cooling technology reduce power consumption by 10–30%. The power consumption of processor chips (Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs)) depends strongly on the junction temperature: when the chip is kept cooler and at a more stable temperature, its power consumption drops significantly and its lifetime increases. Alternatively, direct cooling can keep the junction at a high temperature and cool the chip with plain water, without a special chiller system, which saves significant investment and power.
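To illustrate how temperature-dependent leakage can account for savings of this order (an illustrative model with assumed numbers, not measured data): if leakage roughly doubles per ~20 °C, a common rule of thumb, then cooling a chip whose power at 85 °C is 30% leakage down to 55 °C saves close to 20% in total, within the 10–30% range quoted above.

```python
# Illustrative model (not a measurement): total chip power = dynamic + leakage,
# with leakage assumed to double per ~20 degC rise in junction temperature.
# All numbers below are assumptions chosen for illustration.

def total_power(p_dyn, p_leak_ref, t_junction, t_ref=85.0, doubling=20.0):
    """Estimated total power (W) at a given junction temperature (degC)."""
    leak = p_leak_ref * 2.0 ** ((t_junction - t_ref) / doubling)
    return p_dyn + leak

p_dyn, p_leak_85 = 70.0, 30.0   # assume a 100 W chip with 30% leakage at 85 degC
for t in (85, 70, 55):
    p = total_power(p_dyn, p_leak_85, t)
    print(f"Tj = {t:2d} degC -> {p:5.1f} W ({100 * (1 - p / 100):4.1f}% saved)")
```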

The power density (W/mm²) of the latest processors requires water cooling at the chip itself to keep the junction temperature at an acceptable level. Cooling the peripheral infrastructure (memory, I/O, ADCs, power supplies, ...) with liquid is a logical, but not easy, next step. Thin, flat cooling assemblies (a low ratio of thickness to area) are required to make densely packed server systems, e.g., in 19” racks, possible. There are specific solutions for direct liquid cooling, but no generic direct liquid cooling system is available yet for retrofit or general-purpose use, nor is there a liquid infrastructure or an international standard.
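A back-of-the-envelope check on the power density claim, with assumed but roughly representative numbers for a current high-end accelerator:

```python
# Illustrative numbers only: a ~700 W accelerator package with an ~800 mm^2 die.
package_power_w = 700.0
die_area_mm2 = 800.0
flux = package_power_w / die_area_mm2
print(f"{flux:.2f} W/mm^2  (= {flux * 100:.0f} W/cm^2)")   # ~0.9 W/mm^2
```

Heat fluxes of this order are hard to remove with air through a heatsink in a densely packed rack, which is one reason cold plates are mounted directly on the chip.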
