The SKA telescopes are currently in the construction phase, and with them the central signal processor (CSP) for the SKA low-frequency telescope, called CSP-Low, is taking shape.
Published by the editorial team, 1 July 2022
In a nutshell, CSP-Low is the central processing ‘brain’ of the SKA low-frequency telescope, which will be situated in Western Australia. CSP-Low will convert digitized astronomical signals detected by the SKA-Low receivers into data that can be used to generate detailed images, but it will also have specialized pulsar capabilities. It is a combination of hardware, firmware, and software.
CSP-Low will be integrated and delivered to site by a consortium led by the Dutch company TOPIC; ASTRON is one of its subcontractors. One of TOPIC’s specializations is designing and developing hardware, firmware, and software for sophisticated systems.
“We have lots of experience with working in scaled agile frameworks and setting up high-performance teams. We have been involved in similar projects before,” says Duncan Stanton, director of projects at TOPIC.
Four subsystems
“The CSP will receive around 260,000 antenna signals in total, coming from 256 dual-polarized antennas per station, 512 stations in total,” says Daniël van der Schuur, project manager of CSP-Low at ASTRON. Each antenna is polarized in two directions, meaning that it can receive two different signals at once. Even though the data is reduced at the antenna site, bringing the volume down to a more manageable level, CSP-Low will still have to process and convert terabits of data per second to provide several other SKA-Low systems with the necessary information.
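As a back-of-the-envelope check (illustrative only, not the project's own accounting), the quoted figure follows directly from multiplying stations, antennas per station, and polarizations:

```python
# Rough check of the quoted antenna signal count.
stations = 512               # SKA-Low antenna stations
antennas_per_station = 256   # dual-polarized antennas per station
polarizations = 2            # each antenna receives two polarizations

signals = stations * antennas_per_station * polarizations
print(signals)  # 262144, i.e. "around 260,000"
```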
To do so, CSP-Low contains four subsystems (products), three of them for data processing: the correlator & beam former (CBF) for imaging, the pulsar search subsystem (PSS), and the pulsar timing subsystem (PST). The fourth subsystem is a local monitoring and control (LMC) system, which handles the control of hardware for the SKA Telescope Manager, a distributed software application designed to control the operation of the thousands of systems that the SKA telescopes will consist of. All subsystems are being built at different locations around the globe.
The digitized data from the antennas travel to CSP-Low through optical fibre cables, where they reach a set of high-end network switches. Van der Schuur: “Then, the data goes through the CBF and after that to the three other systems.” The CBF provides the data that is used to create images of the sky. It also provides data to observe pulsars or record radio bursts.
CSP-Low receives around 6 terabits per second of antenna data. After processing, it outputs around 7 terabits per second to the Science Data Processor (the SDP will process data from each telescope and create images and other products that astronomers can use), and 2 terabits per second to the PSS and PST. The correlator & beam former pre-processes the data specifically for the subsystems to which it is sent. It combines the signals from the antenna stations into beams (beamforming). Beams are specific directions in the sky at which the telescope is pointed. SKA-Low is capable of generating multiple independent beams, which means that it can observe in different directions simultaneously. It creates 16 beams for the PST and 500 beams for the PSS.
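The figures quoted above can be summarized in a small sketch (the numbers are those from the article, not an official data budget); note that the total output exceeds the input:

```python
# Data-flow figures quoted in the article, in terabits per second.
input_tbps = 6                   # digitized antenna data into CSP-Low
to_sdp_tbps = 7                  # correlated data to the Science Data Processor
to_pulsar_tbps = 2               # beam data to PSS and PST combined

output_tbps = to_sdp_tbps + to_pulsar_tbps
print(output_tbps)               # 9: CSP-Low outputs more data than it receives

beams = {"PST": 16, "PSS": 500}  # independent beams formed by the CBF
print(sum(beams.values()))       # 516 simultaneous beams
```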
“The CBF is the initial part of CSP-Low,” Stanton says. “It relies on immense parallel processing power.” That means it needs FPGA hardware powerful enough to process all the data. At the same time, the telescope also relies heavily on software. Stanton: “Software effectively knows no limit. You can always make it faster and more power efficient. But it is the hardware that has to support the software running on it. Continuous developments in hardware mostly mean getting more for less: more processing power, resulting in shorter calculation times and lower energy use.” Chip developers continuously release more powerful processors, so being patient pays off: you get faster chips for the same price.
Data off-site
But at some point you have to decide which hardware you are going to work with if you actually want to start building: you have to set a deadline for starting construction of the telescope. Stanton: “The SKA project has moved on from the research-and-design phase to the construction phase.” There were some changes in the design of CSP-Low along the way. For example, the location of the signal processor was changed. Van der Schuur: “At first, CSP-Low was to be built in the desert, right on site, about 600 kilometres north of Perth. However, it was decided to move it to Perth. It will now be housed at the Pawsey Centre, the same location as the Science Data Processor.” The original idea was to place CSP-Low on site in order to reduce the data rate before sending it off for further processing. However, this proved unnecessary, since the CBF actually outputs more data than it receives. Van der Schuur: “This change in the design makes the maintenance of CSP-Low much easier to perform.”
What does CSP-Low actually look like? “It totals more than 200 servers and a lot of network switches and optical fibre,” Van der Schuur says. “The CBF consists of around 20 to 25 servers, the PSS around 175 servers and the PST around 25 servers. The system also has a lot of GPUs (graphics processing units, which are also used on graphics cards in PCs and game consoles) and field-programmable gate arrays; there are about 20 FPGA cards in a single server.” An FPGA is an integrated circuit that can be configured by the customer after manufacture (hence ‘field-programmable’).
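Tallying the per-subsystem server counts quoted above (taking the upper end of the 20-to-25 range for the CBF) shows they are consistent with the “more than 200 servers” total:

```python
# Server counts per subsystem as quoted in the article (CBF at the
# upper end of its 20-25 range); an illustrative consistency check.
servers = {"CBF": 25, "PSS": 175, "PST": 25}
total_servers = sum(servers.values())
print(total_servers)  # 225, consistent with "more than 200 servers"
```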
Test location
The consortium team currently consists of five people but will grow to around ten. The majority will come from consortium leader TOPIC. “The biggest challenge that we face is the integration of the four subsystems,” says Van der Schuur. “We have to get the separately built components to work together after integration.” These components have been tested extensively individually, but the integration of these complex systems is challenging, and much can go wrong.
“Which is why we are currently establishing an integration and test location at TOPIC in Best, near Eindhoven,” Stanton says. Product team members can test their systems or parts of their systems, either remotely or physically, to make sure that they work properly. If they do not work, the TOPIC consortium can debug, provide insights, and assist in ensuring seamless integration.
Clear communication, together with proper testing, is key, Stanton says: “Different companies in different countries have different cultures and laws and therefore work with different certifications. It is all about having a clear verification strategy, so that everybody knows where to plug their part in. That is why the integration test facility in Best is so important.” Nevertheless, some tests will still eventually need to be done in Australia.
The integration facility in Best is currently being established. The first real integration tests will commence within the next six months, once the specialised hardware has arrived.