There is Not a Single Centre Ready Today in India to Host a 5 Petaflop Machine: Rajesh Chhabra, Altair Engineering
Even though the Indian government is making efforts to advance the high-performance computing sector, Rajesh Chhabra of Altair Engineering says the field still faces severe challenges.
Consider a hypothetical situation: you want to buy the car of your dreams. McLaren is now aggressively planning to sell its sports car for daily use. But then you look at the streets of Bengaluru and cannot help but wonder, “Why would I even buy this?”, unless you have pots of money sitting idle in a secret vault. We still struggle to navigate the streets in our SUVs, and McLaren wants us to buy a race car.
A somewhat similar situation can be seen in high-performance computing, with the government aggressively pushing supercomputing technologies. Though it has been around for decades, HPC still faces severe challenges, such as a shortage of skilled manpower and the demands of energy-efficient operation. “HPC is very traditional and has been there for many years,” says Rajesh Chhabra, Vice President of the Enterprise Solutions division for Asia Pacific at Altair Engineering.
However, he says that even with these challenges, HPC is now present in every field, right from designing vehicles and forecasting weather to drug discovery and manufacturing. “My favourite tagline is ‘HPC improves people’s lives’. We, at Altair, help improve the HPC experience,” says Chhabra.
Altair, founded in Troy, Michigan, in 1985, occupies a unique position in the HPC market. Along with end-user applications and consultancy, it also makes HPC middleware, PBS Professional, which functions as a workload manager and job scheduler for clusters, clouds and supercomputers.
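To give a flavour of what a workload manager does, here is a minimal PBS Professional job script. The directives (`-N`, `-q`, `-l select`, `-l walltime`) are standard PBS syntax, but the queue name, resource sizes and the solver being launched are hypothetical placeholders, not details from the article.

```shell
#!/bin/bash
# Minimal PBS Professional job script (illustrative sketch).
#PBS -N cfd_run                       # job name
#PBS -q workq                         # target queue (site-specific, assumed)
#PBS -l select=4:ncpus=32:mem=64gb    # ask for 4 nodes, 32 cores and 64 GB each
#PBS -l walltime=02:00:00             # wall-clock limit
#PBS -j oe                            # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"                   # run from the directory the job was submitted in
mpirun ./solver input.dat             # launch the (hypothetical) MPI solver
```

Submitted with `qsub job.pbs`, the scheduler queues the job until it can match the `select` request against free nodes, which is precisely the resource-brokering role Chhabra describes.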
In March, the Indian government announced the launch of the National Supercomputing Mission, which aims to install 73 supercomputers across the country. This Rs. 4,500-crore mission “will enable India to leapfrog to the league of world-class computing power nations,” a government release said. The initiative shows India is pushing ahead in the HPC field. But is it ready to take on such a huge challenge?
Not an easy road to compute
To host such mammoth machines, Chhabra says, facilities must first be energy efficient. “There is not a single centre ready today to host a 5 petaflop machine, because there is not enough power... IISc has the biggest machine today, which is close to a petaflop. But then again they do not have a datacenter to host a 5 petaflop machine. And if they did, then there will be never enough power to run another machine in CSIR. So they will have to limit and decide who hosts the big machine,” Chhabra says.
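A back-of-the-envelope calculation shows why power is the bottleneck. The efficiency figure below is an assumption for illustration, roughly in line with efficient systems of the mid-2010s, not a number from the article:

```python
# Rough power-budget estimate for a 5-petaflop machine.
# EFFICIENCY is an assumed ~2 GFLOPS per watt, typical of
# energy-efficient systems around the mid-2010s (illustrative only).
PEAK_FLOPS = 5e15          # 5 petaflops
EFFICIENCY = 2e9           # assumed FLOPS per watt

power_watts = PEAK_FLOPS / EFFICIENCY
print(power_watts / 1e6)   # prints 2.5 (megawatts)
```

At that assumed efficiency the compute alone draws about 2.5 MW; counting cooling and facility overhead (a PUE of, say, 1.5) pushes the site requirement towards 4 MW, which few Indian research campuses were provisioned for.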
This may spark competition among institutes keen to show off their supercomputing prowess. “It is already happening. C-DAC wants to have a big computer, but then there are certain computers in other institutes there. If two sites have 5 petaflop machines each, Pune will not have electricity,” he smirks.
Even if a centre is ready with its energy-saving plan, Chhabra says, institutes have not tackled the shortage of manpower. “While there is a lot of emphasis on Make in India, [the government] will have to get a lot of people or a lot of training from consultants overseas to come in and train the people. There is little training going on today on how to manage HPC,” he says.
Speaking about the 11th Altair Technology Sales Conference Series in Bengaluru, Chhabra says he saw this first-hand. “Among the people that attended the event, not a single one is trained in HPC. They all learned HPC on the job … People are looking for experts, but the community is small in terms of number of experts in HPC,” he says.
However, he says that with the government spending on NSM, this issue might have a solution. “One of the big chunks of the Rs. 4,500 crore is allocated for training and skill development, which is the right step that the government has taken.”
Network infrastructure is another big challenge in India. “We don’t know how much money is going to be spent on network infrastructure. You can build all the supercomputers you want, but the whole idea is to collaborate and share, and if moving a large file takes forever, people will find it difficult to work with it,” he says.
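The "forever" is easy to quantify. The file size and link speed below are illustrative assumptions, not figures from the interview:

```python
# How long does it take to move a large simulation output over a modest link?
# Both numbers are illustrative assumptions for the sake of the arithmetic.
file_size_bytes = 1e12        # a 1 TB result file, common in CFD or weather runs
link_gbps = 1                 # assumed 1 Gbit/s of effective throughput

seconds = file_size_bytes * 8 / (link_gbps * 1e9)
print(seconds / 3600)         # roughly 2.2 hours
```

A single terabyte-scale result takes over two hours to move at 1 Gbit/s, so without serious investment in inter-site links, sharing data between 73 supercomputing centres becomes the bottleneck Chhabra warns about.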
“What is big in Big Data?”
The Internet of Things and Big Data are steadily emerging as the most sought-after technologies that CEOs and CIOs talk about. So will a traditional technology like HPC help in their growth? “HPC is all about large amounts of data and large amounts of computing. When the term ‘big data’ was coined, traditional HPC experts were like ‘What’s big here? Because we’ve been doing this for many years’. But it has come to a marriage point. The infrastructure is similar and is playing a role at the backend in managing IoT and big data,” he says.
Chhabra uses the example of driverless cars to substantiate this. “When you have a driverless car, there is intercommunication between the cars, traffic signals, GPS, sensors; there is a lot of data going there. So what happens when there is high transactional data? All this data needs to be stored, analyzed and fed. The infrastructure is very similar to HPC,” says Chhabra.
One of the major applications of supercomputers is in nuclear research. The US, the world’s largest supplier of supercomputing technology, has for this reason placed export restrictions on certain countries, most notably China and North Korea. “Thankfully in India we don’t have any such restriction,” he says.