Nithin Mohan is an AI and Supercomputing leader at Hewlett Packard Enterprise (HPE) in the USA, where he leads engineering teams building AI-driven software products used in some of the world’s largest supercomputers. He holds dual master’s degrees: an M.S. in Telecommunications Engineering from the University of Colorado Boulder and an M.S. in Computer Science with a specialization in AI from the Georgia Institute of Technology.
AI and supercomputing together can accelerate human discovery in a way neither can achieve alone. Supercomputers are engines of simulation: they let us model climate, materials, fluids, fusion plasmas, genomes, and complex engineered systems with increasing fidelity. AI is an engine of inference: it extracts patterns from data, builds surrogate models, and guides decisions when the search space is too large for brute force alone. The frontier is now the fusion of the two, where AI does not merely run on supercomputers, but actively reshapes what we simulate, what we measure, and how quickly we learn.
The biggest shift is that discovery is becoming more iterative and targeted. In many domains, scientists used to run large simulations, collect outputs, and analyze results after the fact. Today, AI can close the loop. A model can learn from early simulation outputs and steer the next set of runs toward the most informative experiments, the rare edge cases, or the most uncertain regions of a parameter space. This approach, sometimes called active learning for simulation, turns supercomputing into a smarter search process rather than a purely larger one. It can cut years off research cycles in areas like drug candidate screening, battery materials, catalyst design, and aerodynamic optimization.
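To make that loop concrete, here is a minimal sketch in Python. It uses a Gaussian-process surrogate from scikit-learn and places each new simulation where the surrogate is most uncertain; the `simulate` function is a toy stand-in for a real HPC code, and all parameters are illustrative.

```python
# Minimal active-learning loop for simulation steering: retrain a surrogate
# after each run, then launch the next simulation at the point of highest
# predictive uncertainty instead of sweeping the parameter space blindly.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(x):
    """Toy stand-in for an expensive simulation (e.g., a materials or CFD run)."""
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)  # parameter space to explore
X = np.array([[0.1], [1.9]])                            # two seed runs
y = np.array([simulate(x[0]) for x in X])

for step in range(10):
    gp = GaussianProcessRegressor(alpha=1e-3).fit(X, y)  # alpha absorbs simulator noise
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]   # most uncertain point gets the next run
    X = np.vstack([X, [x_next]])
    y = np.append(y, simulate(x_next[0]))
    print(f"run {step}: sampled x = {x_next[0]:.3f} (max std {std.max():.3f})")
```

In a real campaign, `simulate` would be a batch job on the machine and the acquisition rule would likely balance uncertainty against predicted value, but the structure of the loop is the same.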
AI also enables digital twins, high-fidelity computational replicas of physical systems that update continuously using data. Digital twins are not just engineering tools; they are discovery platforms. They allow researchers to test hypotheses safely, explore counterfactual scenarios, and anticipate behavior before it appears in the real world. At scale, this matters for everything from power grids to manufacturing to public health. When AI and high-performance computing work together, the twin becomes more adaptive, more predictive, and far more useful.
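The phrase "updates continuously using data" hides a concrete mechanism: data assimilation. A minimal sketch, assuming the simplest possible case, is a scalar Kalman filter that keeps a twin's state in sync with noisy sensor readings; the process model and noise levels here are illustrative assumptions, not any production twin's implementation.

```python
# Minimal data-assimilation step for a digital twin: a scalar Kalman filter
# blends the twin's predicted state with each incoming sensor reading,
# weighting them by their respective uncertainties.
def kalman_update(x_est, p_est, z, process_var=0.01, meas_var=0.25):
    p_pred = p_est + process_var         # predict: uncertainty grows between readings
    gain = p_pred / (p_pred + meas_var)  # how much to trust the new measurement
    x_new = x_est + gain * (z - x_est)   # correct the state toward the data
    p_new = (1 - gain) * p_pred
    return x_new, p_new

state, var = 20.0, 1.0                    # initial belief (say, a temperature in C)
for reading in [20.4, 21.1, 20.8, 21.5]:  # streaming sensor data
    state, var = kalman_update(state, var, reading)
    print(f"twin state: {state:.2f} (variance {var:.3f})")
```

A production twin replaces the scalar filter with ensemble or variational assimilation over millions of state variables, which is exactly where high-performance computing comes in.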
For frontier AI models, supercomputing itself is becoming a strategic resource. Training and operating state-of-the-art models are increasingly constrained by compute, energy, and system efficiency. Exascale-class infrastructure, combined with optimized software stacks, is what makes it possible to train large models faster, experiment with new architectures, and run high-throughput inference for science and industry. The next wave of progress will come not only from bigger models, but from models that integrate physics, incorporate uncertainty, and can interact with simulations and experiments as partners in discovery.
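What "integrate physics" can mean in practice: add a term to the training loss that penalizes predictions violating a governing equation. The sketch below is an illustration rather than any particular model's recipe; it checks a simple decay law, du/dt = -k·u, with finite differences, and the equation, constants, and weighting are all assumptions chosen for clarity.

```python
# Sketch of a physics-informed loss: the model is penalized both for missing
# the observations and for violating a governing equation (here du/dt = -k*u,
# checked with a finite-difference derivative).
import numpy as np

def physics_informed_loss(u_pred, u_obs, t, k=0.5, weight=1.0):
    data_loss = np.mean((u_pred - u_obs) ** 2)  # fit the measurements
    du_dt = np.gradient(u_pred, t)              # finite-difference time derivative
    residual = du_dt + k * u_pred               # ~0 wherever the physics holds
    physics_loss = np.mean(residual ** 2)       # penalize violations
    return data_loss + weight * physics_loss

t = np.linspace(0.0, 4.0, 50)
u_true = np.exp(-0.5 * t)                        # exact solution of the decay law
u_obs = u_true + 0.02 * np.random.randn(t.size)  # noisy observations

print(physics_informed_loss(u_true, u_obs, t))           # small: physics satisfied
print(physics_informed_loss(np.ones_like(t), u_obs, t))  # large: physics violated
```

The same pattern scales up to PDE residuals evaluated by automatic differentiation inside large models.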
So what can India do to participate at the frontier?
First, treat HPC-AI convergence as a national capability, not a collection of projects. India should invest in flagship exascale-adjacent systems and, equally important, the software and talent pipelines that make them useful. The hardware is only the beginning. The frontier is built by compilers, communication libraries, model-parallel training stacks, scalable storage, and robust AIOps that can support scientific and industrial users.
Second, focus on "AI for science" and "science for AI" programs that create shared benchmarks and shared infrastructure. India can lead in domains where it has scale and urgency: monsoon and climate prediction, energy and grid modernization, health and genomics, advanced manufacturing, and space. These problems demand both simulation and learning, and success would create globally relevant models and datasets.
Third, build open innovation pathways with national labs, universities, and industry. The fastest progress happens when systems engineers, AI researchers, and domain scientists collaborate early, not when integration is an afterthought. Incentivize joint appointments, shared research platforms, and challenge programs that reward reproducible, scalable solutions.
Finally, commit to operational excellence and reliability as part of the frontier. Exascale and frontier AI are not just about peak performance; they are about sustained throughput, usability, and time-to-discovery. The countries that win will be those that can keep these complex systems productive for researchers every day.
The opportunity is clear: the convergence of AI and supercomputing can compress the distance between hypothesis and insight. For India, participating at the frontier means investing not only in machines, but in the ecosystem that turns compute into discovery.