Beyond The Hype: Krishna Kumar On The Soul, Safety, And The Future Of The AI Revolution

As AI accelerates at an unprecedented pace, Krishna Kumar’s perspective offers a grounded reminder that intelligence is more than algorithms and data.

Krishna Kumar

As artificial intelligence rapidly reshapes industries, societies, and everyday life, conversations around its impact are often dominated by extremes. On one side is unchecked optimism, and on the other is fear of mass job loss and human irrelevance. Cutting through this noise is Krishna Kumar, a seasoned technologist and strategist with decades of experience across public infrastructure and advanced technologies.

Honored with the title of Distinguished Technologist for the State of Mexico and based in Austin, Krishna Kumar draws on his technology background with HP, HPE, IRCTC, and Indian Railways to offer a rare perspective connecting the worlds of governance, technology, and human systems.

In this candid conversation with Rohit Sharma, he shares grounded insights on AI’s limitations, opportunities, ethical risks, and why the future of intelligence must remain deeply human.

Rohit Sharma: You have written extensively on the relationship between AI and humans. Based on your research, is the technology capable of replacing humans?

Krishna Kumar: The fear primarily comes from a misunderstanding of intelligence itself. AI is powerful at processing what is already known. It finds patterns in massive datasets far beyond human capability. But intelligence is not just computation. It includes empathy, wisdom, moral judgment, and lived experience. AI does not possess consciousness or intent. The danger is not replacement, but misuse. AI should augment human capability, not replace human agency. The steering wheel of AI must always stay in human hands.

Rohit Sharma: We hear about layoffs across industries, yet you predict a surge in new jobs. How do these two realities coexist?

Krishna Kumar: This is what I call the “AI Job Paradox”. Some roles will disappear, especially those that are repetitive and rule-based. But the overall picture is different. Studies suggest AI could create nearly 78 million net new jobs globally by 2030. These roles will be different. We will see demand for AI generalists, people who understand both machine logic and human context. The future belongs to those who complement machines with emotional intelligence, ethics, and critical thinking.

Rohit Sharma: You often mention the risk of a Great AI Divide. What does that mean in practical terms?

Krishna Kumar: There is a real risk that AI power remains concentrated in a few corporations and wealthy nations. This creates a form of digital colonialism where the Global South becomes a consumer rather than a creator. Regions like parts of Africa and parts of South America could be left behind. However, there is also opportunity. Countries like Kenya are using AI to leapfrog traditional infrastructure, much like they did with mobile banking. Inclusive policy and local innovation are the key.

Rohit Sharma: Can AI actually help reduce inequality, or will it worsen it?

Krishna Kumar: It can do either. That depends entirely on governance. AI applied responsibly can transform agriculture, healthcare diagnostics, access to education, and public services. But without inclusive frameworks, it will widen gaps. The technology itself is neutral. The outcomes depend on who controls it and how it is deployed.

Rohit Sharma: One fascinating area you discuss is sensory AI, machines that can smell or taste. Are we crossing into human territory?

Krishna Kumar: We are expanding perception, not consciousness. AI systems can now detect chemical signatures with extreme precision. They can identify spoiled food, gas leaks, or even medical conditions. But detection is not experience. A machine does not associate a smell with memory or emotion. It does not feel nostalgia or pleasure. We must not confuse advanced sensing with genuine experience.

Rohit Sharma: AI hallucinations have become a serious concern. How dangerous are confident but incorrect AI responses?

Krishna Kumar: They are very dangerous, especially in fields like healthcare, law, and governance. These hallucinations are not bugs. They are a byproduct of how large language models predict responses. The issue is that they sound confident even when wrong. The solution lies in Responsible AI. This includes human oversight, transparency, clear disclosures, and moving toward smaller, domain-specific models that are easier to audit and control.

Rohit Sharma: Do you believe AI can ever truly be creative or artistic?

Krishna Kumar: AI can replicate styles, patterns, and techniques. But creativity comes from struggle, limitation, and the desire to communicate meaning. A machine does not suffer or aspire. When art moves us, it is because we sense another human consciousness behind it. AI can be a powerful tool, like a brush or an instrument, but the soul of creativity remains human.

Rohit Sharma: You have described the current moment as a broadband phase for AI. What does that mean?

Krishna Kumar: Just as dial-up internet could not support streaming, today’s classical computing is reaching its limits with AI. The energy and processing demands are enormous. We are approaching what I call the Quantum Inflexion Point. Between 2025 and 2030, quantum computing will begin solving optimisation problems in minutes that would take today’s systems thousands of years. This will unlock breakthroughs in drug discovery, climate modelling, and materials science.

Rohit Sharma: Language models are improving rapidly. Why do you believe cultural intelligence is the next frontier?

Krishna Kumar: Because intelligence is not universal in how it manifests. AI today is heavily biased toward English and Western frameworks. Translation alone is not enough. Cultural intelligence means understanding how societies think, value time, interpret responsibility, and solve problems. If AI ignores cultural nuance, it fails entire communities. Diversity in data and perspectives actually makes AI more accurate and resilient.

Rohit Sharma: Should AI be globally regulated, or is national governance enough?

Krishna Kumar: National policies are necessary but not sufficient. AI does not respect borders. We need a global AI governing framework, similar to how we manage aviation or nuclear safety. This body should focus on ethics, accountability, and human-centred outcomes. Governance must be proactive, not reactive. The future should not be about data-driven dominance of Artificial Intelligence but about Absolute Intelligence that serves humanity.

Conclusion

As AI accelerates at an unprecedented pace, Krishna Kumar offers a grounded reminder that intelligence is more than algorithms and data. The real challenge is not whether machines will become powerful, but whether humans will remain wise enough to guide them responsibly.

The future of AI, as he emphasises, is a choice. A choice between unchecked dominance and collective stewardship. Between automation without empathy and innovation guided by ethics. If governed thoughtfully, AI has the potential not just to transform industries but to elevate humanity itself.
