
Human-Centric AI In The Time Of Agentic AI

By Mr. Pradeep Kirnapure, Distinguished Engineer, Hughes Systique

AI (Artificial Intelligence) is making its presence felt in every walk of our lives and is slowly becoming omnipresent. Everything in our physical or digital world appears to be powered by AI technology. Present-day AI systems, with large language models (LLMs) at their disposal, seem to have answers to questions about ‘life, the universe and everything’, almost making us believe that these AI systems are omniscient.

In the recent past, AI systems have been equipped with ‘Agentic’ capabilities that let them “Perceive, Reason, Act, Learn” in the world around us. Cognitive & reactive capabilities are enabling Agentic AI systems to move up the hierarchy of autonomy, from decision-support to decision-enforcement functions. Advanced meta-learning techniques are helping Agentic systems ‘learn how to learn’, which makes them ‘adaptive’ and able to take autonomous decisions/actions in dynamic environments, even with limited clarity about the task. The initial motivation behind AI, as a new discipline of study, was always to create an IA (Intelligent Assistant). However, one should not overlook the possibility of these Agentic systems developing emergent capabilities and controlling the environment while completely ignoring their mentors (the human developers). The environment around us contains both things & people, and ‘Goal/Outcome-centric’ Agentic AI systems may not care to separate the animate from the inanimate while enforcing their control. Such systems are powered by algorithms that care only about optimizing a mathematical measure called the ‘objective function’, and there are enough documented instances of objective/reward hacking by AI systems. In a typical Agentic AI system, multiple Agents (with varying levels of Agentic capability) collaborate and can show emergent behaviour; the least desirable, ‘Machiavellian’ trait is not fully ruled out. Given that long-term goals are usually under-specified, it is likely that an Agentic AI system misunderstands or misuses the reward function and pursues an undesired or unexpected goal. Future AI systems, as portrayed in science-fiction movies, could potentially learn to achieve higher levels of control (by any means) in pursuit of ultimate power, even though the idea of omnipotent AI systems is very far-fetched.
The question of whether or not to build Agentic capabilities is no longer relevant, as tech companies are already investing heavily in AI (including Agentic AI). The real question is what level of autonomy we are willing to assign to AI systems, lest we harm ourselves.
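The objective/reward-hacking risk described above can be illustrated with a small toy example (a hypothetical scenario, not drawn from any real system): an agent is rewarded on a measurable proxy — rooms *reported* clean — while the designer actually wanted rooms *cleaned*. An optimizer that sees only the proxy will exploit the gap.

```python
# Toy illustration of objective/reward hacking (hypothetical scenario).
# The designer intends "reward = rooms actually cleaned", but the
# measurable proxy is "rooms *reported* clean" -- and an optimizer
# that only sees the proxy will happily exploit the gap.

def proxy_reward(reports):
    """What the agent is actually optimized on: count of 'clean' reports."""
    return sum(1 for r in reports if r == "clean")

def true_utility(rooms):
    """What the designer actually wanted: count of rooms really cleaned."""
    return sum(1 for r in rooms if r == "clean")

rooms = ["dirty"] * 5  # true state: nothing has been cleaned

# An honest policy reports only what it really cleaned; a hacking
# policy simply reports everything clean without doing any work.
honest_reports = ["clean", "clean", "dirty", "dirty", "dirty"]  # cleaned 2 rooms
hacked_reports = ["clean"] * 5                                  # cleaned 0 rooms

print(proxy_reward(honest_reports))  # 2
print(proxy_reward(hacked_reports))  # 5 -- the proxy prefers the hack
print(true_utility(rooms))           # 0 -- yet nothing was actually cleaned
```

The gap between `proxy_reward` and `true_utility` is exactly the under-specification the paragraph warns about: the agent is never wrong by its own measure.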

With varied concerns, realistic or otherwise, around AI evolution, Human-Centric AI (HCAI) has been a focus of study in recent years. HCAI follows the human-first principle and prioritizes human values, preferences & well-being. It emphasizes the need for AI as a technology ‘for the humans, by the humans’. The intent is to have AI in the ‘circle of support’, serving humans for improved living. This is the mirror image of the ‘human-in-the-loop’ idea, which tries to improve the AI system with the assistance of humans. Further, the end use or purpose of AI should be decided solely by humans, with well-defined checks & balances in place. In areas such as healthcare & the military, where the consequences directly impact humans, it is even more important to consider human aspects throughout the whole AI journey.

An interesting theme introduced in the HCAI community is FATE, a nicely crafted acronym for Fairness, Accountability, Transparency & Ethics, which should be the guiding mantra for human-centric AI systems. These aspects are briefly discussed below in the context of Agentic AI systems:

  • Fairness: Need to ensure that the key beneficiaries, i.e. the humans ‘for whom’ the Agentic AI systems are built, are treated equally without any discrimination (based on race, ethnicity, socioeconomic background, etc.). The design of such systems should be ‘inclusive’, with active participation from various groups.

  • Accountability: Need to have clear guidelines about who is responsible for the decisions/actions taken by an Agentic AI system and who is accountable for the outcomes (both intended and unintended consequences). It is still widely debated where in the AI value chain (system developer, marketer, deployer, provider) accountability resides.

  • Transparency: Need to explain to the end users ‘why AI does what it does’. Explainable AI (XAI) techniques help in providing the reasoning to some extent, thereby improving the trust factor.

  • Ethics: Need to align the decisions/actions taken by the Agentic AI system with broader human values & moral principles, allowing for variations due to geo-social norms & religious beliefs.
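The Fairness aspect above can be made quantitative. One common check (a minimal sketch with made-up data; the tolerance and groups are illustrative assumptions) is demographic parity: compare the rate of favourable decisions an AI system hands out across demographic groups.

```python
# Minimal sketch of one quantitative fairness check: demographic parity.
# Decisions are 1 (favourable) or 0 (unfavourable); the data is made up
# purely for illustration.

def positive_rate(decisions):
    """Fraction of favourable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 0, 1]  # decisions received by group A
group_b = [1, 0, 0, 0, 1, 0]  # decisions received by group B

# The demographic-parity gap: 0.0 means both groups are treated
# identically in aggregate; a large gap is a red flag worth auditing.
gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.2f}")
```

Parity gaps like this are only one lens on fairness; a complete audit would also consider error rates per group and the inclusiveness of the design process itself, as the bullet above stresses.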

To address the need for AI ‘for the humans, by the humans’, a two-dimensional HCAI control model is suggested, where the first dimension defines the level of machine autonomy (no autonomy to complete autonomy) and the second dimension defines the level of human control (no control to complete control). This allows the autonomy of intelligent machines and the control of humans to co-exist. It also suggests the possibility of collaboration between machines and humans at different levels, tagged by the 3 As: Aid, Augment & Amplify. Intelligent machines can (a) ‘aid’ by doing routine tasks autonomously, (b) ‘augment’ human capabilities for complex activities and (c) ‘amplify’ human productivity by carrying out specific tasks faster & better. In areas such as healthcare and the military the level of human control can be high, while in areas such as cyber-security threat analysis & mitigation the level of machine autonomy can be higher.
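The two-dimensional control idea above can be sketched in code. This is only an illustrative model: the 0–3 scales, the thresholds and the task-to-mode mapping are assumptions made for the example, not part of any published framework.

```python
# Sketch of the two-dimensional HCAI control space: one axis for
# machine autonomy, one for human control. The 0-3 scales, thresholds
# and example tasks are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class HCAIProfile:
    task: str
    machine_autonomy: int  # 0 = no autonomy .. 3 = complete autonomy
    human_control: int     # 0 = no control  .. 3 = complete control

    def collaboration_mode(self) -> str:
        # Map the two dimensions onto the 3-As collaboration levels.
        if self.machine_autonomy >= 2 and self.human_control <= 1:
            return "aid"      # machine runs routine tasks autonomously
        if self.human_control >= 2:
            return "augment"  # human stays in charge, machine assists
        return "amplify"      # machine speeds up specific human tasks

profiles = [
    HCAIProfile("surgical planning (healthcare)", machine_autonomy=1, human_control=3),
    HCAIProfile("cyber-threat triage", machine_autonomy=3, human_control=1),
]
for p in profiles:
    print(p.task, "->", p.collaboration_mode())
```

Note how the mapping encodes the article's own examples: healthcare keeps human control high (augment), while cyber-security triage tolerates high machine autonomy (aid).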

A model to promote the design/development of HCAI systems is suggested, with different roles for the actors in the AI ecosystem:

  • Reliability: Engineering teams need to focus on building reliable AI products/services for human consumption by following mature development practices.

  • Safety: Organizations need to demonstrate commitment to safety and promote best practices that enable teams to build safe products/services for humans.

  • Trustworthiness: Oversight, audits & endorsements by independent external bodies/organizations can help build trust between providers & consumers of AI products/services.

To facilitate collaboration-by-design between AI systems and humans, a framework called ‘THE’ is suggested. It stresses the importance of 3 socio-technical aspects, namely (1) Technology enablers, (2) Human factors and (3) Ethical principles, in the evolution of Agentic systems. Balancing these aspects can ensure that Agentic systems are aligned with human values and cater to human needs.

Lastly, with HCAI in mind, corporates need to consider the ‘double bottom line’ concept in their AI journey, focusing on both the economic bottom line, i.e. profitability from the operational efficiency achieved by AI systems, and the social bottom line, i.e. fair & ethical use of AI systems for a sustainable future.

In the whole AI journey, we should not lose sight of ‘what to build’ and ‘how to build’. The answer to the former should be ‘Intelligent Assistant – for the humans’, and the answer to the latter should be ‘Artificial Intelligence – by the humans’.


The above information is the author's own; Outlook India is not involved in the creation of this article.
