Sector regulator TRAI on Thursday mooted the creation of an independent statutory authority "immediately" to ensure the development of responsible AI (Artificial Intelligence) and the regulation of its use cases in India.
The authority, designated the 'Artificial Intelligence and Data Authority of India' (AIDAI), should be tasked with framing regulations on various aspects of AI, including its responsible use, TRAI said.
The views came as part of a comprehensive set of recommendations on 'Leveraging Artificial Intelligence and Big Data in Telecommunication Sector'.
The regulator's latest move assumes significance against the backdrop of a raging global debate on the benefits and risks of AI and generative AI. Many nations, in fact, are rushing to work out norms to govern the use of this new-age technology, which holds the promise of sweeping transformation while redrawing the contours of many industries.
TRAI suggested that the regulatory framework should comprise an independent statutory authority and a Multi Stakeholder Body (MSB) that will act as an advisory body to the proposed statutory authority.
Other suggested tenets of the framework include "Categorisation of the AI use cases based on their risk and regulating them according to broad principles of Responsible AI."
"For ensuring development of responsible Artificial Intelligence (AI) in India, there is an urgent need to adopt a regulatory framework by the Government that should be applicable across sectors," Telecom Regulatory Authority of India (TRAI) said.
Among the functions that could be assigned to AIDAI are defining the principles of responsible AI and determining their applicability to use cases based on risk assessment.
Its regulation-making functions would also include ensuring that the principles of responsible AI apply at each phase of the AI lifecycle, namely design, development, validation, deployment, monitoring and refinement.
Other dimensions include developing a model AI Governance Framework to guide organisations on deploying AI responsibly, and developing model Ethical Codes for adoption by public and private entities across sectors.