AI Governance and Guidelines Development in India
Under the guidance of the Advisory Group led by the Principal Scientific Advisor, a Subcommittee on ‘AI Governance and Guidelines Development’ has been formed to provide actionable recommendations for regulating AI in India.
About AI Governance:
AI governance involves the frameworks, standards, and safeguards needed to ensure that AI systems are safe, ethical, and respectful of human rights.
Key Issues Identified:
- Deepfakes & Malicious Content: Existing legal frameworks do not adequately address harmful AI-generated content.
- Cybersecurity: Existing laws on AI-related cybercrimes require strengthening to keep pace with emerging threats.
- Intellectual Property Rights: AI's use of copyrighted data raises concerns regarding infringement and accountability.
- AI Bias & Discrimination: AI systems may perpetuate biases, posing challenges in identifying and correcting discrimination.
Key Recommendations:
- Inter-Ministerial AI Coordination Committee: To ensure coordinated AI governance across ministries and regulators (e.g., MeitY, NITI Aayog, RBI, SEBI).
- Technical Secretariat: A body to provide technical support to the AI Coordination Committee.
- Techno-Legal Measures: Implement technologies such as watermarking and content provenance to combat deepfakes (see the provenance sketch after this list).
- AI Incident Database: Develop a database to document AI-related risks and harms, encouraging voluntary reporting from both the public and private sectors (a sample record format is sketched below).
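Content provenance in practice means binding tamper-evident metadata to a piece of media so that downstream platforms can verify where it came from and whether it has been altered. The sketch below illustrates the idea with a simple HMAC-signed manifest; the signing key, field names, and generator label are illustrative assumptions, not part of the Subcommittee's recommendations or of any specific standard such as C2PA.

```python
# A minimal sketch of content-provenance tagging, assuming a simple
# HMAC-signed manifest (illustrative only, not a prescribed standard).
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the content issuer

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build provenance metadata for a piece of media and sign it."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # fingerprint of the content
        "generator": generator,                             # tool that produced it
        "ai_generated": True,                                # disclosure flag
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the media is unaltered and the signature is genuine."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
            and hmac.compare_digest(expected, manifest["signature"]))

media = b"...synthetic image bytes..."                    # placeholder content
tag = make_manifest(media, generator="example-image-model")
print(verify_manifest(media, tag))          # True: provenance intact
print(verify_manifest(media + b"x", tag))   # False: content altered after signing
```

A platform receiving tagged media could run the same verification step before distribution, flagging content whose manifest fails the check.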
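For the incident database, much of the value lies in capturing voluntary reports in a consistent structure so that risks can be compared across sectors. Below is a minimal sketch of what one reported record could look like; every field name here is an illustrative assumption rather than a prescribed schema.

```python
# A minimal sketch of a voluntarily reported AI incident record;
# all field names are illustrative assumptions, not a mandated format.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AIIncidentReport:
    incident_id: str            # unique identifier assigned on submission
    reported_on: date           # date the incident was reported
    sector: str                 # e.g. "finance", "healthcare", "social media"
    system_description: str     # the AI system involved, in the reporter's words
    harm_type: str              # e.g. "deepfake", "bias", "security breach"
    severity: str               # e.g. "low" / "medium" / "high"
    reporter_type: str          # "public sector" or "private sector"
    summary: str                # free-text account of what happened
    mitigations: list[str] = field(default_factory=list)  # remedial steps taken

report = AIIncidentReport(
    incident_id="INC-0001",
    reported_on=date(2024, 6, 1),
    sector="finance",
    system_description="credit-scoring model",
    harm_type="bias",
    severity="medium",
    reporter_type="private sector",
    summary="Model systematically scored applicants from one region lower.",
    mitigations=["model retrained", "affected decisions reviewed"],
)
print(json.dumps(asdict(report), default=str, indent=2))  # serialised submission
```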
These recommendations aim to foster ethical AI governance, addressing the identified challenges while maximizing AI’s societal benefits.