AI regulation in India: The Indian government has introduced a stringent policy requiring social media platforms to remove content flagged by authorities within three hours. The directive, issued by the Ministry of Electronics and Information Technology, takes effect on February 20, 2026, sharply reducing the previous 36-hour window.
The updated rules focus on AI-generated and synthetic content, including deepfakes, across platforms such as X and Instagram. They mandate clear labeling of AI-generated content and bring it within the scope of existing IT laws used to identify unlawful material.
Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, formally define AI-generated and synthetic content, requiring intermediaries to label such materials clearly. Platforms must also embed metadata or provenance identifiers to trace the origin of content, where feasible.
The notification also directs platforms to bar unlawful AI content and urges them to deploy automated tools to detect deceptive or harmful AI material. The move reflects India's commitment to tackling the challenges AI poses in digital media.
Published in SouthAsianDesk, February 11, 2026
Follow SouthAsianDesk on X, Instagram and Facebook for insights on business and current affairs from across South Asia.