Current Affairs

AI Rivalry Intensifies: Anthropic Cuts OpenAI’s Claude Access Before GPT-5 Launch

The AI race took a dramatic turn after Anthropic cut off OpenAI’s access to its Claude models, citing a violation of its terms of service: OpenAI had allegedly used Claude Code while developing GPT-5, slated to launch in August 2025. Anthropic spokesperson Christopher Nulty told Wired that OpenAI engineers used Claude’s coding tools to benchmark and develop GPT-5, violating terms that prohibit using the service to build a competing product. The dispute resonates in India, whose AI industry is projected to reach $7.8 billion (NASSCOM, 2025); it points to growing rivalry among AI giants and raises the question of collaboration versus competition.
Claude Code is popular among India’s 1.5 million developers (AICTE, 2024), who use it for tasks such as scaffolding apps and debugging code (The Indian Express). OpenAI reportedly used Claude’s APIs to refine and benchmark GPT-5’s coding, creative writing, and safety performance, including its responses to sensitive prompts such as CSAM. Anthropic, backed by Amazon and Google, permits benchmarking under its terms but not using its models to build or train a competitor, which is what OpenAI stands accused of. According to the Times of India, OpenAI spokesperson Hannah Wong defended the practice as standard industry conduct for safety evaluation and progress, adding that OpenAI’s own APIs remain open to Anthropic.
This is not Anthropic’s first defensive move. It cut off Windsurf’s access in July 2025 when rumors spread that OpenAI would acquire the company, with Chief Science Officer Jared Kaplan telling The Verge that it would be strange to sell Claude to OpenAI. That incident, together with new Claude Code rate limits introduced in response to the scale of its usage (Anthropic’s X post), signals growing protectiveness over proprietary technology. India’s AI startups, which raised $1.2 billion in funding in 2024 (Tracxn), face similar pressure to safeguard their innovation while still encouraging collaboration.
Users on X, such as @Abhinav05655123, highlight the divide: some applaud Anthropic’s stance, while others, such as @Techmeme, note that API access remains available for safety evaluations. As India targets an AI governance framework by 2026 (MeitY), and more broadly, this tussle underscores the need for ethical standards that balance competition, innovation, and confidentiality. Anthropic’s decision could reshape the dynamics of AI development and disrupt India’s tech ecosystem, given that 32% of the world’s enterprise LLM usage reportedly runs on Claude (Analytics Insight).

Disclaimer

The information presented in this blog is derived from publicly available sources, including any cited references, and is intended for general use. While we strive to cite credible sources whenever possible, Web Techneeq – Website Developer in Mumbai does not guarantee the accuracy of the information provided. This article is intended solely for general informational purposes; it does not constitute legal advice and is not intended to serve as such. If any individual makes decisions based on the information in this article without independently verifying the facts, we explicitly disclaim any liability that may arise as a result. We recommend that readers seek separate guidance regarding any specific information provided here.