In a significant move, 40 leading researchers from renowned tech giants such as Google, OpenAI, DeepMind, and Meta have jointly authored a position paper. They highlight a growing unease about their limited ability to fully grasp or predict the decision-making processes of advanced AI models.
The researchers are urging for more concentrated efforts in studying ‘chains-of-thought’ to shed light on the intricacies of AI reasoning.
This call resonates with broader industry concerns surrounding AI transparency, explainability, and control as AI systems grow increasingly autonomous and complex.
Understanding AI’s decision-making is critical as these systems are deployed in various sectors, from healthcare to finance. The ‘chains-of-thought’ approach refers to a method where the intermediate reasoning steps of AI systems are examined more meticulously.
Although AI models have shown remarkable capabilities, this transparency is essential for ensuring they operate reliably and ethically.
The researchers’ appeal emphasises the need for ongoing scrutiny and deeper investigation to maintain control over these powerful technologies.
Their appeal also reflects tension in contemporary AI development: the trade-off between performance and interpretability.
Many of the most powerful models, such as large language transformers, derive their capabilities from sheer scale and dense layers of abstraction, which in turn make their internal logic almost inscrutable to human observers.
As a result, even experts can struggle to determine why a model arrives at a specific output or how it weighs various inputs. This ‘black box’ nature is increasingly viewed as a liability, especially in contexts where accountability and fairness are paramount.
Moreover, the researchers’ push for studying ‘chains-of-thought’ aligns with emerging regulatory frameworks that demand higher standards of algorithmic transparency.
Upcoming policies in the EU and other jurisdictions may soon require organisations to justify AI decisions in plain terms, particularly when these decisions impact individual rights or safety.
By making AI reasoning more interpretable, chains-of-thought methodologies could serve as a technical bridge between innovation and regulation, fostering trust while supporting responsible deployment.
Key Data and Insights
- Industry Alarm:
Over 65% of AI experts in a 2025 global survey cite interpretability as the #1 bottleneck to responsible AI deployment (Nature, 2025). - Regulatory Response:
More than 30 countries are now drafting or updating AI regulations to mandate transparency and explainable outputs for any AI system used in areas such as lending, insurance, hiring, healthcare, and legal proceedings (European Commission). - Core Insights and Industry Data
- Concern among top experts: In 2025, a coalition of 40 leading researchers from Google, OpenAI, DeepMind, and Meta jointly emphasised the urgent need for greater transparency in advanced AI models, especially regarding “chains-of-thought” the intermediate steps an AI takes to reach its conclusions. Over 65% of surveyed AI experts now cite model interpretability as the top obstacle to responsible AI deployment.
- Transparency gap in the enterprise: 88% of large organisations using advanced AI admit they cannot reliably explain model outputs to users, raising compliance and trust issues.
- Regulatory momentum: Over 30 countries, including those in the EU and UK, are moving to mandate explainable AI for high-stakes domains like lending, insurance, healthcare, and law.
- Sector urgency: 79% of firms in finance and healthcare say explainability is “critical” for regulation, but only 32% believe their systems provide adequate transparency.
‘Chains-of-Thought’: Why This Matters
- Transparency for Trust:
Examining the intermediate reasoning steps (“chains-of-thought”) in AI makes it easier for regulators, developers, and the public to understand and trust automated decisions. - Performance vs. Interpretability Dilemma:
Large transformers and generative models owe their power to abstract, intertwined computations – yet these very strengths make interpretability nearly impossible without deliberate research into their reasoning chains. - Regulatory Momentum:
New AI laws in the EU and elsewhere are set to require end-user explanations for high-stakes AI decisions, meaning chains-of-thought tools and techniques will become standard before deployment.
Looking Ahead
- Strong demand for chains-of-thought research signifies a shift toward more rigorous, transparent, and accountable AI especially as foundation models expand into every sector.
- Forward-looking businesses are already investing in model interpretability to future-proof against coming regulations and to build user trust.
References
- Nature: Chains of Thought—How Can We Make AI Reasoning Transparent? (2025)
- Policy Pros: AI Transparency and Regulatory Best Practices
- Google Research Blog: Chains-of-Thought Reasoning in Large Language Models
- European Commission: The European Approach to Artificial Intelligence
Reference Links
For direct access to recent studies, position papers, and policy frameworks on chains-of-thought and explainable AI:
- A New and Fragile Opportunity for AI Safety (arXiv, July 2025): position paper from researchers at Google, DeepMind, OpenAI, Meta, and others
- Chain of Thought Monitorability: Full PDF version (July 2025)
- TechCrunch coverage: Research leaders urge tech industry to monitor AI ‘thoughts’ (July 2025)
- IBM: What is chain-of-thought (CoT) prompting?
- GigaSpaces: Exploring Chain of Thought Prompting & Explainable AI
- MarketsandMarkets: Global Explainable AI Market Trends 2025