OpenAI Forms Science Team Under Kevin Weil to Speed Research With GPT-5

SAN FRANCISCO — OpenAI announced the formation of “OpenAI for Science,” a dedicated internal team led by Vice President Kevin Weil and aimed at adapting the company’s GPT-5 architecture to accelerate scientific research, according to a report shared with MIT Technology Review and Axios. The initiative, launched in October 2025, gained renewed attention this week after the company released usage data showing that, as of January 2026, nearly 1.3 million scientists use ChatGPT weekly to discuss advanced research topics.

Weekly messages on advanced science and mathematics topics surged 47% year-over-year, climbing from 5.7 million to 8.4 million between January and December 2025. The numbers reflect broader adoption across physics, chemistry, biology, and mathematics, with researchers primarily using the AI for literature synthesis, data interpretation, and experimental planning.

Adoption Metrics Drive Strategic Push

The team’s formation responds to what OpenAI characterizes as a critical shift in researcher behavior. Weil told MIT Technology Review that scientists increasingly rely on GPT models for tasks ranging from drafting scientific text to debugging code and planning experiments. “AI is increasingly being used as a scientific collaborator, and we’re seeing its impact grow in real research settings,” Weil stated.

OpenAI’s internal analysis of anonymized ChatGPT conversations revealed that most researchers use AI for writing and communication tasks, while fewer deploy it for rigorous calculations and analysis. The gap suggests room for deeper integration into the research process itself, according to the company’s 20-page report.

GPT-5.2 Targets Research Precision

The initiative centers on GPT-5.2, a specialized variant designed for scientific applications. According to OpenAI’s technical documentation, the model achieved 92% accuracy on PhD-level knowledge benchmarks, a substantial increase from GPT-4’s 56% and GPT-5’s base 78.4%.

The Science Edition introduces “Explainable Traces,” allowing researchers to click on AI assertions and view the logic path and source materials used. This transparency feature addresses longstanding concerns about the “black box” nature of neural networks in scientific settings.

GPT-5.2 employs recursive verification, cross-referencing outputs against verified scientific databases before generating responses. The system flags ambiguity when confidence levels drop, prompting researchers for clarification or suggesting experiments to resolve uncertainty.
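The confidence-gated pattern described above can be illustrated with a minimal sketch. This is a hypothetical toy illustration of the general idea, not OpenAI's actual implementation; the function names, threshold, and toy database are all assumptions for demonstration only.

```python
# Illustrative sketch of confidence-gated claim verification.
# All names and values here are hypothetical; OpenAI has not
# published GPT-5.2's recursive-verification implementation.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for flagging ambiguity


def verify_claim(claim: str, reference_db: dict) -> dict:
    """Cross-reference a draft claim against a reference database.

    Returns 'verified' when the database supports the claim with
    high confidence, otherwise 'flagged', prompting the researcher
    for clarification or a resolving experiment.
    """
    confidence = reference_db.get(claim, 0.0)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"claim": claim, "status": "verified",
                "confidence": confidence}
    return {"claim": claim, "status": "flagged",
            "confidence": confidence,
            "action": "ask researcher to clarify or suggest an experiment"}


# Toy database mapping claims to support scores
db = {"water boils at 100 C at 1 atm": 0.99}

print(verify_claim("water boils at 100 C at 1 atm", db)["status"])   # verified
print(verify_claim("compound X is stable above 500 C", db)["status"])  # flagged
```

The key design point, as the article describes it, is that low confidence does not produce a confident-sounding answer; it produces an explicit flag that routes the question back to the human researcher.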

Weil Tempers Expectations on Autonomous Discovery

Despite performance gains, Weil emphasized the model’s limitations in an interview with MIT Technology Review. “I don’t think models are there yet” for making groundbreaking new discoveries independently, he said. The comment follows an incident in which OpenAI executives deleted posts falsely claiming GPT-5 had solved open mathematics problems.

Weil now frames the technology as a “sparring partner” rather than an oracle. “Our mission is to accelerate science. And I don’t think the bar for the acceleration of science is, like, Einstein-level reimagining of an entire field,” he told the publication.

Mathematician Terence Tao estimated that only one to two percent of open problems can currently be solved by AI with minimal human help. OpenAI aims to develop an autonomous research agent by 2028, though current systems require deep scientist involvement.

2026 Positioned as Breakthrough Year

Weil predicted that “2026 will be for science what 2025 was for software engineering,” referencing the rapid adoption of AI coding assistants. He argued that scientists who are not using AI heavily within a year “will be missing an opportunity to increase the quality and pace of your thinking.”

The comparison draws on 2025 trends, when AI coding tools shifted from early-adopter technology to industry standard. By the end of 2025, Weil said, not using AI for coding meant falling behind.

Model Handles Volume, Not Novelty

The system’s primary advantage lies in managing research volume rather than generating novel insights. GPT-5 can ingest thousands of papers, identify conflicting data, and highlight gaps in current understanding. It operates continuously and can handle ten parallel queries, addressing what OpenAI describes as the “discovery phase” bottleneck.

In mathematics and theoretical physics, GPT-5.2 generates intermediate proof steps, offering scaffolding for formal theorem development. For experimental chemistry, the model simulates hypothesis testing by modeling interactions based on known physical laws, potentially saving research and development funding by predicting reaction viability before laboratory testing.

Leadership Structure Reflects Product Focus

Weil previously held leadership positions at Instagram and Twitter before joining OpenAI. He studied physics, combining technical training with product experience in his current role. The Science team operates as both a product development unit refining GPT models for research use cases and a partnership program connecting researchers with AI tools.

Most scientists featured in November 2025 case studies approached OpenAI independently rather than through company outreach, Weil noted. The team emphasizes collaborative workflows where human researchers remain “deeply involved” reviewing intermediate results.

Biosafety Concerns Persist

OpenAI CEO Sam Altman expressed nervousness about AI biosecurity risks in 2026, acknowledging concerns that advanced models could accelerate dangerous research. The company has not publicly detailed specific safeguards beyond the recursive verification system built into GPT-5.2.

The Science Edition expands the context window to more than 1 million tokens, up from GPT-4’s 128,000, enabling analysis of longer documents and more complex data sets. For deeper reasoning, the model relies on what OpenAI terms “recursive verification” rather than standard chain-of-thought processing.
