DeepSeek and the Future of AI: Insights from Professors Mohamad Moosavi and Ben Sanchez-Lengeling

Artificial intelligence is evolving at an astonishing pace, and one of the latest breakthroughs making headlines is DeepSeek. This Chinese AI model has drawn significant attention for its reasoning capabilities and low cost of use. Unlike many existing AI models that generate instant responses, DeepSeek R1 takes a different approach—it first processes information and “thinks” before formulating an answer. This method enhances its performance in complex problem-solving, making it especially valuable for STEM applications.

A Shift in AI Development and Accessibility

Mohamad Moosavi, Assistant Professor in the Department of Chemical Engineering, emphasizes that DeepSeek R1’s reasoning ability is what sets it apart. “Unlike many AI models that prioritize rapid response generation, DeepSeek R1 takes time to reason through problems, which makes it particularly useful for engineering and scientific applications,” he explains. While DeepSeek V3 functions similarly to AI models from companies like Mistral, Meta, and Google, DeepSeek R1’s structured reasoning process enhances its effectiveness in technical problem-solving—though some may argue it remains less refined at assisting with writing tasks.

Another major advantage of DeepSeek R1 is its accessibility. Unlike some AI models that require costly subscriptions, DeepSeek R1 is free to use and can even run locally on relatively affordable hardware. “The ability to run DeepSeek R1 on a local machine for around $6,000 makes it a practical option for researchers and engineers looking for AI-driven problem-solving tools,” Moosavi notes. He contrasts this with OpenAI’s o1, which has similar reasoning capabilities but sits behind a steep $200 monthly subscription.

Assistant Professor Benjamin Sanchez-Lengeling agrees that open-source AI is making strides toward competing with proprietary models. “Before, we were lagging behind closed-source models by one to two years—now it’s just weeks,” he says. He also highlights DeepSeek’s innovation in reinforcement learning without human-labeled feedback, which could influence future AI training methods. However, he raises concerns about content restrictions in DeepSeek’s online model, noting that certain geopolitical queries receive censored responses.

Global AI Innovation and Market Impact

DeepSeek’s emergence reflects a broader shift in AI research, with innovation now extending beyond traditional tech hubs like the United States. “It’s encouraging to see AI models coming from different parts of the world,” says Moosavi. “This diversification helps drive new ideas and reduces reliance on a handful of dominant companies.”

AI breakthroughs like DeepSeek have also influenced financial markets, with tech stocks experiencing volatility as new models challenge industry leaders. “We saw a similar reaction when Mistral (France) entered the AI space,” says Sanchez-Lengeling. “The long-term impact is uncertain, but it’s clear that competition is intensifying.”

Challenges and Opportunities in AI Performance

Despite its impressive reasoning capabilities, DeepSeek R1 is far from perfect. Sanchez-Lengeling notes that recent evaluations on the ARC Prize benchmark, an abstract-reasoning challenge designed to measure progress toward artificial general intelligence (AGI), revealed a significant performance gap. OpenAI’s o3 model reportedly solved over 85% of the reasoning puzzles—but at an enormous cost of $3,400 per puzzle. In contrast, DeepSeek R1 achieved 15% at a much lower 6 cents per puzzle. For comparison, OpenAI’s o1 scored 35% at $1.31 per puzzle.
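One way to read these figures is to normalize spend by success: dividing each model’s per-puzzle cost by its solve rate gives an approximate cost per *solved* puzzle. The sketch below uses only the numbers quoted above; the cost-per-solved-puzzle framing is an illustrative metric, not part of the benchmark itself.

```python
# Approximate cost per solved ARC-Prize puzzle, using the figures quoted above.
# Note: the solve rates and per-puzzle costs come from the article; the
# derived "cost per solved puzzle" metric is our own illustrative framing.
results = {
    "OpenAI o3":   {"solve_rate": 0.85, "cost_per_puzzle": 3400.00},
    "OpenAI o1":   {"solve_rate": 0.35, "cost_per_puzzle": 1.31},
    "DeepSeek R1": {"solve_rate": 0.15, "cost_per_puzzle": 0.06},
}

for model, r in results.items():
    cost_per_solved = r["cost_per_puzzle"] / r["solve_rate"]
    print(f"{model}: ~${cost_per_solved:,.2f} per solved puzzle")
# o3 lands around $4,000 per solved puzzle, o1 around $3.74,
# and DeepSeek R1 around $0.40 — orders of magnitude apart.
```

By this rough measure, DeepSeek R1 is by far the cheapest route to a correct answer, even though it solves far fewer puzzles overall—which is precisely the capability-versus-cost trade-off the professors highlight.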

“These results show that while DeepSeek R1 has potential, we need to study its full capabilities further,” says Moosavi. “Benchmarks only provide part of the picture—we need real-world testing to understand how effective these models truly are.” Sanchez-Lengeling agrees, adding that AI models must become more reliable, especially in high-stakes applications. “We talk a lot about AI’s increasing capabilities, but reliability is still a challenge,” he says. “This ‘capability-reliability gap’ is something researchers need to focus on.”

Implications for Academic Research and Industry

For universities and research institutions, the rapid evolution of AI presents both opportunities and challenges. Moosavi sees enormous potential in AI tools like DeepSeek but warns that widespread adoption will take time. “Technology moves fast, but academic and industrial implementation is slower,” he explains. “We need to overcome challenges like data curation and model validation before AI can be widely integrated into engineering workflows.”

Sanchez-Lengeling adds that while large AI models are becoming more powerful, they remain unreliable in applications where failure carries high risks. He believes universities should take a proactive approach by supporting faculty-led AI research. “We should be running small-scale experiments to see where AI can enhance our research capacity,” he says. “One area that could see immediate benefits is administrative processes—AI could significantly reduce paperwork and improve efficiency for people who work at these institutions.”

The Future of AI in Scientific Discovery

Both academics agree that AI is changing the landscape of scientific research. However, the future remains uncertain—especially as the balance between open and closed AI development continues to shift. “We’ve seen AI research become more closed off in recent years, particularly with the rise of commercial models,” says Sanchez-Lengeling. “DeepSeek could signal a return to more open AI research, but only time will tell.”

Moosavi emphasizes that staying informed and adaptable is essential for researchers. “The field is moving incredibly fast,” he says. “Keeping up with these advancements will be crucial for those looking to stay ahead in AI-driven innovation.”