
‘Reasoning’ AI models have become a trend, for better or worse

Call it a reasoning renaissance.

In the wake of the release of OpenAI’s o1, a so-called reasoning model, there’s been an explosion of reasoning models from rival AI labs. In November, DeepSeek, an AI research company funded by quantitative traders, launched a preview of its first reasoning model, DeepSeek-R1. That same month, Alibaba’s Qwen team unveiled what it claims is the first “open” challenger to o1.

So what opened the floodgates? Well, for one, the search for novel approaches to refine generative AI tech. As my colleague Max Zeff recently reported, “brute force” techniques to scale up models are no longer yielding the improvements they once did.

There’s intense competitive pressure on AI companies to maintain the current pace of innovation. According to one estimate, the global AI market reached $196.63 billion in 2023 and could be worth $1.81 trillion by 2030.

OpenAI, for one, has claimed that reasoning models can “solve harder problems” than previous models and represent a step change in generative AI development. But not everyone’s convinced that reasoning models are the best path forward.

Ameet Talwalkar, an associate professor of machine learning at Carnegie Mellon, says that he finds the initial crop of reasoning models to be “quite impressive.” In the same breath, however, he told me that he’d “question the motives” of anyone claiming with certainty that they know how far reasoning models will take the industry.

“AI companies have financial incentives to offer rosy projections about the capabilities of future versions of their technology,” Talwalkar said. “We run the risk of myopically focusing [on] a single paradigm — which is why it’s crucial for the broader AI research community to avoid blindly believing the hype and marketing efforts of these companies and instead focus on concrete results.”

Two downsides of reasoning models are that they’re (1) expensive and (2) power-hungry.

For instance, in OpenAI’s API, the company charges $15 for every ~750,000 words o1 analyzes and $60 for every ~750,000 words the model generates. That’s between 3x and 4x the cost of OpenAI’s latest “non-reasoning” model, GPT-4o.
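For a sense of scale, here’s a rough back-of-the-envelope calculation using those published rates (treating the ~750,000-word figures as exact; the prompt and response sizes below are hypothetical examples, not OpenAI data):

```python
# Rough o1 API cost estimate based on the rates cited above:
# $15 per ~750,000 words analyzed (input), $60 per ~750,000 words generated (output).
# The request sizes are made-up examples for illustration only.

INPUT_RATE_PER_WORD = 15 / 750_000    # ~ $0.00002 per input word
OUTPUT_RATE_PER_WORD = 60 / 750_000   # ~ $0.00008 per output word

def estimate_o1_cost(words_in: int, words_out: int) -> float:
    """Estimate the dollar cost of a single o1 request at the published rates."""
    return words_in * INPUT_RATE_PER_WORD + words_out * OUTPUT_RATE_PER_WORD

# Example: a 5,000-word prompt that yields a 2,000-word answer.
print(f"${estimate_o1_cost(5_000, 2_000):.2f}")  # -> $0.26
```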

O1 is available in OpenAI’s AI-powered chatbot platform, ChatGPT, for free — with limits. But earlier this month, OpenAI introduced a more advanced o1 tier, o1 pro mode, that costs an eye-watering $2,400 a year.

“The overall cost of [large language model] reasoning is certainly not going down,” Guy Van den Broeck, a professor of computer science at UCLA, told TechCrunch.

One reason reasoning models cost so much is that they require a lot of computing resources to run. Unlike most AI models, o1 and other reasoning models attempt to check their own work as they go. This helps them avoid some of the pitfalls that normally trip up models, but it also means they often take longer to arrive at solutions.
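To see why that self-checking adds up, here’s a minimal, purely illustrative sketch of a generate-then-verify loop. It is not how o1 works internally — the toy “model” below is a stand-in function — but it shows how every verification pass or retry means another round of computation per answer:

```python
# Toy generate-then-check loop, illustrating why self-verifying models burn more
# compute per answer. Hypothetical stand-in code only; not OpenAI's actual method.
import random

def toy_model(question: str) -> int:
    """Stand-in for a model proposing an answer (here: a sometimes-wrong sum)."""
    a, b = map(int, question.split("+"))
    return a + b + random.choice([0, 0, 0, 1])  # occasionally off by one

def check(question: str, answer: int) -> bool:
    """Stand-in for a verification pass over the proposed answer."""
    a, b = map(int, question.split("+"))
    return answer == a + b

def answer_with_checking(question: str, max_attempts: int = 5) -> int:
    """Each retry is another full model call -- hence the extra latency and cost."""
    for _ in range(max_attempts):
        candidate = toy_model(question)
        if check(question, candidate):
            return candidate
    return candidate  # give up after max_attempts

print(answer_with_checking("17+25"))  # -> 42 (the check catches the occasional bad draft)
```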

OpenAI envisions future reasoning models “thinking” for hours, days, or even weeks on end. Usage costs will be higher, the company acknowledges, but the payoffs — from breakthrough batteries to new cancer drugs — may well be worth it.

The value proposition of today’s reasoning models is less obvious. Costa Huang, a researcher and machine learning engineer at the nonprofit Ai2, notes that o1 isn’t a very reliable calculator. And cursory searches on social media turn up a number of o1 pro mode errors.

“These reasoning models are specialized and can underperform in general domains,” Huang told TechCrunch. “Some limitations will be overcome sooner than other limitations.”

Van den Broeck asserts that reasoning models aren’t performing actual reasoning and thus are limited in the types of tasks that they can successfully tackle. “True reasoning works on all problems, not just the ones that are likely [in a model’s training data],” he said. “That is the main challenge to still overcome.”

Given the strong market incentive to boost reasoning models, it’s a safe bet that they’ll get better with time. After all, it’s not just OpenAI, DeepSeek, and Alibaba investing in this newer line of AI research. VCs and founders in adjacent industries are coalescing around the idea of a future dominated by reasoning AI.

However, Talwalkar worries that big labs will gatekeep these improvements.

“The big labs understandably have competitive reasons to remain secretive, but this lack of transparency severely hinders the research community’s ability to engage with these ideas,” he said. “As more people work on this direction, I expect [reasoning models to] quickly advance. But while some of the ideas will come from academia, given the financial incentives here, I would expect that most — if not all — models will be offered by large industrial labs like OpenAI.”

Published December 14, 2024
