Google Gemini can now do more in-depth research
Google is upgrading Gemini, its chatbot platform, with the ability to “reason” through a research problem and compile a comprehensive report.
The updated Gemini offers a feature called “Deep Research” that Google says uses “advanced reasoning” and “long context capabilities” to generate research briefs. The briefs are presented in the Gemini apps and can be exported to Google Docs for additional editing.
For now, Deep Research is exclusive to Gemini Advanced, a more sophisticated version of Gemini that’s gated behind the Google One AI Premium Plan, priced at $20 a month.
Diving deep
Google says that Deep Research can analyze information relevant to a query from across the web on a user’s behalf, acting as a sort of research assistant. The results are organized into summaries within the brief and paired with links to the original source material.
Here’s how it works:
- A user writes a question;
- Deep Research creates a “multi-step research plan” for the user to either revise or approve;
- Once the user approves, Deep Research refines its analysis over the course of a few minutes: searching, saving potentially interesting pieces of information, and then starting a new search based on what it has learned;
- The process repeats multiple times, and once it's finished, Deep Research generates a report of the key findings.
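To make that loop concrete, here's a minimal, purely illustrative sketch of the plan-search-refine-report cycle in Python. The helper functions (make_plan, web_search, next_query) are hypothetical placeholders standing in for an LLM and a search backend; this is not Google's implementation of Deep Research.

```python
# Illustrative sketch only: a simplified "plan, search, refine, report" loop
# mirroring the workflow described above. Helpers are hypothetical stubs.
from dataclasses import dataclass, field


@dataclass
class ResearchState:
    question: str
    plan: list[str] = field(default_factory=list)
    notes: list[str] = field(default_factory=list)


def make_plan(question: str) -> list[str]:
    # Placeholder: a real system would have the model draft a multi-step
    # plan that the user can revise or approve before anything runs.
    return [f"Gather background on: {question}",
            f"Follow up on open questions about: {question}"]


def web_search(query: str) -> list[str]:
    # Placeholder: a real system would call a search backend and read pages.
    return [f"(snippet relevant to '{query}')"]


def next_query(question: str, notes: list[str]) -> str:
    # Placeholder: a real system would let the model decide what to search
    # for next based on what it has learned so far.
    return f"{question} (follow-up #{len(notes)})"


def deep_research(question: str, max_rounds: int = 3) -> str:
    state = ResearchState(question, plan=make_plan(question))
    query = question
    for _ in range(max_rounds):                     # repeats multiple times
        results = web_search(query)                 # search...
        state.notes.extend(results)                 # ...save findings...
        query = next_query(question, state.notes)   # ...then search again
    # Finally, compile the saved notes into a report of key findings.
    return "Key findings:\n" + "\n".join(f"- {n}" for n in state.notes)


print(deep_research("How do solid-state batteries compare to lithium-ion?"))
```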
Deep Research is only available in English on desktop and the mobile web to start. Users can access it by selecting “Gemini 1.5 Pro with Deep Research” in the model drop-down menu. Google says it’ll come to the Gemini mobile apps in early 2025.
“We’ve built a new agentic system that uses Google’s expertise of finding relevant information on the web to direct Gemini’s browsing and research,” David Citron, product director for the Gemini apps, wrote in a blog post provided to TechCrunch. “Deep Research saves you hours of time.”
But the feature raises all sorts of thorny ethical questions.
Potential harms
Setting aside the fact that all AI makes mistakes and hallucinates (anyone remember glue pizza?), technology like Deep Research could have serious consequences for education.
In a recent op-ed in The New York Times, Jessica Grose wrote about how students are increasingly relying on generative AI to outsource brainstorming and writing. These students, she said, risk losing the ability to think critically and overcome their frustration with tasks that can’t be completed easily.
At least one study has linked heavy usage of ChatGPT among students to higher levels of procrastination, memory loss, and lower grade point averages.
Deep Research also threatens to harm — in the monetary sense, that is — the publishers from which it sources its information. By scraping info from websites and compiling it into briefs, Deep Research could deprive these sites of valuable ad revenue.
The impact of AI Overviews, the AI-generated summaries Google supplies for certain Google Search queries, on publishers may be indicative of what’s to come, should Deep Research take off.
According to one source, publishers have seen a 5% to 10% decrease in traffic from search since AI Overviews launched early this year. On the revenue side, an expert cited by The New York Post estimated that AI-generated overviews could lead to more than $2 billion in losses for publishers.
Google often says it works closely with publisher partners, respects paywalls, and allows websites to block its AI scraping at the domain level. But outlets are often faced with a dilemma: Allow scraping or lose visibility in Google Search (at least for now).
Google claims Deep Research could “connect users to relevant websites they might not have found otherwise so they can dive deeper to learn more.” We’ll have to see whether the feature delivers on that discoverability promise, or simply diverts views away from the broader web.
Gemini 2.0 Flash
Deep Research isn’t the only new capability coming to Gemini.
Starting today, both free and paying Gemini users will get access to Gemini 2.0 Flash, Google’s newest flagship AI model. To be precise, it’s an experimental version of 2.0 Flash optimized for chat — the full version will arrive in January.
Google says 2.0 Flash, which can be selected from the Gemini model drop-down on desktop and the mobile web (but not the mobile apps yet), should deliver better performance across a number of tasks — and faster responses.
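If the experimental model is also exposed through the Gemini API, something this article doesn’t cover, developers could try it with Google’s google-generativeai Python SDK. Here’s a minimal sketch, with the assumption that the experimental build is served under a model id like gemini-2.0-flash-exp (the identifier and its availability are assumptions, not confirmed details).

```python
# Illustrative sketch only: calling an experimental Gemini model through the
# Gemini API using the google-generativeai SDK. The model id
# "gemini-2.0-flash-exp" is an assumption, not confirmed by this article.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # your API key

# Select the experimental 2.0 Flash build (assumed identifier).
model = genai.GenerativeModel("gemini-2.0-flash-exp")

response = model.generate_content(
    "Summarize the trade-offs between speed and quality in small LLMs."
)
print(response.text)
```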
Still, the company cautions that some Gemini features “won’t be compatible with [the] model in its experimental state.” It didn’t say which features, exactly.