Gemini 2.0 is now available to everyone

This article details the Gemini 2.0 model family released by Google DeepMind, focusing on its performance, availability, and use cases. The updated Gemini 2.0 Flash is now generally available via the Gemini API and platforms such as Google AI Studio and Vertex AI, giving developers scalable, high-performance support for their workloads, especially tasks that require multimodal reasoning. An experimental version of Gemini 2.0 Pro has also been released, optimized for coding and complex reasoning, with a 2-million-token context window and advanced tool integration. A new cost-efficient model, Gemini 2.0 Flash-Lite, has entered public preview, delivering higher quality than its predecessor at the same speed and cost. The article highlights safety measures, including reinforcement learning techniques and automated red teaming, intended to ensure these models can be used safely. Together, these updates make Gemini 2.0 a versatile family of AI models for diverse applications; at launch the models support multimodal input (text, images, and more) with text output, and additional modalities are planned for general availability in the coming months.


In December, we kicked off the agentic era by releasing an experimental version of Gemini 2.0 Flash — our highly efficient workhorse model for developers with low latency and enhanced performance. Earlier this year, we updated 2.0 Flash Thinking Experimental in Google AI Studio, which improved its performance by combining Flash’s speed with the ability to reason through more complex problems.

And last week, we made an updated 2.0 Flash available to all users of the Gemini app on desktop and mobile, helping everyone discover new ways to create, interact and collaborate with Gemini.

Today, we’re making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash.
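As a minimal sketch of what building on the generally available model can look like, the snippet below calls 2.0 Flash through the Gemini API using the google-genai Python SDK; the API key placeholder and prompt are illustrative, not part of the announcement.

```python
# pip install google-genai
from google import genai

# Assumes an API key created in Google AI Studio; placeholder shown here.
client = genai.Client(api_key="YOUR_API_KEY")

# Send a simple text prompt to the generally available 2.0 Flash model.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize the key trade-offs between latency and model size.",
)

print(response.text)
```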

We’re also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It is available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users.

We’re releasing a new model, Gemini 2.0 Flash-Lite, our most cost-efficient model yet, in public preview in Google AI Studio and Vertex AI.

Finally, 2.0 Flash Thinking Experimental will be available to Gemini app users in the model dropdown on desktop and mobile.

All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months. More information, including specifics about pricing, can be found in the Google for Developers blog. Looking ahead, we’re working on more updates and improved capabilities for the Gemini 2.0 family of models.

2.0 Flash: a new update for general availability

First introduced at I/O 2024, the Flash series of models is popular with developers as a powerful workhorse model, optimal for high-volume, high-frequency tasks at scale and highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens. We’ve been thrilled to see its reception by the developer community.

2.0 Flash is now generally available to more people across our AI products, alongside improved performance in key benchmarks, with image generation and text-to-speech coming soon.

Try Gemini 2.0 Flash in the Gemini app or the Gemini API in Google AI Studio and Vertex AI. Pricing details can be found in the Google for Developers blog.

2.0 Pro Experimental: our best model yet for coding performance and complex prompts

As we’ve continued to share early, experimental versions of Gemini 2.0 like Gemini-Exp-1206, we’ve gotten excellent feedback from developers about its strengths and best use cases, like coding.

Today, we’re releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we’ve released so far. It comes with our largest context window at 2 million tokens, which enables it to comprehensively analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution.
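As a rough illustration of the tool-calling capability mentioned above, the sketch below asks 2.0 Pro Experimental to ground an answer with Google Search via the google-genai Python SDK. The experimental model id and the exact configuration field names are assumptions and may differ from the current API.

```python
# pip install google-genai
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Enable the built-in Google Search tool so the model can ground its answer
# in fresh web results. (Code execution can be enabled in a similar way.)
config = types.GenerateContentConfig(
    tools=[types.Tool(google_search=types.GoogleSearch())],
)

response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",  # assumed experimental model id
    contents="What were the most notable AI model releases this week?",
    config=config,
)

print(response.text)
```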


Gemini 2.0 Pro is available now as an experimental model to developers in Google AI Studio and Vertex AI and to Gemini Advanced users in the model drop-down on desktop and mobile.

2.0 Flash-Lite: our most cost-efficient model yet

We’ve gotten a lot of positive feedback on the price and speed of 1.5 Flash. We wanted to keep improving quality, while still maintaining cost and speed. So today, we’re introducing 2.0 Flash-Lite, a new model that has better quality than 1.5 Flash, at the same speed and cost. It outperforms 1.5 Flash on the majority of benchmarks.

Like 2.0 Flash, it has a 1 million token context window and multimodal input. For example, it can generate a relevant one-line caption for around 40,000 unique photos, costing less than a dollar in Google AI Studio’s paid tier.
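As a hedged sketch of the multimodal input described above, the snippet below asks Flash-Lite for a one-line caption of a local photo via the google-genai Python SDK; the preview model id and the file name are assumptions for illustration only.

```python
# pip install google-genai
from pathlib import Path

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Read a local photo; "photo.jpg" is an illustrative file name.
image_bytes = Path("photo.jpg").read_bytes()

# Mix an image part and a text part in a single request.
response = client.models.generate_content(
    model="gemini-2.0-flash-lite-preview-02-05",  # assumed preview model id
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Write a one-line caption for this photo.",
    ],
)

print(response.text)
```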

Gemini 2.0 Flash-Lite is available in Google AI Studio and Vertex AI in public preview.

Our responsibility and safety work

As the Gemini model family becomes more capable, we’ll continue to invest in robust measures that enable safe and secure use. For example, our Gemini 2.0 lineup was built with new reinforcement learning techniques that use Gemini itself to critique its responses. This resulted in more accurate and targeted feedback, which in turn improved the model's ability to handle sensitive prompts.

We’re also leveraging automated red teaming to assess safety and security risks, including those posed by indirect prompt injection, a type of cybersecurity attack in which attackers hide malicious instructions in data that is likely to be retrieved by an AI system.
