Conversational image segmentation with Gemini 2.5

This article details Google Gemini 2.5's breakthrough conversational image segmentation capability, which significantly advances AI visual understanding beyond traditional bounding boxes and basic segmentation. Unlike earlier models that matched pixels to nouns, Gemini 2.5 can parse complex descriptive phrases and identify objects based on intricate relationships, conditional logic, abstract concepts, in-image text, and multilingual labels. This enables highly nuanced queries such as "the car that is farthest away" or "employees not wearing a hard hat". The article showcases practical applications in interactive media editing, intelligent safety monitoring, and nuanced insurance damage assessment. For developers, the capability offers flexible language interaction and a simplified experience through a single API, lowering the technical barrier to building sophisticated vision applications. It encourages developers to explore the new capability through Google AI Studio, the Gemini API, and Colab notebooks, and offers best practices for implementation.




The way AI visually understands images has evolved tremendously. Initially, AI could tell us "where" an object was using bounding boxes. Then, segmentation models arrived, precisely outlining an object's shape. More recently, open-vocabulary models emerged, allowing us to segment objects using less common labels like "blue ski boot" or "xylophone" without needing a predefined list of categories.

Previous models matched pixels to nouns. However, the real challenge — conversational image segmentation (closely related to referring expression segmentation in the literature) — demands a deeper understanding: parsing complex descriptive phrases. Rather than just identifying "a car," what if we could identify "the car that is farthest away?"

Today, Gemini's advanced visual understanding brings a new level of conversational image segmentation. Gemini now "understands" what you're asking it to "see."

Leveraging conversational image segmentation queries

The magic of this feature lies in the types of questions you can ask. By moving beyond simple single-word labels, you can unlock a more intuitive and powerful way to interact with visual data. Consider the five categories of queries below.

1. Object relationships

Gemini can now identify objects based on their complex relationships to the objects around them.

1: Relational understanding: "the person holding the umbrella"

2: Ordering: "the third book from the left"

3: Comparative attributes: "the most wilted flower in the bouquet"

2. Conditional logic

Sometimes you need to query with conditional logic. For example, you can filter with queries like "food that is vegetarian". Gemini can also handle queries with negations like "the people who are not sitting".

Within an office meeting, the natural language query "the people who are not sitting" is used to overlay segmentation masks on the two individuals who are standing.

3. Abstract concepts

This is where Gemini's world knowledge shines. You can ask it to segment things that don't have a simple, fixed visual definition. This includes concepts like "damage," "a mess," or "opportunity."

On a kitchen counter, a natural language segmentation overlay highlights a spill in response to the abstract query, "area that should be cleaned up".

4. In-image text

When appearance alone is not enough to distinguish the precise category of an object, the user might refer to it through a written text label present in the image. This requires the model to have OCR abilities, one of the strengths of Gemini 2.5.

In a bakery setting, the model uses natural language segmentation to overlay masks on "the pistachio baklava", distinguishing it from other nearby pastries based on in-image text.

5. Multi-lingual labels

Gemini is not restricted to a single language and can handle labels in many different languages.

A plate of food has natural language segmentation overlays identifying various components, with the model providing corresponding labels in French as requested by the prompt "tous les objets en français".
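To make these query types concrete, here is a minimal sketch of sending one of them through the Gemini API using the google-genai Python SDK. The image file is a placeholder, and the prompt follows the format recommended in the best practices at the end of this post.

```python
# Minimal sketch: one conversational segmentation query via the Gemini API.
# Assumes `pip install google-genai pillow` and a GEMINI_API_KEY environment
# variable; "meeting.jpg" is a placeholder image.
from google import genai
from google.genai import types
from PIL import Image

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

image = Image.open("meeting.jpg")
prompt = (
    'Give the segmentation masks for the people who are not sitting. '
    'Output a JSON list of segmentation masks where each entry contains '
    'the 2D bounding box in the key "box_2d", the segmentation mask in '
    'key "mask", and the text label in the key "label". '
    'Use descriptive labels.'
)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # the model this post recommends
    contents=[image, prompt],
    # Best practice from this post: disable thinking for segmentation.
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)  # a JSON list of {"box_2d", "mask", "label"} entries
```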

Let's explore how these query types could enable new use cases.

1. Unlocking creativity: Interactive media editing

This capability transforms creative workflows. Instead of using complex selection tools, a designer can now direct software with words. This allows for a more fluid and intuitive process, like when asking to select "the shadow cast by the building".

An aerial view of a park demonstrates a natural language segmentation overlay identifying "the shadow of the building".

2. Building a safer world: Intelligent safety & compliance monitoring

For workplace safety, you need to identify situations, not just objects. With a prompt like, "Highlight any employees on the factory floor not wearing a hard hat", Gemini comprehends the entire conditional instruction as a single query, producing a final, precise mask of only the non-compliant individuals.

At a construction site, a natural language segmentation overlay is applied to identify "the people not wearing a hard hat".

3. The future of claims: Nuanced insurance damage assessment

"Damage" is an abstract concept with many visual forms. An insurance adjuster can now use prompts like, "Segment the homes with weather damage” and Gemini will use its world knowledge to identify the specific dents and textures associated with that type of damage, distinguishing it from a simple reflection or rust.

In an aerial photo of a subdivision, natural language segmentation is used to overlay masks on each "damaged house".

Why this matters for developers

1: Flexible Language: Move beyond rigid, predefined classes. The natural language approach gives you the flexibility to build solutions for the "long tail" of visual queries that are specific to your industry and users.

2: Simplified Developer Experience: Get started in minutes with a single API. There is no need to find, train, and host separate, specialized segmentation models. This accessibility lowers the barrier to entry for building sophisticated vision applications.

Start building today

We believe that giving language a direct, pixel-level connection to vision will unlock a new generation of intelligent applications. We are incredibly excited to see what you will build.

Get started right away in Google AI Studio via our interactive Spatial Understanding demo.

Or if you’d prefer a Python environment, feel free to start with our interactive Spatial Understanding Colab.

To start building with the Gemini API, visit our developer guide and read more about starting with segmentation. You can also join our developer forum to meet other builders, discuss your use cases, and get help from the Gemini API team.

For best results, we recommend the following best practices:

1: Use the gemini-2.5-flash model

2: Disable thinking (set thinkingBudget=0)

3: Stay close to the recommended prompt below, and request JSON as the output format.

Give the segmentation masks for the objects. 
Output a JSON list of segmentation masks where each entry contains the 2D bounding box in the key "box_2d", the segmentation mask in key "mask", and the text label in the key "label". 
Use descriptive labels.
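Once the model responds, the JSON still has to be decoded. The sketch below shows one way to do that; it assumes, per the Gemini API spatial-understanding documentation rather than anything stated in this post, that each box_2d is [y0, x0, y1, x1] normalized to 0-1000 and that each mask is a base64-encoded PNG data URI.

```python
import base64
import io
import json

from PIL import Image


def parse_segmentation(response_text: str, image: Image.Image) -> list[dict]:
    """Parse Gemini's segmentation JSON into pixel boxes and mask images."""
    text = response_text.strip()
    # The model may wrap its JSON in a markdown code fence; strip it if so.
    if text.startswith("```"):
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]

    width, height = image.size
    results = []
    for item in json.loads(text):
        # box_2d is assumed to be [y0, x0, y1, x1], normalized to 0-1000.
        y0, x0, y1, x1 = item["box_2d"]
        box = (int(x0 / 1000 * width), int(y0 / 1000 * height),
               int(x1 / 1000 * width), int(y1 / 1000 * height))

        # The mask is assumed to be a base64-encoded PNG data URI.
        b64 = item["mask"].removeprefix("data:image/png;base64,")
        mask = Image.open(io.BytesIO(base64.b64decode(b64)))

        results.append({"label": item["label"], "box": box, "mask": mask})
    return results
```

In the documented format, each decoded mask covers only its bounding box rather than the full frame, so overlaying it typically means resizing the mask to the box dimensions and compositing it at the box origin.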

Acknowledgements

We thank Weicheng Kuo, Rich Munoz, and Huizhong Chen for their work on Gemini segmentation, Junyan Xu for work on infrastructure, Guillaume Vernade for work on documentation and code samples, and the entire Gemini image understanding team, whose combined efforts culminated in this release. Finally, we would like to thank image understanding leads Xi Chen and Fei Xia and multimodal understanding lead Jean-Baptiste Alayrac.

