Visual Search

Browse FAQs to learn everything about the Ximilar visual search technology, its functionalities and collection synchronization.


Implementation

How does Ximilar's visual search implementation via API work?

Our visual search technology begins with the synchronization of your image collection to our cloud. Once synchronized, clients can efficiently search their image collection using image queries, retrieving identical or similar pictures. For clients with highly specific image collections, our system may require initial fine-tuning and customization to meet their unique needs.

The synchronization frequency is flexible, so you can choose a schedule that suits your workflow. During each synchronization run, images are analyzed for the visual features needed for efficient search and then promptly discarded to uphold privacy and data security.
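
For illustration, here is a minimal sketch of what an image query against a synchronized collection could look like from client code. The endpoint path, field names and authorization header below are assumptions made for the sketch, not the documented API; the concrete routes are provided with your collection.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"  # issued for your workspace (assumed auth scheme)
# Placeholder endpoint path used only for this sketch:
SEARCH_URL = "https://api.ximilar.com/similarity/photos/v2/search"

payload = {
    # The query image is referenced by URL here; base64 data is another option.
    "query_record": {"_url": "https://example.com/query-image.jpg"},
    "k": 10,  # number of most similar items to return (assumed parameter name)
}

response = requests.post(
    SEARCH_URL,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# Each returned record is expected to carry the item ID supplied during
# collection synchronization plus a similarity score.
for record in response.json().get("answer_records", []):
    print(record)
```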

If you would like a visual search solution for your website, app or system, contact us and we will find the optimal setup for you.


Collection Synchronization

Can we have the automatic synchronization on a weekly or daily basis?

Yes, we can run automatic synchronization from your export, and you can choose both the method and the frequency (daily, weekly, or monthly) of synchronization. We usually charge a small fee for the implementation of the synchronization script, depending on the complexity and format of the export.


Which formats are supported for the data export used in visual search collection synchronization?

Your export should be accessible via URL in a standard format such as JSON, XML or CSV.
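
As a purely illustrative example, an export record could look like the sketch below; the field names are hypothetical, and the practical requirements are simply a stable item ID and an accessible image URL.

```python
import json

# Hypothetical export records; your own schema and field names may differ.
export_items = [
    {
        "id": "SKU-12345",                                     # stable item ID
        "image_url": "https://cdn.example.com/sku-12345.jpg",  # accessible image
        "title": "A-line midi skirt",
        "category": "skirts",
    },
]

# Written as JSON here; XML or CSV exports work the same way.
with open("export.json", "w", encoding="utf-8") as f:
    json.dump(export_items, f, ensure_ascii=False, indent=2)
```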


How many images can be in a visual search collection?

Our largest collection contains more than 100 million images, and a search for visually similar items still takes only a few hundred milliseconds.


How do we keep our visual search collection updated?

You can keep your collection updated via API. We provide standard endpoints for inserting, updating and deleting items in the collection.
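
A minimal sketch of such update calls is shown below; the endpoint paths and record fields are placeholders for illustration, since the concrete insert, update and delete routes are provided with your collection.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"
HEADERS = {"Authorization": f"Token {API_TOKEN}"}
# Placeholder base path used only for this sketch:
BASE = "https://api.ximilar.com/similarity/photos/v2"

# Insert (or re-insert) an item into the collection.
requests.post(
    f"{BASE}/insert",
    headers=HEADERS,
    json={"records": [{
        "_id": "SKU-12345",
        "_url": "https://cdn.example.com/sku-12345.jpg",
    }]},
    timeout=30,
).raise_for_status()

# Delete an item that is no longer in your catalogue.
requests.post(
    f"{BASE}/delete",
    headers=HEADERS,
    json={"records": [{"_id": "SKU-99999"}]},
    timeout=30,
).raise_for_status()
```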

You can also provide your collection as an export file and we will keep it updated for you. We usually charge a small fee for the implementation of the synchronization script, depending on the complexity and format of the export. You can choose both the method and the frequency of collection synchronization.


Technology

How fast is the search for one image?

If the image is already in the collection, then the search is very fast (under 100 ms). If you are searching with an external image, then this can take up to 0.5 seconds. We are able to scale the system to be even faster if needed. Contact us for details.


Can a visual search engine built by Ximilar be deployed on our servers?

Yes, the system can be easily deployed on your servers if needed. The server should have a standard Intel or AMD CPU with at least 96 GB of RAM; ideally, it also contains an NVIDIA GPU. We use Docker, which simplifies the deployment.


Text-to-Image Search

How fast is the Text-to-image search by Ximilar?

The search itself happens in a matter of milliseconds. This system can handle hundreds of requests per second if needed.


How does text-to-image search powered by AI work? Do I need to provide tags?

Text-to-image search eliminates the need for metadata such as tags (keywords). It is powered by an AI model that understands both natural language and the complex visual characteristics of images.

Once you synchronize your image collection to our cloud, our AI extracts distinct structured information (embeddings) from your images. We do not store your images, only the extracted data. When a user enters a text query, our AI analyzes it and presents the most relevant content from your collection.
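
To make the flow concrete, a text query could be sent roughly as sketched below; the endpoint path and parameter names are assumptions for illustration, not the documented API.

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"
# Placeholder route used only for this sketch:
SEARCH_URL = "https://api.ximilar.com/search/text-to-image"

resp = requests.post(
    SEARCH_URL,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"query_text": "red floral summer dress", "k": 20},  # assumed fields
    timeout=30,
)
resp.raise_for_status()

# The response is expected to list the most relevant items from your
# collection, identified by the IDs supplied during synchronization.
for item in resp.json().get("answer_records", []):
    print(item)
```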


Fashion Search

What solutions does Fashion Search combine? How does it work?

Fashion Search is an all-in-one solution for fashion e-shops, websites, price comparators and apps. It includes the following AI-powered solutions:

1. Object Detection & Fashion Tagging

Our Fashion Tagging automatically identifies and tags the fashion apparel in your images. It utilizes a hundred recognition models, hundreds of labels, and dozens of features, all linked into seamlessly interconnected flows, enabling you to add content 24/7. We continually enhance the quality and incorporate new fashion attributes such as features, categories, and tags. Custom tags can also be added on request.

Standalone fashion tagging assigns tags to a single dominant fashion item in an image. When combined with object detection, however, it can tag every recognizable piece of fashion apparel in the image.

2. Product Similarity & Search by Photo

These visual search solutions allow hyper-personalization of the customer experience. Product Similarity identifies and proposes items similar to the one your customer is currently viewing, boosting clicks by up to 380%.

Search by Photo accepts user-uploaded images, detects fashion apparel in them, and automatically suggests similar items from your inventory. This applies to real-life photos, user-generated content, as well as influencer photos and other social media content.


Which items can Ximilar fashion AI detect in a photo, and how many?

There are several options:

  • By default, Fashion Tagging and Search focus on the largest fashion object in the image. This mode is ideal for single-product photos and other images with dominant objects.
  • However, you can opt to detect all fashion items using the detection endpoint, then tag each of them individually and use them for separate visual searches (see the sketch after this list).
  • The tagging can also be enhanced with a meta endpoint describing the background, scene, view, and body part of the person wearing the items.
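
A rough sketch of the detect-then-tag flow, assuming hypothetical endpoint paths and response fields (the documented routes and field names may differ):

```python
import requests

API_TOKEN = "YOUR_API_TOKEN"
HEADERS = {"Authorization": f"Token {API_TOKEN}"}
# Placeholder paths used only for this sketch:
DETECT_URL = "https://api.ximilar.com/detection/fashion/v2/detect"
TAGS_URL = "https://api.ximilar.com/tagging/fashion/v2/tags"

image = {"_url": "https://example.com/street-style-photo.jpg"}

# 1) Detect every fashion item in the photo.
detection = requests.post(
    DETECT_URL, headers=HEADERS, json={"records": [image]}, timeout=30
)
detection.raise_for_status()

# 2) Inspect the detected objects; each is expected to carry a label and a
#    bounding box that can then be tagged or used for a separate visual search.
for obj in detection.json()["records"][0].get("_objects", []):
    print(obj.get("name"), obj.get("bound_box"))
    # A follow-up call to TAGS_URL with the cropped object would return its
    # categories, subcategories and tags.
```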

Can Ximilar fashion AI analyze both product and real-life images?

Ximilar’s Fashion AI can analyze a range of images, from product photos to real-life ones, including user-generated content and social media posts.

Images can be submitted in any standard format (see supported image formats at a link below) either as image URLs or as base64-encoded image data.
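
For example, the two submission options could look roughly like this; the endpoint path and record field names are illustrative assumptions, not the documented API.

```python
import base64
import requests

API_TOKEN = "YOUR_API_TOKEN"
TAGS_URL = "https://api.ximilar.com/tagging/fashion/v2/tags"  # placeholder path

# Option A: reference the image by URL.
record_by_url = {"_url": "https://example.com/product.jpg"}

# Option B: send the image content itself, base64-encoded.
with open("product.jpg", "rb") as f:
    record_by_base64 = {"_base64": base64.b64encode(f.read()).decode("ascii")}

resp = requests.post(
    TAGS_URL,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={"records": [record_by_url]},  # or [record_by_base64]
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```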

For reverse image searches, your image query should ideally contain a single dominant fashion item. Queries can use either single-product pictures or parts of more complex images, in which the specific items will be detected.

The AI can also process and remove backgrounds from product photos in bulk for a more cohesive catalogue. Additionally, you can choose which image category, such as model-worn product images or standalone product images, will be automatically displayed.


What are the differences between Ximilar fashion solutions for recognition, data enrichment and product search?

Fashion Tagging labels your fashion items, assigning categories (e.g., skirts), subcategories (e.g., A-line skirts) and tags (for color, design, pattern, length, rise, style…). By default, it provides data for one main object in an image. The meta endpoint can also provide tags for the photography background, scene, or body part shown in the fashion image.

Fashion Search is an all-encompassing solution, wrapping all typical fashion AI services into one. It integrates:

  • Fashion tagging, including dominant colors
  • Object detection for precise labeling and search of individual items
  • Visual Search, recommending similar items from your collection

Both Fashion Tagging and Fashion Search include color analysis. The colors are supplied as tags and can serve for filtering and search on your website.

I only need a single fashion AI solution

All our fashion AI solutions can also be employed individually. Examples include product similarity, search by photo (reverse image search), fashion apparel detection, and color-based search.


Can Ximilar fashion AI identify apparel on people? Is it necessary to supply product photos for object detection?

Our company’s fashion detection operates on both product images and real-life photographs, including user-generated content and social media posts. The detected fashion apparel can be either standalone or worn by people.

Apparel visibility

To achieve the most accurate results, the fashion apparel you wish to detect should be clearly visible, with minimal overlapping objects or folds, and not worn by individuals in uncommon poses. However, if your collection comprises numerous photos with unconventional positions, we can readily tailor the solution to perfectly suit your use case.


Do I need my own image collection for visual search? Can it be performed on images from social media?

Visual search and similarity search are always performed on a specific image collection.

Each image in the collection is processed exactly once during the synchronization run and is discarded immediately after processing. Our AI then works only with a compact representation of each image and compares these representations for similarity. That is why we do not store the images you use for visual search.

The type and source of the images in your collection are up to you, as well as the frequency of the synchronization runs. Contact us via live chat or contact form for details.


What colors and which palettes can Ximilar AI extract? What is the format of the results?

We offer several modes of dominant color extraction to choose from. The results are provided in a structured format, typically JSON.

Dominant product vs. whole image

The product endpoint allows you to extract colors from a single dominant object in an image (product photo), whereas the generic endpoint extracts the dominant colors from the entire image, a mode typically used in stock photography.

Basic color for searching & filtering

This mode identifies one main color of the dominant object out of a total of 16 basic colors. The extracted color can be utilized as an attribute for filtering and searching fashion items.

Pantone palette: detailed color analysis

This mode provides a group of dominant colors with their hex codes, the name of the closest Pantone color, and the percentage of the image each color covers. It is ideal for similarity search (search by color).
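
To give an idea of the output, a Pantone-palette result for one image might be shaped roughly like the sketch below; the field names and values are hypothetical, and the exact structure depends on the chosen endpoint.

```python
# Hypothetical result shape for one image (illustrative field names only).
example_result = {
    "dominant_colors": [
        {"hex": "#1F3A5F", "pantone_name": "Classic Blue", "coverage_percent": 62.4},
        {"hex": "#E8E4DC", "pantone_name": "White Swan", "coverage_percent": 28.1},
        {"hex": "#C0392B", "pantone_name": "Aurora Red", "coverage_percent": 9.5},
    ]
}

# The basic-color mode would instead return a single attribute, e.g.
# {"basic_color": "blue"}, which can be used directly for filtering.
for color in example_result["dominant_colors"]:
    print(f'{color["hex"]}  {color["pantone_name"]:<14} {color["coverage_percent"]}%')
```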