Browse FAQs to learn everything about the Ximilar visual search technology, its functionalities and collection synchronization.
How does Ximilar's visual search implementation via API work?
Our visual search technology begins with the synchronization of your image collection to our cloud. Once synchronized, clients can efficiently search their image collection using image queries, retrieving identical or similar pictures. For clients with highly specific image collections, our system may require initial fine-tuning and customization to meet their unique needs.
The synchronization frequency is flexible, allowing you to choose the schedule that suits your workflow. During each synchronization run, the features relevant for visual search are extracted from your images, and the images themselves are promptly discarded to uphold privacy and data security.
If you would like to get a visual search solution for your website, app or system, contact us, and we will find the optimal solution for you.
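For illustration, a search request against a synchronized collection could look roughly like the sketch below. The endpoint URL, authentication header, and field names are placeholders, not the actual Ximilar API; the real details are provided when we set up your solution.

```python
import requests

# Hypothetical endpoint and credentials -- the real API details
# are provided during onboarding and may differ.
API_URL = "https://api.example-visual-search.com/v2/search"
API_TOKEN = "your-api-token"

# Query the synchronized collection with an image URL and ask
# for the ten most visually similar items.
response = requests.post(
    API_URL,
    headers={"Authorization": f"Token {API_TOKEN}"},
    json={
        "query_image": "https://example.com/images/red-sneaker.jpg",
        "k": 10,  # number of similar items to return
    },
    timeout=10,
)
response.raise_for_status()

for item in response.json().get("results", []):
    print(item.get("id"), item.get("distance"))
```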
Can we build a smartphone app to search for specific products?
Yes, visual search of products based on user-generated content is a typical application of our technology. Read about our service Visual Product Search to learn more. Contact us to discuss your use case.
What are the examples of visual & similarity search solutions built by Ximilar?
Ximilar has built complex solutions for a number of large e-shops and price comparison websites. For example, we develop visual & similarity search apps such as Skintory. Read about our visual search solutions to learn more.
We also provide a specialized service for fashion websites called Fashion Search. It combines several visual AI solutions, such as object detection, automatic tagging and visual search, to automate all image processing and level up the customer experience in fashion e-commerce.
How do I know if I need a custom visual search solution?
We use our platform to build complex visual & similarity search solutions for fields such as e-commerce, fashion, and stock photos. Check out our services Visual Product Search, Fashion Search, and Image & Product Matching.
When approached by a new customer, we discuss their use case and data and decide whether we will customize one of the existing solutions or build a completely new system. Either way, we will plan a project and deliver a visual search solution tailored to your data.
How many images do we need to train a visual similarity model?
The more training images you have, the better. We recommend starting with at least 1000 groups or pairs of images. We can iteratively help you build a dataset for training the visual similarity model that powers your visual search.
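For illustration only, a dataset of similarity groups could be organized like the hypothetical sketch below; the exact format and fields are agreed upon for each project.

```python
# A minimal sketch of how ~1000 similarity groups could be organized
# before handing them over for training. The structure below is only
# an illustrative assumption; the exact format is agreed on per project.
training_groups = [
    {
        "group_id": "product-001",
        # Images of the same (or visually equivalent) item belong together.
        "images": [
            "https://example.com/img/product-001-front.jpg",
            "https://example.com/img/product-001-side.jpg",
            "https://example.com/img/product-001-in-use.jpg",
        ],
    },
    {
        "group_id": "product-002",
        "images": [
            "https://example.com/img/product-002-studio.jpg",
            "https://example.com/img/product-002-user-photo.jpg",
        ],
    },
    # ... roughly 1000 such groups or pairs to start with
]
```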
What is Visual Search?
Visual & similarity search technology can analyze the overall visual aesthetic of an image or of a detected object in an image, independently of the images' origin or metadata (such as keywords). It understands similarity according to your subjective perception, which is why it can provide the most relevant results to image queries, whether you are looking for an exact match or for similar items.
What are the typical Visual Search applications?
Visual search typically involves searching images or products using an image query, including photos from social media and user-generated content, like smartphone photos. Our Search by Photo combines this technology with detecting individual products in images. One such solution is Search Fashion by Photo.
The technology is frequently employed for similarity search, particularly for product recommendations in e-commerce. It assesses the features of an image or detected object, such as color, edges, or patterns, to suggest the most similar alternatives.
Another typical use is image & product matching. Since the technology can identify duplicates or nearly identical items across images of varying quality, it can help with curating product galleries and eliminating unnecessary content.
Even though applications in e-commerce are the most common, visual search provides endless possibilities even in the industrial sector, scientific research or security systems.
Can I combine visual search algorithms with custom models created with the Ximilar platform?
Yes, you can combine these services. Read more on their pages.
Can I use visual search applications on third-party data?
The source of the images in your collection is up to you. It can be third-party data (e.g., images scraped from public websites or social networks), but it is your responsibility to collect these images (files or their URLs) and to act in accordance with copyright law.
What are the main visual search benefits?
Visual search technology is powered by AI techniques such as computer vision and deep learning. Its main benefits range from automating image processing and enriching product data to more relevant recommendations and search results, which translate into a better customer experience and higher engagement.
Can I see how visual search tools work on my data?
The ready-to-use solutions based on visual search technology are available for testing in both the public demo and the Ximilar App.
It is also a standard procedure to prepare custom demos. Read our page How we work and contact us. We will prepare a custom demo tailored to your collection.
What are the visual search capabilities?
Visual search tools enable you to search for similar or identical photos, images, or objects using various visual cues. Visual search algorithms analyze the visual content of each image and extract information from it, making it possible to locate similar items or products based on a provided image.
This technology finds application in diverse use cases, including e-commerce, where users can search for products by taking a picture. Visual similarity search applications and platforms are used wherever visual data is essential.
Last but not least, visual search is widely used in content management systems to find images within a database using visual references.
Where do I find visual search solutions by Ximilar?
Ximilar provides a full range of solutions powered by visual search algorithms, such as reverse image search (including searching internal databases), search by photo (combined with object detection), similar product and graphical content recommendations, and collection management with image matching.
Some of our most widely used solutions powered by visual search are available for testing in the public free demo. We tailored dedicated visual search solutions for areas such as fashion, stock photos or collectible items.
If you would like to integrate visual search into your website, app or systems, let us know anytime via the contact form, live chat, a direct call or e-mail. We are here to make computer vision easily accessible to everyone.
How does Visual Search help, and in which industries is it utilized?
Visual search is reshaping industries across the board, from e-commerce, fashion, home decor, collectibles and stock photography to the industrial sector, scientific research and security systems.
Can we have the automatic synchronization on a weekly or daily basis?
Yes, we are able to run automatic synchronization from your export, and you can choose both the method and frequency (daily, weekly, or monthly) of synchronization. We usually charge a small fee for the implementation of the synchronization script (depending on the complexity and format of the export).
In which formats can I provide my data export for visual search collection synchronization?
Your export should be accessible via a URL in a standard format such as JSON, XML, or CSV.
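As an illustration, a minimal JSON export could be generated like the sketch below; the field names are an assumption, and any consistent schema with a unique ID and an image URL per item works.

```python
import json

# An illustrative product export in JSON -- the field names below are
# only an example; any consistent schema with an image URL and a unique
# ID per item works.
export = [
    {
        "id": "sku-10001",
        "image_url": "https://example.com/img/sku-10001.jpg",
        "title": "A-line skirt",
        "category": "Skirts",
    },
    {
        "id": "sku-10002",
        "image_url": "https://example.com/img/sku-10002.jpg",
        "title": "Denim jacket",
        "category": "Jackets",
    },
]

# The export just needs to be reachable via a URL, e.g. a static file
# regenerated daily or weekly.
with open("product_export.json", "w") as f:
    json.dump(export, f, indent=2)
```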
How many images can be in a visual search collection?
Our largest collection contains more than 100 million images, and searching for visually similar items still takes only a few hundred milliseconds.
How do we keep our visual search collection updated?
You can keep your collection updated via the API. We provide standard endpoints for inserting, updating, or deleting items from the collection.
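To illustrate, keeping a collection in sync via the API could look roughly like the following sketch; the endpoint paths and payload fields are hypothetical placeholders, not the documented API.

```python
import requests

# Hypothetical collection-management endpoints; the actual paths and
# payloads are part of the API documentation you receive with your plan.
BASE_URL = "https://api.example-visual-search.com/v2/collection"
HEADERS = {"Authorization": "Token your-api-token"}

# Insert (or update) an item -- typically an ID plus an image URL.
requests.post(
    f"{BASE_URL}/insert",
    headers=HEADERS,
    json={"records": [{"id": "sku-10001",
                       "image_url": "https://example.com/img/sku-10001.jpg"}]},
    timeout=10,
)

# Delete an item that is no longer in your catalogue.
requests.post(
    f"{BASE_URL}/delete",
    headers=HEADERS,
    json={"records": [{"id": "sku-10001"}]},
    timeout=10,
)
```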
You can also provide your collection as an export file, and we will keep it updated for you. We usually charge a small fee for implementation of the synchronization script (depending on complexity and format of the export). You can choose the way and frequency of collection synchronization.
How fast is the search for one image?
If the image is already in the collection, then the search is very fast (under 100 ms). If you are searching with an external image, then this can take up to 0.5 seconds. We are able to scale the system to be even faster if needed. Contact us for details.
Can a visual search engine built by Ximilar be deployed on our servers?
Yes, the system can be easily deployed on your servers if needed. The server should have a standard Intel or AMD CPU and at least 96 GB of RAM, and ideally also an NVIDIA GPU. We use Docker technologies to help with the deployment.
How fast is the Text-to-image search by Ximilar?
The search itself happens in a matter of milliseconds. This system can handle hundreds of requests per second if needed.
How does text-to-image search powered by AI work? Do I need to provide tags?
Text-to-image search eliminates the need for metadata such as tags (keywords). It is powered by AI that understands both language and the complex visual characteristics of images.
Once you synchronize your image collection to our cloud, our AI extracts distinct structured information (embeddings) from your images. We do not store your images, only the extracted data. When a user enters a text query, our AI thoroughly analyzes it and presents the user with the most relevant content from your collection.
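As a rough sketch, a text-to-image query could look like this; the endpoint and field names are assumptions for illustration only.

```python
import requests

# Hypothetical text-to-image search call: the user types a plain-language
# query and receives the most relevant images from the synchronized
# collection. Endpoint and field names are illustrative only.
response = requests.post(
    "https://api.example-visual-search.com/v2/text-search",
    headers={"Authorization": "Token your-api-token"},
    json={"query": "red summer dress with floral pattern", "k": 20},
    timeout=10,
)

for hit in response.json().get("results", []):
    print(hit.get("id"), hit.get("score"))
```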
What is Furniture & Home Decor Search? How does it work?
Furniture & Home Decor Search (or Home Decor Search) is a complex service automating the detection, tagging and search of home decor products and furniture, powered by visual AI. It combines several solutions, namely the object detection of home decor products and furniture, their automatic tagging, and visual & similarity search.
In Furniture & Home Decor Search, all of these solutions work together, enabling you to automate all image processing on your website with a single AI-powered service accessible via API.
Do I need to have my own photo collection for Furniture & Home Decor Search?
Furniture & Home Decor Search consists of several AI-powered solutions, one of which is visual and similarity search. The visual search technology is always tailored to a specific type of images (e.g., images of furniture, interior decor, and so on), and to use it, your image collection needs to be synchronized to the Ximilar cloud. You can choose the frequency of synchronization and then add new images or products 24/7.
What solutions does Fashion Search combine? How does it work?
Fashion Search is an all-in-one solution for fashion e-shops, websites, price comparators and apps. It includes the following AI-powered solutions:
1. Object Detection & Fashion Tagging
Our Fashion Tagging automatically identifies and tags the fashion apparel in your images. It utilizes a hundred recognition models, hundreds of labels, and dozens of features, all linked into seamlessly interconnected flows, enabling you to add content 24/7. We continually enhance the quality and incorporate new fashion attributes like features, categories, and tags. Custom tags are also encouraged.
Standalone fashion tagging assigns tags to a single dominant fashion item in an image. However, when united with object detection, it’s capable of tagging all recognizable fashion apparel in an image.
2. Product Similarity & Search by Photo
These visual search solutions allow hyper-personalization of the customer experience. Product Similarity identifies and proposes items similar to the one your customer is currently viewing, leading to a boost in clicks of up to 380%.
Search by Photo accepts user-uploaded images, detects fashion apparel in them, and automatically suggests similar items from your inventory. This applies to real-life photos, user-generated content, as well as influencer photos and other social media content.
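Conceptually, Search by Photo chains two steps: detect the fashion items in the uploaded photo, then run a similarity query for each detected object against your inventory. A minimal sketch, assuming hypothetical endpoint and field names:

```python
import requests

HEADERS = {"Authorization": "Token your-api-token"}
USER_PHOTO = "https://example.com/uploads/street-style.jpg"

# Step 1: detect fashion apparel in the user-generated photo.
# (Endpoint names and fields are placeholders for illustration.)
detection = requests.post(
    "https://api.example-visual-search.com/v2/fashion/detect",
    headers=HEADERS,
    json={"image_url": USER_PHOTO},
    timeout=10,
).json()

# Step 2: for each detected object, search the inventory for similar items.
for obj in detection.get("objects", []):
    similar = requests.post(
        "https://api.example-visual-search.com/v2/fashion/search",
        headers=HEADERS,
        json={"image_url": USER_PHOTO,
              "bounding_box": obj["bounding_box"],
              "k": 6},
        timeout=10,
    ).json()
    print(obj.get("category"),
          [hit["id"] for hit in similar.get("results", [])])
```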
Which items and how many can Ximilar fashion AI detect in a photo?
There are several options:
Can Ximilar fashion AI analyze both product and real-life images?
Ximilar’s Fashion AI can analyze a range of images, from product photos to real-life ones, including user-generated content and social media posts.
Images can be submitted in any standard format (see the list of supported image formats) either as image URLs or as base64-encoded image data.
For reverse image searches, your image query should ideally contain a single dominant fashion item. Queries can use either single-product pictures or parts of more complex images, in which the specific items will be detected.
The AI can also process and remove backgrounds from product photos in bulk for a more cohesive catalogue. Additionally, you can choose which image category, such as model-worn product images or standalone product images, will be automatically displayed.
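For example, submitting a locally stored photo as base64-encoded data could look like the following sketch; the endpoint and payload fields are illustrative assumptions.

```python
import base64
import requests

# Images can be sent either as URLs or as base64-encoded data.
# Here the file is read and encoded locally; the endpoint and field
# names are placeholders, not the documented API.
with open("product-photo.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

requests.post(
    "https://api.example-visual-search.com/v2/fashion/tagging",
    headers={"Authorization": "Token your-api-token"},
    json={"records": [{"_base64": encoded}]},
    timeout=10,
)
```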
What are the differences between Ximilar fashion solutions for recognition, data enrichment and product search?
Fashion Tagging labels your fashion items, assigning categories (e.g., skirts), subcategories (e.g., A-line skirts) and tags (for color, design, pattern, length, rise, style…). By default, it provides data for the one main object in an image. A dedicated meta endpoint can also provide tags for the photography background, scene, or body parts in the fashion image.
Fashion Search is an all-encompassing solution, wrapping all typical fashion AI services into one. It integrates object detection, Fashion Tagging, Product Similarity and Search by Photo.
Both Fashion Tagging and Fashion Search include color analysis. The colors are supplied as tags and can serve for filtering and search on your website.
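For illustration, the structured output for a single tagged item might resemble the sketch below; the field names and values are assumptions, not the exact response schema.

```python
# An illustration of the kind of structured data Fashion Tagging returns
# for one detected item -- the exact response schema may differ.
example_tags = {
    "category": "Skirts",
    "subcategory": "A-line skirts",
    "tags": {
        "color": ["Red"],
        "pattern": ["Floral"],
        "length": ["Midi"],
        "style": ["Casual"],
    },
}

# The color tags can be fed directly into the filters on your website.
color_filter_values = example_tags["tags"]["color"]
print(color_filter_values)
```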
I only need a single fashion AI solution
All our fashion AI solutions can also be employed individually. Examples include product similarity, search by photo (reverse image search), fashion apparel detection, or color-based search.
Can Ximilar fashion AI identify apparel on people? Is it necessary to supply product photos for object detection?
Our fashion detection operates on both product images and real-life photographs, including user-generated content and social media posts. The detected fashion apparel can be either standalone or worn by people.
Apparel visibility
To achieve the most accurate results, the fashion apparel you wish to detect should be clearly visible, with minimal overlapping objects or folds, and not worn by individuals in uncommon poses. However, if your collection comprises numerous photos with unconventional positions, we can readily tailor the solution to perfectly suit your use case.
Do I need my own image collection for visual search? Can it be performed on images from social media?
Visual search and similarity search are always performed on a specific image collection.
Each image in this collection is processed exactly once during the synchronization run and is discarded immediately after processing; we do not store the images used for visual search. Our AI then works with a compact representation of your images and compares these representations for similarity.
The type and source of the images in your collection are up to you, as well as the frequency of the synchronization runs. Contact us via live chat or contact form for details.
What colors and which palettes can Ximilar AI extract? What is the format of the results?
We offer multiple options for dominant color extraction that you can select from. The results are provided in a structured format, usually JSON.
Dominant product vs. whole image
The product endpoint allows you to extract colors from a single dominant object in an image (product photo), whereas the generic endpoint extracts the dominant colors from the entire image, a mode typically used in stock photography.
Basic color for searching & filtering
This mode identifies one main color of the dominant object out of a total of 16 basic colors. The extracted color can be utilized as an attribute for filtering and searching fashion items.
Pantone palette: detailed color analysis
This mode provides a group of dominant colors, their hex codes, the closest Pantone name, and the percentage of the image each color covers. It is ideal for similarity search (search by color).
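To illustrate the shape of such a result, a parsed response might look roughly like this; the field names and values below are assumptions, not the exact output format.

```python
# An illustrative result of the Pantone-palette mode described above:
# a list of dominant colors with hex codes, the closest Pantone name,
# and the share of the image each color covers. All values and field
# names here are made up for the sake of the example.
palette = [
    {"hex": "#B22234", "pantone_name": "PANTONE 187 C", "coverage_percent": 46.2},
    {"hex": "#F5F0E6", "pantone_name": "PANTONE 7500 C", "coverage_percent": 38.5},
    {"hex": "#3C3B6E", "pantone_name": "PANTONE 281 C", "coverage_percent": 15.3},
]

# Pick the most dominant color, e.g. for a search-by-color feature.
dominant = max(palette, key=lambda c: c["coverage_percent"])
print(dominant["hex"], dominant["pantone_name"])
```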