The Magic Behind High-Quality Image Labeling: A Dive into LinkedAI's Magic Tool

August 29, 2023

Steven Parra

In today's rapidly evolving world of computer vision, the quality of labeled data is paramount. High-quality labels can be the difference between a mediocre machine learning model and a top-performing one. For developers, machine learning engineers, and computer vision enthusiasts, the quest for the perfect labeling tool is never-ending.

Meet LinkedAI's Magic Tool, a state-of-the-art automatic segmentation tool that promises not just efficiency, but unparalleled accuracy. But does it live up to the hype? Let's dive in.

Image 1: Magic tool annotating fish

1. The Magic of Automatic Segmentation

In the realm of computer vision, the accuracy of any model depends significantly on the quality of the data it’s trained on. This is particularly true for tasks like instance segmentation where the aim is to identify and outline every individual object in an image. The challenge is twofold: ensuring that each object is correctly identified and that its boundaries are accurately traced.

LinkedAI’s Magic Tool is a revolutionary tool in the world of image labeling. But what powers this magic?

At the heart of the Magic Tool lies SAM, short for “Segment Anything” by Meta. SAM is a foundation model built for promptable image segmentation: it identifies and outlines each object in an image, even when multiple objects overlap or belong to the same category. Its versatility comes from its ability to work effectively across diverse scenarios and object types. The essence of SAM is its ability to “segment anything,” hence the apt name.

The integration of SAM into the Magic Tool allows for rapid segmentation of objects during the labeling process. When a user uploads a set of images for labeling onto the LinkedAI platform, the Magic Tool leverages the power of SAM to automatically detect and segment objects. This not only accelerates the labeling process but also enhances precision, making the task more efficient and less prone to human error.

The true marvel of the Magic Tool, beyond its name, is rooted in its embedding generation process. Upon uploading a dataset, users press a single button and the Magic Tool generates embeddings for every image in the set. This technique not only accelerates the labeling workflow but also ensures uniformity and consistency across the entire dataset.

But the capabilities of the Magic Tool don’t stop at embedding and segmentation. It offers an interactive preview, allowing users to visualize segmentations before making them final. Through this interface, users can enrich the segmentation by adding positive points to emphasize objects they deem significant, and negative points to demarcate areas as the background. This dual-pronged approach, combining automation with user-directed refinement, guarantees segmentations that are both rapid and tailored meticulously to user needs.
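For readers curious how positive and negative point prompts translate into code, here is a minimal sketch in the style of Meta's open-source `segment_anything` package. The helper function, coordinates, and checkpoint path are illustrative assumptions, not LinkedAI's internal implementation.

```python
# Sketch of how a SAM-style predictor consumes point prompts:
# label 1 = foreground point (keep), label 0 = background point (exclude).
import numpy as np

def build_point_prompts(positive, negative):
    """Pack user clicks into the (coords, labels) arrays SAM expects."""
    coords = np.array(positive + negative, dtype=np.float32)
    labels = np.array([1] * len(positive) + [0] * len(negative), dtype=np.int32)
    return coords, labels

coords, labels = build_point_prompts(
    positive=[(420, 310)],           # a click on the fish
    negative=[(60, 40), (700, 80)],  # clicks marking background
)

# With the real model loaded (checkpoint downloaded separately),
# prediction would look like:
# from segment_anything import sam_model_registry, SamPredictor
# sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
# predictor = SamPredictor(sam)
# predictor.set_image(image)            # embeddings computed once per image
# masks, scores, _ = predictor.predict(
#     point_coords=coords, point_labels=labels, multimask_output=True)
```

Because the embedding is computed once per image, each new click only re-runs the lightweight mask decoder, which is what makes the interactive preview feel instant.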

In summary, the Magic Tool, powered by SAM, offers a blend of automation and precision. It streamlines the labeling process, ensuring that machine learning models, like the one used for fish instance segmentation in this project, are trained on high-quality data. The result? Models that are not just accurate but also efficient, standing testament to the prowess of tools like SAM and the Magic Tool.

2. Dataset Composition & Preprocessing Masks: A Closer Look

Background of the Dataset:

This project utilized a subset of a larger dataset collected from a supermarket in Izmir, Turkey, as part of a university-industry collaboration at Izmir University of Economics. The complete dataset encompasses nine different seafood types, including the ones used in this project, and was published in the ASYU 2020 conference. If you’re interested in the larger dataset and its results, consider referring to the following publication:

O. Ulucan, D. Karakaya, and M. Turkan (2020). A large-scale dataset for fish segmentation and classification. In Proc. Innovations in Intelligent Systems and Applications Conference (ASYU).

Dataset Breakdown:

At the heart of any machine learning project lies the dataset, and this project is no exception. The dataset was meticulously curated and consisted of images from five distinct fish categories. Here’s a granular breakdown:

Black Sea Sprat: 22 images

Gilt-head Bream: 16 images

Horse Mackerel: 18 images

Red Mullet: 21 images

Red Sea Bream: 11 images

Such a well-defined dataset ensures that the model has a diverse range of images to learn from, even if the total count might seem modest. Data augmentation during the fine-tuning phase further amplifies the available data, exposing the model to a broader range of scenarios and variations. This not only prevents overfitting but also makes the model robust, without the need to collect thousands of images.
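As an illustration of joint image-and-mask augmentation, here is a pure NumPy sketch; it is not the project's exact pipeline, just the core idea that image and mask must be transformed together so labels stay aligned.

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Randomly flip an image and its mask together so labels stay aligned."""
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        image, mask = image[::-1, :], mask[::-1, :]   # vertical flip
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
msk = (img > 5).astype(np.uint8) * 255   # toy binary mask

aug_img, aug_msk = augment_pair(img, msk, rng)
# Whatever flips fired, the mask moved with the image, so the number of
# foreground pixels is unchanged and they still cover the same content.
```

Real pipelines add rotations, crops, and photometric jitter, but the principle is the same: every geometric transform is applied identically to the mask.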

Masks Preprocessing:

Post-labeling, the next significant step was preprocessing these masks. Converting them into binary format is essential to ensure a clear distinction: the background is set to 0 and the object of interest to 255. Given the unique structure of the dataset, where each image focused on a single fish type, renaming the masks was a vital step. By embedding the category information within each mask's name, potential confusion in the subsequent modeling stages was preempted.
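A minimal sketch of that preprocessing step; the threshold value and the naming scheme here are illustrative assumptions, not the project's exact conventions.

```python
import numpy as np

def binarize(mask, threshold=127):
    """Map a grayscale mask to binary: background -> 0, object -> 255."""
    return np.where(mask > threshold, 255, 0).astype(np.uint8)

def mask_filename(category, index):
    """Embed the category in the filename, e.g. 'red_mullet_0007_mask.png'."""
    return f"{category.lower().replace(' ', '_')}_{index:04d}_mask.png"

# A toy 2x3 grayscale mask with mixed intensity values:
raw = np.array([[0, 90, 200], [255, 128, 10]], dtype=np.uint8)
clean = binarize(raw)                # only 0s and 255s remain
name = mask_filename("Red Mullet", 7)
```

Keeping the category in the filename means the training script can recover the class label without a separate lookup table.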

It’s worth noting that the model wasn’t trained from scratch. Instead, leveraging a pre-trained model and fine-tuning it with our specific dataset, combined with data augmentation, proved to be a more efficient and effective approach. This strategy not only saved computational resources and time but also capitalized on the knowledge captured in the pre-trained model, adapting it to our specific task.

Image 2: A sample of an image with its binary mask

3. Detectron2: The Powerhouse for Instance Detection

For this project, Detectron2 was the library of choice for instance detection. With Mask R-CNN as the backbone, it was primed for success. Mask R-CNN, a widely acclaimed model for object detection and segmentation, combined with the high-quality labeled data, set the stage for impressive results.
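A typical Detectron2 fine-tuning setup for five classes looks like the sketch below. The dataset names, iteration count, and learning rate are illustrative assumptions, not the project's exact configuration.

```python
# Fine-tuning a COCO-pretrained Mask R-CNN on a small custom dataset
# with Detectron2's model zoo.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # pretrained start
cfg.DATASETS.TRAIN = ("fish_train",)   # assumed registered dataset names
cfg.DATASETS.TEST = ("fish_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 5    # the five fish categories
cfg.SOLVER.BASE_LR = 0.00025           # illustrative hyperparameters
cfg.SOLVER.MAX_ITER = 1500             # short schedule for a small dataset

# Training then follows the standard pattern:
# from detectron2.engine import DefaultTrainer
# trainer = DefaultTrainer(cfg)
# trainer.resume_or_load(resume=False)
# trainer.train()
```

Starting from COCO weights is what lets a dataset of under a hundred images reach strong accuracy: only the ROI heads need to adapt to the new classes.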

4. The Proof is in the Results

And the results did impress! You can follow along in our Google Colab notebook. The evaluation metrics speak volumes about the efficacy of the entire process:

Bounding box evaluation:

AP (IoU=0.50:0.95) = 91.46%

AP50 = 100.00%

AP75 = 100.00%

Category-wise AP:

Black Sea Sprat: 89.61%

Gilt Head Bream: 90.43%

Horse Mackerel: 92.01%

Red Mullet: 92.63%

Red Sea Bream: 92.61%

Segmentation mask evaluation:

AP (IoU=0.50:0.95) = 94.43%

AP50 = 100.00%

AP75 = 100.00%

Category-wise AP:

Black Sea Sprat: 90.14%

Gilt Head Bream: 100.00%

Horse Mackerel: 90.88%

Red Mullet: 92.20%

Red Sea Bream: 98.93%
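For context, every AP figure above rests on intersection-over-union (IoU) between a predicted region and its ground truth; AP (IoU=0.50:0.95) averages precision over IoU thresholds from 0.50 to 0.95. A minimal box-IoU sketch:

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2); returns a value in [0, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by half: intersection 50, union 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```

So AP75 = 100% means that even at the strict 0.75-overlap threshold, every detection matched its ground-truth fish.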

Image 3: Loss functions smoothed

The steady decline in the loss function, as visualized in the graph, indicates that our model effectively learned and adapted during the training phase. A decreasing loss implies that the predictions made by the model progressively aligned with the actual labels of the training data.
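Smoothed loss curves like the one above are commonly produced with an exponential moving average (TensorBoard's default smoother); whether this particular graph used exactly that is an assumption, but the idea is simple:

```python
def ema_smooth(values, weight=0.9):
    """TensorBoard-style smoothing: higher weight -> smoother curve."""
    smoothed, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v   # exponential moving average
        smoothed.append(last)
    return smoothed

noisy = [1.0, 0.8, 0.9, 0.5, 0.6, 0.3]
print(ema_smooth(noisy, weight=0.5))  # same downward trend, less jitter
```

Smoothing makes the underlying trend legible, but decisions like early stopping should still consider the raw values.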

While these numbers are impressive, they underscore a vital point: the quality of labeled data plays a pivotal role in achieving top-notch model performance. And this is where the Magic Tool truly shines. The seamless labeling process, combined with the power of Detectron2 and Mask R-CNN, led to these remarkable results.

5. Seeing the Model in Action

Sample Predictions:

Image 4: An unseen image of a “Gilt Head Bream” with the predicted bounding box and segmentation mask overlaid.

Image 5: An unseen image of a “Red Mullet” with the predicted bounding box and segmentation mask overlaid.

6. Magic Tool Limitations (SAM)

While SAM (Segment Anything by Meta) is a powerful and versatile tool for object segmentation, there are some crucial limitations to be aware of:

  1. Small Object Detection: Depending on the resolution of the images and the model configuration, SAM may struggle to detect and segment very small objects. This can be particularly problematic in images with many small objects or in applications where correctly identifying and segmenting small objects is crucial.
  2. Pixel Border Accuracy: For semantic segmentation, it is crucial that objects are perfectly delineated, right down to the pixel border. When manually labeling, one can ensure that each pixel is correctly labeled, but with SAM, there are sometimes intermediate pixels left unlabeled. This can be a significant issue, especially in applications where pixel border accuracy is critical.
  3. Shadow and Lighting Conditions: SAM may have difficulties in accurately segmenting images with suboptimal lighting conditions or significant shadows. The model was trained on a dataset collected under ideal lighting conditions, which is often not the case in real-world projects. As a result, the performance of SAM (and therefore the Magic Tool) may be suboptimal when dealing with images that have poor lighting or significant shadows.
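One common mitigation for the second limitation is to post-process masks, for example by filling fully enclosed unlabeled pixels. A sketch using SciPy's `binary_fill_holes` (a post-processing step you would run yourself, not a feature of the Magic Tool):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

mask = np.array([
    [0, 255, 255, 255, 0],
    [0, 255,   0, 255, 0],   # an unlabeled pixel left inside the object
    [0, 255, 255, 255, 0],
], dtype=np.uint8)

# Fill interior holes, then restore the 0/255 convention.
filled = binary_fill_holes(mask > 0).astype(np.uint8) * 255
print(filled[1, 2])  # the interior hole is now foreground
```

Note this only repairs pixels fully enclosed by the object; boundary inaccuracies still require manual refinement with positive and negative points.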

You can find more information about SAM's limitations in “Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications”. Note that the dataset used for this project was collected under near-ideal conditions; real-world datasets are often less forgiving, so labeling results with SAM (and the Magic Tool) may fall short of those shown here.

It is crucial to be aware of these limitations when deciding whether SAM is the right tool for a specific project and when interpreting SAM’s results. Although it is a powerful tool, it may not be the best option for all use cases.


For developers and machine learning engineers navigating the vast sea of computer vision, tools like LinkedAI’s Magic Tool are not just a luxury; they’re a necessity. The project with fish image labeling has shown that with the right tools, achieving high-quality results is not just possible, but expected.

As the computer vision community continues to grow and evolve, tools like the Magic Tool will be at the forefront, ensuring that data labeling is efficient, accurate, and, dare we say, a little bit magical.

Impressed by the power of the Magic Tool? Experience the magic of high-quality automatic segmentation for yourself! Click here to start your journey with LinkedAI’s Magic Tool today.
