Synthetic images with Flip

December 23, 2022

By Juan Felipe Montenegro Torres

Synthetic images are images created by computer processes, from a company logo to the rendering of an imaginary city. Today, a large share of the images found on the Internet are created artificially; that is, they are not photographs or paintings but are generated entirely on a computer.

These representations are important in many areas: logos in the business world, images on social networks, CGI scenes in movies, and data for computer vision, among many other examples.

In this article we present the new updates to Flip, a Python library designed for creating synthetic images. Specifically, we will show an example that augments the number of images in the “dogs-cats-horses-humans-dataset” published on Kaggle and compare the created images with the originals.

For this, two horse images were taken with their respective segmentations, carried out on the LinkedAI platform, and the objects were extracted with the code presented in crop_image_from_mask.ipynb, as shown below:
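The notebook itself is not reproduced here, but the core operation it performs, keeping only the pixels covered by the segmentation mask and cropping them to their bounding box, can be sketched with NumPy. The function name and array conventions below are assumptions for illustration, not the notebook's actual code:

```python
import numpy as np

def crop_object_from_mask(image, mask):
    """Extract the masked object as an RGBA cutout cropped to its bounding box.

    image: (H, W, 3) uint8 array; mask: (H, W) boolean array (True = object).
    """
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1]
    # Stack an alpha channel so pixels outside the mask become transparent.
    alpha = (mask[y0:y1, x0:x1] * 255).astype(np.uint8)
    return np.dstack([crop, alpha])

# Example: a 4x4 image with a 2x2 "object" in the middle.
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
m = np.zeros((4, 4), dtype=bool)
m[1:3, 1:3] = True
cutout_obj = crop_object_from_mask(img, m)
print(cutout_obj.shape)  # (2, 2, 4)
```

The transparent background is what later lets the object be pasted cleanly onto a new scene.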

Images of horses with segmentation made on the LinkedAI platform

Then an image of a prairie and an image of a desert were taken to serve as the backgrounds of the created images.

Image of a prairie and a desert.

Among the library's updates, multiple transformations make it possible to create many images from the objects obtained previously, including grayscale conversion, flips, random resizing, and contrast, saturation, brightness, and noise adjustments.

Following the example presented in the repository, the algorithm was configured to use the color ‘gray’ to obtain the objects in grayscale, to perform a flip on the ‘y’ axis, and to apply a random resize between 0.2 and 0.7; a contrast value of 0.8 and a saturation of 1.4 were also used. Additionally, in each transformation the ‘force’ parameter was set to False so that the library randomly decides whether or not to apply it, and the placement of the objects was limited to between 0.5 and 1 on the ‘y’ axis so that the horses stay close to the ground.
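For intuition, the operations described above (grayscale conversion, a flip on the ‘y’ axis, a random resize factor, contrast adjustment, and the probabilistic behaviour of force=False) can be approximated with plain NumPy. These helper names are illustrative only and are not Flip's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_gray(img):
    # Luminance-weighted grayscale, replicated back to 3 channels.
    g = img @ np.array([0.299, 0.587, 0.114])
    return np.repeat(g[..., None], 3, axis=-1)

def flip_y(img):
    # Mirror the image across the 'y' axis (a horizontal flip).
    return img[:, ::-1]

def random_resize_factor(low=0.2, high=0.7):
    # A random scale factor like the 0.2-0.7 range used for the horses.
    return rng.uniform(low, high)

def adjust_contrast(img, factor=0.8):
    # Scale pixel deviations from the mean; factor < 1 lowers contrast.
    mean = img.mean()
    return np.clip(mean + factor * (img - mean), 0, 255)

def maybe(transform, img, p=0.5):
    # force=False behaviour: apply the transform only some of the time.
    return transform(img) if rng.random() < p else img
```

Flip wraps these kinds of operations in transformer objects that can be composed into a pipeline.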

On the other hand, ‘avg_blur’ noise was applied to the backgrounds, and a brightness value of 1.2 to the created image, to add greater variety to the final images, as shown in the following lines of code:

# Noise applied to the backgrounds and brightness applied to the created image:
transform_backgrounds = [
    flip.transformers.data_augmentation.Noise(noise='avg_blur', force=False)]
transform_created = [
    flip.transformers.data_augmentation.Brightness(1.2, force=False)]

# Pipeline that positions the objects, draws them onto the background,
# creates the masks, and saves the results:
transform = flip.transformers.Compose([
    flip.transformers.domain_randomization.ObjectsRandomPosition(
        x_min=0, y_min=0.3, x_max=0.7, y_max=0.6, mode='percentage'),
    flip.transformers.domain_randomization.Draw(),
    flip.transformers.labeler.CreateMasks(classes_names),
    flip.transformers.io.SaveImage(out_dir, name),
    flip.transformers.io.SaveMask(out_dir, name),
    flip.transformers.io.SaveJson(out_dir, name)])
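The ‘avg_blur’ noise used on the backgrounds is an average (box) blur, where each output pixel becomes the mean of its neighborhood. A minimal NumPy sketch of that operation on a grayscale image, assuming this interpretation rather than reproducing Flip's implementation, is:

```python
import numpy as np

def avg_blur(img, k=3):
    """Box blur: each output pixel is the mean of its k x k neighborhood.

    img: (H, W) float or int array (grayscale); edges are replicated.
    """
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    # Sum the k*k shifted copies of the image, then divide by the window size.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

Blurring the background slightly while keeping the pasted object sharp mimics the depth-of-field of a real photograph.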

This produces images like the ones shown below, which at first glance look like horses standing on the ground of the chosen backgrounds, some closer than others and in different colors:

Images created with Flip

Comparing them with the original images of the dataset shows that Flip produces images quite close to the originals, making it an effective tool for creating synthetic images, as shown below:

Comparison images.

For this specific case, a single object was placed on each background with realistic transformations. However, Flip can also create images with more than one object, as well as completely random images that combine multiple noises, objects, image cutouts, and other transformations widely used for data augmentation in computer vision projects, as described in the repository documentation.
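As one concrete example of the augmentations mentioned, a cutout erases a random patch of the image. A minimal NumPy sketch of the idea (not Flip's API) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def cutout(img, size=8):
    """Zero out a random size x size patch of the image (random erasing)."""
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - size + 1))
    x = int(rng.integers(0, w - size + 1))
    out = img.copy()
    out[y:y + size, x:x + size] = 0
    return out

example = np.ones((32, 32), dtype=np.uint8)
erased = cutout(example, size=8)
```

Erasing random regions during training encourages a model not to rely on any single part of the object.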

Finally, this process can also be done with segmentation masks and bounding boxes, as shown in “How to create your own Synthetic Data for computer vision applications” and “Creating Synthetic Images with Flip”, respectively.

The complete code is available in the library’s GitHub repository.
