How Deep Learning Makes Semantic Segmentation More Precise

By NAVEEN JOSHI

Getting semantic segmentation right is one of the more difficult challenges in computer vision, even for seasoned experts in the field. Several studies have shown that deep learning methods and models are well suited to the task, and the conceptual and practical solutions they offer for improving the quality and accuracy of semantic segmentation are promising signs for the future of computer vision.

Like any technology in the modern age, computer vision keeps developing on a near-continuous basis. Along this evolutionary path, several computer vision-related roadblocks must be overcome to make the technology applicable to an even wider array of areas than it covers today. These roadblocks involve getting certain computer vision-based tasks, such as image classification, instance segmentation and semantic segmentation, carried out correctly for real-world applications.

Semantic segmentation is the process of classifying each pixel in an image into a specific ‘class type.’ For example, for a picture featuring two dogs of different breeds (say, an Alsatian and a Pomeranian) playing near their kennel close to a garden, a semantic segmentation algorithm assigns every pixel to the entity it belongs to (grass, dog, kennel, among others) so that each class of entity can be highlighted in a different color.
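
To make the idea concrete, here is a minimal sketch (using NumPy, with hypothetical class IDs and colors for the dogs-and-kennel example) of how the per-pixel class map a segmentation model produces can be turned into the kind of color-coded image described above.

```python
import numpy as np

# Hypothetical class IDs and colors for the dogs-and-kennel example.
CLASS_COLORS = {
    0: (0, 0, 0),        # background
    1: (34, 139, 34),    # grass
    2: (255, 165, 0),    # dog
    3: (139, 69, 19),    # kennel
}

def colorize(label_map: np.ndarray) -> np.ndarray:
    """Map an H x W array of per-pixel class IDs to an H x W x 3 RGB image."""
    rgb = np.zeros((*label_map.shape, 3), dtype=np.uint8)
    for class_id, color in CLASS_COLORS.items():
        rgb[label_map == class_id] = color
    return rgb

# A tiny 4 x 4 "segmentation" just to show the mapping.
labels = np.array([
    [1, 1, 2, 2],
    [1, 2, 2, 1],
    [3, 3, 1, 1],
    [3, 3, 1, 0],
])
print(colorize(labels).shape)  # (4, 4, 3)
```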

Such ‘semantically segmented’ images can then be used for various purposes, such as handwritten text recognition, face detection, autonomous driving and many others.

Segmentation, whether semantics-based or instance-based, goes much deeper than object recognition because, unlike the latter, it does not require objects to be identified in an image before their pixels are labeled. Identifying an object in an image is relatively straightforward compared with capturing the difficult-to-discern specifics of each image. Ideally, an image segmentation algorithm segments not only recognized objects but also unknown, new ones.

Semantic segmentation can be used to improve existing algorithms for satellite imagery analysis, human-computer interaction and other applications. In such applications, having access to segmentations allows researchers to approach a problem at a semantic level. Ideally, the segmentation process should mirror the way we humans mentally categorize, or segment, the objects we see around us almost as soon as we see them.

As stated earlier, getting semantic segmentation right has been one of the main challenges for data scientists, programmers, hardware experts and other individuals working in the field of computer vision. Deep learning and deep neural networks can be used to increase the accuracy of semantic segmentation.

Semantic Segmentation Using Deep Learning

Traditionally, semantic segmentation was carried out using a concept known as clustering. Traditional segmentation algorithms were usually based on clustering, often guided by the contours and edges in an image. Satellite image segmentation, for example, was performed this way by clustering pixels according to their respective wavelengths, so that clusters formed wherever similar pixels were spatially close to one another. Returning to the dogs-and-kennel example, the pixels containing the animals would be clustered together based on their similar wavelengths.
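
As a rough illustration of the clustering idea, and not of any specific traditional algorithm, the sketch below groups pixels by their color values with k-means from scikit-learn, a simple stand-in for clustering by wavelength.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_segment(image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Cluster pixels by their color values (a stand-in for 'wavelengths')
    and return an H x W map of cluster labels."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)

# Random stand-in image; a real photo or satellite tile would go here.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
segments = cluster_segment(image, n_clusters=3)
print(segments.shape)  # (64, 64)
```

Clusters produced this way know nothing about what a dog or a kennel is; they only group similar-looking pixels, which is precisely the limitation that the deep learning methods discussed below address.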

Over time, the dated clustering approach has gone through a few iterations and evolutions. One such iteration was a well-known segmentation approach based on Markov random field models. While clustering and other traditional modes of segmentation had their strengths, their precision in fields such as virtual reality, facial recognition and autonomous vehicles was not enough for researchers to persist with them. As a result, most traditional segmentation methods have, one by one, become obsolete.

Deep learning requires massive datasets for the in-depth segmentation of images or videos, but deep learning models now set the state-of-the-art benchmark performances for semantic segmentation.

According to this research paper, segmentation with deep learning can be carried out in three ways:

a) Region-Based Semantic Segmentation

This method follows the ‘segmentation using recognition’ approach: it first extracts free-form regions from an image and describes them, and then performs region-based prediction and classification. At test time, the region-based predictions are converted into pixel predictions, with each pixel labeled according to the highest-scoring region that contains it. The method also borrows elements of object detection. Unlike traditional convolutional neural networks, which are mainly intended for image classification, region-based convolutional neural networks (R-CNN) can address more complicated tasks such as object detection and image segmentation, and R-CNN can serve as an extremely important basis for both fields.
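
The step where region-level predictions become pixel-level labels can be sketched very simply. The toy code below is an illustration of that idea rather than R-CNN itself: regions are assumed to be axis-aligned boxes carrying a class and a score, and each pixel takes the class of the highest-scoring region that covers it.

```python
import numpy as np

def regions_to_pixels(shape, regions):
    """Assign each pixel the class of the highest-scoring region covering it.
    `regions` is a list of (y0, y1, x0, x1, class_id, score) tuples, a
    simplified stand-in for the free-form region proposals described above."""
    label_map = np.zeros(shape, dtype=np.int64)   # 0 = background
    best_score = np.full(shape, -np.inf)
    for y0, y1, x0, x1, class_id, score in regions:
        window = best_score[y0:y1, x0:x1]
        better = score > window                   # pixels where this region wins
        label_map[y0:y1, x0:x1][better] = class_id
        window[better] = score
    return label_map

regions = [(2, 8, 2, 8, 1, 0.9), (5, 12, 5, 12, 2, 0.7)]
print(np.unique(regions_to_pixels((16, 16), regions)))  # [0 1 2]
```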

Despite the method's success in increasing the accuracy of semantic segmentation, the R-CNN-based approach is not perfect, and the paper goes on to list its flaws.

b) FCN-Based Semantic Segmentation

The Fully Convolutional Network (FCN) pipeline is an extension of the standard CNN. FCNs consist only of convolutional and pooling layers, with no fully connected layers, which gives them the ability to make predictions on inputs of any size, unlike ordinary CNNs.

The size of an FCN's segmentation output depends on the input size, which means the network does not produce a standard-sized output every time. As per the study, these kinds of networks are commonly used for local rather than global tasks (semantic segmentation or object detection, rather than image classification).
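
A minimal sketch of the fully convolutional idea, written here in PyTorch with arbitrary layer sizes: because there are no fully connected layers, the same network accepts inputs of different sizes and returns a per-pixel score map matched to each input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """A deliberately small fully convolutional sketch: no dense layers,
    so any input size works, and the output is upsampled back to it."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 1/4 resolution
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)   # per-pixel class scores

    def forward(self, x):
        scores = self.classifier(self.encoder(x))
        # Upsample to the original spatial size, whatever it was.
        return F.interpolate(scores, size=x.shape[-2:], mode="bilinear", align_corners=False)

model = TinyFCN()
for h, w in [(64, 64), (100, 180)]:    # different input sizes, no changes needed
    out = model(torch.randn(1, 3, h, w))
    print(out.shape)                   # (1, 4, h, w)
```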

The research paper found that this method also improves the precision of semantic segmentation.

c) Weakly Supervised Semantic Segmentation

Most of the commonly used methods in semantic segmentation rely on large numbers of images with pixel-wise masks or distinctly marked areas. However, manually annotating these segmentation masks is time-consuming, frustrating and costly for organizations. To counter that, weakly supervised methods can be used. Such methods facilitate semantic segmentation by relying only on annotated bounding boxes or even image-level labels.
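
The crudest form of box-level weak supervision can be sketched as follows: bounding-box annotations are rasterized into rough pixel masks that a segmentation network could then be trained against. This is only an illustration of the general idea, not any particular published method.

```python
import numpy as np

def boxes_to_weak_masks(shape, boxes):
    """Turn bounding-box annotations into a rough pixel-wise training mask:
    every pixel inside a box gets that box's class; later boxes win overlaps."""
    mask = np.zeros(shape, dtype=np.int64)   # 0 = background / unlabeled
    for y0, y1, x0, x1, class_id in boxes:
        mask[y0:y1, x0:x1] = class_id
    return mask

# Two hypothetical boxes: class 1 and class 2.
weak_mask = boxes_to_weak_masks((32, 32), [(4, 20, 4, 20, 1), (16, 30, 16, 30, 2)])
print(np.bincount(weak_mask.ravel()))        # pixel counts per class
```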

What Deep Learning Brings to Semantic Segmentation

Now that we have seen some of the ways in which precise semantic segmentation can be carried out using deep learning, here are some examples of the advantages deep learning brings to the process:

a) Semantic Segmentation of High-Resolution Electron Microscopy Images

According to this research study, deep learning models also achieve high precision when segmenting images generated by electron microscopes. Electron microscopes can resolve detail down to the scale of nanoparticles, so the images such a device generates are bound to be of incredibly high resolution. Ultra-high-resolution images pose several unique challenges for semantic segmentation, yet modern deep learning models can segment them with great speed and accuracy in nearly any circumstance.

Due to deep learning’s success in this regard, researchers can use electron microscopes to study chemical reactions involving solvents, industrial catalysts and other nanoscopic materials with accurate segmentation.

b) Semantic Segmentation Brought to Edge Devices

AI researchers at the University of Waterloo and DarwinAI, a Canadian AI company, have created a neural network architecture that makes segmentation possible on low-power edge computing devices too. This is a major breakthrough: it was generally believed that segmentation of any kind, instance-based or semantics-based, required organizations to use large, resource-intensive neural networks, which made such deep learning models difficult to run without a connection to cloud servers.

The partnership between DarwinAI and the University of Waterloo has yielded a system, named AttendSeg, that provides accurate and fast segmentation yet is compact enough to fit on smaller devices. According to its creators, the main goal was to bring semantic segmentation to the device level, on endpoint devices everywhere.

AttendSeg is described as a “low-precision, highly compact deep semantic segmentation network” made for “TinyML networks.” The ‘low-precision’ label does not mean that the segmentation the system carries out is imprecise; it refers to the reduced numerical precision of the network's parameters. In fact, the system, which performs semantic segmentation using deep learning, achieves accuracy nearly equal to that of other deep learning-based semantic segmentation networks.
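
As a generic illustration of what ‘low precision’ buys, and not of AttendSeg's actual quantization scheme, the sketch below stores float32 weights as 8-bit integers plus a scale factor, cutting memory use roughly four-fold at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Store float32 weights as int8 plus a scale factor (symmetric quantization)."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(weights)
print(weights.nbytes, q.nbytes)                        # 262144 vs 65536 bytes
restored = q.astype(np.float32) * scale
print(float(np.abs(weights - restored).max()) < scale) # error below one quantization step
```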

Astonishingly, the neural network model takes up only about one megabyte of storage, so it can be downloaded and installed on any edge device without fear of system crashes or significant lag. To keep the model this small, AttendSeg uses “attention condensers,” so its segmentation performance changes little compared with that of much larger networks.

The neural network’s compact, efficient nature means that it will soon become a hot favorite amongst prospective buyers, and it can be used for a whole host of operations that include medical applications, manufacturing and many others.

Although deep learning is not yet extensively used for semantic segmentation work today, one can certainly expect the combination to become commonplace in the future.
