Machine Learning For Dummies
Machine learning on images works because it can rely on features to compare pictures and associate an image with another one (because of similarity) or with a specific label (guessing, for instance, the represented objects). People can easily pick out a car or a tree when they see one in a picture. Even if it's the first time you've seen a certain kind of tree or car, you can correctly associate it with the right object (labeling) or compare it with similar objects in memory (image recall).

In the case of a car, having wheels, doors, a steering wheel, and so on are all elements that help you categorize a new example of a car among other cars. This happens because you see shapes and elements beyond the image itself; thus, no matter how unusual a tree or a car may be, if it has certain characteristics, you can figure out what it is.

An algorithm can infer elements (shapes, colors, textures, and other relevant details) directly from pixels only when you prepare the data for it. Apart from convolutional networks, special kinds of neural networks that rank as the state of the art in image recognition because they can extract useful features from raw images by themselves, you always have to prepare the right features when working with images.

Feature preparation from images is like assembling a jigsaw puzzle: you have to figure out every relevant detail, texture, or set of corners represented inside the image in order to recreate the picture from its pieces. All this information serves as the image features and makes up a precious input for any machine learning algorithm to complete its job.
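
As a minimal sketch of what such preparation can look like, the following Python snippet turns an image into a feature vector by computing a histogram of oriented gradients (HOG) with Scikit-image. The built-in sample photograph and the HOG parameters are only assumptions for illustration; replace them with your own picture and settings.

from skimage import data
from skimage.feature import hog

image = data.camera()  # a built-in grayscale photograph, standing in for your own image
# Summarize local edge directions into one long feature vector
features = hog(image, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1))
print(features.shape)  # a single vector a machine learning algorithm can use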

Convolutional neural networks filter information across multiple layers, training the parameters of their convolutions (kinds of image filters); thus they can filter out only the features relevant to the images and the tasks they're trained to perform. Other special layers, called pooling layers, help the neural net catch these features even under translation (when they appear in unusual parts of the image) or rotation.
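
As a rough illustration of this layered structure, here is a sketch that stacks convolution and pooling layers using Keras (a library chosen only for this example; the input size and number of classes are arbitrary assumptions).

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),               # 64x64 grayscale images (assumed)
    layers.Conv2D(16, (3, 3), activation='relu'),  # learnable image filters
    layers.MaxPooling2D((2, 2)),                   # pooling adds tolerance to translation
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax')         # e.g., 10 object classes (assumed)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()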

Applying deep learning requires special techniques and machines able to sustain the heavy computational workload. The Caffe library, developed by Yangqing Jia from the Berkeley Vision and Learning Center, lets you build such neural networks and also leverage existing pretrained ones.

A pretrained neural network is a convolutional network trained on a large number of varied images, thus learning how to filter out a large variety of features for classification purposes. The pretrained network lets you input your images and obtain, as an output, a large number of values, each corresponding to a score on a certain kind of feature the network previously learned. The features may correspond to a certain shape or texture.

What matters to your machine learning objectives is that the most revealing features for your purpose are among those produced by the pretrained network, so you must choose the right features by making a selection using another neural network, an SVM, or a simple regression model.
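
The following sketch shows that selection step with Scikit-learn. The feature matrix is assumed to come from a pretrained network (one image per row, one learned feature score per column); random numbers stand in for it here so the example runs on its own.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.rand(200, 512)             # 200 images x 512 pretrained-network feature scores (placeholder data)
y = rng.randint(0, 2, size=200)    # binary labels for the example

# Keep only the most revealing features, then classify with an SVM
clf = make_pipeline(SelectKBest(f_classif, k=50), LinearSVC())
clf.fit(X, y)
print(clf.score(X, y))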

When you can’t use a convolutional neural network or pretrained library (because of memory or CPU constraints), OpenCV or some Scikit-image functions can still help. For instance, to emphasize the borders of an image, you can apply a simple process using Scikit-image, as shown here:

import matplotlib.pyplot as plt
from skimage import data, measure

# A sample grayscale image rescaled to [0, 1]; replace it with your own picture
image = data.coins() / 255.0

# Find the iso-valued contours at the 0.55 gray level
contours = measure.find_contours(image, 0.55)

# Display the image and overlay every contour found
plt.imshow(image, cmap=plt.cm.gray)
for contour in contours:
    plt.plot(contour[:, 1], contour[:, 0], linewidth=2)
plt.axis('image')
plt.show()

You can read more about finding contours and other algorithms for feature extraction (histograms, corner detection, and blob detection) in the Scikit-image tutorials.

About the book authors:

John Mueller has produced 114 books and more than 600 articles on topics ranging from functional programming techniques to working with Amazon Web Services (AWS). Luca Massaron, a Google Developer Expert (GDE), interprets big data and transforms it into smart data through simple and effective data mining and machine learning techniques.
