
Scikit Image - ORB Feature Detection
ORB, which stands for "Oriented FAST and Rotated BRIEF," is a popular feature detection and descriptor extraction technique in computer vision and image processing. It detects distinctive feature points in images and describes them with binary descriptors, combining two key components: the oriented FAST feature detector and the rotated BRIEF descriptor extractor.
The scikit-image library (skimage) provides a user-friendly and flexible way to work with ORB through its ORB class, which is located in the feature submodule of the library.
Using the skimage.feature.ORB class
The skimage.feature.ORB class is a feature detector and binary descriptor extractor in the scikit-image library (skimage).
Syntax
Following is the syntax of this class −
class skimage.feature.ORB(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04)
Parameters
Here's an explanation of its key parameters −
n_keypoints (int, optional): Specifies the number of keypoints to return. If more than n_keypoints are detected, the function returns the best n_keypoints ranked by Harris corner response; otherwise, all detected keypoints are returned.
fast_n (int, optional): The n parameter of the FAST corner detection algorithm (skimage.feature.corner_fast). It is the minimum number of consecutive pixels, out of the 16 pixels on the circle, that must all be either brighter or darker than the test pixel. A point c on the circle is considered darker than the test pixel p if Ic < Ip - threshold and brighter if Ic > Ip + threshold.
fast_threshold (float, optional): The threshold parameter of the FAST corner detection algorithm (skimage.feature.corner_fast), used to decide whether the pixels on the circle are brighter, darker, or similar with respect to the test pixel. Decrease the threshold when more corners are desired and increase it when fewer are needed.
harris_k (float, optional): The sensitivity factor (k) of the Harris corner detection algorithm (skimage.feature.corner_harris). It typically lies in the range [0, 0.2]; small values of k favor the detection of sharp corners.
downscale (float, optional): Downscale factor of the image pyramid. The default value of 1.2 is chosen to give densely spaced scales, enabling robust scale invariance for subsequent feature description.
n_scales (int, optional): The maximum number of scales from the bottom of the image pyramid from which to extract features.
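The parameters above can be combined when constructing the detector. The following is a minimal sketch using the bundled skimage.data.camera() test image; the specific parameter values are arbitrary choices for illustration, not recommendations:

```python
from skimage.feature import ORB
from skimage import data

# Construct an ORB detector with explicit (illustrative) settings:
# a slightly coarser pyramid and a smaller keypoint budget than default.
orb = ORB(downscale=1.5, n_scales=4, n_keypoints=50,
          fast_n=9, fast_threshold=0.08, harris_k=0.04)

image = data.camera()  # built-in grayscale test image

# Detect keypoints and extract descriptors in one pass
orb.detect_and_extract(image)

# At most n_keypoints are kept, ranked by Harris corner response
print(len(orb.keypoints))
```

Lowering fast_threshold or raising n_keypoints yields more (but weaker) corners; the defaults are a reasonable starting point for most images.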
The ORB class has several associated attributes, which store information about the keypoints and descriptors detected and extracted from images.
Here's an explanation of these attributes −
keypoints (N, 2) array: It contains the coordinates of keypoints as (row, col).
scales (N, ) array: This array holds the corresponding scales for the keypoints.
orientations (N, ) array: The orientations array contains the corresponding orientations of the keypoints in radians.
responses (N, ) array: This array stores the corresponding Harris corner responses for each keypoint.
descriptors (Q, descriptor_size) array of dtype bool: A 2D array of binary descriptors, one row of descriptor_size intensity comparisons per keypoint, computed after filtering out keypoints that lie too close to the image border. The value at index (i, j) is True or False and represents the outcome of the intensity comparison for the i-th keypoint on the j-th decision pixel-pair. Q equals the number of keypoints that survive this border filtering, i.e. the sum of non-zero elements in the filtering mask.
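A short sketch of inspecting these attributes after a detect-and-extract pass on a built-in test image; the descriptor width of 256 noted in the comments assumes scikit-image's default rBRIEF descriptor size:

```python
from skimage.feature import ORB
from skimage import data

orb = ORB(n_keypoints=100)
orb.detect_and_extract(data.camera())

# Each attribute is aligned with the detected keypoints
print(orb.keypoints.shape)     # (N, 2) row/col coordinates
print(orb.scales.shape)        # (N,) pyramid scale per keypoint
print(orb.orientations.shape)  # (N,) orientation in radians
print(orb.responses.shape)     # (N,) Harris corner response
print(orb.descriptors.shape)   # (Q, 256) boolean rBRIEF descriptors
print(orb.descriptors.dtype)   # bool
```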
This class provides several methods for detecting keypoints and extracting binary descriptors from images.
Here is an explanation of each method and its parameters −
detect(image): This method detects oriented FAST keypoints along with the corresponding scale.
Parameters
image (2D array): The input image in which keypoints will be detected.
detect_and_extract(image): This method detects oriented FAST keypoints and extracts rBRIEF descriptors. This operation is faster than first calling detect and then extract.
Parameters
image (2D array): The input image in which keypoints will be detected and descriptors will be extracted.
extract(image, keypoints, scales, orientations): This method extracts rBRIEF binary descriptors for a given set of keypoints in an image. It is important to note that the provided keypoints must match those extracted using the same downscale and n_scales parameters.
Parameters
image (2D array): The input image from which descriptors will be extracted.
keypoints ((N, 2) array): Keypoint coordinates as (row, col) for N keypoints.
scales ((N, ) array): Corresponding scales for the N keypoints.
orientations ((N, ) array): Corresponding orientations in radians for the N keypoints.
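The two-step workflow (detect followed by extract) described above can be sketched as follows, again using the bundled camera test image:

```python
from skimage.feature import ORB
from skimage import data

image = data.camera()
orb = ORB(n_keypoints=100)

# Step 1: detect oriented FAST keypoints; this fills the keypoints,
# scales, orientations, and responses attributes
orb.detect(image)
keypoints = orb.keypoints
scales = orb.scales
orientations = orb.orientations

# Step 2: extract rBRIEF descriptors for those keypoints; the keypoints
# must come from the same downscale / n_scales configuration
orb.extract(image, keypoints, scales, orientations)
print(orb.descriptors.shape)
```

When keypoints and descriptors are both needed for the same image, detect_and_extract performs the same work in a single, faster call.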
Example
This example demonstrates the use of the ORB feature detector and descriptor extractor on two images, and then matches keypoints between them.
from skimage.feature import ORB, match_descriptors
from skimage import io
from skimage.transform import rotate

# Load two images
img1 = io.imread('Images/image5.jpg', as_gray=True)
img2 = rotate(img1, 90)

# Initialize ORB detectors and extractors
detector_extractor1 = ORB(n_keypoints=5)
detector_extractor2 = ORB(n_keypoints=5)

# Detect and extract keypoints in both images
detector_extractor1.detect_and_extract(img1)
detector_extractor2.detect_and_extract(img2)

# Perform feature matching
matches = match_descriptors(detector_extractor1.descriptors,
   detector_extractor2.descriptors)

# Get the keypoints corresponding to the matches in the first image
keypoints_img1 = detector_extractor1.keypoints[matches[:, 0]]

# Get the keypoints corresponding to the matches in the second image
keypoints_img2 = detector_extractor2.keypoints[matches[:, 1]]

# Print the keypoints in both images for the selected matches
print("Keypoints in Image 1:")
print(keypoints_img1)
print("Keypoints in Image 2:")
print(keypoints_img2)
Output
Keypoints in Image 1:
[[ 46. 310.]
 [127. 122.]
 [280. 172.]
 [140. 119.]]
Keypoints in Image 2:
[[ 70.  71.]
 [258. 152.]
 [208. 305.]
 [261. 165.]]
Example
This example applies the ORB detector and descriptor extractor to an image and two transformed versions of it (rotated, and scaled-and-translated), then visualizes the matched keypoints using plot_matches.
from skimage import io, transform
from skimage.feature import match_descriptors, ORB, plot_matches
from skimage.color import rgb2gray
import matplotlib.pyplot as plt

# Load an image and convert it to grayscale
original_img = rgb2gray(io.imread('Images/book.jpg'))

# Create two transformed versions of the image
rotated_img = transform.rotate(original_img, 180)
tform = transform.AffineTransform(scale=(1.3, 1.1), rotation=0.5,
   translation=(0, -200))
scaled_and_translated_img = transform.warp(original_img, tform)

# Initialize the ORB descriptor extractor
descriptor_extractor = ORB(n_keypoints=200)

# Detect and extract keypoints and descriptors for each image
def detect_and_extract_features(image):
    descriptor_extractor.detect_and_extract(image)
    keypoints = descriptor_extractor.keypoints
    descriptors = descriptor_extractor.descriptors
    return keypoints, descriptors

keypoints1, descriptors1 = detect_and_extract_features(original_img)
keypoints2, descriptors2 = detect_and_extract_features(rotated_img)
keypoints3, descriptors3 = detect_and_extract_features(scaled_and_translated_img)

# Perform feature matching between the original image and transformed images
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
matches13 = match_descriptors(descriptors1, descriptors3, cross_check=True)

# Visualize the matching results
fig, ax = plt.subplots(nrows=2, ncols=1)
plt.gray()

plot_matches(ax[0], original_img, rotated_img, keypoints1, keypoints2,
   matches12)
ax[0].axis('off')
ax[0].set_title("Original Image vs. Rotated Image")

plot_matches(ax[1], original_img, scaled_and_translated_img, keypoints1,
   keypoints3, matches13)
ax[1].axis('off')
ax[1].set_title("Original Image vs. Scaled and Translated Image")

plt.show()
Output
The output is a matplotlib figure with two panels of matched keypoints: "Original Image vs. Rotated Image" and "Original Image vs. Scaled and Translated Image".