There are many computer vision algorithms we could name, but the following ones, in our view, deserve special mention:
- Scale-invariant feature transform (SIFT) – used to detect and describe local features in an image. SIFT is a great help whenever you have to deal with feature extraction and pattern matching; in fact, you can use it for image tracking and for detecting and identifying objects in a snap. Needless to say, recognition under partial occlusion is what makes SIFT absolutely stunning.
- Markov random fields (MRF) – a model borrowed from statistical physics that is also very applicable in computer vision. In image processing, it provides a basis for modeling contextual constraints: many low-level vision problems, such as denoising and segmentation, can be formulated and solved as MRF optimization problems.
- Convolution theorem – rather simple, yet infinitely useful, as it simplifies many calculations: convolution in the spatial domain is equivalent to pointwise multiplication in the frequency domain. It is arguably even biologically plausible, as it has been hypothesized that the brain processes sensory data in the spectral domain.
- Hough transform – designed to detect basic shapes, for instance, lines and circles. It works in a parameter space rather than the image space, much as many "smart" signal processing techniques swap the time domain for the frequency domain – the discrete cosine transform and its application to lossy compression, for instance.
- Random Sample Consensus (RANSAC) – a wonderfully powerful yet absolutely simple algorithm for robust model fitting when the data contains many outliers. What makes it so great is that even if you code it wrong, it is likely to work right. But it is so simple to comprehend that you will surely code it right.
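To make the MRF point above concrete, here is a minimal sketch (our own toy example, not from any particular library): binary image denoising via Iterated Conditional Modes, where each pixel's label is greedily chosen to agree with both the noisy observation and its four neighbors. The function name and energy weights are our own choices.

```python
import numpy as np

def icm_denoise(noisy, beta=2.0, eta=1.0, sweeps=5):
    """Denoise a {-1, +1} binary image by greedily minimizing a simple
    MRF energy: -eta * sum(x_i * y_i) - beta * sum_{neighbors}(x_i * x_j)."""
    x = noisy.copy()
    h, w = noisy.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                # Sum the four neighbors (missing neighbors at borders count as 0).
                nb = 0
                if i > 0:     nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                # Pick the label with the lower local energy.
                x[i, j] = 1 if eta * noisy[i, j] + beta * nb >= 0 else -1
    return x
```

A few flipped pixels in an otherwise clean image get voted back to the majority label of their neighborhood, which is exactly the "contextual constraint" idea in miniature.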
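The convolution theorem from the list above is easy to check numerically: convolving two signals directly gives the same result as multiplying their (zero-padded) spectra and transforming back. A small NumPy demonstration, with a made-up signal and kernel:

```python
import numpy as np

# Two 1-D signals (think: an image row and a small blur kernel).
signal = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.25, 0.5, 0.25])

# Direct (spatial-domain) linear convolution.
direct = np.convolve(signal, kernel)

# Frequency-domain route: pad both to the full output length,
# multiply their spectra pointwise, then transform back.
n = len(signal) + len(kernel) - 1
spectral = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

assert np.allclose(direct, spectral)
```

For large kernels this FFT route is dramatically cheaper than sliding the kernel across the image, which is why it "simplifies many calculations" in practice.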
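The Hough transform's voting-in-parameter-space idea can be sketched in a few lines for straight lines (a toy version of our own, not production code; the function name and resolution parameters are our choices). Each point votes, for every angle, for the line through it at that angle; collinear points pile their votes into one (theta, rho) bin.

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0, max_rho=100.0):
    """Vote in (theta, rho) space: a line is x*cos(theta) + y*sin(theta) = rho.
    Returns the (theta, rho) of the accumulator's strongest peak."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # One vote per angle, at the rho of the line through (x, y).
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r * rho_res - max_rho
```

Feeding it ten points on the vertical line x = 5 recovers theta = 0, rho = 5, since all ten votes land in that one bin.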
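And since RANSAC really is that simple, here is a hedged sketch of it for 2-D line fitting (our own minimal version; the function name, thresholds, and iteration count are illustrative choices): repeatedly fit a line to two random points and keep the model the most points agree with.

```python
import numpy as np

def ransac_line(points, iters=200, threshold=0.5, seed=0):
    """Robustly fit y = a*x + b despite outliers.
    Returns ((a, b), inlier_count) for the best model found."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_model, best_inliers = None, 0
    for _ in range(iters):
        # Minimal sample: two distinct random points define a candidate line.
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # skip degenerate (vertical) samples in this toy version
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Consensus: count points whose vertical distance to the line is small.
        inliers = np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < threshold)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

Even with gross outliers mixed in, any iteration that happens to sample two inliers recovers the true line, and the consensus count makes that model win.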
And what about you? Do you agree with the algorithms we have listed? Are there other computer vision algorithms you would like to tell us about?
Please share your thoughts below!