Artificial intelligence (AI) has fascinated people for ages. The concept of creating a machine that would be able to independently think, make decisions, and react appropriately holds much promise for both the hobbyist and professional realms. It isn’t difficult to think of its potential applications in various fields like retail, medicine, and security.
Where are we now with AI?
Whenever people talk about this topic, terms like "machine learning" and "deep learning" get thrown around a lot. To understand where we are with AI at the moment, it helps to have a brief and, in this case, very simplified overview of the different approaches programmers and developers have taken to advance this technology.
Pattern Recognition vs. Machine Learning vs. Deep Learning
Since the idea of AI was conceived many years ago, several approaches have been used to try to get computers to think on their own. The earliest method was pattern recognition. Essentially, developers created programs that instructed computers on what specific features or patterns to look for in order to identify an object.
Later on, people developed this approach further to include algorithms in an effort to teach computers to "think", which led to what is now referred to as machine learning. Using these algorithms and a whole lot of data (serving as examples of right and wrong answers), the computer relies on mathematics and statistics to extract patterns and classify the results, in a way performing some rudimentary thinking. However, problems with this approach arise if there isn't enough data to learn from. The system also doesn't work well in cases where there are no clearly identifiable patterns, such as when the environment is constantly changing.
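To make the idea concrete, here is a minimal sketch of that statistical learning step: given labeled examples, the program averages each class's features into a "pattern" and classifies new data by proximity to it. The feature values, labels, and measurements below are invented purely for illustration; real systems use far richer features and models.

```python
def train(examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: [width, height] measurements per class.
examples = [([2.0, 1.0], "car"), ([2.2, 1.1], "car"),
            ([0.5, 1.8], "person"), ([0.6, 1.7], "person")]
centroids = train(examples)
print(classify(centroids, [2.1, 1.0]))  # a wide, low object -> "car"
```

Note how the weakness described above shows up here too: with only a handful of examples, or with classes whose measurements overlap, the averaged "patterns" stop being reliable.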
The next step in the evolution of AI, and the one gaining all the attention nowadays, is deep learning. The biggest difference of this approach lies in its ability to discern on its own which features are important for identification, without the help of a programmer or operator.
Taking its cue from how the human brain works, this type of system is composed of several neural layers that utilize different algorithms to recreate high-level data abstractions. The bottom layer of neurons serves as "sensors" that look at small parts of the picture and relay the resulting information to higher layers, which try to combine and fit that information into the context of a bigger picture or pattern. This continues until the system is able to understand and identify the image.
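The layered flow described above can be sketched in a toy form: a bottom layer scans small patches of a tiny, made-up image and reports a local response, and a top layer combines those responses into one overall judgment. Real deep networks learn their layer weights from data; here everything is fixed by hand purely to show how information moves between layers.

```python
# A 4x4 "image" with a bright vertical bar in the middle (invented data).
image = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]

def bottom_layer(image, patch=2):
    """'Sensor' neurons: each looks at one small patch and reports
    how strongly a simple feature (brightness) is present there."""
    responses = []
    for r in range(0, len(image), patch):
        for c in range(0, len(image[0]), patch):
            total = sum(image[r + i][c + j]
                        for i in range(patch) for j in range(patch))
            responses.append(total / (patch * patch))
    return responses

def top_layer(responses, threshold=0.4):
    """Higher layer: combines the local responses into a global decision."""
    average = sum(responses) / len(responses)
    return "object present" if average > threshold else "empty scene"

print(top_layer(bottom_layer(image)))  # -> "object present"
```

A real network would stack many such layers, and crucially would learn which patch features matter rather than being told to measure brightness.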
It is easy to see how deep learning can fit into the security setting, especially since a key element of "learning" is data, and there is plenty of that available in video surveillance. Owing to its ability to correctly recognize objects, the addition of this technology can make systems smarter and more intelligent. Hence, it can potentially be used in a variety of applications and security systems, ranging from facial recognition, vehicle detection, and license plate recognition to crowd behavior analysis.
The system could also be programmed to automatically analyze data coming in from surveillance systems and speed up searches, freeing human operators to focus on the important details rather than wasting time looking through huge amounts of images or footage.
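That search speedup amounts to pre-filtering: instead of a person scanning every frame, the system surfaces only frames whose detector score crosses a threshold. In the sketch below the detector is a stand-in (a made-up scoring function and frame records, not any real surveillance API).

```python
def detector_score(frame):
    """Placeholder: in a real system this would be a deep-learning model
    scoring the frame for objects or events of interest."""
    return frame["motion"]  # pretend motion level approximates interest

def flag_frames(frames, threshold=0.7):
    """Return the IDs of only the frames worth an operator's attention."""
    return [f["id"] for f in frames if detector_score(f) >= threshold]

# Hypothetical footage metadata: four frames, two with notable activity.
footage = [{"id": 1, "motion": 0.1}, {"id": 2, "motion": 0.9},
           {"id": 3, "motion": 0.2}, {"id": 4, "motion": 0.8}]
print(flag_frames(footage))  # -> [2, 4]
```

Even this crude filter illustrates the payoff: the operator reviews two frames instead of four, and the ratio only improves as the archive grows.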
As the field of artificial intelligence continues to advance, it is exciting to think about how these developments could be incorporated into existing surveillance technologies to enhance security. Who knows, maybe the future is nearer than we think.
Source: a&s Magazine