We all know that A.I. has come a long way, particularly in recent years. Its evolution can be traced through decades of development across the technology, software, and computer vision industries that have brought us to where we are today. Smartphones are commonplace, computer simulations and V.R. are becoming harder and harder to distinguish from reality, and the newest iPhone unlocks with Face ID.
But how did we get here? What research and development have we gone through to allow us the technological freedom we have today? And where could A.I. end up in the future?
Perhaps somewhat surprisingly, the earliest mention of Artificial Intelligence can be found in literature. In 1920, the writer Karel Čapek published a science fiction play named Rossumovi Univerzální Roboti (Rossum’s Universal Robots), or R.U.R., which first introduced the word “robot”.
However, computers didn’t really become what we know and recognize today until 1949. Before then, they lacked a key prerequisite for intelligence: they could only execute commands, not store or learn from them. Image recognition was even further away; before it came into existence, any kind of image analysis, including X-rays, MRIs, and space photography, had to be done manually.
Alan Turing noted during his research that humans use available information and reason to solve problems and make decisions. If humans can do this, he asked, why couldn’t machines do the same? This logic was the framework for his 1950 paper, Computing Machinery and Intelligence, in which he explored not only how to build intelligent machines, but also how to test their intelligence.
During the early 1950s, the main problem holding A.I. back was the sheer cost of computing, and the field didn’t really take off until the end of that decade.
From 1957 to 1974, Artificial Intelligence boomed. Early demonstrations of A.I. include Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA, both of which were major steps towards the goals of automated problem-solving and the interpretation of spoken language.
Government backing helped, too. Thanks to the successes of the 60s and 70s, agencies such as the Defense Advanced Research Projects Agency (DARPA) funded A.I. research at several institutions. However, there was still a long way to go.
When it came to image recognition and image analysis, the 1960s also saw the first developments in computer vision. Emulating the human visual system was at the forefront of this work, as we began to ask computers to tell us what they could see.
In the 1980s, interest in A.I. was reignited, largely due to an increase in funding. Additionally, the scientist John Hopfield and the psychologist David Rumelhart popularized “deep learning” techniques, which meant that computers could now learn from experience. On the financial side, the Japanese government heavily funded A.I. efforts as part of its Fifth Generation Computer Project (FGCP).
Despite all of this, many of the goals the Japanese had hoped to achieve weren’t met during this decade. As a result, the A.I. hype began to wane, and so did the financial support.
Strangely, A.I. began to thrive again once government funding and the surrounding hype dwindled, so much so that the 1990s and 2000s saw many of its goals accomplished. For example, in 1997, world chess champion Garry Kasparov was defeated by IBM’s ‘Deep Blue’, a chess-playing computer program. It was a landmark moment for Artificial Intelligence, showing the public what machines were capable of.
When it came to image recognition, we didn’t see many major developments until 2001, when computer vision researchers Paul Viola and Michael Jones invented a real-time face detection algorithm, allowing human figures to be identified through their facial characteristics. Then in 2005, researchers Navneet Dalal and Bill Triggs published Histograms of Oriented Gradients (HOG), a feature descriptor for recognizing pedestrians, aimed at applications such as security systems.
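Both of these classic detectors still ship with the open-source OpenCV library, so it’s easy to try them today. The sketch below is a minimal illustration, not a reproduction of the original research code; it assumes OpenCV (cv2) is installed, and the image file name is a hypothetical placeholder.

```python
import cv2

# Load OpenCV's bundled Viola-Jones (Haar cascade) face detector.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Load the built-in HOG descriptor with the default pedestrian model,
# in the spirit of Dalal and Triggs' 2005 detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("street_scene.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Viola-Jones: scan the image at multiple scales for face-like patterns.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# HOG + linear SVM: look for pedestrian-shaped gradient histograms.
people, weights = hog.detectMultiScale(image, winStride=(8, 8))

print(f"Found {len(faces)} face(s) and {len(people)} pedestrian(s)")
```

Each detector returns bounding boxes, which is why both techniques found early use in security and surveillance settings.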
The 2010s is the decade in which developments in image recognition and object detection really took off. With the limits of computer storage no longer holding us back, we have now reached a point where, in many cases, these systems don’t just meet our needs but exceed them.
In 2012, researchers Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton designed a new object recognition algorithm that achieved roughly 85% accuracy, which was a massive step in the right direction. By 2015, image recognition tools built on Convolutional Neural Networks (CNNs) had pushed facial recognition accuracy past 95%.
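To make the CNN idea concrete, here is a minimal PyTorch sketch, far smaller than the 2012 model and not a reproduction of it, showing the general pattern those systems share: convolution and pooling layers that extract visual features, feeding a classifier layer.

```python
import torch
import torch.nn as nn

# A tiny convolutional network: stacked convolution + pooling layers
# that learn visual features, followed by a small classifier head.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # (N, 32, 8, 8) for 32x32 RGB inputs
        return self.classifier(x.flatten(1))

# One forward pass on a random batch of 32x32 RGB images.
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

The key design choice is that the filters are learned from labeled examples rather than hand-engineered, which is what allowed accuracy to climb so quickly once large datasets and fast hardware became available.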
Today, many companies, such as Google and Amazon, are focusing their R&D efforts on improving technologies capable of integrating image recognition. And many strands of image recognition are constantly cropping up and thriving, including logo detection and recognition, as well as object and scene detection.
In its infancy, the main issue with computers was that they were very expensive. In fact, in the early 1950s, leasing a computer could cost up to $200,000 a month. As we have seen, a lack of funding and backing is a common theme throughout the evolution of Artificial Intelligence, and the hype surrounding it appears to come in waves or cycles because of this. But as time passes and many aspects of technology become less expensive, A.I. gains momentum, which is why advances in the industry are happening so quickly today.
Perhaps today’s most expensive resource for building computer vision systems is training data. At VISUA, our founding team spearheaded Google’s early research into minimizing the effort required to teach an A.I. system new objects by watching countless hours of YouTube videos. This philosophy remains the primary guiding principle at VISUA, allowing us to train the best logo recognition technology in just minutes for any number of logos, from global conglomerates to niche regional brands.
It’s safe to say that A.I. has undergone a major evolutionary journey and the most important thing to note about that journey is that it doesn’t seem to be slowing. In fact, it’s accelerating at a rapid rate. We can’t say for sure where Artificial Intelligence will bring us in a year, let alone in ten or twenty. But, what we do know is that wherever there is a gap in the market for an automated, more efficient solution, A.I. will find a way.