Friday, July 30, 2021

The Rapid Pace Of Artificial Intelligence


Trying To Keep Up!

In January 2016, Europe's reigning Go champion, Fan Hui, was defeated by DeepMind's AlphaGo program. Go, a 2,500-year-old game of strategy and intuition played with black and white stones, offers roughly 250 possible moves at each turn, and every one of those moves opens up roughly 250 more. Not since IBM's Deep Blue beat Garry Kasparov has the world of AI been so excited. Computers are now learning; this is how far AI has come in the past couple of years.
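The scale behind that branching-factor claim can be sketched with a line of arithmetic. The snippet below simply takes the article's rough figure of 250 moves per turn at face value; it is an illustration, not a precise count of legal Go positions.

```python
# A back-of-the-envelope sketch of why Go resists brute-force search,
# assuming the article's rough figure of 250 legal moves per turn.
branching_factor = 250

def positions_after(moves):
    """Approximate number of move sequences after a given number of turns."""
    return branching_factor ** moves

# After just four turns there are already billions of sequences to consider.
four_turns = positions_after(4)  # 250**4 = 3,906,250,000
```

The exponential growth is the point: exhaustive search becomes hopeless within a handful of moves, which is why AlphaGo's learned evaluation was such a milestone.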

At the World Economic Forum's Davos 2016 conference, the official theme was "Mastering the Fourth Industrial Revolution." Throughout the conference, multiple conversations, presentations and forums focused on artificial intelligence: its rapid progression and its safety and security issues. According to the WEF, this revolution reflects a "fusion of technologies that is blurring the lines between the physical, digital and biological spheres." (Wearden, 2016)

And it looks like a revolution, too. Because of the rapid changes in artificial intelligence in 2015, the world is finally paying attention to AI's extreme potential. Of course, to most of us AI means our cellphones, and we all know their power: in our hands lie better processing speeds, longer battery life and faster networks, which to us means better accessibility. But in the world of AI, this progress means rapid and visible movement in machine learning and deep learning. Now even the big guns, such as Stephen Hawking, Elon Musk and Bill Gates, are asking, "Exactly where are we going in this whirlwind?"


In terms of economic value, it means, well, trillions. According to the McKinsey Global Institute, AI and robotics are anticipated to be a $50 trillion enterprise through 2025. (Manyika, 2013) Large investments are being poured into artificial intelligence by Google, Facebook, Microsoft, China's Baidu and a host of startups. One deal that astounded everyone was Google's 2014 purchase, for $500 million, of DeepMind, the British AI firm headed by Demis Hassabis and the company that designed the algorithms used to defeat the reigning European Go champion. (Shu, 2014)

Steve Omohundro, advisor at the Machine Intelligence Research Institute and Self-Aware Systems, believes that the rapid progression of AI can be attributed to a combination of factors. "All the big tech companies are investing tons of money," said Omohundro. "The hardware has also gotten faster." More training data is now available, too, and processing it requires a huge amount of computation, which, says Omohundro, graphical processing units (GPUs) have made practical.


According to Professor Jack Dongarra of the Innovative Computing Laboratory at the University of Tennessee, "high-end data (big data) and high-end computing (exascale) are now both essential elements of an integrated computing research and development agenda." Neither, says Dongarra, should be sacrificed or minimized to advance the other.

Whereas just a few years ago computing and large data-storage systems contained only a few terabytes of secondary disk storage, says Dongarra, commercial and research cloud-computing systems now each contain many petabytes of secondary storage. To increase computing speed, computers are augmented with computational accelerators: GPUs from Nvidia and coprocessors from Intel. (Reed, 2015)

With data and data sharing, a couple of new perspectives have emerged. Companies are beginning to uncover the profit potential of big data and to explore what they can do with the data they collect from us. (Gupta, 2016)

There has also been the open release of large datasets for development and research. Yahoo recently announced that it was releasing 13.5 terabytes of data, the largest machine learning dataset ever offered to the academic research community. It will be publicly available and includes anonymized user data; among the metrics are age range, gender, generalized geographic data, key phrases for articles and other accessed content, and summaries. (King, 2016) Previously, Criteo, a performance-marketing technology company, had opened a 1-terabyte dataset to advance academic and industry research. (Ferns, 2015)

Finally, companies like Google are open sourcing their software. In November 2015, Google announced TensorFlow, its open source deep learning software library. (Oremus, 2015) Facebook has done the same through its AI Research arm, releasing modules for Torch, an open source development environment for machine learning, numerical computing and computer vision, with an emphasis on deep learning. (Chintala, 2015)

Of course, the big question is how to connect all the data from our machines, smartphones and devices to the Internet and to the world. The reality is that even now we are not as connected as we might be, despite Juniper Research's prediction that by 2020 the world will encompass 38.5 billion connected devices. Keep in mind, however, that this is how big data is created. This is how we generate the data that will teach computers how to learn.

But how will all of this be connected? We have devices. Sensors are embedded in our roads, bridges, homes, appliances and other things, and all of them shed data. The Internet of Things then uses cloud-based applications to transmit the data coming in from all those sensors. While it may seem like a made-up entity, the Internet of Things is real and represents, as defined by Juniper Research, "the combination of devices and software systems, connected via the Internet, which produce, receive and analyze data." (Smith, 2015)
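The data flow Juniper describes can be sketched in a few lines of Python. The device names, metric fields and JSON payload format below are invented for illustration and are not drawn from any particular IoT platform; the point is only the shape of the pipeline: sensors emit readings, and a cloud-side application parses and aggregates them.

```python
import json
import statistics

# Hypothetical sketch: sensors embedded in roads or bridges emit readings,
# which a cloud-side application receives as JSON and aggregates.
# Device IDs and field names here are invented for illustration.

def make_reading(device_id, metric, value):
    """Serialize one sensor reading the way a device might transmit it."""
    return json.dumps({"device": device_id, "metric": metric, "value": value})

def aggregate(payloads):
    """Cloud-side step: parse incoming payloads and average each metric."""
    by_metric = {}
    for payload in payloads:
        reading = json.loads(payload)
        by_metric.setdefault(reading["metric"], []).append(reading["value"])
    return {metric: statistics.mean(values) for metric, values in by_metric.items()}

payloads = [
    make_reading("bridge-7", "strain", 0.61),
    make_reading("bridge-7", "strain", 0.63),
    make_reading("road-12", "temperature", 18.5),
]
summary = aggregate(payloads)
```

Multiply this toy pipeline by tens of billions of devices and you have the raw material that machine learning feeds on.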

The big questions about the Internet of Things are how to protect the data that is collected, what the privacy issues are, and how the data should be shared and stored. These issues are still on the table, but you can bet they won't be for long. Machine learning and deep learning are taking advantage of all that data, so computers are now learning, getting smarter, more accessible and perhaps more integrated than ever before.


Until recently, machine learning was considered a subset of artificial intelligence. With these rapid advances, however, machine learning is now considered a field in its own right. Machine learning is artificial intelligence that uses algorithms that can learn from the data they are given. Deep learning, in turn, is a class of machine learning techniques inspired by the neural networks of the brain; its goal is to create algorithms that make a computer or robot more human-like in its decision-making abilities. While we often don't quite understand how these neural networks work, the big advancements in machine learning and deep learning have essentially evolved in five areas of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous motor skills. (Mallah, 2015)
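A tiny concrete instance of "an algorithm that learns from the data it is given" is the classic perceptron. The sketch below, with an invented toy dataset, learns the logical AND function from labeled examples by nudging its weights whenever it predicts wrongly; it is a minimal illustration, not a modern deep learning system.

```python
# A minimal sketch of learning from data: a perceptron adjusting its
# weights from labeled examples. The dataset (the logical AND function)
# is chosen purely for illustration.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn two weights and a bias from ((x1, x2), label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - prediction
            # Move the decision boundary toward correcting the mistake.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND truth table as training data: output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Deep learning stacks many layers of units like this one, which is where the "neural networks of the brain" inspiration comes in.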

In the area of intuitive concept understanding, New York University Moore-Sloan Data Fellow and computer scientist Brenden Lake was part of a team that explored human concept learning in a visual domain. While you and I can recognize a bicycle based on what we have seen, experienced, read or learned, a computer needs large datasets in order to pinpoint an image or concept. Using handwritten characters from the world's alphabets, Lake's goal was to reverse engineer how we think about such problems and then develop algorithms that could capture those abilities.

The reverse engineering was critical, says Lake, because it helped the team better understand how people learn and, in turn, create more human-like learning algorithms based on more intuitive learning. Lake's team developed the Bayesian Program Learning (BPL) algorithm, which taught the computer to recognize handwritten characters after seeing only one or a few examples.

“While most machine learning algorithms treat concepts as patterns and learning as a process of finding and recognizing patterns,” said Lake, “our algorithm treats concepts as simple models of the world and learning as a process of building models.” BPL can perform a range of tasks, such as classification and generating new examples or concepts, in ways that are difficult to distinguish from human behavior, says Lake; the key insight was learning the right form of representation, not just learning from bigger and bigger datasets. Lake is now studying whether people and algorithms can infer more complex programs when given only a small amount of data.
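BPL itself builds generative models of how characters are drawn, which is well beyond a short sketch. The toy below illustrates only the one-shot setting: each class gets a single invented exemplar (a tiny binary grid standing in for a handwritten character), and a naive nearest-exemplar rule stands in for Lake's far richer model-building approach.

```python
# A toy sketch of the one-shot classification setting, not of BPL itself.
# Each class is represented by one example, and a new image is assigned
# to the class whose lone exemplar it matches most closely.

def distance(a, b):
    """Count of cells where two binary grids disagree (Hamming distance)."""
    return sum(x != y for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b))

def classify_one_shot(exemplars, image):
    """Pick the class whose single exemplar is nearest to the image."""
    return min(exemplars, key=lambda label: distance(exemplars[label], image))

# One training example per class: a vertical stroke and a horizontal stroke.
# These 3x3 "characters" are invented for illustration.
exemplars = {
    "vertical": [(0, 1, 0), (0, 1, 0), (0, 1, 0)],
    "horizontal": [(0, 0, 0), (1, 1, 1), (0, 0, 0)],
}
# A slightly distorted vertical stroke should still land in the right class.
query = [(0, 1, 0), (0, 1, 0), (1, 1, 0)]
label = classify_one_shot(exemplars, query)
```

Lake's point is that raw pattern matching like this plateaus quickly; treating each concept as a small model of how the character is produced is what made one-shot learning human-like.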

As to the recent advances in machine learning, Lake is inspired by what he is seeing. “I am excited by the trend of incorporating more psychological ingredients like attention and working memory into deep learning algorithms,” said Lake.


While academic institutions are doing research in these areas, most of the big advancements have come from companies like Google and Facebook. So the question becomes: are these big companies pushing the boundaries of deep learning? Pedro Domingos, author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, believes that companies like Google and DeepMind are moving beyond the old definitions of machine learning. They are pushing deep learning beyond its original applications, such as image, speech and facial understanding, says Domingos, toward more ambitious things like understanding language, reasoning and planning.

In fact, he's right. Recent advances in machine and deep learning include Facebook's photo and facial recognition, which now gives the computer a way to describe images to blind people, and work on teaching a computer how to paint faces. Google has an even more intuitive new photo app, as well as TensorFlow. Microsoft now has a Skype system that can automatically translate one language into another. (Clark, 2015) And the same Google system that learned to play old Atari games recently beat the European Go champion and has now challenged the world champion to a match in mid-March.

Where we go from here is really anyone's guess. For Pedro Domingos, the ultimate quest is the master algorithm, an algorithm that can derive all the knowledge in the world from data. According to Domingos's book, it is the ultimate learning program: a master algorithm is an algorithm that makes other algorithms. Ten years ago we used batch learning, giving the system a big database to learn from, says Domingos; now companies are using algorithms that learn continuously, a setup called online learning. As long as data keeps coming in, the algorithm continues to evolve. It keeps learning like a human being, though ultimately it will make its own algorithms.
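The batch-versus-online distinction can be made concrete with a deliberately simple task: estimating an average. Everything below is an illustrative toy, not Domingos's proposal; the batch learner needs the whole dataset stored up front, while the online learner updates its estimate as each data point arrives and could keep going forever.

```python
# A minimal contrast of batch versus online learning on a toy task.

def batch_mean(data):
    """Batch learning: one pass over a complete, stored dataset."""
    return sum(data) / len(data)

class OnlineMean:
    """Online learning: the estimate evolves as data keeps coming in."""
    def __init__(self):
        self.n = 0
        self.estimate = 0.0

    def update(self, x):
        self.n += 1
        # Incremental update; no need to store past observations.
        self.estimate += (x - self.estimate) / self.n

stream = [2.0, 4.0, 6.0, 8.0]
learner = OnlineMean()
for x in stream:
    learner.update(x)
# Both approaches agree on this data, but only the online learner
# could keep refining its estimate as new points arrived.
```

This is the structural difference Domingos points to: the online learner never finishes, which is what lets deployed systems evolve as long as data keeps coming in.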

Whether or not this can be done with realistic amounts of data and computing, Domingos isn't sure. "I personally don't think that anybody has a master algorithm now, but the chances are for the foreseeable future that we will get there," said Domingos.


With all the advancements in machine learning and deep learning, the bigger questions, such as safety and ethical issues, are getting more consideration. Steve Omohundro has been pondering these issues for the last ten years, since a time when, as he says, nobody wanted to hear about them. "I think that these systems are becoming more powerful and integrated into our society," said Omohundro. "So we need to think about the nature of our society with this technology."

According to Omohundro, we are entering three phases of AI. The first phase, he says, is the one we are in now. "People are starting to apply AI and robotic systems to existing tasks like manufacturing. It is creating a lot of economic value even though it has some social consequences like job loss," said Omohundro. He doesn't see a lot of catastrophic danger in this particular phase. The second phase is the use of AI by the military; the U.S. military alone has budgeted approximately $23.9 billion for drones and unmanned technology. (Lewis, 2014) "Every military on the planet right now is developing autonomous military drones, submarines, robot vehicles and soldiers," said Omohundro.

The final phase is where the longer-term issues of safety and ethics become more apparent. It is the time, says Omohundro, when machines become more and more intelligent. "They become more a part of society, almost like citizens," said Omohundro. "So we had better design them so that they have social values that are pro-human, pro-social and going in the direction that humanity wants to go in."


Broadening awareness of these issues has led to the creation of ethics boards and institutes. One prominent development is OpenAI, co-founded by Elon Musk, which open sources AI research to the public. In a recent interview with Medium, Musk was quoted as saying, "I think the best defense against the misuse of AI is to empower as many people as possible to have AI." (Levy, 2015) Other ventures include Cambridge University's Leverhulme Centre for the Future of Intelligence in the U.K., which will work together with Cambridge's Centre for the Study of Existential Risk and collaborate with the Oxford Martin School at the University of Oxford.

When DeepMind was sold to Google, one condition of the sale was the establishment of an ethics board. While Google has not been forthcoming about the board, in a January 2016 interview with Medium, Demis Hassabis, co-founder of DeepMind, said that the board had been formed, that its make-up was confidential, and that it included "some top professors on this in computation, neuroscience and machine learning." (Levy, 2016)

James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, is encouraged by Musk's OpenAI initiative, saying that open AI is necessary because it takes the profit motive out of AI research and development. "The biggest problem is that companies like Google, Facebook, IBM and Baidu are doing really rapid product development, but not the basic research into the basic nature of intelligence and ethics control." Musk, he says, wants to take AI out of the quarterly-report world and put it in a safer, open marketplace.

While some believe that this issue is overinflated, others believe that the current discussions seen at Davos and in other forums are healthy. Either way, even as we talk, AI continues to expand, computers continue to learn, and we humans continue to watch this progress with excitement, anticipation and perhaps a little trepidation.

Chintala, S. (2015, January 16). Research at Facebook. Retrieved from Facebook:

Clark, J. (2015, December 8). Why 2015 Was a Breakthrough Year in Artificial Intelligence. Retrieved from Bloomberg News:

Ferns, E. (2015, June 18). Criteo Releases Industry’s Largest Ever Dataset. Retrieved from Criteo:

Gupta, M. (2016, January 26). Data is the New Dollar: Turning Data into Business Profit. Retrieved from Dataconomy:

King, R. (2016, January 14). Yahoo Opens up 13.5 TB machine learning dataset for academic research. Retrieved from ZDNet:

Levy, S. (2015, December 11). How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over. Retrieved from Medium:

Levy, S. (2016, January 16). The Deep Mind of Demis Hassabis. Retrieved from Backchannel:

Lewis, C. W. (2014, January 7). US Military to Spend $23.9 billion on drones and unmanned systems. Retrieved from Robotenomics:

Mallah, R. (2015, December 29). Top AI Breakthroughs of 2015. Retrieved from Future of Life Institute:

Manyika, J. M. (2013). Disruptive technologies: Advances that will transform life, business, and the global economy. McKinsey Global Institute.

Oremus, W. (2015, November 9). What is TensorFlow and Why is Google so Excited about It. Retrieved from Slate:

Reed, D. A. (2015, July). Exascale Computing and Big Data. Retrieved from ACM:

Shu, C. (2014, January 26). Google Acquires Artificial Intelligence Startup Deep Mind for More than $500 Million. Retrieved from TechCrunch:

Smith, S. (2015, July 28). Internet of Things Connected Devices to Almost Triple to Over 38 Billion Units by 2020. Retrieved from Juniper Research:

Wearden, G. (2016, January 19). Davos 2016: Eight Key Themes for the World Economic Forum. Retrieved from The Guardian: