
Opinions Throughout History – Robotics & Artificial Intelligence

10 Seasons of AI

Development and Stagnation in AI Research (1950s–Present)

Introduction

Artificial intelligence has been an ongoing field of activity and research since the late 1950s, but work on the topic has not been consistent. Science historians have, in fact, broken the history of artificial intelligence research into distinct periods, typically called seasons. During the “AI summers,” major breakthroughs occur, stimulating further research. During the “AI winters,” progress slows and research stagnates. As America and the rest of the world enter the 2020s, a number of prominent researchers in the field have opined that the world is now entering a new “AI winter,” in which advancement will be limited. How and why this is occurring reflects the overlap between governmental and social priorities and science.

Topics Covered in this Chapter Include:

  • Artificial intelligence

  • AI winter

  • AI summer

This Chapter Discusses the Following Source Document:

Schuchmann, Sebastian, “Probability of an Approaching AI Winter,” Medium, August 17, 2019

The effort to create artificial intelligence is one of the most difficult endeavors in human history. This scientific, philosophical, and technological mission is nothing short of an attempt to understand the function of the human mind and the various processes that can be used to encode the phenomenon known as “mind.” It is unsurprising that, even after decades of research, true artificial intelligence remains elusive, and opinions are divided on whether it can ever be achieved. Reaching the “next step” in AI research often proves difficult, and work in the field often stagnates while researchers struggle to make a breakthrough that will take AI to the next level. This is one of the factors that create surges of activity followed by periods of stagnation in the AI field, a phenomenon that researchers and followers of AI development have called “seasons” of AI research.

Another factor influencing the apparent seasons in AI research is public interest. Over the years, polling organizations have detected significant public interest in AI and related technologies. In the United States, for instance, polling often reveals that Americans believe that the field is moving forward rapidly (much more rapidly than it actually is). In one of the most recent polls, conducted by Elon University in 2019, 63 percent of people said they thought AI would lead to a majority of people being “better off” by 2030, while 37 percent said that people will not be better off. Public opinion influences progress in AI because the public perception of research can influence whether or not politicians and other organizations invest in studies and research programs. Opinion polls also indicate that, while Americans certainly hope that AI will advance human life in the future, there are many unresolved issues of concern, such as the potential for AI to cause job losses and even the more unrealistic fear that AI will become a threat to humanity in an existential way, a fear generated more by fantasy and science fiction than by reality.1

However, public opinion is not one of the prime agents that furthers or curtails research in the field. More than public will, AI research thrives or withers based on the availability of funding, and funding priorities are often made on the basis of corporate or economic applicability. Thus, when powerful economic entities perceive a benefit, investment in AI research increases and politicians tend to be more receptive to providing grants for AI research. These are the factors that typically lead to a surge in AI research and also coincide with periods when AI will appear in the news and become a more prominent part of the pop culture discussion. Opinions about AI may form and change during these periods, as news items introduce readers to the potential positives and perils of AI development.

Typically, after a surge in research pushed by new investment and the establishment of new programs, interest begins to decline, often because progress is slow and researchers need a breakthrough that may or may not be forthcoming. The search for profit is based on short-term goals and any economic decision or investment must be continually and convincingly justified in terms of marketable gains. Lacking this, investment declines and progress slows.

Since research in AI began in earnest, in the early 1960s, there have been periods in which AI research surged forward and periods when progress stagnated, typically called “AI winters” by those in the field. The history of these periods can be informative with regard to the kinds of factors that influence progress and for those considering whether there is another AI winter on the horizon.

Arthur Samuel wrote a famous program that enabled a computer to play checkers, an early advance in the field of AI. By Xl2085, via Wikimedia.


The First Winter

More than any other single factor, the Lighthill Report, a review of AI research and academic progress published in the United Kingdom in 1973, triggered the first AI winter. However, understanding the impact of the Lighthill Report requires an understanding of how AI research had developed in the previous decades, and how this resulted in unrealistic expectations. Some earlier proponents of AI research made claims that were virtually impossible to justify.

During the 1950s, advancements were made in machine learning and processing that seemed to indicate a substantial future for AI. These included a now-famous program written in 1955 by Arthur Samuel that enabled a computer to play the game of checkers. The computer was even featured on television and made national news, where it was often hailed as proof that thinking machines were just around the corner. Likewise, early experiments in language processing proved promising. In 1954, for instance, a computer program automatically translated Russian sentences into English using a vocabulary of roughly 250 words.2 Successes like these attracted the attention of the US government’s Defense Advanced Research Projects Agency, or DARPA, which went on to fund some of the biggest AI projects of the era. It was not until 1956 that the term “artificial intelligence” was formally coined,3 but by this point research on the subject was already well underway. Some believed, and professed, that major achievements were within their grasp.

Another seemingly momentous leap forward came in 1957, when American psychologist Frank Rosenblatt published the results of his research into the function of the brain, producing a new mathematical and computational model of a neuron, which he called a “perceptron.” Rosenblatt’s discovery was sufficient to cement his name in the annals of AI history, and the Institute of Electrical and Electronics Engineers (IEEE) has since named one of its most prestigious awards in his honor.4 Perceptrons are still seen as one of the most important artificial intelligence breakthroughs and as one of the major steps toward building functional neural networks, but accounts of the discovery at the time were exaggerated. The New York Times, for instance, treated the discovery as if Rosenblatt had brought the nation to within a few years of true artificial intelligence. Its 1958 article on the subject stated boldly, “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”5

By the end of the 1960s, millions had been invested in various AI projects, including machine learning, neural networks, machine translation, and so on. There were many breakthroughs, and many of those breakthroughs are still driving research in the field, but the progress was too incremental for the American military and politicians. One of the key figures in AI, John McCarthy, later commented on the growing disillusionment with AI research by explaining that creating true machine intelligence proved to be a much more difficult task than he and many other pioneers in the field had thought it would be. Skepticism of AI was growing in the United Kingdom as well, especially during the 1960s, as more pressing social and political issues took center stage. The British Science Research Council appointed Lighthill to conduct a survey of AI research, both in the United Kingdom and the United States, to determine whether the millions spent on AI research could be justified by economic gains or any other significant advancement. Lighthill’s 1973 report was extensive, but its essential finding could be summarized simply: investment had been substantial; results, less so. “In no part of the field have the discoveries made so far produced the major impact that was then promised.”6

The end result was that the British government withdrew support from all but two university-based research programs in artificial intelligence. Researchers were shocked and contested Lighthill’s conclusions in the press and in a televised debate, but the cuts went ahead. After the United Kingdom withdrew funding, a number of other European governments followed suit. In the United States, the military, through DARPA and other programs, continued to fund AI research, but at a much lower level. Further, because AI successes had depended to a large degree on international collaboration, the decline in European research also limited intercontinental sharing and cooperative projects.

James Lighthill’s 1973 report concluded that AI research was not living up to its potential, resulting in the British government’s withdrawal of support for AI research, via Wikimedia.


And the Seasons, They Go Round and Round

The first AI winter lasted from around 1973 to around 1980, but discoveries made by researchers in the United States and in Europe gradually began to recapture the interest of major funding agencies and institutions. Over the course of the 1970s, the public became familiar with computers, not just as bulky academic or commercial machines, but as increasingly practical household tools. As interest in personal computing spread, new corporations emerged. The 1980s marked the beginning of the true Digital Age as computer technology spread through consumer markets, and the fascination with technology spawned a new generation of fiction writers and philosophers wondering where the fusion of human and machine would lead societies around the world.

In American culture, the proliferation of technological fantasies in both literature and film spoke to the growing technological fascination that gripped America during the 1980s. Researchers in AI capitalized on this surge of public interest by promoting a series of key discoveries in AI that pointed to impending commercial applications. AI programs were thus funded and implemented in a number of fields. Machines described as “intelligent” were introduced into financial planning, medicine, geological exploration, and computer design, and there were again numerous media reports and testimonies from key researchers that overhyped the potential and progress in the field. This time, several key leaders in the field, like McCarthy, were more circumspect about the rise in public and institutional support. McCarthy argued that actual progress was limited, consisting mainly of incremental steps, and warned against promising too much and risking backlash by failing to deliver.

Ironically, it was McCarthy and other researchers who brought about what is now called the “Second AI winter” by arguing that true machine intelligence was much further away than some claimed. By the 1990s, interest in AI was waning. A 1986 conference on AI attracted more than 6,000 visitors, while the same conference in 1991 attracted 2,000. New tech companies that had emerged on the scene in the 1980s hoping to capitalize on the AI craze went out of business, and others shifted their focus away from intelligent machines and toward more achievable goals.7 It was not until the mid-1990s, when the economy began to improve slightly, that heavy investment in advanced mechanical systems again became a priority.

The Current Boom

By the mid-1990s, interest in AI had picked up again, as evidenced by the proliferation of machine learning programs and other kinds of AI research, which expanded beyond the few central universities working on AI programs. Over the thirty years that have passed since the end of the last major AI winter, there have been periods of partial stagnation, during which AI research has slowed; historians have described these as “partial AI winters” or an “AI autumn,” but never again did interest or funding drop to the levels seen during the 1970s, 1980s, and 1990s. One reason that interest and investment in AI have remained steadier is that progress has been steady. AI research has yet to produce true thinking machines of the type imagined in science fiction, or in the more optimistic predictions of the 1950s, but machine learning and other subfields of AI have proven effective, yielding significant discoveries and technological products that have appeared on the market. Systems like the “virtual assistants” that debuted on mobile telephones in the 2010s, for instance, though not really intelligent, have continued to stimulate interest in the idea of AI and its potential. On a more general level, the degree to which humanity has embraced and even become dependent on digital technology justifies continued investment in advanced technological research, and this drives investment in both robotics and AI.

However, there are some within the AI community who believe that a new AI winter may be coming. In some countries, AI research is threatened by growing skepticism over government and military spending; in others, concern about the ways that governments have used emerging digital technology has heightened skepticism about AI and other forms of advanced computational systems. Most Americans, for instance, do not trust that the government has regulated the Internet or mobile technology to protect user privacy. There is evidence that the US government has allowed access to the private data of American citizens shared and transmitted through social media and other digital networks. Government misuse of technology stimulates skepticism about government spending on technology and can cause public opinion to shift against investing in technological advancement. In addition, though machine learning and machine intelligence have come a long way since the mid-1990s, advancement has been more limited than proponents had hoped. In this article from Medium’s Towards Data Science, Sebastian Schuchmann discusses the current state of AI, recent advancements in the field, and whether or not another AI winter is likely to occur.

“Probability of an Approaching AI Winter”

by Sebastian Schuchmann

Medium, August 17, 2019

Source Document

Motivation

Industries and governments alike have invested significantly in the AI field, with many AI-related startups established in the last five years. If another AI winter were to come about, many people could lose their jobs, and many startups might have to shut down, as has happened before. Moreover, the economic difference between an approaching winter period and ongoing success is estimated to be at least tens of billions of dollars by 2025, according to McKinsey & Company.

This paper does not aim to discuss whether progress in AI is to be desired or not. Instead, the purpose of the discussions and results presented herein is to inform the reader of how likely progress in AI research is.

Analysis: What Has Led to the AI Winters?

For a detailed overview of both AI winters, check out my first and second Medium articles on the topic.

In this section, the central causes of the AI winters are extracted from the above discussion of previous winters.

First, a recurring pattern can be observed: promises that kindled initial excitement but later turned out to be inflated have been the leading cause of the AI winters. For instance, government funding was cut during both AI winters after honest assessments of the results compared to the promises. Progress was overestimated because AI initially led to significant improvements in various fields very quickly. This suggested that most of the work was done, with only some minor problems left to solve. However, as it later turned out, these problems were not so minor after all. The Lighthill report, a primary contributor to the first AI winter, stated: “in no part of the field have discoveries made so far produced the major impact that was then promised.” Similarly, the 1984 panel at AAAI expressed: “This unease is due to the worry that perhaps expectations about AI are too high [. . .].”

Second, the cut in funding had a major impact on research in both AI winters. In the first AI winter, the Lighthill report led to funding cuts for all but two universities in the U.K. and to further cuts in Europe and the U.S. In the second AI winter, funding from DARPA was reduced. Moreover, the commercial failure of many AI-related startups in the late 1980s marked the second AI winter.

Third, technological limitations, like those the perceptron ran into in the 1960s, inhibited progress. The perceptron, which was initially expected to soon “be conscious of its existence,” could not solve the XOR problem at that time. Similarly, limitations were faced with expert systems in the 1980s. They could not solve fundamental problems like vision or speech and lacked common sense.
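
To make the XOR limitation concrete, here is a minimal illustrative sketch, not taken from the source document, of the classic perceptron learning rule failing on XOR; the learning rate, epoch count, and random seed are arbitrary choices. Because XOR is not linearly separable, no single-layer perceptron can classify all four cases correctly.

    # A minimal sketch (not from the source document) of the perceptron learning
    # rule failing on XOR. Because XOR is not linearly separable, a single-layer
    # perceptron can classify at most three of the four cases correctly.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
    y = np.array([0, 1, 1, 0])                      # XOR labels

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=2), 0.0

    def predict(x):
        return int(np.dot(w, x) + b > 0)            # step activation

    for epoch in range(1000):                       # classic perceptron learning rule
        for xi, yi in zip(X, y):
            err = yi - predict(xi)
            w, b = w + 0.1 * err * xi, b + 0.1 * err

    accuracy = np.mean([predict(xi) == yi for xi, yi in zip(X, y)])
    print(f"accuracy on XOR: {accuracy:.2f}")       # never reaches 1.0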

Consequently, in evaluating the likelihood of another AI winter, the following aspects should be examined closely:

  1. Expectations and promises compared to the actual results;

  2. Funding from governments and industries;

  3. Technological limitations.

Many technologies exhibit similar patterns to those mentioned above. To further narrow the focus, it is necessary to figure out how AI deviates from other technologies. Though similar in some regard, AI appears to be very susceptible to inflated estimations and technological limitations. Some reasons why AI differs from other technologies are:

1. Intelligence is highly multidimensional:

At some point, AI researchers believed that by solving chess, the riddle of intelligence would be solved. This turned out to be wrong, because intelligence involves more than the dimension of conscious, strategic thinking. Chess is only a tiny particle in the cosmos of intelligence. Researchers gave it such a central position because it turns out to be hard for humans, which leads to reason number two.

2. Moravec’s Paradox

Chess, which requires higher-level thinking, is a very new skill in our evolutionary history, which might be the reason why it is relatively difficult for humans and therefore associated with intelligence. Vision, on the other hand, is old and mainly subconscious, which leads people to believe it is easy, but there is no reason to assume it is not as hard as, or even harder than, higher-level thinking. This is Moravec’s Paradox, and one can argue AI researchers have fallen prey to it by underestimating the processes we perform subconsciously, like sensorimotor skills or common sense.

3. Hype and fear associated with achieving human-level intelligence

As I. Jordan pointed out, the hype and fear surrounding machines that are capable of achieving intelligence easily leads to exaggerations and creates media attention less common in other fields.

With these reasons in mind, the possibility of a coming AI winter can be analyzed with the appropriate framing.

Probability of an Approaching AI Winter

Subsequently, the possibility of an upcoming AI winter is assessed. The current landscape of artificial intelligence and its public reception is studied. Furthermore, the present and the historical pre-winter times are compared regarding the key areas extracted beforehand. As a recap, these areas are:

  • Expectations and promises compared to the actual results;

  • Funding from governments and industries;

  • Technological limitations.

Expectations and Promises

Many public figures are voicing claims that are reminiscent of those of early AI researchers in the 1950s. By doing this, they create excitement for future progress, or hype. Kurzweil, for instance, famously predicts not only that the singularity, a time when artificial superintelligence will be ubiquitous, will occur by 2045, but also that AI will exceed human intelligence by 2029. In a similar manner, Scott is predicting that “there is no reason and no way that a human mind can keep up with an artificial Intelligent machine by 2035.” Additionally, Ng views AI as the new electricity.

Statements of this kind set high expectations for AI and spark hype. Consequently, the phenomenon of hype and how it relates to the present state of AI is investigated.

Hype and the Hype Cycle

A tool often used when looking at hype is Gartner’s Hype Cycle. It has practical applications that let us make predictions easily, but its validity is not scientifically established. First of all, it was not developed as a scientific tool; it is a stylized graph made for business decisions. That said, attempts to empirically validate the Hype Cycle for different technologies have been made. It can be concluded that the Hype Cycle exists, but that its specific patterns vary a lot.

The key phases of the cycle are the peak, where interest and excitement are at their highest, and the trough of disillusionment, where the initial expectations cannot be met. Here, interest in the field is at its lowest. Then, the field slowly recovers and reaches the plateau of productivity.

As Menzies demonstrates, the Hype Cycle is well represented in the AAAI conference attendee numbers in the 1980s. First, the conference started with a rapid increase in ticket sales leading to a peak, and then those numbers quickly dropped off. Currently, attendee numbers for conferences like NIPS reach or even exceed the peak of AAAI in the 1980s, and they are growing quickly.

Similar patterns of interest in the field can be observed in venture capital funding for AI startups, job openings, and earnings-call mentions. Researchers of hype point out that the quantity of coverage is important, but that it has to be supported by qualitative sentiment. Sentiment analysis of media articles shows that AI-related articles became 1.5 times more positive from 2016 to 2018. The sentiment shifted especially in the period from January 2016 to July 2016. This improvement could be correlated with the public release of AlphaGo in January 2016, and its victory against world champion Lee Sedol in March.

Following the trend of the Hype Cycle, this could lead to another trough of disillusionment, with ticket sales, funding, and job openings quickly plummeting. However, AI is a very broad term describing many technologies. This further complicates the matter, as each technology under the umbrella term AI can have its own Hype Cycle, and the interactions of these Hype Cycles, both with each other and with AI in general, remain unclear.

Going further, a more in-depth look into these claims is made, evaluating if the quick rise in AI interest is just the consequence of exaggerated promises or if the claims stand on firm ground.

Comparison to Expert Opinion

Now, the statements and promises made by public figures are compared to a survey of leading AI researchers. In 2017, a survey of 352 machine learning researchers, who published at leading conferences, was conducted. This survey forecasts high-level machine intelligence to happen within 45 years at a 50% chance and at a 10% chance within the next nine years. However, full automation of labor was predicted much later, with a 50% probability for it to happen within the next 122 years.

This study presents results far from the predictions of futurists like Kurzweil. Further, a meta-study on AI predictions has found some evidence that most predictions of high-level machine intelligence place it around 20 years in the future, no matter when the prediction is made. In essence, this points to the unreliability of predictions about the future of AI. Moreover, every prediction of high-level machine intelligence has to be taken with a grain of salt.

In summary, a Hype Cycle pattern is present in the current AI landscape, leading to a potential decline in interest soon. Furthermore, optimistic predictions are made by public figures, but empirical evidence questions their validity.

Nevertheless, statements like those from Ng, who views AI as the new electricity, refer more to the current state of the industry. Accordingly, industry and government funding is examined next.

Investment and Funding

Funding has always had a significant role in AI research. As Hendler points out, cuts in government funding are only felt years later, since existing research programs continue. Thus, time passes until the lack of new research programs becomes evident. This means that a reduction in funding would already have to be underway today for its effects to be felt in the years to come.

In April 2018, EU members agreed to cooperate on AI research. A communication on AI was issued that dedicated 1.7 billion dollars of funding for AI research between 2018 and 2020. Then, in June 2018, the European Commission proposed the creation of the Digital Europe funding program, with a focus on five key areas and total funding of 9.2 billion euros, of which 2.5 billion is dedicated to AI research.

In March 2018, the U.S. Administration stated the goal of ensuring that the United States “remains the global leader in AI.” Later, in September 2018, DARPA announced a two-billion-dollar campaign to fund the next wave of AI technologies. In direct opposition, China has declared the goal of leading the world in AI by 2030, and several Chinese AI initiatives have been launched accordingly. These conflicting statements have motivated many to adopt the term “AI Race” to refer to the battle for leadership in the AI realm between the U.S. and China. It is similar to the Space Race of the 20th century between the U.S. and the Soviet Union, in which the two countries fought for dominance in space travel. Back then, the race sparked much funding and research. Likewise, the “AI Race” mentality could make any reduction in funding unlikely in the coming years. This is a strong point against an upcoming AI winter, as previous winters were accompanied by a decline in government funding.

Another key point is the growing AI industry. In the past, AI researchers were heavily reliant on government funding, but, according to McKinsey & Company, non-tech companies spent between $26 billion and $39 billion on AI in 2016, and tech companies spent between $20 billion and $30 billion.

The market forecasts for 2025, on the other hand, have an enormous variance, ranging from $644 million to $126 billion. This disparity demonstrates the economic difference between an upcoming AI winter and another period of prosperity.

To summarize, government funding is very solid, and the “AI Race” mentality makes it likely that this situation will continue. Additionally, the industry is currently thriving. Market forecasts, however, diverge drastically.

To determine which forecasts are more convincing, the progress AI has made in recent years is weighed against the criticisms of current approaches.

Evaluating Progress

To view the criticisms of present AI techniques in the appropriate frame, the progress that has been made from 2012 until today (April 2019) is evaluated.

As we have seen before, AI and machine learning have risen in popularity across many measures. A few key events stand out in the shaping of the landscape. In 2012, a convolutional neural network won the ImageNet competition by a wide margin. This, combined with progress in object detection, changed the field of computer vision completely, from handcrafted feature engineering to learned representations, thereby making autonomous cars viable in the foreseeable future. Similarly impressive results have been achieved in the natural-language understanding space. Deep learning has enabled all the popular voice assistants, from Alexa and Siri to Cortana.
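
As an illustrative aside, not drawn from the source document, the sketch below shows what the shift from handcrafted features to learned representations looks like in practice: a small convolutional network whose filters are learned from labeled images rather than designed by hand. The layer sizes and the random stand-in data are arbitrary assumptions.

    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            # Convolutional filters are learned from data, not handcrafted.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 inputs

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # One illustrative training step on random stand-in data.
    model = TinyConvNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()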

Reinforcement learning with deep neural networks has produced impressive results in game playing. In 2014, DeepMind used a deep Q-learner to solve 50 different Atari games without changing the model’s architecture or hyperparameters. This flexibility across tasks was unprecedented, which led to the company being acquired by Google soon after and subsequently leading the space of reinforcement learning with achievements like AlphaGo and AlphaStar.

Finally, in the last few years, generative adversarial networks (GANs) have achieved impressive results in generating images of, for example, human faces. In essence, deep learning has had groundbreaking results across many industries.

The Criticism of Deep Learning

In this chapter, criticisms of deep learning are discussed. As demonstrated, deep learning is at the forefront of progress in the field of AI, which is why a skeptical attitude toward the potential of deep learning is also a criticism of the prospects of AI in general. The situation is similar to the 1980s, when expert systems dominated the field and their collapse led to a winter period. If deep learning methods face technological obstacles comparable to those of their historical counterpart, similar results can be expected.

There are a few categories that have been identified in which most criticisms of deep learning fall: limitations of deep learning, brittleness, and lack of unsupervised learning.

Limitations of Deep Learning

“Today more people are working on deep learning than ever before—around two orders of magnitude more than in 2014. And the rate of progress as I see it is the slowest in 5 years. Time for something new.”

Francois Chollet, creator of Keras, on Twitter

As this quote is taken from Twitter, its validity is questionable, but it falls in line with similar arguments he has made and it captures the general feeling well. In his book “Deep Learning with Python,” Chollet has a chapter dedicated to the limitations of deep learning, in which he writes: “It [deep learning] will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold.” As a thought experiment, he proposes a huge data set containing source code labeled with a description of each program. He argues that a deep learning system would never be able to learn to program in this way, even with unlimited data, because tasks like these require reasoning and there is no learnable mapping from description to source code. He further elaborates that adding more layers and data makes it seem as though these limitations are vanishing, but only superficially.

He argues that practitioners can easily fall into the trap of believing that models understand the tasks they undertake. However, when the models are presented with data that differs from the data encountered in training, they can fail in unexpected ways. He argues that these models don’t have an embodied experience of reality, and so they can’t make sense of their input. This is similar to arguments made in the 1980s by Dreyfus, who argued for the need for embodiment in AI. Unfortunately, a clear understanding of the role of embodiment in AI has not yet been achieved. In a similar manner, this points to fundamental problems not yet solved by deep learning approaches, namely reasoning and common sense.

In short, Chollet warns deep learning practitioners about inflating the capabilities of deep learning, as fundamental problems remain.

Deep Learning Is Brittle

A common term used to describe deep learning models is “brittle.” There are several examples of why such a description is accurate, including adversarial attacks, lack of ability to generalize, and lack of data. A detailed discussion of these flaws and possible prevention mechanisms follows.

  1. Adversarial attacks: It has been demonstrated that deep learning algorithms can be susceptible to attacks via adversarial examples. Adversarial examples use data modified in ways that humans cannot recognize to drastically affect the behavior of deep learning models. There are multiple methods for creating adversarial examples. In one technique, noise is added to the image by another learning algorithm in order to affect the classification without being visible.

With this technique, it is possible to change an image in such a way that a specified classification is achieved, even one very different from the original classification (like “panda” and “gibbon,” which humans can easily distinguish). When the method of adversarial attack is known, it can be possible to defend against it by augmenting the training set with adversarial examples. To clarify, defending against a specific adversarial attack can be possible, but protecting against adversarials in general is hard. Nonetheless, recently developed methods show promise on this issue. A formal way to defend against general adversarials is to bound the output space of the model; techniques like interval bound propagation achieve state-of-the-art accuracy on different popular image sets.
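
As an illustration of the noise-based attack described above, here is a minimal sketch, not taken from the article, of the well-known fast gradient sign method (FGSM): it nudges each pixel slightly in the direction that increases the model’s loss, producing a perturbation that is nearly invisible to humans. The model choice, epsilon value, and stand-in data are arbitrary assumptions.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(weights=None).eval()    # in practice a trained model would be attacked

    def fgsm_example(image, label, epsilon=0.01):
        """Return a copy of `image` perturbed to raise the classifier's loss."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()  # small step that increases the loss
        return perturbed.clamp(0, 1).detach()

    # Illustrative call with a random stand-in image and an arbitrary class index.
    x = torch.rand(1, 3, 224, 224)
    y = torch.tensor([388])
    x_adv = fgsm_example(x, y)
    print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())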

Alcorn et al. point out that extreme misclassification also happens when familiar objects appear in strange poses. Examples like these reveal that deep learning models’ understanding of objects can be quite naive.

Furthermore, adversarial attacks demonstrate an underlying problem that is more profound—the lack of explainability. Due to the black box nature of deep learning models, predicting what the network is doing is hard. These adversarial attacks show that a model may have found an optimal way to classify an object in the training data, but it may still fail to capture the vastness of the real world.

That said, there has been much work in improving the interpretability of models, mostly in the vision space through methods like semantic dictionaries, saliency maps, and activation atlases. These works represent attempts to gain insights into the hidden layers of deep learning models.

  2. Lack of ability to generalize: Furthermore, deep learning models have problems generalizing beyond the training data provided. Kansky et al. demonstrated that a model trained on the Atari game Breakout failed when small changes were made to the environment. For example, changing the height of the paddle slightly resulted in very poor performance by the agent. Similar criticism can be applied to any reinforcement learning system.

Cobbe compares the evaluation of reinforcement learning agents with supervised learning and concludes that evaluating an agent in the environment it was trained in is like evaluating the performance of a supervised learner on its training set. The difference is that the first practice is well accepted, while the second is not tolerated in any sense.

To solve this problem, Cobbe, as part of OpenAI, devised a benchmark for generalization to promote work in this area. Additionally, transfer learning in the domain of reinforcement learning has recently seen impressive results with OpenAI’s Dota agent. The team announced that it was able to continue training the agent despite substantial changes in rules and model size by using a transfer learning technique. Using similar methods, the lack of generalization in agents could be improved.

3. Lack of data: As “The Unreasonable Effectiveness of Data” demonstrates, data is essential in deep learning. Moreover, the rise in available data was one of the main contributors to the deep learning revolution. At the same time, not every field has access to vast amounts of data.

That said, there are two ways to tackle this problem: by creating more data or by creating algorithms that require less data. Lake et al. show that humans are able to learn visual concepts from just a few examples.

Recent approaches in one-shot or few-shot learning, where an algorithm is presented with only one or a few data points (e.g., one image of a given category), have made substantial improvements. At the same time, transfer learning approaches have improved immensely. By using a model pre-trained on a large data set as a basis, it is possible to significantly reduce training time on new data sets.
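
A minimal sketch of the transfer learning idea described above, assuming PyTorch and an ImageNet-pretrained ResNet (details not specified in the article): the pretrained feature extractor is frozen and only a small new classification head is trained on the smaller target data set.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained on a large data set (ImageNet)

    for param in model.parameters():                   # freeze the pretrained features
        param.requires_grad = False

    num_new_classes = 5                                # hypothetical small target task
    model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # new, trainable head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

    # One illustrative training step on random stand-in data.
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, num_new_classes, (4,))
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()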

To summarize, deep learning models are fairly described as brittle. That said, researchers are working on promising solutions to this problem.

The Dominance of Supervised Learning

Most achievements realized by deep learning have come through supervised or reinforcement learning. However, as LeCun points out, humans mostly learn in an unsupervised manner by observing the environment. Additionally, estimates suggest that around 95 percent of data is unstructured. Moreover, labeling is a time-consuming and expensive process, yet labels contain only very little information about each data point. This is why LeCun believes the field has to shift more toward unsupervised learning.

A particular type of unsupervised learning, sometimes called self-supervised learning, has gained traction in the last couple of years. Self-supervised learning procedures exploit some property of the training data to create a supervision signal. In a video clip, for example, all frames are sequential, and researchers exploit this property by letting the model predict the next frame of the clip, a prediction that can easily be evaluated because the ground truth is inherent in the data. Similar methods can be used for text or audio signals. Additionally, different characteristics of the data can be used, such as rotating an image and predicting the correct angle. The intuition is that in order to turn a rotated image back to its original form, a model needs to learn properties of the world that would also be useful in different tasks like object recognition. This proves to be correct, as such a model can achieve great results in classification tasks via transfer learning. When looking at the first layer of the network, the filters are very similar to those of supervised models, and even more varied.
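
The rotation pretext task mentioned above can be sketched in a few lines; this is an illustrative example rather than the exact setup discussed, and the backbone, batch size, and optimizer are arbitrary choices. Each unlabeled image is rotated by a random multiple of 90 degrees, and the model is trained to predict which rotation was applied, so the supervision signal comes from the data itself.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 4)      # four classes: 0, 90, 180, 270 degrees
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def rotation_batch(images):
        """Rotate each image by a random quarter turn; the turn index is the label."""
        ks = torch.randint(0, 4, (images.size(0),))
        rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2)) for img, k in zip(images, ks)])
        return rotated, ks

    # One illustrative training step on random stand-in images; no human labels are needed.
    inputs, targets = rotation_batch(torch.randn(8, 3, 224, 224))
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()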

This criticism could be detrimental to deep learning and AI in general if researchers dismissed it, but that does not seem to be the case. OpenAI presented some promising results achieved using unsupervised learning earlier this year with the GPT-2 transformer language model. This model can generate very human-like texts by using a very large model and vast amounts of data from Reddit. OpenAI used a type of self-supervised learning, exploiting the sequentiality of text and letting the model predict the next word. Using the same architecture, MuseNet, a model that composes music, was recently created.

Unsupervised learning has the potential to solve significant obstacles in deep learning. Current research evidence suggests optimism regarding the progress of this learning technique.

Conclusion

A complex interplay is present between AI researchers, companies, the technology and the perception of AI on many different levels. Therefore, making any prediction is hard. However, there are a few key things we can observe about the field that differ from historical pre-winter times.

In the past, reliance on government funding was very strong and the industry weak. That is far from the case today; many large companies like Google, Facebook, and Alibaba are investing more in AI technologies alone than the AI industry was worth during its boom times in the 1980s. Even more importantly, those companies have not only invested heavily in AI but have also incorporated it heavily into their products. This gives the field a solid footing, even when public sentiment starts to shift. Similarly, stability is provided by the “AI Race” mentality, which reduces the risk of a decline in funding from governments.

Equally important are the criticisms regarding deep learning and its limitations. Though most of this criticism is valid, the evidence suggests that researchers are already working on solutions or are aware of the innate limitations of the technique.

Furthermore, unsupervised learning, especially self-supervised learning, presents promising opportunities by enabling the use of vast amounts of unlabeled data and by saving immense amounts of tedious labor.

That said, expectations for the field are too high. Predictions about machines reaching human intelligence are unreliable. Furthermore, a Hype Cycle pattern can be observed in current conference attendee numbers, with the field growing quickly on many scales. As Hype Cycle patterns vary, no certain statements or predictions can be made.

Finally, the historical perspective demonstrates the wave-like nature of the field. New technologies are being created every day; a vast amount of them die out; and some are being revived. In this light, it seems adequate to be prepared for current methods to die out, as well as to be on the lookout for some forgotten technology worth reviving.

To summarize: The funding for further AI research appears stable at the moment. However, there are some technological limitations which may, coupled with very high expectations, lead to another AI winter.

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”7

Pedro Domingos

What Schuchmann did not know when writing his August 2019 article was that the Covid-19 virus was going to dramatically disrupt the global economy. The degree to which this will ultimately shift spending priorities remains unclear, but it seems likely that at least some research programs lacking in direct and immediate profitability or viability will be abandoned. The full consequences of the coronavirus will not be understood for some time, and it is likely that many large-scale social changes will greatly impact progress in many fields.

Conclusion

The appearance of AI winters and the AI bubbles, booms, or summers that follow them reflects differing levels of public interest in artificial intelligence and its potential to change human life. In general, Americans, like citizens around the world, tend to be more skeptical than supportive of AI. New discoveries can spark public interest in AI, and this can lead to higher levels of corporate and governmental investment, thus ending an AI winter or leading to another AI boom. Another major factor that determines whether the United States will enter an AI winter is military funding. Military investment has driven many of the major discoveries in robotics and artificial intelligence. Discoveries that suggest future military uses therefore stimulate the growth of the AI field by encouraging more intensive military investment in AI research programs.

Discussion Questions

  • Will AI benefit or harm humanity? Explain your answer.

  • How is military funding related to the perception of AI winters? Use examples from the text.

  • Why hasn’t AI technology reached levels envisioned by key figures in the field? Use examples from the text.

  • Given the other issues facing American citizens, should investment in AI be increased, decreased, or should it remain the same? Explain your answer.

Works Used

1. Anyoha, Rockwell. “The History of Artificial Intelligence.” SITN. Science in the News. 28 Aug. 2017, sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.

2. Hutchins, John W. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. John Benjamins Publishing Company, 2000.

3. “Lighthill Report: Artificial Intelligence: A Paper Symposium.” Chilton Computing. 1973, pdfs.semanticscholar.org/b586/d050caa00a827fd2b318742dc80a304a3675.pdf.

4. Loiseau, Jean-Christophe. “Rosenblatt’s Perceptron, the First Modern Neural Network.” Medium. 11 Mar. 2019, towardsdatascience.com/rosenblatts-perceptron-the-very-first-neural-network-37a3ec09038a.

5. “New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” New York Times. 8 July 1958, www.nytimes.com/1958/07/08/archives/new-navy-device-learns-by-doing-psychologist-shows-embryo-of.html.

6. Schuchmann, Sebastian. “History of the Second AI Winter.” Medium. 12 May 2019, towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45.

7. Schuchmann, Sebastian. “Probability of an Approaching AI Winter.” Medium. 17 Aug. 2019, towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a.

8. “Survey X: Artificial Intelligence and the Future of Humans.” Elon University. 2020, www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans.xhtml.
