
From the origins of AI to ChatGPT: waves of optimism and questions
The history of AI is marked by cycles of optimism and skepticism. As early as the 1950s, researchers imagined a future populated by machines capable of thinking and solving problems as efficiently as humans. This enthusiasm led to ambitious promises, such as systems capable of automatically translating any language or perfectly understanding human speech. However, these expectations proved unrealistic given the limitations of the technologies of the time. The first disappointments led to the “AI winters” of the late 1970s and then again of the late 1980s, periods when funding fell as the technologies failed to live up to their stated promises.

The 1990s marked a major turning point thanks to three key elements: the explosion of big data, the increase in computing power, and the emergence of more powerful algorithms. The Internet facilitated the massive collection of data, essential for training machine learning models. These vast datasets are crucial because they provide the examples AI needs to “learn” and perform complex tasks. At the same time, advances in processors made it possible to run advanced algorithms, such as the deep neural networks that underpin deep learning. These enabled AIs to perform previously inaccessible tasks, such as image recognition and automatic text generation. The increased capabilities have rekindled hopes of seeing the revolution anticipated by the pioneers of the field, with AIs ubiquitous and effective across a multitude of tasks. However, they come with major challenges and risks that are beginning to temper the enthusiasm surrounding AI.

A gradual realization of the technical limits that today weigh on the future of AI
Recently, stakeholders attentive to the development of AI have become aware of the limits of current systems, which can slow down their adoption and limit the expected results. First, deep learning models are often referred to as “black boxes” because their complexity makes their decisions difficult to explain. This opacity can erode user trust and limit adoption out of fear of ethical and legal risks.

Algorithmic bias is another major issue. Current AIs are trained on huge volumes of data that are rarely free of bias, and they reproduce these biases in their results, as was the case with Amazon’s recruitment algorithm, which systematically discriminated against women. Several companies have had to backtrack because of bias detected in their systems: Microsoft removed its chatbot Tay after it generated hateful remarks, while Google suspended its facial recognition tool that was less effective for people of color. These risks make some companies reluctant to adopt such systems, for fear of damaging their reputation.

The ecological footprint of AI is also a concern. Advanced models require a lot of computing power and consume massive amounts of energy. For example, training a large model like GPT-3 would emit as much CO₂ as five round trips between New York and San Francisco. In the context of the fight against climate change, this calls into question the relevance of deploying these technologies at scale. Overall, these limitations explain why some initial expectations, such as the promise of widespread and reliable automation, have not been fully realized and continue to face real-world challenges that may temper enthusiasm for AI.

Towards a measured and regulated adoption of AI?
AI, already well integrated into our daily lives, seems too entrenched to disappear, making an “AI winter” like those of the 1970s and 1980s unlikely. Rather than a lasting decline of this technology, some observers instead speak of the emergence of a bubble. The announcement effects, amplified by the repeated use of the term “revolution,” have contributed to an often disproportionate excitement and to the formation of a certain bubble. Ten years ago, it was machine learning; today, it is generative AI. Different concepts have been popularized in turn, each promising a new technological revolution.
Figure: Google Trends.