In recent years, generative artificial intelligence (AI) has become a prominent topic in the tech world. It has brought forward remarkable advancements like Google’s Bard, Microsoft’s Copilot, and OpenAI’s popular ChatGPT, a chatbot that can craft text resembling human-written content. But what exactly is generative AI, and how does it differ from traditional AI models?
AI experts at the Massachusetts Institute of Technology (MIT) have helped break down the ins and outs of this increasingly popular and ubiquitous technology. The AI experts answered questions such as how powerful generative AI systems like ChatGPT work, why these systems seem to be finding their way into practically every application imaginable, and what makes them different from other types of artificial intelligence.
These experts are Phillip Isola, an associate professor of electrical engineering and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Devavrat Shah, the Andrew and Erna Viterbi Professor and a member of the Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems; and Tommi Jaakkola, the Thomas Siebel Professor and a member of CSAIL and IDSS.
How Generative AI Differs from Traditional AI
As the experts explain, traditional AI focused primarily on machine-learning models that make predictions from data. For example, these models could predict medical conditions from X-rays or assess the creditworthiness of borrowers using historical data.
Generative AI, on the other hand, is designed to create new data rather than predict an outcome from existing data. It learns to generate content resembling the data it was trained on, marking a significant shift in AI capabilities.
Origin of Generative AI
The experts trace generative AI’s origins back to simpler models like Markov chains, which were limited in their ability to generate plausible text. However, the field has evolved with larger datasets and more complex deep-learning architectures. The base models behind systems like ChatGPT are similar to Markov models but are vastly larger, with billions of parameters and training on extensive data sources from the internet.
In text prediction, a Markov model generates the next word in a sentence by looking at the previous word or a few previous words. But because these simple models can only look back a word or two, they aren’t good at generating plausible text, says Tommi Jaakkola.
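To make the idea concrete, here is a minimal sketch of word-level Markov text generation, written for illustration rather than taken from any system the experts describe. It counts, for each word in a toy corpus, which words follow it, then samples a continuation one word at a time:

```python
import random
from collections import defaultdict

def build_markov_model(text, order=1):
    """Map each tuple of `order` consecutive words to the list of words
    observed to follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, seed, length=10, rng=None):
    """Extend `seed` by repeatedly sampling one of the words that
    followed the current context in the training text."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        key = tuple(out[-len(seed):])
        candidates = model.get(key)
        if not candidates:
            break  # context never seen in training; stop generating
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = build_markov_model(corpus, order=1)
print(generate(model, seed=("the",), length=5))
```

Because the model conditions only on the last `order` words, it can never capture long-range structure, which is exactly the limitation Jaakkola points to; large language models keep the same next-word-prediction framing but condition on vastly longer contexts with billions of learned parameters.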
“We were generating things way before the last decade, but the major distinction here is in terms of the complexity of objects we can generate and the scale at which we can train these models,” he explains.
Exploring the Differences
The fundamental models that underpin ChatGPT and related systems function much like a Markov model. However, with billions of parameters, ChatGPT is far larger and more sophisticated than any simple Markov model. And it has been trained on an enormous amount of data – in this case, much of the publicly available text on the internet.
“When it comes to the actual machinery underlying generative AI and other types of AI, the distinctions can be a little bit blurry. Oftentimes, the same algorithms can be used for both,” says Phillip Isola.
But while generative models can achieve incredible results, they aren’t the best choice for all types of data. For tasks that involve making predictions on structured data, like the tabular data in a spreadsheet, generative AI models tend to be outperformed by traditional machine-learning methods, Devavrat Shah explains.
“The highest value they have, in my mind, is to become this terrific interface to machines that are human friendly. Previously, humans had to talk to machines in the language of machines to make things happen. Now, this interface has figured out how to talk to both humans and machines,” says Shah.
Generative AI’s Future Direction
One promising future direction Isola sees for generative AI is its use in fabrication. He believes that generative AI will alter the economics of a wide range of fields, and he anticipates that generative AI systems will eventually be used to create AI entities that are more broadly intelligent.
“There are differences in how these models work and how we think the human brain works, but I think there are also similarities. We have the ability to think and dream in our heads, to come up with interesting ideas or plans, and I think generative AI is one of the tools that will empower agents to do that, as well,” Isola says.
While generative AI holds great promise, it also presents challenges. These models can inherit biases from training data, potentially amplify hate speech, and create content that resembles the work of specific individuals, leading to copyright issues. Worker displacement is another concern as AI chatbots replace human roles.
Despite these challenges, generative AI has the potential to empower artists, change various fields, and even be used for fabrication and for creating more generally intelligent AI agents. Generative AI represents a paradigm shift in artificial intelligence, bridging the gap between machines and human-like creativity, and its future applications are rife with possibilities. But adequate regulation must be in place to realize that potential.