The History of Generative AI and the Metaverse

Matt White
3 min read · Feb 14, 2023


The concept of the Metaverse has been around for 15 years, and generative AI has been an active area of research for over a decade, but it is in the next 10 years that the two will coalesce to create something amazing.

Midjourney rendering of the prompt “Generative AI generating the Metaverse”

Until Goodfellow et al. published their paper on GANs, the most I had seen of generative deep learning was at a deep learning summer school run by UCLA, where notable AI leaders Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, and many other brilliant minds in the field of artificial intelligence delivered talks on deep learning in their own areas of research (the schedule and videos are still available here).

During the summer school, Ruslan Salakhutdinov, a professor at the University of Toronto at the time (now at Carnegie Mellon), gave a compelling lecture on generative deep learning and deep Boltzmann machines, demonstrating models generating images of airplanes, querying a trained model on its belief of what Sanskrit should look like, and predicting how the missing half of an image should appear. Although these capabilities existed before the summer school, it was the first time I had seen generative AI in action, and it really got me thinking about the future potential of the field.

In 2014, Ian Goodfellow, working under the supervision of Yoshua Bengio at the Université de Montréal, published innovative research on a new modeling architecture called the Generative Adversarial Network (GAN). The adversarial model pits a generator against a discriminator, with the two networks incrementally improving the generated images through competition. Although the first results were not at the level of today's GAN architectures like StyleGAN and CycleGAN, they were impressive nonetheless, and they inspired me to consider how far generative AI could be taken and how it might be used in the Metaverse.
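To make the generator-versus-discriminator idea concrete, here is a minimal sketch of a GAN training step in PyTorch. The network sizes, optimizer settings, and flattened-image data shape are illustrative assumptions on my part, not the configuration from Goodfellow's original paper:

```python
# A minimal sketch of the adversarial training loop described above.
# Architecture and hyperparameters are illustrative, not the original paper's.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed)

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = G(noise).detach()  # don't backprop into G on this step
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = bce(D(G(noise)), real_labels)  # G wants D to output "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Usage with a dummy minibatch scaled to the Tanh output range [-1, 1]:
# train_step(torch.rand(32, data_dim) * 2 - 1)
```

Repeating this step over many minibatches is the competition that incrementally sharpens the generator's output: each improvement in the discriminator forces the generator to produce more plausible samples to keep fooling it.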

About a decade earlier, around 2006, the concept of an Open Metaverse was born during the peak of Linden Lab's Second Life craze. The SL community had reverse-engineered Linden Lab's protocol and wanted to run their own virtual worlds and to carry their avatars and digital assets with them as they moved freely between worlds. This kicked off the first Open Metaverse movement and saw the creation of communities and projects like OpenSim, OpenGrid and the Open Metaverse Foundation, the latter a project that I moved into the Linux Foundation with Royal O'Brien in 2022.

The prospect of using generative deep learning models to produce highly plausible synthetic data such as images, audio, video, and 3D assets and scenes was a long way off in 2014, and although we were much closer to viable results at the end of 2022, we still have some distance to go. However, we will see major advancements in generative AI research over the course of 2023 and 2024, producing high-fidelity results and reducing current pain points such as long training times and computational complexity. New model architectures will make generative models far more viable and accessible as tools for the creation, design, and manipulation of synthetic media (3D, images, audio, video), for text and code generation and manipulation, and for hyper-personalization. AI will also be heavily leveraged for non-generative applications like hyper-automation, fraud detection in decentralized environments, AI arbitration, safety enforcement, and AI smart contracts.


Written by Matt White

AI Researcher | Educator | Strategist | Author | Consultant | Founder | Linux Foundation, PyTorch Foundation, Generative AI Commons, UC Berkeley