January 23, 2022

Introduction to the usage of AI for computer graphics

Artificial Intelligence helps content creators with a wide range of tasks and has countless applications in this field.

Introduction

In this article, we discuss how Artificial Intelligence is used to improve computer graphics. Computer graphics is a whole sub-field of computer science that aims to simulate and render 3D visual content. It has applications in many industries, including animated films, movie visual effects, product design, architectural walkthroughs, medical imaging, and more.

We chose not to go into the details of how these techniques work, focusing instead on demystifying how AI currently helps creators in various ways. However, you can find links to interesting articles for each use case!

🎨 Note: This article is part of our series on content creation. We hope to give you a better idea of the uses of AI for content creators across a wide range of domains. Don't hesitate to check our other articles.

Dealing with customizability: synthesis of 3D assets

Automating 3D asset generation is a vast topic in the 3D industry. 3D assets often need to be produced at an extremely high level of detail, which takes a lot of time and resources. Automating this process reduces production costs, freeing time and budget to improve the overall quality of the content.

Automated methods for generating 3D models usually take pictures as input, using learned knowledge about different kinds of objects to fill in the missing information.

For example, it is possible to create a 3D asset of a car from a single picture. Such an algorithm is, of course, designed specifically for vehicles, which is why it can achieve such an excellent result.

Example of car generation using a picture as an input

As with the cars above, we can do the same thing with human faces. By analyzing a database of 3D faces, it is possible to build algorithms that transform 2D pictures of faces into 3D assets: they learn what a face looks like and infer the missing geometry from there.

It won't reproduce your exact face, since a single picture lacks information, just as a human couldn't do it either. Still, it will produce an accurate, plausible reconstruction of the whole head from a single portrait picture!
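Many such face-reconstruction techniques are built on so-called 3D morphable models: a face is expressed as an average shape plus a weighted sum of shape components learned from the database of 3D scans. Below is a minimal numpy sketch of that representation; the mesh size, component count, and all values are random placeholders, not a real learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VERTICES = 500     # toy mesh size; real models use tens of thousands of vertices
N_COMPONENTS = 10    # number of learned shape components

# In a real morphable model, these come from PCA over a database of 3D face scans.
mean_shape = rng.normal(size=(N_VERTICES, 3))
shape_basis = rng.normal(size=(N_COMPONENTS, N_VERTICES, 3))

def reconstruct_face(coefficients):
    """Generate a 3D face as the mean shape plus a weighted sum of components."""
    offsets = np.tensordot(coefficients, shape_basis, axes=1)
    return mean_shape + offsets

# Fitting a photo amounts to searching for the coefficients whose rendered face
# best matches the image; here we simply evaluate one candidate set.
face = reconstruct_face(rng.normal(size=N_COMPONENTS) * 0.1)
print(face.shape)  # (500, 3): one 3D position per mesh vertex
```

With zero coefficients the model returns the average face, which is why a single portrait is enough to get a plausible head: the model only has to estimate a handful of coefficients, not every vertex.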

Example of 3D face generation using pictures as input

A recent work that gives us a glimpse of what may be possible in the future is called NeRF (Neural Radiance Fields). It allows entire scenes to be reconstructed from a set of images of the same object taken from different viewpoints.

These results are outstanding and promise to revolutionize the way we create 3D assets.
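At its core, NeRF trains a small neural network to map a 3D position (and viewing direction) to a colour and a density; an image is then rendered by compositing those values along each camera ray. Here is a heavily simplified numpy sketch of that volume-rendering step, with a hand-written field standing in for the trained network (the sphere, colours, and all constants are toy assumptions):

```python
import numpy as np

def radiance_field(points):
    """Stand-in for the trained network: returns (density, rgb) per 3D point.
    Here, a soft sphere of radius 1 centred at the origin, coloured red."""
    dist = np.linalg.norm(points, axis=-1)
    density = 5.0 * np.exp(-4.0 * np.maximum(dist - 1.0, 0.0))
    rgb = np.broadcast_to([1.0, 0.0, 0.0], points.shape).copy()
    return density, rgb

def render_ray(origin, direction, n_samples=64, near=0.5, far=4.0):
    """Classic volume rendering: composite colour along one camera ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, rgb = radiance_field(points)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)            # opacity of each segment
    transmittance = np.cumprod(1.0 - alpha + 1e-10)   # light surviving past each segment
    transmittance = np.concatenate([[1.0], transmittance[:-1]])
    weights = alpha * transmittance                   # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)       # final pixel colour

color = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(color)  # mostly red: the ray passes through the sphere
```

Training replaces the hand-written field with a network optimized so that rendered rays match the input photographs, pixel by pixel.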

Once we have such 3D models, we can use them in our virtual worlds. For these models to be as realistic as possible, we want their motions to be as close to reality as possible. How can we achieve that with the resources we have?

Improving the quality of 3D animations with Machine Learning

Here, we will discuss the usage of Machine Learning to improve the quality of our animations.

We are going to discuss this topic in two parts:

  1. How Artificial Intelligence can better simulate the laws of physics and objects that do not move of their own will.
  2. How Artificial Intelligence can help us better simulate how living beings move. Living beings move in their own ways, which require particular techniques to animate.

Still, these two use cases share a common goal: improving the quality of our experience with 3D simulations. This is done by finding the right balance between three components:

  1. The realism of our simulations
  2. The amount of computational resources used
  3. The amount of time necessary for the computation

Different media have different requirements. For example, 3D animated movies are rendered offline, and we want those simulations to be as realistic as possible, so we can spend a lot of time and computational resources to achieve higher quality.

On the other hand, media like video games require real-time rendering, so we need to compromise on graphics quality. Simulations that use few computational resources can also be embedded in devices with limited computing power.

Optimizing computer graphics to meet these constraints is challenging. Complex simulations involve interactions between thousands of particles with different characteristics. We need to compute how light travels and bounces around the scene.

Machine learning is an excellent tool in both these contexts: machine learning algorithms can contain knowledge obtained by reviewing thousands of examples and can be queried quickly. As we will see in the two parts below, it is used for various tasks regarding realism and motion of particles/objects.

Better simulate laws of physics with AI

Video game designers often want their games to be as realistic as possible. To this end, all the different materials should behave as they do in real life.

How can we do that? We can represent each material as a set of particles and calculate how each particle interacts with the others.

However, such a process would require an extraordinary amount of computation. Instead, we can use Machine Learning models to estimate how these particles behave at a larger scale.
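The loop such a learned simulator runs can be sketched as follows: a model predicts each particle's acceleration, and a standard integrator updates velocities and positions. In this numpy toy, a hand-written gravity-plus-floor rule stands in for the learned model, so all forces and constants here are made up for illustration:

```python
import numpy as np

def predicted_acceleration(positions, velocities):
    """Stand-in for the learned model. A real simulator would run a graph
    neural network over neighbouring particles; here we just apply gravity
    plus a spring-like floor at y = 0 so the loop has something to integrate."""
    acc = np.zeros_like(positions)
    acc[:, 1] -= 9.81                              # gravity
    below = positions[:, 1] < 0.0
    acc[below, 1] += -50.0 * positions[below, 1]   # floor pushes particles back up
    return acc

def step(positions, velocities, dt=0.01):
    """Semi-implicit Euler update applied after the model's prediction."""
    velocities = velocities + dt * predicted_acceleration(positions, velocities)
    positions = positions + dt * velocities
    return positions, velocities

# Drop 100 particles from above the floor and simulate one second.
rng = np.random.default_rng(0)
pos = rng.uniform([-1, 1, -1], [1, 2, 1], size=(100, 3))
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = step(pos, vel)
print(pos[:, 1].min(), pos[:, 1].max())  # particles have fallen toward the floor
```

The gain comes from the prediction step: querying a trained network once per step is far cheaper than computing every pairwise particle interaction exactly.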

Different kinds of particles have different behaviours, and we need to create a wide range of models to address the simulation of each behaviour.

To estimate the quality of the results, we run test simulations. Researchers are always highly creative with their simulations!

As an example, a great list of simulations made for the paper "Learning to simulate complex physics with graph networks" is accessible here.

How can we simulate water's behaviour if dropped in a cube-shaped container? By using artificial intelligence!

Improving the quality of asset motion

While accurately simulating the laws of physics is extremely important for the quality of our simulations, we often want to include living beings whose motions are not governed solely by the laws of physics.

So how can we have beautiful animations in this context?

Once again, we can see that artificial intelligence significantly impacts the industry by offering easier and prettier ways of depicting how living beings move.

Motion capture

Motion capture is a popular process to create 3D animations by recording the movement of real objects and people rather than making the animation on a computer.

As an example, the animation of a human face can be created by recording the movement of a real human face in 3 dimensions.

Traditionally, this recording is made with dedicated equipment, including sensors. However, it is now possible to extract the same information solely using a video of the moving objects.
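A typical video-based pipeline runs a pose-estimation model on every frame to get 2D joint positions, then smooths the resulting tracks before retargeting them onto a 3D character. In the numpy sketch below, `estimate_pose_2d` is a hypothetical placeholder that fakes such a detector (a waving "hand" plus noise), not a real model:

```python
import numpy as np

def estimate_pose_2d(frame_index, n_joints=17, noise=0.02):
    """Placeholder for a real video pose-estimation model: returns per-frame
    2D joint positions. Here we fake a waving hand plus detector noise."""
    rng = np.random.default_rng(frame_index)
    t = frame_index / 30.0                                  # 30 fps video
    joints = np.tile([[0.5, 0.5]], (n_joints, 1)).astype(float)
    joints[0] = [0.5 + 0.2 * np.sin(2 * np.pi * t), 0.3]    # the "hand" joint
    return joints + rng.normal(scale=noise, size=joints.shape)

def smooth_track(track, window=5):
    """Temporal moving average: a cheap, common cleanup applied to noisy
    per-frame detections before retargeting onto a character rig."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, track)

frames = np.stack([estimate_pose_2d(i) for i in range(60)])  # (60, 17, 2)
smoothed = smooth_track(frames.reshape(60, -1)).reshape(60, 17, 2)
print(frames.shape, smoothed.shape)
```

Real systems replace the moving average with learned temporal models and also lift the 2D keypoints into 3D, but the per-frame detect-then-smooth structure is the same.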

This technology is not yet precise enough to fully replace motion-capture rigs in the industry, but that may happen sooner than we think. It already works well with good lighting and a clear background, much like green screens enable background replacement in movies.

Improving character movements in video games

Moreover, Artificial Intelligence has been used to improve the way characters move in video games, mimicking how a human would behave, just as it does for non-living matter like the water above. Historically, these characters are animated using prerecorded clips that we loop together:
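That traditional technique boils down to sampling clips and crossfading between them. A toy numpy sketch, using per-joint angles instead of the quaternions a real engine would blend; both clips and all constants are made up for illustration:

```python
import numpy as np

def clip_walk(t):
    """Toy 'walk' clip: per-joint angles (radians) at time t, looping each second."""
    phase = 2 * np.pi * (t % 1.0)
    return np.array([0.6 * np.sin(phase), -0.6 * np.sin(phase), 0.0])

def clip_idle(t):
    """Toy 'idle' clip: a gentle sway with a 2-second loop."""
    return np.array([0.0, 0.0, 0.05 * np.sin(np.pi * (t % 2.0))])

def crossfade(t, fade_start, fade_len=0.3):
    """Blend from walk to idle over fade_len seconds: the classic looping
    technique. Real engines blend quaternions per joint, but the weighting
    logic is the same."""
    w = np.clip((t - fade_start) / fade_len, 0.0, 1.0)
    return (1.0 - w) * clip_walk(t) + w * clip_idle(t)

pose_before = crossfade(0.0, fade_start=1.0)   # pure walk
pose_after = crossfade(2.0, fade_start=1.0)    # pure idle
print(pose_before, pose_after)
```

The blend weight depends only on time, not on the world around the character, which is exactly why these animations break down when the environment demands a different reaction.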

However, such animations can often look unrealistic because the character does not adapt its behaviour to the surrounding environment. Different kinds of obstacles and items call for different interactions, and the player quickly notices unnatural behaviour. At worst, a character walking into a wall keeps repeating the same movement, an obvious limitation of this technique.

We can use AI to learn how to interact with the game world in the most natural way possible. This technology is still in development, but it has already shown promising results.

You can learn more about this technology by reading this paper and from this Nvidia conference, where they explain the technology.

Animating a pianist's hands

We have seen that we can animate matter and simulate living beings in video games. We believe another great success for such computer graphics applications would be animating a human in all their complexity. For example, for the animated movie Soul, released in 2020, creating the animation of the pianist's hands was a tough challenge.

The creators had to manually reproduce the gestures of a pianist playing the track. However, recent Machine Learning work (like this one) can reproduce how hands move on the piano solely from the music sheet. The algorithm was trained by observing videos of many pianists playing different tracks. It is a lovely example of how Artificial Intelligence helps create beautiful animations.
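Conceptually, such a system must decide where the hand sits over the keyboard and which finger presses each note before full hand poses can be generated. The toy sketch below illustrates those two steps with a greedy nearest-finger rule; the real work learns this from videos of pianists, so every rule and constant here is a simplifying assumption:

```python
import numpy as np

KEY_WIDTH = 1.0  # toy key spacing; key index 0 is the leftmost key of our range

def assign_fingers(notes, hand_center):
    """Greedy toy fingering: map each note (key index) to the nearest of the
    five fingers, given the hand's current centre. Real systems learn this
    mapping from recordings of pianists."""
    finger_offsets = np.array([-2, -1, 0, 1, 2]) * KEY_WIDTH  # thumb..pinky
    finger_positions = hand_center + finger_offsets
    return [int(np.argmin(np.abs(finger_positions - n))) for n in notes]

def track_hand(note_sequence, start_center=10.0, smoothing=0.3):
    """Move the hand centre smoothly toward the next notes, then pick fingers:
    hand position first, fingering second."""
    center = start_center
    result = []
    for chord in note_sequence:
        center = (1 - smoothing) * center + smoothing * float(np.mean(chord))
        result.append((center, assign_fingers(chord, center)))
    return result

# A little ascending phrase played as single notes.
for center, fingers in track_hand([[8], [10], [12], [14]]):
    print(round(center, 2), fingers)
```

From these hand positions and finger assignments, a full animation system would then generate the detailed wrist and finger poses frame by frame.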

Remarkably, the algorithm developed by Massive technologies reproduced human hand motion faithfully, down to the wrist and finger movements.

Conclusion

These were some of the most exciting applications of AI in computer graphics that we wanted to share. Still, many more are out there, and many more are to come.

We invite you to read our other articles about content creation. They are closely related to this one, applying similar algorithms to different types of data (images, sounds, etc.), all with the common goal of helping creators.

Thanks for reading!

Malrick Costantini
