A glimpse into the making of digital humans

The term ‘digital human’ has been on the rise, especially in the entertainment industry. But what is it exactly, and how do you make one?

This post is also available in Dutch.

Digital humans are on the rise

There is something compelling about looking at real people, a feeling that digital humans do not quite evoke. However, the thousands of people who flocked to an arena in east London at the end of May for the first concert featuring the ABBAtars may disagree. The ABBAtars are digital renditions of the real members of ABBA (the Swedish pop group active from 1972 to 1982), created using state-of-the-art computer graphics and motion capture. Digital humans, defined as embodied lifelike characters that can interact socially with humans, can be applied in various domains (e.g., education, healthcare, and the service industry), yet the entertainment industry (which includes movies and gaming) currently makes the most use of this technology. The likely reason is that, compared to entertainment, domains such as education and healthcare require digital humans to have a higher level of linguistic and social skill in order to be useful beyond their human-like appearance.

Appearance

Digital humans are modelled after real people. First, multiple images of a real person’s face and/or body are taken from different angles. These images are then stitched together to create a 3D model. Next, a texture, extracted from the 2D images, is applied to the model, adding color and detail. In the case of ABBA, since the ABBAtars depict the band members as they looked in 1979, they were created by fusing 3D modelling with archive footage.
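For readers curious about the math, the geometric core of this “stitching” step is triangulation: if the same facial feature is seen from two cameras whose positions are known, its 3D location can be computed. Below is a minimal sketch in Python, assuming two idealized pinhole cameras; the camera matrices and the point are made-up illustrative values, not data from any real reconstruction pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two cameras
    using the Direct Linear Transform (DLT)."""
    # Each image observation contributes two linear constraints on the
    # homogeneous 3D point X; stack them and take the SVD null space.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # from homogeneous to 3D coordinates

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted along the x-axis
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])           # a point on the "face"
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)          # recovers X_true
```

Real photogrammetry software repeats this for millions of matched feature points across many photos, then fits a surface through the resulting point cloud.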

Image credits: abbavoyage

Movement

To animate those 3D models, captures of live performances were used. In practice, this means that the person whose motion is to be captured wears a motion capture (mocap) suit that enables body motion tracking, while facial movements are tracked via multiple dots marked on the person’s face. The real ABBA performed every song wearing mocap suits in front of many cameras, which allowed digital artists to capture every movement and mannerism of the band members. In addition, motion capture from a group of younger actors was later blended with the movements of the ABBA bandmates to give the digital humans a youthful way of moving.
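Behind the scenes, turning tracked marker dots into the pose of a body part comes down to finding the rotation and translation that best map the markers’ rest positions onto their observed positions. A minimal sketch of this step, using the standard Kabsch algorithm on made-up marker coordinates (the marker layout and the motion below are purely illustrative):

```python
import numpy as np

def estimate_rigid_motion(src, dst):
    """Kabsch algorithm: best-fit rotation R and translation t
    mapping marker positions src onto dst (both N x 3 arrays)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered marker clouds
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy markers on a limb, rotated 90 degrees about the z-axis and shifted
markers = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
moved = markers @ Rz.T + np.array([2.0, 0.0, 0.0])

R, t = estimate_rigid_motion(markers, moved)  # recovers Rz and the shift
```

A full mocap system solves many such fits per frame, one per tracked body segment, and maps the results onto the digital character’s skeleton.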

Speech

The tracks performed at the London concerts were assembled from previous recordings of ABBA, while pre-recorded messages were used to address the audience. Pre-recorded speech suits settings such as concerts, which demand far less interaction than real-life conversation. When it comes to digital humans in general, however, their ability to hold a coherent conversation is still rather limited. This limitation may be overcome in the future; for now, it helps explain why digital humans are not applied more widely beyond the entertainment industry.

The past two years have done much to blur the line between real and digital. How far this blurring goes remains to be seen. Digital human technology still has limitations to overcome (e.g., the ability to converse naturally), yet a transition to more general use of digital humans, not just as performers but as teachers, nurses, and sales assistants, seems likely.

Credits
Author: Julija Vaitonyte
Buddy: Kim Beneyton
Editor: Christienne Gonzalez Damatac
Translator: Floortje Bouwkamp
Editor translation: Felix Klaasen

Image: abbavoyage
