Hey, how did you sleep last night? Well, I hope you enjoyed it, because tonight you’re going to be racked with fevered nightmares about a computer animator with an uncanny obsession with reanimating human skin.
By now, you’ve probably seen how 3D scanning can record a remarkably realistic dimensional portrait of a face. It’s cool technology, but as the authors of a technical paper from the University of Southern California’s Institute for Creative Technologies, highlighted by Prosthetic Knowledge, explain in a new video, it only skims the surface — literally. A scan of human skin can’t capture the complex “microstructures” on the skin’s surface that measure less than about a tenth of a millimetre across. As a result, computer-generated skin often looks unrealistic when it stretches or scrunches.
In their paper, the USC team explains how the skin’s microstructure deforms in characteristic ways as we make facial expressions. Now, they have figured out how to recreate that effect by dynamically adjusting a high-resolution displacement map across a scanned facial surface, creating a simulated version of our complex epidermis:
When skin stretches, the microstructure flattens out and the surface appears less rough as the reserves of tissue are called into action. Under compression, the microstructure bunches up, creating micro-furrows which exhibit anisotropic roughness…
We approximate the skin being flattened under stretching, and bunched up under compressions by convolving a 16K displacement map. We blur the microgeometry displacement map in the direction of stretching, and sharpen it in the direction of compression using the surface normal distribution histogram as a guide. This entire computation can be efficiently implemented on GPU shaders.
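To make the idea a bit more concrete, here’s a minimal, hypothetical sketch in Python with NumPy and SciPy (the function name and parameters are mine, not the paper’s). It blurs a small displacement map along an assumed stretch axis, and applies an unsharp mask along the same axis under compression:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def deform_microstructure(disp, stretch_ratio, axis=1, sigma=2.0, amount=1.5):
    """Toy take on the paper's idea: blur the microgeometry displacement
    map in the direction of stretching, sharpen it in the direction of
    compression.

    disp          -- 2D micro-displacement map
    stretch_ratio -- >1 means stretched along `axis`, <1 means compressed
    """
    if stretch_ratio >= 1.0:
        # Stretched skin flattens out: low-pass filter along the stretch
        # axis, more strongly the more it stretches.
        return gaussian_filter1d(disp, sigma=sigma * (stretch_ratio - 1.0) + 1e-3, axis=axis)
    # Compressed skin bunches into micro-furrows: boost high frequencies
    # along the axis with an unsharp mask.
    blurred = gaussian_filter1d(disp, sigma=sigma, axis=axis)
    return disp + amount * (1.0 - stretch_ratio) * (disp - blurred)

# Example: a random 256x256 micro-displacement patch, stretched 30%
# along the horizontal axis, then compressed 30% along the same axis.
patch = np.random.rand(256, 256).astype(np.float32)
stretched = deform_microstructure(patch, stretch_ratio=1.3, axis=1)
compressed = deform_microstructure(patch, stretch_ratio=0.7, axis=1)
```

The real system works on 16K maps in GPU shaders, uses the surface normal distribution histogram to decide how much to blur or sharpen, and handles stretch and compression along different directions at once; the uniform, single-axis filtering above is only meant to show the blur-one-way, sharpen-the-other idea.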
The full video shows some fascinating examples of the technology in action. And boy, they don’t shy away from moles, do they?

[Institute for Creative Technologies; h/t Prosthetic Knowledge]