Will Unreal Engine 5 Render 3D Modellers Obsolete?
Epic’s jaw-dropping Unreal Engine 5 tech demo, “Lumen in the Land of Nanite”, was released just recently, granting a sneak peek into next-gen video game graphics. While the Tomb Raider-esque presentation was very engaging and fun to watch, I couldn’t help but feel some uncertainty about how much this technology is going to change the 3D artist’s job. Short answer: not that much. But of course, it is far from that simple, so here’s what I think.
“Art That Just Works”
“I just want to be able to import my ZBrush model, my photogrammetry scan, my CAD data, without wasting any time optimising, creating LODs, or even lowering the quality to make it hit framerate. In the end, that’s what it’s all about. Art that just works,” say the guys in the introduction, and while it sounds very nice, it also sounds suspiciously generic and vague. So what does it actually mean?
Nanite, the engine’s new geometry rendering system, seems to be quite capable of dealing with unoptimised models. Millions and millions of triangles; even the smallest detail represented as actual geometry rather than as a normal map on top of a low-poly model? That is great news for environment artists. As long as they don’t want to assemble a larger piece of scenery outside of Unreal, of course. I would be very happy to see, for example, Maya’s viewport dealing with 100 million polygons, especially textured and lit in real time. I’m afraid that isn’t going to happen for a while. Similarly, if I wanted to hand-paint geometry that dense, Substance Painter would also have to let me do that. With 8K maps, as mentioned in the video. Call me narrow-minded, but it does sound a little like science fiction to me.
In a video game context, “art that just works” means art that works according to game design, serving the primary purpose of supporting gameplay. Photogrammetry is great, but a lot of environments we see in video games simply do not exist on Earth. What doesn’t exist can’t be scanned, and altering photogrammetry data without optimising it first sounds like a nightmare. And so far we have only talked about static models that never move.
Film Quality Assets
The video also mentions that the engine is now capable of dealing with “film quality assets”. Obviously that statement does not include animated models and characters. I say obviously, because even in feature film productions, animators work with highly optimised proxy models that are much lower in polygon count and texture resolution than those used in the final rendering process. Rigging and animation require a lot of resources, and it simply would not make sense to use unoptimised models anywhere in that part of the workflow.
If we take a closer look at the 33-million-triangle statue featured in the demo, we see that its surface is covered with two different materials, a stone-like one and a metallic one, with some kind of procedural dirt applied over the whole model. It looks great and serves its purpose. But now, let’s imagine similar armour worn by a character interacting with other objects. Suddenly it becomes quite obvious that this workflow simply will not work, regardless of which of the following two scenarios we choose.
If we shoot for high levels of realism, then the armour’s pieces should remain rigid while interacting with the character and one another dynamically. Physics engines work with either the bounding box of the model or the vertices that define its surface. With millions of vertices, the dynamic simulation would quickly overwhelm any computer, for no good reason. So in order to control all that, it would be necessary to create an optimised model anyway, just to drive the simulation.
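To make the cost difference concrete, here is a minimal sketch in Python. It only illustrates the general idea from the paragraph above, not any real engine’s API: testing a point against an axis-aligned bounding box costs a handful of comparisons no matter how dense the mesh is, while anything that touches the vertices themselves scales with vertex count. All names and the tiny stand-in “mesh” are my own illustrative assumptions.

```python
# Illustrative sketch: why physics uses simplified proxies.
# A bounding-box test is O(1); per-vertex work is O(n) in vertex count.

def bounding_box(vertices):
    """Axis-aligned bounding box (AABB) of a 3D point list."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    zs = [v[2] for v in vertices]
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def point_in_box(p, box):
    """Six comparisons, regardless of how many vertices the mesh has."""
    lo, hi = box
    return all(lo[i] <= p[i] <= hi[i] for i in range(3))

verts = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]  # tiny stand-in mesh
box = bounding_box(verts)
print(point_in_box((1, 1, 1), box))  # True
print(point_in_box((3, 1, 1), box))  # False
```

Real engines sit between these extremes, using simplified convex collision proxies and spatial acceleration structures; but the proxy still has to be authored or generated from an optimised model, which is exactly the point.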
If we go for the more common practice, where the armour slightly bends with the character underneath it (or more practically speaking, the armour is actually part of the character and needs to bend in order to move in a satisfactory fashion), then you’ll have to apply a skin deformer and paint weight maps on it, so the deformation is controlled. Weight map painting is done per vertex, and it is neither trivial nor easy on the processor. I personally can’t even imagine how any 3D application would handle weight painting on 33 million triangles (as an educated guess, a closed triangle mesh of that size has roughly half as many vertices, so still well over 16 million). But I can imagine how I would handle it. And that isn’t something I would put down in words publicly.
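The reason weight painting scales with vertex count can be shown with a toy version of linear blend skinning, the standard skin deformer: every vertex stores one weight per bone, and its deformed position is the weight-blended result of each bone’s transform applied to it. The sketch below is purely illustrative (2D, translation-only bones, invented numbers), not any particular package’s implementation:

```python
# Toy linear blend skinning: each vertex carries per-bone weights,
# so both painting and deforming are inherently per-vertex work.

def skin_vertex(position, bone_translations, weights):
    """Blend simple 2D bone translations for one vertex."""
    x, y = position
    out_x = out_y = 0.0
    for (dx, dy), w in zip(bone_translations, weights):
        out_x += w * (x + dx)
        out_y += w * (y + dy)
    return (out_x, out_y)

# Two bones: bone 0 moves one unit along x, bone 1 stays put.
bones = [(1.0, 0.0), (0.0, 0.0)]

# A vertex weighted fully to bone 0 follows it completely...
print(skin_vertex((0.0, 0.0), bones, [1.0, 0.0]))  # (1.0, 0.0)
# ...while a vertex at the joint, weighted 50/50, moves halfway.
print(skin_vertex((0.0, 0.0), bones, [0.5, 0.5]))  # (0.5, 0.0)
```

Multiply that per-vertex loop by tens of millions of vertices, several bones per vertex, and interactive brush strokes that rewrite the weights, and the problem the paragraph describes becomes obvious.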
How Will All This Change the Learning Path of the 3D Artist?
Since every learning path in 3D graphics starts with modelling, wherever this technology may take us, the need for a thorough understanding of object structure, topology, UV layout, and anatomically accurate, animation-friendly modelling methods will remain the same. Just as every guitar player will tell you that even if you only want to play the electric guitar, you should learn to play the acoustic guitar first, skipping classic modelling will dramatically limit your ability to solve unexpected problems, simply because you will be lacking the basics. And believe me, we face plenty of unexpected problems all the time.
Okay, I guess I may sound a little sceptical right now, but that doesn’t mean I doubt this technology will reform game development procedures and open new possibilities. On the contrary, I am positively amazed and excited about it. But I do think it will not revolutionise the fundamental techniques we use today; just as every attempt to revolutionise the way we use a keyboard and a mouse has failed so far, not because the inventions weren’t brilliant, but because the majority of users are very content with the way things work now. In other words, there is no point trying to fix something that isn’t broken.
Classic modelling skills, optimisation principles, good topology and UV layout: these things can be made easier to achieve, but they will always require intelligence (human or human-level artificial) to make crucial decisions throughout the process. And those decisions are always based on knowledge, comprehension, and experience. And yes, you guessed it. Our 3D modelling course provides a tried-and-true way to start this journey. So go on, and read about it.
As always, feel free to comment or argue, and if you have any questions, don’t hesitate to ask.