Get Even: info blowout with exclusive interview - 3D scanning in games
After the announcement of Get Even, most people were overwhelmed by the jaw-dropping graphics and the insane amount of detail. How is it possible that a small developer team using the somewhat dated Unreal Engine 3 can achieve this level of detail? PC Games Hardware talked to Wojciech Pazdur from Get Even developer Farm 51.
PCGH: Your press announcements hint at the fact that your game is built upon Epic's Unreal Engine 3. Can you confirm this, or do you utilize another base technology? If so, which engine did you license, e.g. UE4, CryEngine 3 or Frostbite 3? Or did you program a new technology from scratch?
Wojciech Pazdur: Get Even uses Unreal Engine 3, and it seems we will stay with it. We're not a huge company, and we don't have enough people to create next-gen technology from scratch while working on the game at the same time. It makes much more sense to take a good base for the renderer, netcode, AI, input, physics and so on, and to focus only on what's really unique to our game and what can make it stand out from the crowd – in our case that's the world-scanning technology, which requires a lot of custom work on the tech side. Of course Unreal 4 is tempting, but because we started Get Even before it was fully available, it would be hard to move to a new technology now, especially since our game doesn't really need its next-gen features – we're very atypical on the visual side. All the experience with Unreal 3 we've gathered so far also makes it the better choice: we've been working with this engine for more than three years and we know its strong and weak points.
PCGH: If it is Epic's Unreal Engine 3 or another licensed engine, what are the major modifications that were necessary to tailor the technical base to your special needs regarding your 3D scan technology and other technical aspects of the game?
Wojciech Pazdur: There are three key things that we needed to address atypically, and they will remain our main focus until the end of production: the use of scans, AI behaviors and virtual reality systems.
For the use of scanned objects it was obvious from the very beginning that we would need to solve the problem of enormous amounts of mesh and texture data. I think there is no other game at the moment that uses so many unique and detailed textures, because in a classic art production pipeline it makes no sense to go for such uniqueness and resolution due to the cost of making a single asset.
Mesh complexity is something we handle in a more classic way – with very effective pipelines and tools for more or less automatic triangle optimization, and by keeping polygon counts scalable for different factors (display complexity, shaders, physics accuracy, AI navigation etc.).
For effective display of theoretically unlimited textures we had to find data streaming solutions that are better than the native tools of Unreal or any other typical engine. Our key assumption is that at full-HD resolution there should be no visible texture pixels, and we've already achieved that. However, it's clear that for the whole game we'll need to look for special compression methods, because the disk size of the game data is now becoming a big issue.
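The "no visible texture pixels at full HD" goal can be phrased as a mip-selection rule: for each surface, stream in the coarsest mip level whose texels still cover at least one screen pixel. The sketch below illustrates that rule with made-up numbers; it is not Farm 51's actual streaming code, and all parameters (field of view, texel size) are illustrative assumptions.

```python
import math

def required_mip(texture_size_px, texel_world_size_m, distance_m,
                 screen_height_px=1080, vfov_deg=60.0):
    """Pick the coarsest mip whose texels still cover >= 1 screen pixel,
    so no texture pixelization is visible at full HD."""
    # World-space size covered by one screen pixel at this distance.
    pixel_world = (2.0 * distance_m * math.tan(math.radians(vfov_deg) / 2.0)
                   / screen_height_px)
    # Each mip level doubles the texel size; clamp to the available chain.
    mip = max(0.0, math.log2(max(pixel_world / texel_world_size_m, 1.0)))
    max_mip = int(math.log2(texture_size_px))
    return min(int(mip), max_mip)

# A hypothetical 8192-px scan texture with 1 mm texels viewed from 4 m
# only needs mip 2 resident, so the finer mips can stay on disk:
mip = required_mip(8192, 0.001, 4.0)  # → 2
```

A streamer built on such a rule loads only the mip levels the current camera can actually resolve, which is what keeps "theoretically unlimited" source textures within a fixed VRAM budget.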
With the enemies' artificial intelligence we have to deal in a very tricky way, because we want to make them similar to the real humans who can appear in the game at the same time. Note – human players, not the best AI possible. So it involves simulating not only believable behaviors for your combat opponent, but also human mistakes and the behavior patterns of players (not simulated characters). We then need to decide, for example, whether to disable the jump button or to make our AI bunnyhop from time to time.
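The idea of deliberately imperfect, human-like bots can be sketched as injecting player quirks (occasional bunnyhopping, aiming mistakes) into the AI's decision loop. Everything below – class name, probabilities, action labels – is a hypothetical illustration of that design idea, not code from the game.

```python
import random

class HumanlikeBot:
    """Bot that mimics human-player quirks so it is harder to tell
    apart from an invading human player."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.bunnyhop_chance = 0.15   # fraction of movement ticks spent hopping
        self.miss_chance = 0.25       # deliberate aiming-error rate

    def movement_action(self):
        # Sometimes jump mid-run, like an impatient human player.
        return "jump" if self.rng.random() < self.bunnyhop_chance else "run"

    def should_miss_shot(self):
        # Inject human-style mistakes instead of perfect AI accuracy.
        return self.rng.random() < self.miss_chance

bot = HumanlikeBot(seed=7)
actions = [bot.movement_action() for _ in range(100)]
```

Tuning such probabilities against recorded player telemetry is one plausible way to make the "is it a bot or a human?" question genuinely hard to answer.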
Last, but not least, is the focus on making Get Even an unusual virtual reality experience. The game's story is about traveling between different levels of reality, and we want to start that from the player's perspective. A device like the Oculus Rift may be not only a tool for the player's interaction with the game world, but also an element of the gameplay mechanics, since characters inside the game use similar devices to enter other alternate realities. We believe that the question "what is real", asked many times inside the game world, can get more different answers thanks to that.
PCGH: Before we ask you to describe your special 3D scan technology in detail, can you first of all tell us which API is utilized? Since you are developing the game for PS4/Xbox One too, we assume it will be DX11. Is that right? If so, what makes the API so suitable for your game, your scan technology, and cross-platform game development in general? Have you discussed offering an API fallback for older platforms (PC, Xbox 360, PS3)?
Wojciech Pazdur: We're not sure yet. Actually, we're a little cautious about doing any API-related work directly, because at this moment we don't really need any of the most sophisticated DX11 features. For now it's better for us to support a broader range of hardware and operating systems with the best unified set of quality features, instead of focusing on something that limits our range of platforms or our art design.
Ideally we'd like to release Get Even on as many platforms as possible: starting with the PC as the lead platform, which gives us the best possibilities in visual and technical quality; the next-gen consoles, which match our initially assumed technical requirements; Steam Machines, whose performance has to be checked and benchmarked across all possible configurations at release date; and later others like Mac OS.
Technically, we can run Get Even code on PS3 or Xbox 360 today, but due to memory limitations it makes no sense to try to squeeze our scanned visuals in there. We've discussed it already, and even without test builds we know that our game would be too big simply because of build and disc capacity issues (especially on the Xbox 360, which has no Blu-ray drive). But you've just suggested a great idea to try – first, I'm really curious how it would look; second, I'd love to see the faces of my team when I ask them to do it. Yeah.
PCGH: Can you give a detailed overview of your 3D scan technology by answering the following questions: How do you take the pictures your extremely detailed graphics are based upon?
Wojciech Pazdur: We mostly use photogrammetric data acquired with more or less sophisticated cameras and scanners. Sometimes it can be as easy as taking pictures with a standard DSLR camera; sometimes it requires laser measurement and remotely controlled drones flying over a landscape or building. At a scan location we need to consider the lighting conditions (sometimes altering them physically with reflectors or additional light sources) and the material structure of the objects. For example, shiny objects can't be properly scanned into photogrammetric data unless you cover them with an anti-reflective powder or use a laser scanner.
PCGH: How do you manage to make your visuals both very realistic and fully three-dimensional?
Wojciech Pazdur: First, it's almost all about acquiring as much 3D detail as possible – and that is simply achievable through brute-force inclusion of the shape of every small stone or piece of rubbish in the location. With 3D reconstruction we can go as deep as we want; the only limits are common sense and the workload of processing and displaying that amount of scanned data in real time. Or, to be more precise, also the size of objects relative to the scanner or camera – let's say that very accurate scanning of the interior of a cigarette box would be a little problematic (though possible if we used micro-cameras or similar equipment).
The second important subject is setting up the lighting and scanning conditions so that we can later play with dynamic lights in the real-time engine. Not being able to use in-engine lights at all would limit us in creating proper depth and mood and in post-processing the scene toward the intended artistic look. Actually, the look and feel of scenes where we've scanned the proper lighting is much better than that of scenes where we set up the light completely by hand, because no renderer in the world can give us light interactions as soft, sophisticated and believable as a recreation of real-world light. But a game obviously has to be interactive and dynamic, so we need to combine statically lit scenes with dynamic lights, as well as fully dynamic or adjustable light setups.
PCGH: Does it also work in outdoor scenes? Or are those built manually?
Wojciech Pazdur: Scanning outdoor scenes is basically the same as indoor; it all depends on the types of objects and their general structure. For outdoor sceneries it makes very little sense to try to recreate them 1:1, because some objects are not very manageable after being scanned at a certain scale, but basically almost all components of outdoor scenes can be scanned.
Imagine that you have a huge field of grass – in a flight simulator it can be scanned and displayed from a top-down perspective as a more or less detailed texture and shader, so it works. But from a first-person perspective you expect the blades of grass not only to have the proper shape and color in close-ups (which is hard enough to recreate with a scan), but also to have very complex interactions with light, and possibly with physics too.
There is no real-time hardware platform today that would allow us to properly render and animate a huge location with foliage where every single leaf and blade is treated as a unique part of the level geometry and texture. For this reason alone, in outdoor scenes we have to place objects by hand or reuse them even if they're scans. So we place some objects manually and some procedurally. Many objects are constructed as hybrids of scanned and procedurally placed elements: for example, the base of a tree with its roots is scanned, while its top uses thousands of leaves distributed in 3D software or animated procedurally (and each leaf is a very small scanned object reduced to a triangular plane with an alpha-blended texture).
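The hybrid tree described above – a scanned trunk plus a procedurally filled canopy of tiny leaf quads – can be sketched as a seeded scatter routine. The function below is a hypothetical illustration (names, canopy shape and counts are all assumptions); a real tool would also orient leaves along branch normals and bake the result into instanced geometry.

```python
import math
import random

def scatter_leaves(canopy_center, canopy_radius, count, seed=0):
    """Return (position, yaw) pairs for leaf quads inside a spherical canopy.
    Each pair would drive one instanced, alpha-blended leaf plane."""
    rng = random.Random(seed)  # seeded, so the same tree regenerates identically
    leaves = []
    cx, cy, cz = canopy_center
    for _ in range(count):
        # Uniform point inside a unit sphere via rejection sampling.
        while True:
            x, y, z = (rng.uniform(-1.0, 1.0) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                break
        pos = (cx + x * canopy_radius,
               cy + y * canopy_radius,
               cz + z * canopy_radius)
        yaw = rng.uniform(0.0, 2.0 * math.pi)  # random facing for variety
        leaves.append((pos, yaw))
    return leaves

# A canopy centered 5 m up, 2 m radius, with 5000 leaf instances:
leaves = scatter_leaves((0.0, 5.0, 0.0), 2.0, 5000, seed=42)
```

Seeding the generator is the important design choice: the canopy stays deterministic across builds, so only the seed and parameters need to be stored, not five thousand transforms.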
But we arrange indoor locations manually as well, because using scans doesn't release us from the duty of providing good gameplay layouts and pacing. And real-world sceneries rarely make perfect action-game levels in their raw form. We need to customize some parts by adding new objects, shifting walls, or putting in new doors and corridors for gameplay reasons. All these added elements can be new scans or reworked modular parts of other locations.
PCGH: The scenes look very detailed. Are you using tessellation or a similar technique to cope with the data?
Wojciech Pazdur: So far we've found that it's better to take more detail from textures than from geometry. Of course our meshes are very detailed, but in scanned environments you can use relatively simple meshes as long as they are perfectly matched to the textures in terms of shape. That solves some of the optimization problems, because textures are generally more manageable and scalable than meshes, and there are many aspects of the game that benefit from less complicated geometry. The problem with tessellation is that it can greatly increase mesh detail without considering the detail already included in the textures, so the two often contradict each other – the tessellated mesh drifts away from the texture and vice versa. We'll be experimenting with tessellation, but it seems to be usable only for specific objects and landscapes.
PCGH: Do you already have an idea about the memory consumption of your special technology? Does the level geometry require a lot of memory? Does the combination of texture data and world geometry require a video card with at least 2 GB of VRAM? Do you utilize compression techniques? Does your development for the consoles profit from the fact that they now have a lot of RAM?
Wojciech Pazdur: Well, even before we started placing any scanned objects into the Get Even engine, it was clear that this would be the main concern with this technology. We could have used scans a couple of years ago, when we were working on our previous titles for Xbox 360 and PlayStation 3, but it quickly became apparent that the memory overkill would blow up those consoles the moment you inserted the game disc. So we had to wait until the new generation of hardware arrived with enough RAM to handle such detailed objects and textures. And still, we've never been so cautious about planning virtually every asset's production with memory limits in mind.
The source data for one room or building wall contains millions of polygons, and texture resolutions are limited only by the amount of data we consider reasonable to record to disk at the scanning location. It's no problem to have a 100,000 x 100,000 pixel texture of one wall; it just usually doesn't make sense to capture that resolution, because in the game, with a given camera and player control setup, it would never all be displayed. Then there's the amount of work (and computing time on the render farm) needed to process and optimize the scanned data into usable level structures. As mentioned, the geometry data is not such a huge problem, because most of the information to be handled lies in the textures. Our goal is to have zero pixelization on textures at full-HD resolution, so a relatively small location (about 4-5 minutes of gameplay) fills 2 GB of VRAM to the top, and the only solution when we wanted bigger levels was streaming. Apart from idTech-based games (RAGE, Quake Wars), large-scale streaming solutions have not been very popular, but now it seems they may come to glory again, because scanning technologies should become more common and we're not the only ones who may need such solutions.
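Back-of-the-envelope arithmetic shows why a scan-heavy location hits a 2 GB budget so quickly. The sketch below uses illustrative assumptions – a BC1/DXT1-style format at 0.5 bytes per texel and the standard 4/3 mip-chain overhead – not the game's actual asset sizes.

```python
def texture_vram_bytes(width, height, bytes_per_texel=0.5, mip_overhead=4 / 3):
    """Approximate GPU memory for one texture with a full mip chain.
    0.5 bytes/texel models a BC1/DXT1-style block-compressed format."""
    return int(width * height * bytes_per_texel * mip_overhead)

# One unique 16k x 16k wall texture, block-compressed, with mips:
one_wall = texture_vram_bytes(16384, 16384)   # roughly 170 MB

# A dozen such unique surfaces already approach a 2 GB VRAM budget,
# which is why tile-based streaming becomes unavoidable for larger levels.
budget_gb = 12 * one_wall / 2**30
```

The same arithmetic also explains the disk-size concern quoted earlier: with every surface uniquely textured, nothing is amortized through reuse, so compression and streaming have to do all the work.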
PCGH: Is the CPU somehow involved in the rendering process of your special technology? Does the performance of your game in general profit from multicore architectures? Can you utilize the potential of modern CPUs and their up to eight cores to the full extent?
Wojciech Pazdur: We don't need to do extremely heavy computations for shaders, lighting and geometry processing, but we'll still be experimenting extensively with general texture streaming routines, and there the capabilities of new CPUs may come in handy. We're still at an early phase with this part of the tech, and it will ultimately depend on many factors, such as the final sizes of the locations and the requirements that come with them. So far we know we could get by without customizing the tech in this direction, but we obviously want to keep raising the bar in visual fidelity, and to break some of the current limits it makes sense.
PCGH: Are you planning to use specific third-party middleware to implement specific effects, like PhysX, SpeedTree or FaceGen?
Wojciech Pazdur: We're already using some. It's a little too early to reveal all the details, mostly because we haven't made up our minds yet about a few of them, or about which one to choose. We will definitely use a third-party physics solution; for the foliage system it's not decided yet. For facial animation it's likely that we'll use a very custom solution made internally in our studio, in connection with an R&D project we're doing together with some scientific institutions.
PCGH: Your game will merge singleplayer with multiplayer. Could you give us some details on how this is going to work?
Wojciech Pazdur: It's the core of Get Even's design. The game's story is about two ultimate enemies trying to take revenge on each other at the same time. It's very emotional and shaped around their personal issues, so it has to be told through a single-player campaign with strong storytelling.
But it was also crucial for us to bring a sense of other humans' presence into the game, to avoid separate gameplay modes, and to deliver something very fresh. That resulted in a blend of single-player and multiplayer experience where you play your own story, but other players may invade your world.
Get Even is about travelling between different levels of reality (memories or alternate timelines), so despite the realism of the visuals and gameplay mechanics, there is huge room to develop interaction and emotions based on meeting other people in your game. The question "what is real" is the core and heart of the game's construction. Not being able to judge whether your enemy is a computer-controlled AI or a second human is part of the main experience. Just imagine what level of unpredictability it brings to the virtual battlefield when your enemies may be truly much smarter than you are or, at the opposite end, simply lame. Of course it requires handling many design and balance issues, but we believe that this, not the technology, may be the biggest strength of Get Even when the game finally comes out.
PCGH: What do you think about True Audio and Mantle?
Wojciech Pazdur: They're both interesting, but we haven't used them yet, so the only thing I can say now is that we have them on the radar.