Optimization & Target Requirements
Target Requirements for SPACE
All models should be accurately scaled to a 1.7 meter tall human.
All models should have clean geometry (see details below).
Ideal polycount: under 2,000.
Intermediate: between 2,000 and 5,000.
Textures & Materials
Format: PNG or JPG
Max dimensions: 2048 x 2048
All textures should be in power of 2 dimensions
Maximum number of materials per object: 5
Maximum number of materials for an entire scene: 25
File Size
Ideal: 5 MB or less
Acceptable: 7 MB or less
Maximum: 14 MB
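As a quick sanity check against the file-size budget above, here is a minimal sketch of a helper that classifies an asset's on-disk size. The function name and tier labels are my own for illustration; they are not part of any SPACE tooling.

```python
def size_tier(size_mb: float) -> str:
    """Classify a file of `size_mb` megabytes against the SPACE size budget."""
    if size_mb <= 5:
        return "ideal"        # 5 MB or less
    if size_mb <= 7:
        return "acceptable"   # 7 MB or less
    if size_mb <= 14:
        return "maximum"      # absolute ceiling
    return "over budget"      # will need optimization before upload
```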
Theory & Purpose of Optimization
Optimization is a critical aspect of developing 3D content for real-time applications and is one of the key hallmarks of a professional designer. No matter how good your asset looks, no one will appreciate it if the scene is running at ten frames per second because you used 25 materials when you could have used one.
Optimization has a lot to do with the internal components of a computer system; there are only so many resources to go around. When running an application, everything is loaded from the hard drive into RAM, temporary storage that feeds data to the processors. The Central Processing Unit (CPU) is the primary calculator and instruction executor of a PC, while the Graphics Processing Unit (GPU) is a chip specialized to render 3D images, living on the graphics card. The two work together to render the scene.
If you have a very large file, it will take a while to load from the hard drive and will take up a lot of room in RAM. If you have many large files, then your hard drive will constantly be streaming things to the RAM, which will be loading and unloading those files. All of this adds delays and can slow down the machine – and we haven't even gotten to the graphics processing yet.
If your model has too many polygons or too many materials, it will take a long time to load into the RAM that's onboard the GPU, and the GPU will then take a long time to render it, causing low framerates and hitches. If your files are too big, some GPUs won't even have enough onboard RAM to load them, so they will have to be streamed from system RAM. This all adds up to a slow and unenjoyable experience for the user.
Finally, since we are developing for the web, we also have to be very conscious of the file size on disk, because the larger it is, the longer it will take to download.
The simplest rule of thumb is to keep polygon counts to a minimum and to produce clean geometry (no single edge faces, loose edges, flipped normals, etc.). The skill comes in deciding where to make those sacrifices.
The Decimate Modifier
Blender has a very good polygon reduction modifier called "Decimate". This will procedurally remove polygons from your model while preserving UVs. The "Collapse" mode is generally very good; however, the modifier should not be used on a model that is animated, because the resulting changes to the topology of the model will typically ruin any animation data.
Before exporting your model, make sure to apply the modifier using the drop down arrow to the right of the modifier name or in the export to GLB settings.
LODs (Levels of Detail)
LODs (Levels of Detail) are the practice of reducing the polygon count of a model the further away it is from the viewer. The idea is that if something is far away and very small on the screen, it does not need to be rendered with as much detail as something close and/or very large on screen.
While many engines can dynamically switch the LOD depending on view distance, Hubs/SPACE does not have this ability. Instead, you should keep in mind how close a viewer is going to be allowed to get to an object and adjust accordingly. If you have some trees or shrubs far away in the background that the player will never get to, consider using a 2D cutout instead of an actual model.
One easy way to solve for too many polygons is to simply remove the ones that people will not see. For example, if you have a large rock that players can't get around, then there's no need to have polygons on the back side.
Reducing Unnecessary Polygons
If two objects are intersecting, then you can remove the polygons of each object that are inside the other. Likewise, you should not connect objects if you don't have to. Booleans and other modeling techniques can create a lot of extra polys.
On the same topic: when creating details like the inset opening of a pipe, don't actually inset the geometry. Instead, create a flat face across the top of the pipe to save polygons.
Use transparent materials for text instead of actual text polygons.
You can give even a cube the impression of being a smooth cylinder by using smoothed normals between each of the faces.
Avoid using very long and thin polygons. These are harder for the GPU to render and can lead to aliasing (stair-stepping) artifacts. In these cases it's alright to add a few extra polygons to break these slivers into shorter segments.
Finally, make sure you create clean geometry. This means no flipped normals (polygon facing the wrong direction), interior faces, faces right on top of one another, etc.
Textures, RAM and File Size
Textures are the largest contributor to file size and RAM usage so it’s important to make sure they’re done properly.
Memory on a RAM chip is allocated in blocks sized to a power of two. So, for example, you might have 128 KB or 256 KB blocks.
Textures should follow this rule – their dimensions should always be a power of two, such as 256 x 256, 512 x 512, etc. You can even mix dimensions, like 256 x 512. What you should not have is a texture with a non-power-of-two resolution, like 234 x 514. These textures take up extra blocks, because the odd remainder spills over into an additional memory block that ends up almost entirely occupied by just a couple of pixels' worth of data. So again – always powers of two:
16 x 16
256 x 256
512 x 512
1024 x 1024
2048 x 2048
4096 x 4096
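The power-of-two rule above is easy to verify programmatically. Here is a small sketch (function names are my own) using the classic bit trick: a power of two has exactly one bit set, so `n & (n - 1)` is zero.

```python
def is_power_of_two(n: int) -> bool:
    # A power of two has exactly one set bit, so n & (n - 1) clears it to 0.
    return n > 0 and (n & (n - 1)) == 0

def valid_texture_size(width: int, height: int) -> bool:
    # Width and height may differ (e.g. 256 x 512), but each must be a power of two.
    return is_power_of_two(width) and is_power_of_two(height)
```

For example, `valid_texture_size(256, 512)` passes, while the 234 x 514 texture from the text fails.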
Use JPGs and PNGs
Images stored in compressed formats like these are much smaller than uncompressed raw images, which is important for keeping download size low. One note about JPGs and normal maps, though – the compression artifacts caused by the JPG algorithm often introduce distortions into the colors of the image. Generally these are fine for human-readable images, but for machine-readable ones like normal maps they can create really screwy lighting effects. For this reason it's best to store normal maps in a lossless format like PNG.
If you can, reuse your textures – for example, using one map as one material’s albedo and another material’s roughness. This will help keep your file size low and your memory allocation low as well.
Use Multiple Smaller Textures
It is better to split your detail across smaller textures than one large one because of RAM allocation. First, one very large texture takes up a lot of memory, so it is harder to load and unload from RAM than several smaller ones.
Second, and most important: image files must be loaded into RAM in an uncompressed state. This means that while your 4096 x 4096 PNG is "only" 23 megabytes on disk, when it gets uncompressed in RAM it balloons up to 49 MB!
Meanwhile, four 1024 x 1024 PNGs are 1.4 MB on disk each, or 5.6 MB total. When loaded into RAM they are only 3 MB each and 12 MB total. So make sure to use multiple smaller textures whenever possible!
Minimize Texture Dimensions
This one is pretty obvious – the smaller your texture is the smaller it is in file size. Of course, you also lose detail the smaller you go. However for tiled textures this is less of an issue – simply increase the tile count and the lack of detail will be a lot less noticeable.
GLTF ORM Textures
One way the GLTF format reduces file size is by combining three different textures – Ambient Occlusion (O), Roughness (R) and Metalness (M) – into the three channels (RGB) of a single texture map.
Occlusion: Red Channel
Roughness: Green Channel
Metalness: Blue Channel
This is done automatically during the export process, but there's one important thing to note: if any one of these textures is larger than the rest, they will all be scaled up to that size. If you plug your 1024 x 1024 lightmap into the Occlusion channel to get your darks extra dark in SPACE, then your 256 x 256 Metalness and Roughness maps will get upscaled and you'll end up with a larger file – though the quality won't look any different.
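The channel layout described above can be illustrated with a toy packer. This is a sketch of the concept only (the function name is mine, and real exporters operate on image files, not Python lists): three same-resolution grayscale channels are interleaved into RGB triples following the glTF ORM convention.

```python
def pack_orm(occlusion, roughness, metalness):
    """Pack three equal-length grayscale channels (0-255 values) into
    RGB triples, mirroring the glTF ORM layout: O -> R, R -> G, M -> B."""
    if not (len(occlusion) == len(roughness) == len(metalness)):
        # Mismatched sizes force the exporter to upscale - see the note above.
        raise ValueError("all three channels must share the same resolution")
    return list(zip(occlusion, roughness, metalness))
```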
Draw Calls
This one is a big one. A CPU runs the program and sends the data to the graphics card (the GPU). In simplified terms, the CPU then tells the GPU to render what it has been sent and display it on the screen. Every time this happens, it is called a "draw call". All your GPU does is run draw calls, but if there are too many of them, the GPU can't keep up and you start to get lag and drops in framerate.
Draw call batching is when multiple objects are sent to the GPU in a single draw call. Batching is based on the material of an object: objects that share a material can be drawn together. This makes it very important to keep your material count as low as possible, because it saves the GPU from processing a separate call for every material. If you have 25 materials on one object, that's twenty-five draw calls just for that object. So use different materials sparingly and try to reuse them whenever possible.
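The effect of material-based batching can be sketched with a toy model. This is a simplification with made-up names, not how any real engine counts calls: without batching, each object/material pair costs a draw call; with perfect batching, objects sharing a material collapse into one call per unique material.

```python
def draw_calls_unbatched(scene: dict) -> int:
    # scene maps object name -> set of material names on that object.
    # Without batching: one draw call per (object, material) pair.
    return sum(len(materials) for materials in scene.values())

def draw_calls_batched(scene: dict) -> int:
    # With ideal material-based batching: one draw call per unique material,
    # no matter how many objects share it.
    unique_materials = set()
    for materials in scene.values():
        unique_materials.update(materials)
    return len(unique_materials)
```

For a scene like `{"rock": {"stone"}, "wall": {"stone", "moss"}}`, reusing the `stone` material drops the count from three calls to two – which is why reusing materials matters.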