Saturday 14 June 2014

2nd Week 3-D Modelling Blog


Week 2 Fundamentals of 3-D Modelling Tutorial:

In this week's tutorial we briefly recapped some of the basic parts of 3ds Max and then modelled the body of the treasure chest for the scene we are building. We touched on techniques such as box modelling, extruding, and dividing up planes with multiple edges, along with the basic tools used for modelling. I found the guided process in the tutorial really simple and easy, but I have not yet modelled the lid of the chest, so I'm not sure whether I will get lost along the way when modelling it by myself.

Trends in 3D Modelling for Games and Film:

This week we have been tasked with finding out about trends in 3-D modelling for both video games and film. Below is a list of common techniques used in both areas, which I found in an article about common modelling techniques for film and games (Slick, n.d.):

- Box/Subdivision Modeling:
Box modelling is a polygonal modelling technique in which an artist starts with a primitive (cube, sphere, prism, etc.) and refines its shape until the desired appearance is achieved. Box modellers often work in stages: they start with a low-resolution mesh, refine the shape, then subdivide the mesh to smooth the hard edges and add detail. The process of subdividing and refining is repeated until the mesh has enough polygonal detail to convey the intended concept. It is the most common form of polygonal modelling and is often used in conjunction with edge modelling techniques. (A small sketch of one subdivision step appears after this list.)
- Edge/Contour Modeling:
This is another polygonal technique, but it is very different from box modelling: rather than starting with a primitive and refining it, the model is built piece by piece by placing loops of polygonal faces along prominent contours and then filling in the gaps between them. Contour modelling brings a level of precision that box modelling cannot match.
- NURBS/Spline Modeling:
NURBS is a modelling technique that is used heavily for automotive and industrial modelling. Unlike polygonal geometry, a NURBS mesh has no faces, edges or vertices. NURBS models are made up of smoothly interpolated surfaces, created by 'lofting' a mesh between two or more Bezier curves (also known as splines). (A simplified lofting sketch appears after this list.)
- Digital Sculpting: 
Digital sculpting is a technique that allows artists to create models intuitively, in a way often described as sculpting digital clay. Meshes are created organically, using a graphics tablet to mould and shape the model much as a sculptor would work with real clay. Digital sculpting has taken character and creature modelling to a whole other level: the process is faster and more efficient, and it allows artists to work with high-resolution meshes that may contain millions of polygons. Sculpted meshes are known for incredible levels of surface detail and a very natural aesthetic.
- Procedural Modeling:
Procedural, in computer graphics, means anything generated by an algorithm rather than made manually by an artist. In procedural modelling, scenes or objects are generated from user-defined rules or parameters. Entire landscapes can be created by setting and modifying environmental parameters such as foliage density and elevation range, or by choosing from landscape presets like desert, alpine or coastal.
Procedural modelling is used a great deal for organic constructs such as trees and foliage, where there is almost infinite variation and complexity that would be incredibly time-consuming to produce by hand. An application called SpeedTree uses an algorithm to create unique trees and shrubbery, all of which can be tweaked through hundreds of editable settings; CityEngine does the same thing for cityscapes. (A toy example of this idea is sketched after this list.)
- Image Based Modeling:
Image-based modelling is when transformable 3-D objects are algorithmically derived from a set of static 2-D images. The technique is used when time or budget constraints do not allow a fully fleshed-out 3-D asset to be built manually.
An example of image-based modelling is in The Matrix: the team didn't have the time or resources to create full 3-D sets, so they filmed action sequences with 360-degree camera rigs and then used an interpretive algorithm to allow 'virtual' 3-D camera movement through a traditional real-world set.
- 3D Scanning:
3D scanning is a method of copying real-life objects into software, used when a high level of realism is needed. A real-life object (or an actor) is scanned and analysed, and the data (usually an x, y, z point cloud) is used to create an accurate polygonal or NURBS mesh. Scanning is often used when a digital representation of a real-life actor is required; an example is The Curious Case of Benjamin Button, in which the lead actor, Brad Pitt, ages in reverse.
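To make the subdivision idea from box modelling concrete, here is a minimal Python sketch of one linear subdivision step on a quad mesh: each quad is split into four via its edge midpoints and face centre. This is my own simplified illustration (a real subdivision surface such as Catmull-Clark also smooths the vertex positions, which is omitted here), not anything taken from 3ds Max.

# Minimal linear subdivision of a quad mesh: each quad becomes four smaller
# quads built from edge midpoints and the face centre. Smoothing is omitted.

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(verts, quads):
    new_verts = list(verts)
    cache = {}                      # reuse midpoints shared by neighbouring quads

    def add(point):
        key = tuple(round(c, 6) for c in point)
        if key not in cache:
            cache[key] = len(new_verts)
            new_verts.append(point)
        return cache[key]

    new_quads = []
    for a, b, c, d in quads:
        va, vb, vc, vd = verts[a], verts[b], verts[c], verts[d]
        ab, bc, cd, da = (add(midpoint(va, vb)), add(midpoint(vb, vc)),
                          add(midpoint(vc, vd)), add(midpoint(vd, va)))
        centre = add(tuple((va[i] + vb[i] + vc[i] + vd[i]) / 4.0 for i in range(3)))
        new_quads += [[a, ab, centre, da], [ab, b, bc, centre],
                      [centre, bc, c, cd], [da, centre, cd, d]]
    return new_verts, new_quads

# One face of a unit cube, subdivided twice: 1 quad -> 4 -> 16
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
quads = [[0, 1, 2, 3]]
for _ in range(2):
    verts, quads = subdivide(verts, quads)
print(len(quads))   # 16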
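True NURBS evaluation involves rational basis functions and knot vectors, so the next sketch only illustrates the lofting idea using plain cubic Bezier curves: sample two profile curves and blend between them to produce a grid of surface points. The function names and example curves are my own.

# Lofting in miniature: evaluate two cubic Bezier profile curves, then blend
# linearly between them to "skin" a surface across the gap.

def bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3 * p0[i] + 3 * u**2 * t * p1[i] +
                 3 * u * t**2 * p2[i] + t**3 * p3[i] for i in range(3))

def loft(curve_a, curve_b, rows=10, cols=10):
    """Grid of points on the surface swept between two Bezier curves."""
    surface = []
    for r in range(rows + 1):
        s = r / rows                      # blend factor between the two profiles
        row = []
        for c in range(cols + 1):
            t = c / cols
            pa, pb = bezier(*curve_a, t), bezier(*curve_b, t)
            row.append(tuple((1 - s) * pa[i] + s * pb[i] for i in range(3)))
        surface.append(row)
    return surface

# Two profile curves a unit apart in Y; the loft skins the gap between them.
profile_a = [(0, 0, 0), (1, 0, 1), (2, 0, 1), (3, 0, 0)]
profile_b = [(0, 1, 0), (1, 1, 2), (2, 1, 2), (3, 1, 0)]
points = loft(profile_a, profile_b)
print(len(points), len(points[0]))   # 11 x 11 grid of surface points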
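Finally, SpeedTree's actual algorithms are proprietary, so this last sketch is only a toy illustration of rule-driven procedural generation: a recursive branching function whose parameters (spread angle, length falloff, depth, random jitter) stand in for the kind of editable settings the article describes.

import math, random

# Rule-driven branching in miniature: each branch spawns two shorter child
# branches rotated by user-set parameters, until a depth limit is reached.

def grow(x, y, angle, length, depth, segments,
         spread=25.0, falloff=0.7, jitter=10.0):
    if depth == 0 or length < 0.05:
        return
    # end point of this branch segment (2-D, for brevity)
    x2 = x + length * math.cos(math.radians(angle))
    y2 = y + length * math.sin(math.radians(angle))
    segments.append(((x, y), (x2, y2)))
    for side in (-1, 1):                       # two child branches per segment
        child_angle = angle + side * spread + random.uniform(-jitter, jitter)
        grow(x2, y2, child_angle, length * falloff, depth - 1, segments,
             spread, falloff, jitter)

random.seed(7)                                  # same seed -> same tree
segments = []
grow(0.0, 0.0, 90.0, 1.0, depth=6, segments=segments)
print(len(segments), "branch segments generated")   # 63 with these settings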

Typical 3D Modelling Pipeline for a Current Generation Game Asset:

Character Production Pipeline for Unreal Tournament 3 by Epic Games

I found these images and explanations in a research dissertation by Kane Forrester; it is incredibly educational and very helpful in describing a 3-D modelling production pipeline for a current-generation game asset.

Concept Drawing

The concept phase is where character designs are originally created; it is crucial in these early stages for artists to work closely together to create effective character designs, as this step precedes everything else. When those in creative control of the project approve a design, it is handed to the modeller, who simplifies it into sections. This process is known as 'blocking it out' and is used so the modeller can figure out the most effective way to build the base mesh.

Low Polygon Base Mesh

It is important to make sure a model is made up of quads only when creating a base mesh for high-polygon sculpting software such as Mudbox or ZBrush. The reason is that a mesh must subdivide efficiently to be sculpted at high polygon counts, and triangles and n-gons do not subdivide as cleanly as quads. When the base mesh is finished it is usually a good idea to check the silhouette, to make sure the asset is easily recognisable in the game, and to analyse the negative space. The model is then exported as an OBJ file and imported into the sculpting software.
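As a quick way to check the quads-only rule before exporting, a small script can count the vertices per face in the OBJ file. This is just my own sanity check, and the file name is an example.

# Count how many vertices each face in an OBJ file has; anything other than
# 4 means a triangle or n-gon slipped into the base mesh.

def face_sizes(obj_path):
    sizes = {}
    with open(obj_path) as f:
        for line in f:
            if line.startswith("f "):
                n = len(line.split()) - 1     # vertices in this face
                sizes[n] = sizes.get(n, 0) + 1
    return sizes

sizes = face_sizes("chest_base.obj")
print(sizes)                                   # e.g. {4: 1250} means all quads
if set(sizes) - {4}:
    print("Warning: mesh contains triangles or n-gons")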

High Polygon Sculpt

Now that the mesh is imported into a high-poly sculpting program it is subdivided to a polygon count that differs depending on the type of mesh; this can be limited by PC specs and memory. Sections of a character mesh can exceed a million polygons at the highest subdivision level used for sculpting. Sculpting, like concept work, is completed in iterations, and the details become more refined as the modelling progresses. The above image shows a base mesh that has been divided into basic, defined muscle groups; the second iteration emphasises the muscle groups, giving the asset more definition; the final iteration adds details such as visible veins. Importing this very high-polygon mesh into a 3-D application can be slow and inefficient, which is why the mesh is usually divided up, exported as OBJ files and then imported back into the 3-D software.
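The million-polygon figures make sense when you consider that each subdivision level splits every quad into four. A quick back-of-the-envelope calculation (the 5,000-face base mesh is just an assumed starting figure, not taken from the dissertation):

# Each subdivision level multiplies the face count by four.
base_faces = 5000
for level in range(7):
    print("level", level, ":", base_faces * 4 ** level, "faces")
# Level 5 is already 5,120,000 faces and level 6 over 20 million, which is
# why the highest levels usually stay inside the sculpting package.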

Polygon Cruncher

Polygon Cruncher takes the mesh and reduces the polygon count while keeping the topology of the original high-polygon mesh. It has a 'batch optimisation' feature, which means that chunks of mesh can be placed in a directory and the program automatically reduces them to optimised percentages based on the settings you provide. This leaves the processor-intensive and time-consuming work to the machine, freeing the artist to continue working on something else. The files are placed in a directory with relevant names. This is considered very helpful when creating characters that need level-of-detail (LOD) models.
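Polygon Cruncher itself is a GUI tool, so the sketch below only mimics the batch-optimisation idea in Python: walk a directory of exported chunks and hand each one to a decimation step at a chosen percentage, writing results out with relevant names. The decimate_obj() function is a placeholder for whatever optimiser is actually used, not Polygon Cruncher's real interface.

import os

# Batch optimisation in sketch form: every OBJ in src_dir is decimated to a
# given percentage and written to dst_dir with an LOD suffix in its name.

def decimate_obj(src_path, dst_path, keep_percent):
    """Placeholder: call whatever decimation tool or library is available."""
    raise NotImplementedError

def batch_optimise(src_dir, dst_dir, keep_percent=25):
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        base, ext = os.path.splitext(name)
        if ext.lower() != ".obj":
            continue
        out_name = "%s_lod%02d%s" % (base, keep_percent, ext)
        decimate_obj(os.path.join(src_dir, name),
                     os.path.join(dst_dir, out_name), keep_percent)
        print("queued", name, "->", out_name)

# batch_optimise("exports/high_poly", "exports/optimised", keep_percent=25)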

Unwrap

When the artist is happy with the optimised mesh and composition, they either retopologise a new mesh based on the poly-crunched version, or they use the poly-crunched version itself. In many cases this can be cut down the middle on the X axis and symmetrised.
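The symmetrise step essentially mirrors the kept half of the mesh across the YZ plane by negating X. Here is a rough sketch using my own simple vertex and face lists rather than any particular package's API; seam vertices are welded and face winding is flipped so the normals stay consistent.

# Symmetrising in miniature: given the kept half of a mesh (everything at
# x >= 0), mirror it by negating X, welding vertices on the seam and
# reversing face winding on the mirrored half.

def symmetrise(verts, faces, tol=1e-6):
    mirrored_index = {}
    all_verts = list(verts)
    for i, (x, y, z) in enumerate(verts):
        if abs(x) < tol:
            mirrored_index[i] = i            # seam vertex: shared, not duplicated
        else:
            mirrored_index[i] = len(all_verts)
            all_verts.append((-x, y, z))
    mirrored_faces = [[mirrored_index[i] for i in reversed(face)] for face in faces]
    return all_verts, faces + mirrored_faces

# One quad touching the seam becomes two quads forming a symmetric strip.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [[0, 1, 2, 3]]
v, f = symmetrise(verts, faces)
print(len(v), len(f))   # 6 vertices, 2 faces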

Render to Texture

When the model has been unwrapped and optimised for the highest amount of resolution in engine, it is ready for maps to be rendered using the render-to-texture methodology. A projection modifier is used to transfer detail from the high-poly mesh onto the low-poly mesh, and Diffuse, Ambient Occlusion and Normal maps are rendered out.

Ways to get the best out of maps:

Diffuse Map: Assign raytrace materials to areas of the mesh; this makes it easy to see and define how the UVW layout transfers to the model when texture painting.

Ambient Occlusion: Materials are set to an Ambient or Reflective Occlusion material.

Normal Map: Reset the projection cage, then apply a uniform push and edit vertices where required; also increase the number of samples in the render setup.

Texture Paint

Once the three textures have been successfully rendered out they can be edited in Photoshop. Most of the time the colour levels and detailing of the normal map have to be cleaned up to bring out the details and smooth areas where the map hasn't rendered as well as intended; this is done with a levels adjustment and layered unsharp masks. The ambient occlusion map is placed on top of the diffuse map as an overlay, which gives immediate depth and shading to the diffuse map before the artist starts to paint colour and detail on top. Using Photoshop's tools and photo manipulation, the diffuse texture is normally the result of time and patience, although Epic Games sometimes use Deep Paint 3D, particularly for organic texturing. Deep Paint 3D, by Right Hemisphere, lets areas of images be applied to a model's UVW map by projecting them onto the 3-D model; the program then works out how they transfer to 2-D on the UVW map.
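Photoshop's Overlay mode has a well-known formula, so the AO-over-diffuse step can also be reproduced in a few lines of Python with NumPy and Pillow. The file names here are made up, and this is only an approximation of compositing the layers by hand in Photoshop.

import numpy as np
from PIL import Image

# Overlay-blend an ambient occlusion map onto a diffuse map, using the same
# maths as Photoshop's Overlay mode.

def load(path):
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def overlay(base, blend):
    """Photoshop-style Overlay: darkens dark areas, brightens light ones."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

diffuse = load("chest_diffuse.png")
ao = load("chest_ao.png")
result = np.clip(overlay(diffuse, ao), 0.0, 1.0)
Image.fromarray((result * 255).astype(np.uint8)).save("chest_diffuse_ao.png")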

Engine Import

Now that the model is textured, the artist is left with the following assets:
- A low-polygon mesh with LODs for importing into Unreal Engine
- A high-polygon mesh for kit-bashing
- Six 2048 x 2048 maps (2 Diffuse, 2 Ambient Occlusion and 2 Normal maps)
- One Specular map, if required, made in Photoshop

Characters need a bit of work before they are imported. The mesh should be in sections and skinned to an Unreal skeleton. Once that is done, a plug-in known as ActorX must be installed in the 3-D program of choice, and the mesh is exported as an ASE file with the correct settings.
When the ASE file has been exported it needs to be imported into Unreal Engine 3. If exported correctly it should show up in the engine as a skeletal mesh; textures should then be imported in the same way, with appropriate compression settings applied to them. Once these are all in the engine, materials need to be set up for the character; this can be done by taking an existing character and replacing its texture samples with the new ones, which are then applied to the meshes in the browser.
By this stage there are many pieces of the character in the browser, textured and ready to be socketed together. Opening the socket manager in the model browser allows various joints to be assigned to each piece of the character. Once the engine knows which mesh goes with each part of the body, it pieces everything together into a functional character. If a standard Unreal skeleton is used, all animations within Unreal Tournament 3 should transfer to the character seamlessly. The only thing left to do after that is to cook the UPK file and export the text file that has to be added to the custom character text file within the unpublished area of My Documents; the character should then be available to use in Unreal Tournament 3.


References
Forrester, K. (2014). Investigation into Pipelines and Asset Creation for Unreal Engine 3. Retrieved from http://www.academia.edu/178918/Investigation_into_Pipelines_and_Asset_creation_for_Unreal_Engine_3 (accessed 15/06/2014).

Slick, J. (n.d.). Seven Common Modeling Techniques for Film and Games. Retrieved from http://3d.about.com/od/3d-101-The-Basics/a/Introduction-To-3d-Modeling-Techniques.htm (accessed 15/06/2014).
