AWS Cognito User Account Signup And Login with Unity

Let's get this out of the way up front: If you are mostly interested in the 3D geometry things I do at gradientspace, this post is Not For You. Sorry!

Although I also prefer to do the 3D geometry, once in a while I need to do something else. And the thing I needed to do today (for a client project) was figure out how to use the Amazon AWS Cognito service to manage user accounts. The Cognito APIs let you implement things like user registration and log-in, password resets, and so on. And AWS has a very extensive .NET SDK, so you can do all these things from a C# mobile or desktop app.

Which sounds great. Except, there is a special Unity version of the AWS .NET SDK. And this specific thing I needed to do - Cognito Signup and Login - is not supported in the Unity version of the SDK. 

So, I made it work.

If you would also like to make it work, continue reading. If not, really, check out these awesome signed distance fields!

Step 1 - Update the AWS SDK

The AWS .NET SDK for Unity seems...wildly out-of-date. The documentation goes on and on about Unity 5, but Unity doesn't even use those version numbers anymore - Unity 2017 was released 6 months ago! So, this thing, it's kind of broken. Luckily, it's also open-source - right there on github. Although it's not clearly documented, the entire SDK is auto-generated, and so making large-scale changes just requires tweaks to a few template files. It was a one-character change to have the generator output Visual Studio projects for .NET 4.5 instead of .NET 3.5 (you're not still using .NET 3.5, are you? gross!). The generated projects were also not linking to the new split UnityEngine.dll system. Both of these are fixed in the few commits I added to my fork of the project.

The other nice thing about this auto-generator is that adding the CognitoIdentityProvider service - the thing that is missing from the Unity SDK - was pretty straightforward. I can't say I really understand how it all works. But, there are some config files, you cut here, paste there, run the generator, and voila - it's in the Unity build. 

So, if you want to make your own Unity 2017.3 .NET 4.5 dlls, you can grab my fork, open sdk\AWSSDK.Unity.sln, and build all. Somebody else can figure out how to make it spit out the .unitypackage file, I just collected up the DLLs I needed manually from the bin\Release subfolders.

Step 2 - AWS User Pool Configuration

So, building the CognitoIdentityProvider for Unity turned out to be pretty easy. Using it? Not so much. Even using it in the non-Unity "supported" .NET SDK is a bit of a mess, based on the hundreds of confused developers you will find in your Googles. So, I have prepared for you a fully functional Unity project - it is also up on github. Open this project in Unity, and, after you configure your AWS appropriately, it should allow you to register a Cognito user account, as well as sign in and get those delicious tokens. At least, it did today, July 25 2018, on my Windows 10 computer with Unity 2017.3.0f3. Tomorrow, who could say? This is the world we have created for ourselves.


First you need to configure your Cognito User Pool appropriately. If you are reading this, you probably already know what a User Pool is. If not, the AWS Docs have a pretty good walkthrough of this, but there are a few critical settings you will need to know about. At some point in this setup, you are going to create an App Client and associate it with your User Pool. It will look like the screen on the right. By default, Generate client secret is checked. Un-check it!

If you don't do this, then AWS will generate and associate a secret key with this app client, and for each request you will have to combine this key with some other math things to create a hash. You'll be able to get this to work for the Signup request, because that request object has a field for the hash. But, the Sign-in request does not. You can't change this setting after you create the App Client (but you can create multiple App Clients for a given User Pool, if you already have a User Pool you don't want to throw away).
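Incidentally, if you are stuck with an App Client that does have a secret, the hash Cognito expects is just an HMAC-SHA256 over the username concatenated with the client id, keyed with the client secret, and then base64-encoded. A sketch in plain C# (the helper name here is mine, not the SDK's):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// SecretHash = Base64( HMAC-SHA256( key: client secret, message: username + client id ) )
string ComputeSecretHash(string username, string clientId, string clientSecret) {
    byte[] key = Encoding.UTF8.GetBytes(clientSecret);
    byte[] msg = Encoding.UTF8.GetBytes(username + clientId);
    using (var hmac = new HMACSHA256(key))
        return Convert.ToBase64String(hmac.ComputeHash(msg));
}

string hash = ComputeSecretHash("user@example.com", "myclientid", "myclientsecret");
```

This is what you would put in the SignUpRequest's hash field - and, again, there is no equivalent field on the sign-in side, which is why we un-check the box instead.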


Two other things. By default, email verification for your user pool will be enabled, but instead of sending an email verification link, it will send a numeric code. I have no idea what the user would do with this code. But, you can change it to email a clickable link instead. After you have created the User Pool, go to the Message Customizations page and change Verification Type to Link.


Finally, you can set a domain for your App Client as well. You will find this on the Domain Name page (one of the links in the left bar of the User Pool page). Once you set that, you can actually go to this URL in a browser, and if you View Source, it looks like maybe this is a page to sign up and/or manage accounts. However, the page only renders a blank white background. Confusing. But until I did this, I got an exception when signing up for accounts.

Step 3 - Unity!

You are almost there. Open my Unity sample project. Open the C# project. Open the CognitoThings.cs file. You will find this at the top:


These are the AWS configuration strings you need to set up before you can talk to AWS. Note that this is much simpler than some other AWS things you might want/need to do, because there are no secret keys involved. The Cognito Signup/Login APIs do not need any secret keys, which is great because these are a pain to handle safely in Unity. The code comments tell you where to find everything. Note that the UserPoolName is a substring of UserPoolID. Don't forget to set the right RegionEndpoint.
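For reference, the block looks roughly like this. The values below are made-up placeholders, and the exact field names may differ - check the top of CognitoThings.cs in the sample project:

```csharp
// [AWS Configuration] - placeholder values; substitute your own from the AWS Console.
// (names here are illustrative - see CognitoThings.cs for the real ones)
const string UserPoolName = "AbCdEfGhI";
const string UserPoolID = "us-east-2_AbCdEfGhI";      // note: UserPoolName is the suffix of this
const string AppClientID = "1a2b3c4d5e6f7g8h9i0jklmnop";   // from the App clients page
RegionEndpoint Region = RegionEndpoint.USEast2;       // must match the region you created the pool in
```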


Once you have set the magic strings, you are ready. Open the scene named CognitoTestScene. Hit Play. You will see an ugly form. Enter an email address and a password that complies with the password policy you set in your User Pool (don't remember? The default is lower and upper case, a number, and a symbol). Hit Signup.

If it worked, you will get a confirmation email. Once you confirm, you can then click Sign in and see the first few characters of one of the authentication tokens that AWS returned. If you jump over to the Users and Groups page in the User Pool page of the AWS Console, you should see your new account listed as CONFIRMED.


It is a bit difficult to delete Cognito user accounts. You can't do it from the AWS management website, you have to use the command-line tools. But, you have to put in valid email addresses to test things. So, I found GMail's disposable address system to be helpful for testing.

Step X - The Code

If you are reading this post, it can only possibly be that you need/want to develop something that uses Cognito User Accounts (otherwise, why are you reading this? it is not interesting). So, as far as the code goes, all the good stuff is in the CognitoThings MonoBehaviour. There are really only 2 functions, TrySignupRequest() and TrySignInRequest(). The Signup request is relatively simple, you just send the username and password (I may be mistaken, but it seems like this password is sent as plaintext. Hrm...).
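Stripped down to its essentials, the signup call looks something like the following. This is a sketch using the standard (non-Unity) async form of the API - the Unity build wraps the same request/response objects in a callback-style call - and the ids are placeholders:

```csharp
using System;
using Amazon;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;
using Amazon.Runtime;

// no AWS secret keys required - the signup/login APIs accept anonymous credentials
var client = new AmazonCognitoIdentityProviderClient(
    new AnonymousAWSCredentials(), RegionEndpoint.USEast2);

var request = new SignUpRequest {
    ClientId = "your-app-client-id",        // placeholder - your App Client ID
    Username = "user@example.com",
    Password = "SuperSecret1!"
};
// the "email" attribute is where the verification email gets sent
request.UserAttributes.Add(new AttributeType { Name = "email", Value = "user@example.com" });

SignUpResponse response = await client.SignUpAsync(request);
Console.WriteLine(response.UserConfirmed);   // false until the email link is clicked
```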

The Sign-in is...complicated. Basically, you do not just send a username and password "over the wire" to AWS because that is a bad idea, security-wise. Even if you would really like to do it just this one time and promise to never ship it, it is not supported. Instead what you do is, you construct some math numbers, send them, and then AWS does more math things and sends back some other numbers, that you combine (irreversibly) with the password and send back a second time. AWS checks your math, and if it adds up, it is convinced that you know the password it has on file, without you ever having to send the password. Cryptography! it's wild. 

But, oddly, doing all this math stuff is not actually part of the AWS .NET SDK, and is in fact quite difficult to implement. AWS has released a developer preview of a library that handles this - but not for Unity. Let's hope that materializes soon? Luckily, Markus Lachinger has an excellent blog post about it, and standalone .NET sample code for Cognito Sign-in that I borrowed liberally from to make this work. Most of his code is in CognitoAuthUtil.cs, which handles computation of the math numbers. This code uses an open-source library called BouncyCastle from Australia (Australia!). I have included a compiled BouncyCastle.dll, you can make your own from their github repo if you prefer.

The two functions TrySignupRequest and TrySignInRequest take callback functions for success and error, however these have no arguments. In cases of failure, you will need to dive into the response objects yourself and decide what to do. Similarly, when the Sign In succeeds, you get an instance of AuthenticationResultType that contains the ID, Access, and Refresh tokens. You'll have to work out what to do with these yourself - here is an explanation of what they are for.

Ok, I hope I have saved you some suffering. Unless you read this far but aren't going to use it - then I apologize for the suffering I have caused.



DMesh3: A Dynamic Indexed Triangle Mesh

If you are using the g3Sharp library, there is a good chance it's because you need to do some things with triangle meshes. If so, then the triangle mesh data structure is kind of important! In g3Sharp, this is the DMesh3 class. In this post I will dive into the details of this data structure and the operations it provides. Along the way I will try to explain some of the central concepts of triangle mesh data structures, which should also apply to other triangle mesh libraries.

At its core, DMesh3 is an index-based triangle mesh. This means that we reference components of the mesh - ie the vertices and triangles - by integer indices, rather than pointers. This is similar to the basic mesh data structures used in many other libraries. For example, Unity's Mesh class is also an indexed mesh, and the vertex buffers you might pass to a low-level graphics API like OpenGL/Direct3D/Vulkan/Metal are indexed meshes. The diagram below shows how these things connect up. Each vertex is a triplet or 3-tuple of real values (x,y,z) which are the 3D coordinates. In DMesh3 we store these in double-precision floating point. Then, each triangle is an integer 3-tuple (a,b,c), where each integer is the index of one of the vertices.
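To make the indexed style concrete, here is a toy version of that storage in plain C# - no library involved, just the two buffers:

```csharp
using System;

// a minimal indexed triangle mesh: one tetrahedron.
// vertices are (x,y,z) coordinate triplets...
double[][] vertices = {
    new double[] { 0, 0, 0 },   // v0
    new double[] { 1, 0, 0 },   // v1
    new double[] { 0, 1, 0 },   // v2
    new double[] { 0, 0, 1 }    // v3
};
// ...and triangles are (a,b,c) integer triplets indexing into the vertex list
int[][] triangles = {
    new int[] { 0, 2, 1 },
    new int[] { 0, 1, 3 },
    new int[] { 0, 3, 2 },
    new int[] { 1, 2, 3 }
};

// to get the corner positions of triangle ti, we indirect through the indices
int ti = 1;
double[] a = vertices[triangles[ti][0]];
double[] b = vertices[triangles[ti][1]];
double[] c = vertices[triangles[ti][2]];
```

DMesh3 stores exactly this information (plus the connectivity described below), just in flattened double/int buffers rather than little per-element arrays.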


For many mesh processing tasks, this is all we really need. For example to render the triangles we just need the vertices of each triangle. If we want to estimate vertex normals, we can iterate over the triangles and accumulate their facet normals at the vertices. If we want to animate the mesh, we can modify the vertices. We can do a lot with just the basic indexed mesh, and it is the most memory-efficient mesh representation. The SimpleMesh class stores a mesh in this format.
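As an example of how far the basic indexed representation gets you, here is a sketch of that vertex-normal estimate, written against bare flat arrays (area-weighted via the un-normalized cross product; the names are mine, not SimpleMesh's):

```csharp
using System;

// accumulate facet normals at vertices, then normalize.
// verts is a flat xyz array, tris a flat abc index array
double[] EstimateVertexNormals(double[] verts, int[] tris)
{
    var normals = new double[verts.Length];
    for (int t = 0; t < tris.Length; t += 3) {
        int a = 3 * tris[t], b = 3 * tris[t + 1], c = 3 * tris[t + 2];
        // edge vectors ab and ac
        double e1x = verts[b]-verts[a], e1y = verts[b+1]-verts[a+1], e1z = verts[b+2]-verts[a+2];
        double e2x = verts[c]-verts[a], e2y = verts[c+1]-verts[a+1], e2z = verts[c+2]-verts[a+2];
        // facet normal = cross(ab, ac); its length is 2*area, so this is area-weighted
        double nx = e1y*e2z - e1z*e2y, ny = e1z*e2x - e1x*e2z, nz = e1x*e2y - e1y*e2x;
        foreach (int vi in new[] { a, b, c }) {
            normals[vi] += nx; normals[vi+1] += ny; normals[vi+2] += nz;
        }
    }
    for (int v = 0; v < normals.Length; v += 3) {
        double len = Math.Sqrt(normals[v]*normals[v] + normals[v+1]*normals[v+1] + normals[v+2]*normals[v+2]);
        if (len > 0) { normals[v] /= len; normals[v+1] /= len; normals[v+2] /= len; }
    }
    return normals;
}

// single triangle in the XY plane: every vertex normal comes out as (0,0,1)
double[] n = EstimateVertexNormals(new double[] { 0,0,0, 1,0,0, 0,1,0 }, new int[] { 0, 1, 2 });
```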

But what if we want to know which triangles are connected to a specific vertex, or find the triangles connected to a specific triangle? These are common tasks in mesh processing. Even for something as basic as deleting a set of vertices, we'll need to find the list of affected faces. Of course we can compute this information from the triangle indices - this is called the adjacency graph, and we frequently refer to it as part of the mesh topology. However, computing the adjacency graph takes time, and in g3Sharp we use it often enough that we want to just keep it around all the time. So, we will add it directly into our mesh data structure.

First, we will explicitly represent the edges of the mesh. In a manifold triangle mesh, each edge has a vertex at either end, and is connected to one or two triangles (if it's just one, then this edge is on an open boundary of the mesh). So each edge is a 4-tuple (v0, v1, t0, t1), where if the edge is a boundary edge, then t1 will be set to the invalid index (which is -1, but you should use the constant DMesh3.InvalidID). These edges connect up pairs of vertices and faces, but we still need to know how to get from vertices to edges. So, for each vertex, DMesh3 stores a list of connected edges. The diagrams below illustrate this connectivity.


Storing the set of edges connected to a vertex adds a significant complication, because the number of edges is variable. At the implementation level, this means we need dynamically-sized per-vertex lists, which we will discuss further below. But at the conceptual level, we have largely solved our connectivity problems. If we have a vertex and want to find the set of faces it connects to - we call this its one-ring - we can iterate over the edges, and collect up the unique triangle indices. If we would like to find the edge between two vertices A and B, we can just iterate over the edges at A and search for the one that connects to vertex B. 

However, because of this variable-sized list, some common tasks are still relatively expensive. In particular, consider finding the three neighbour triangles for a given triangle. This involves iterating over each edge one-ring of each of the vertices of the triangle. When we manipulate regions of selected triangles, we need to do this search for each triangle in the selection, often many times, and so this search overhead is significant. 

We can avoid these searches by explicitly storing the triplet of edges (eab, ebc, eca) for each triangle, as shown in the diagram below.


Storing the triangle edges in this way is redundant. However it vastly simplifies many algorithms, and we find it to be worth the memory cost. For example, although one can find the edge between vertices A and B using the list-search described above, if A and B come from a known triangle, then the edge can be found in constant time. This is also a very frequent operation in many mesh processing algorithms. 

Accessors and Internal Storage

The DMesh3 class provides interfaces for accessing the above elements. For example, GetVertex(vi) returns (x,y,z) for a given vertex index, and GetTriangle(ti) returns the (a,b,c) tuple. For edges, GetEdgeV(ei) returns the two edge vertices, GetEdgeT(ei) returns the triangles, and GetEdge(ei) returns both in a 4-tuple Index4i struct. FindEdge(a,b) returns the index of the edge between a and b, while FindEdgeFromTri(a,b,ti) uses the more efficient triangle-based query described above. 

Similarly, convenient iterations are provided like VtxVerticesItr(vi) to iterate over the vertices connected to vertex vi, as well as VtxEdgesItr(vi) and VtxTrianglesItr(vi). The GetTriEdges(ti) function returns the triangle-edge triplet, while GetTriNeighbourTris(ti) returns the neighbour triangles, and TriTrianglesItr(ti) provides the same values via an IEnumerable<int>. So, you can iterate over an edge one-ring like so:

foreach (int eid in mesh.VtxEdgesItr(vid)) {
    // ... do something with each connected edge eid ...
}

There are also various task-specific queries that are useful in certain situations. For example, GetVtxNbrhood(eid,vid) looks up the "other" vertex of edge eid, as well as the connected triangles and "opposing" vertices, which are needed to compute the cotangent weights used in Laplacian mesh processing. You can of course compute this yourself using the above functions, but DMesh3 can do it more efficiently because it can directly query its internal data structures.

And what are these data structures, exactly? Most DMesh3 clients will not need to know, but it is relevant in some situations, so I will describe it in detail here. DMesh3 stores its indexable lists using the DVector<T> class. This class provides an array/list-like interface, ie the i'th element of type T can be accessed as array[i], and set as array[i] = v. However, internally the elements are not stored in a single contiguous array. Instead, DVector allocates memory in blocks, and the index accessor maps between the external linear index and the internal block/index pair. 


In the current DVector implementation, each block is 2048 items. This is a hardcoded value to optimize indexing via bitwise operations. If you profile g3Sharp code, you will often find that DVector.this[] is the top hotspot, called via many DMesh3 functions. This is why DMesh3 stores the redundant triangle edges, and provides many variations on its accessor functions - because these allow us to minimize unnecessary queries that, when done millions of times, can really add up!
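A toy version of this mapping shows why it is cheap: with a power-of-two block size, the block/offset split is just a shift and a mask (this is an illustration of the scheme, not the actual DVector code):

```csharp
using System;
using System.Collections.Generic;

// toy blocked vector: 2048-element blocks, indexed via shift/mask instead of division
const int BlockShift = 11;               // 2^11 = 2048
const int BlockSize = 1 << BlockShift;   // 2048
const int BlockMask = BlockSize - 1;     // 0x7FF

var blocks = new List<double[]>();

void Set(int i, double v) {
    int bi = i >> BlockShift;            // which block
    int offset = i & BlockMask;          // index within that block
    while (blocks.Count <= bi)
        blocks.Add(new double[BlockSize]);   // grow by fixed-size blocks - no big buffer copies
    blocks[bi][offset] = v;
}

double Get(int i) => blocks[i >> BlockShift][i & BlockMask];

Set(5000, 42.0);   // linear index 5000 lands in block 2, offset 904
```

The extra indirection on every access is exactly the DVector.this[] cost that shows up in profiles.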

Why use a DVector instead of a C# List? The main reason is to avoid allocating huge linear blocks of memory. A triangle mesh can easily involve millions of elements, which means the internal buffers would be very large. In addition, if the mesh is growing - for example when adding triangles - the C# List class will resize itself multiple times, which means new allocations and buffer-copies. In interactive applications this can result in noticeable pauses. With DVector this situation never occurs. 

If you look inside DMesh3, you may note that rather than the vertices list being a DVector<Vector3d>, it is instead a DVector<double>, and similarly for the triangles, edges, and so on. Because C# does not support returning references, it is not possible to use common C++ idioms that would allow for more efficient access. Frankly, in C# storing structs inside a list is risky, and a common source of errors. In addition, by using POD types, we can directly serialize the DVector buffers, which can be useful in interop situations. However, this is a design decision that is largely encapsulated inside DMesh3 and may be revisited in the future.

Vertex Edge Lists

As I mentioned above, the variable-length per-vertex edge lists are a significant complication. For a small mesh, using a List<int> for each vertex is not outrageous, but if we have millions of vertices, then we have millions of tiny lists and this starts to become a problem. In particular, if we are changing the mesh, we will be updating these lists, and so even more memory allocations (and garbage collections) will occur. In fact, the initial DMesh3 implementation used Lists, and during something like a remeshing pass the GC was constantly working to clean up all the List objects being generated and discarded.


To alleviate this, the per-vertex lists are represented by a SmallListSet object. This class is designed to store a large number of small integer lists, where we have a priori knowledge of how many elements will be in the median list. If the median list size is K, then each list begins with K contiguous elements (think "array", but they are not stored as separate Array objects). If the list grows larger than this, then we spill into additional elements stored via a linked-list, as shown in the diagram.

K should be set so that "most" lists fit in the initial block of elements. In a perfectly regular mesh each vertex is connected to 6 edges, but in the current implementation we set K = 8. This gives us some breathing room. Internally, SmallListSet is built out of DVector<int> instances, and even can re-use previously-allocated lists that have been freed. As a result, memory allocations due to the vertex_edge lists are very infrequent. In addition, this scheme is approximately 30% more space-efficient than per-vertex Lists. Although most vertices will not need the full 8 ints in the initial per-list array, we make it up on the overhead of managing all those separate C# objects.
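The scheme can be sketched with a toy version: each list gets K inline slots up front in one shared array, and only spills to extra storage when it grows past that (an illustration only - the real SmallListSet packs everything, including the spill lists, into DVector<int> storage rather than a Dictionary):

```csharp
using System;
using System.Collections.Generic;

const int K = 8;   // inline slots per list; nearly all vertices of a regular mesh fit

// all inline blocks live in one big array: list i owns slots [i*K, i*K + K)
int listCount = 4;
var inline = new int[listCount * K];
var sizes = new int[listCount];
var spill = new Dictionary<int, List<int>>();   // stand-in for the linked-list spill storage

void Append(int listIndex, int value) {
    if (sizes[listIndex] < K) {
        inline[listIndex * K + sizes[listIndex]] = value;   // common case: no allocation at all
    } else {
        if (!spill.TryGetValue(listIndex, out var extra))
            spill[listIndex] = extra = new List<int>();
        extra.Add(value);   // rare case: list longer than K
    }
    sizes[listIndex]++;
}

for (int j = 0; j < 10; ++j)
    Append(2, j);   // 8 values go inline, the last 2 spill
```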

Dynamic Mesh Operations

DMesh3 is not just an indexed triangle mesh - it is a dynamic indexed triangle mesh. This means that we can modify the mesh topology, for example by deleting triangles, splitting edges, and so on. These are complex operations, because the data structures above must be updated to remain internally consistent. However, these are mainly internal issues. From the perspective of the user of DMesh3, the immediate complication is that, if we delete a triangle, its index is now invalid.


Internally, for each of the vertices, edges, and triangles lists, DMesh3 also stores a reference count for each index, implemented via a RefCountVector object. When a vertex is deleted, its reference count is set to an invalid value (currently -1), which indicates that this index is no longer in use. The RefCountVector also maintains a list of these free indices, and functions like AppendVertex() and AppendTriangle() will preferentially re-use free indices. The reference counts themselves are stored as 16-bit short integers, which limits the valence of a vertex to ~65k.

These are internal implementation details - you don't have to interact with the reference counts yourself.  However, it is critical to understand that after some mesh processing operations - like after Mesh Simplification via Reducer - a DMesh3 instance will have gaps in its index space, where certain indices will be invalid. If there are invalid indices, we say the mesh is Non-Compact, and DMesh3.IsCompact can be used to determine when this is the case. 

When the mesh is Non-Compact, you *cannot* blindly iterate from 0 to TriangleCount or VertexCount, because some indices are invalid. The functions IsVertex(vi), IsTriangle(ti), and IsEdge(ei) can be used to check if a given index is valid. However, for a Non-Compact mesh you also cannot rely on VertexCount and TriangleCount as an iteration boundary, because they return the number of valid indices, not the maximum index. If you take a mesh with a million triangles and delete all but 100 of them, it is entirely possible that one of the triangles has an index of 999999. So, to properly iterate via indices over a Non-Compact mesh you must iterate from 0 to MaxTriangleID or MaxVertexID. These values are the maximum allocated indices. 

If this all sounds inconvenient, you are strongly encouraged to instead always use the functions VertexIndices() and TriangleIndices(). These functions return IEnumerables that iterate over only the valid indices. 
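Here is the difference in miniature, with a toy reference-count array standing in for the mesh (an illustration of the pattern, not the DMesh3 API itself):

```csharp
using System;
using System.Collections.Generic;

// refCount < 0 marks a freed (invalid) index, just like DMesh3's RefCountVector
short[] refCounts = { 1, -1, 1, -1, -1, 1 };   // 6 allocated slots, 3 valid triangles

int TriangleCount = 0;                          // number of *valid* indices
foreach (short rc in refCounts) if (rc >= 0) TriangleCount++;
int MaxTriangleID = refCounts.Length;           // maximum *allocated* index

// WRONG on a non-compact mesh: iterating 0..TriangleCount misses the valid triangle at index 5

// correct: iterate to MaxTriangleID and test-and-skip
var valid = new List<int>();
for (int ti = 0; ti < MaxTriangleID; ++ti) {
    if (refCounts[ti] < 0) continue;   // this is the IsTriangle(ti) check
    valid.Add(ti);
}
```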

In some situations, such as converting a processed DMesh3 to a different mesh format, you may wish to Compact the mesh, so that the index space is dense. The simplest way to do this is to use the compacting copy-constructor like so:

DMesh3 compactMesh = new DMesh3(noncompactMesh, true);

The resulting mesh will have indices that are tightly packed. Note that compacting is more computationally and memory intensive than non-compacting copies, because the internal buffers cannot be copied directly. So if you are doing a series of mesh processing operations, it is best to only compact at the end, or when necessary. If you need to know the mapping from old to new indices, you can use CompactCopy(), which returns a data structure containing these index maps. And CompactInPlace() can compact a given mesh without having to create a copy. However, if the mesh is large and even moderately sparse, this is more expensive than compacting with a copy.

Vertex Attributes and Triangle Groups

DMesh3 provides a few common per-vertex attribute buffers for vertex normals, colors, and uvs. The normals and colors are stored in single-precision floating point, and for the colors, only RGB values are available, there is no alpha channel. The UVs are stored as 2-element floats. 

These buffers are optional. By default they are not allocated. The various constructors and copy functions can enable these buffers at setup. You can also use functions like EnableVertexNormals(), etc, to initialize vertex attributes for a given mesh, and DiscardVertexNormals()/etc to throw them away. The DMesh3.Components property returns a MeshComponents enum-flags, which will tell you if a mesh has a particular buffer, as can the HasVertexColors/etc properties.

There is only one per-triangle attribute, an integer triangle group field. Usage of this is up to you. However we do use triangle groups inside g3Sharp to indicate semantic groupings of triangles. For example Remesher and Reducer can be configured to preserve the boundaries of sets of triangles with the same group.

Modification Operations

Since DMesh3 is dynamic, you can change it. There are various functions to do so, which properly update all the internal data structures to keep the mesh consistent. RemoveTriangle() and RemoveVertex() can be used to delete elements. SplitEdge(), CollapseEdge(), and FlipEdge() implement the standard mesh refinement operations. PokeTriangle() does a 1-to-3 triangle subdivision by adding a vertex at the triangle centroid.

The MergeEdges() function combines two edges into one. Conceptually this is performed by welding each pair of vertices. However a single vertex weld creates a bowtie vertex, so MergeEdges does both, and the mesh is never left in a non-manifold state. In addition, MergeEdges will properly close the zero-area holes that occur when, for example, merging the second-last of a pair of edge loops. 

These functions return a MeshResult enum, and for the more complex operations, also an operation-specific data structure like DMesh3.EdgeCollapseInfo. These structs provide information on what happened to the mesh - indices of elements removed and added, and so on. 

Metadata, Timestamps, Sanity Checks, and Serialization

DMesh3 has a few other capabilities that are useful in certain situations. In the extensibility direction, you can add arbitrary string/object pairs via AttachMetadata(), and look up said objects later using FindMetadata(). This is a bit of a hack, frankly, but it means that if absolutely necessary, you can attach data to a mesh as it is passed from one place to another. For example, when working with a custom mesh format, you can use AttachMetadata() to hang on to any format-specific mesh attribute data, without having to subclass or duplicate every mesh processing function you need to call. Note, however, that metadata is not copied/transferred by the DMesh3 copy constructors.

It is often useful to know if a mesh has changed. We find this happens frequently in interactive contexts, where we need to build caches/buffers to render a mesh, and we want to rebuild these caches if the mesh changes. Rather than events, DMesh3 has timestamps which are updated when the mesh is modified. These are not actually date/times, but rather just integers that are incremented on any mesh update. DMesh3.Timestamp is incremented when any mesh attribute changes, while DMesh3.ShapeTimestamp is only incremented when the mesh topology or shape changes (so, not when a vertex normal/color/uv or face group is updated).
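The consumer-side pattern is simple: remember the stamp you last built against, and rebuild only when it changes. A toy sketch (in real code you would compare against DMesh3.Timestamp or ShapeTimestamp instead of this stand-in counter):

```csharp
using System;

// toy stand-in for a mesh with a modification counter
int meshTimestamp = 0;
void ModifyMesh() => meshTimestamp++;   // DMesh3 increments its Timestamp like this internally

int cachedAtTimestamp = -1;   // stamp the cache was last built against
int rebuildCount = 0;
void UpdateRenderCache() {
    if (cachedAtTimestamp == meshTimestamp) return;   // cache still valid - nothing to do
    rebuildCount++;                                   // (re)build render buffers here
    cachedAtTimestamp = meshTimestamp;
}

UpdateRenderCache();   // first build
UpdateRenderCache();   // no-op, timestamp unchanged
ModifyMesh();
UpdateRenderCache();   // rebuild
```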

Most of the functions in DMesh3, and the code in the rest of g3Sharp, assumes that the mesh is internally consistent - ie no valid triangle references invalid vertices, that sort of thing. However, particularly when loading mesh files or constructing them from external index buffers, this may not be the case. The function CheckValidity() can be used to verify that the mesh is well-formed. If you are encountering weird issues, it is a good idea to throw in a few calls to this function. Note that by default it will consider bowtie vertices to be invalid, this is configurable via the first argument and may not be desirable in certain contexts. 

The IsSameMesh() function will tell you if two meshes are "the same", and is useful primarily for testing. By default it only checks that the vertices and triangles are the same; checking of other mesh elements can be enabled with the various optional arguments.

Finally, if you need to serialize a DMesh3, you can use the functions gSerialization.Store() and gSerialization.Restore(). The restored mesh will preserve all vertex/triangle/edge indices and allocated vertex/triangle attributes. However if you don't care about these things, you can get a much more compact serialization by just storing the vertex coordinates and triangle indices yourself (although reconstructing the DMesh3 from the deserialized buffers will be more expensive).

Questions Answered

Why an indexed mesh?

Many mesh libraries that aim to support similar dynamic-mesh capabilities use pointer-based meshes, where a level of indirection allows for the "invalid-index" problem to be avoided. However, such approaches tend to not scale well to huge meshes. If each vertex and triangle is a full object, then the memory allocator must manage millions of tiny objects, which is not ideal. Other clever pointer-like schemes can be used, but if taken to the extremes of efficiency, the result is often essentially an indexed mesh, but without the advantages of being able to make assumptions about indices.

Having indices means we can iterate over them, or over subsets. Even when the index space has gaps, it can be beneficial to iterate over indices and simply test-and-skip invalid indices - particularly in multi-threaded code. An index iteration doesn't become invalid if we modify the mesh during an iteration. We can do math on indices, and serializing index-based data structures is much simpler. And finally, trying to debug pointer-based mesh code is sheer madness, in the author's humble opinion.

Is this a Winged-Edge Mesh?

No. Although DMesh3 edges do connect two vertices and have two adjacent faces, we do not store the other topological connectivity links common in Winged Edge meshes, and we do not guarantee anything about the ordering of faces. 

Why not Half-Edge?

Half-Edge mesh data structures have the advantage that all elements have fixed size, ie our per-vertex variable-sized list is not necessary. They also have other benefits, and are more efficient for various kinds of element-adjacency iterations. However, Half-Edge is generally a pointer-based mesh data structure, and so inherits some of the problems described above. In addition, half-edge mesh code, although conceptually elegant, can be quite difficult to read. 

Can DMesh3 store non-manifold meshes?

DMesh3 does allow for bowtie vertices, ie non-manifold vertices that are connected to more than two boundary edges. However, some functions in DMesh3 will fail to operate properly on edges connected to bowtie vertices, in particular CollapseEdge(). 

Non-manifold edges, where more than two faces meet at an edge, cannot be represented because the DMesh3 edge elements only reference two triangles. Functions like AddTriangle() will return an error if you try to add a non-manifold triangle. The NTMesh3 class is a variant of DMesh3 that can store non-manifold topology, however it currently does not have many of the other DMesh3 operations implemented. 

3D Bitmaps, Minecraft Cubes, and Mesh Winding Numbers

As a follow-up to my Signed Distance Fields tutorial, a reader asked about how to voxelize a mesh. The MeshSignedDistanceGrid that we computed is a kind of voxelization, but this reader was (I assume) asking about binary voxels - ie the blocky, minecraft-y kind - which can be represented with a Bitmap3 object, where each (i,j,k) entry is true if it is inside the mesh and false if it is outside.

There are several ways to create this Bitmap3 voxelization of a mesh. If you start with the MeshSignedDistanceGrid from the SDF Tutorial, then you can convert it to a binary bitmap like so:

// create SDF
MeshSignedDistanceGrid levelSet = ...

Bitmap3 bmp = new Bitmap3(levelSet.Dimensions);
foreach(Vector3i idx in bmp.Indices()) {
    float f = levelSet[idx.x, idx.y, idx.z];
    bmp.Set(idx, f < 0);
}

This block creates a Bitmap3 with the same dimensions as the SDF, and then for each index, sets the voxel as "inside" (true) if the distance is negative. 

Creating a Minecraft-Style Surface Mesh

If you would like to see what the binary voxelization looks like, you can use the VoxelSurfaceGenerator class to create a Minecraft-style mesh of the voxel faces:

VoxelSurfaceGenerator voxGen = new VoxelSurfaceGenerator();
voxGen.Voxels = bmp;
voxGen.ColorSourceF = (idx) => {
    return new Colorf((float)idx.x, (float)idx.y, (float)idx.z) * (1.0f / numcells);
};
voxGen.Generate();
DMesh3 voxMesh = voxGen.Meshes[0];
Util.WriteDebugMesh(voxMesh, "your\\path\\mesh_file.obj");


The ColorSourceF function I am setting here is used to provide a solid color for each block. This is helpful for visualization, but might also be useful if for example you had some spatial coloring function you wanted to visualize. You can also assign colors to the mesh afterwards, of course.

The last line writes out the mesh to an OBJ file; you'll have to provide your own path. The result is shown to the right, for our standard bunny mesh. This image is a screenshot from Autodesk Meshmixer, with boundary edges shown in blue. Note that the mesh generated by VoxelSurfaceGenerator is actually a bunch of small squares that are simply adjacent. So, many mesh processing techniques will not work on this mesh.

We could try to weld these borders together, but if we aren't careful this will result in non-manifold topology where more than two faces meet along an edge or at a vertex. We'll cover that in a future tutorial.

Voxelization with Point-Containment Queries

Generating an SDF is one way to voxelize a mesh, and perhaps the fastest, because we only have to resolve inside/outside near the surface. A fast-sweeping algorithm is then used to fill the rest of space. However, we have some other options, which do have some benefits. Here is an alternative that uses DMeshAABBTree3.IsInside() to set the inside/outside value at each voxel:

DMesh3 mesh = (load mesh...);
DMeshAABBTree3 spatial = new DMeshAABBTree3(mesh, autoBuild: true);

AxisAlignedBox3d bounds = mesh.CachedBounds;
int numcells = 32;
double cellsize = bounds.MaxDim / numcells;
ShiftGridIndexer3 indexer = new ShiftGridIndexer3(bounds.Min, cellsize); 

Bitmap3 bmp = new Bitmap3(new Vector3i(numcells,numcells,numcells));
foreach (Vector3i idx in bmp.Indices()) {
    Vector3d v = indexer.FromGrid(idx);
    bmp.Set(idx, spatial.IsInside(v));
}

This is basically the same code as above; the main difference is that we have to create a ShiftGridIndexer3 to map between grid coordinates and 3D space. In this example we only need to go from grid (i,j,k) coordinates to 3D (x,y,z) points, using FromGrid(). However, if your application needs to map from 3D coordinates into the grid, you can use ToGrid(). There are also more advanced Indexer options - for example, if your object has a full 3D position (origin + rotation), you can use a FrameGridIndexer3 to do the mapping.
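Conceptually, a shift-style indexer is just a uniform scale-and-translate. Here is a minimal stand-in (this is not the g3Sharp class, just the math it performs):

```csharp
using System;

// Minimal stand-in for a shift-style grid indexer (not the g3Sharp class):
// grid index -> world point is origin + cellsize * index, and the inverse
// floors the scaled offset back to integer coordinates.
class ShiftIndexerSketch
{
    public double OriginX, OriginY, OriginZ;
    public double CellSize;

    public (double x, double y, double z) FromGrid(int i, int j, int k)
        => (OriginX + CellSize * i, OriginY + CellSize * j, OriginZ + CellSize * k);

    public (int i, int j, int k) ToGrid(double x, double y, double z)
        => ((int)Math.Floor((x - OriginX) / CellSize),
            (int)Math.Floor((y - OriginY) / CellSize),
            (int)Math.Floor((z - OriginZ) / CellSize));
}
```

Note that ToGrid(FromGrid(idx)) round-trips exactly, but FromGrid returns the minimum corner of a cell, not its center.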

This code uses a smaller grid resolution - 32 - so the result is blockier, as you can see on the right. There are not a lot of benefits to using the mesh IsInside() in this context, however you might find the code above useful if you have some kind of geometry that is hard to get into the level set format.

Voxelization with the Mesh Winding Number

There is a third function we can use to determine inside/outside of a mesh at a point, that is pretty awesome. Here is the code:

spatial.WindingNumber(Vector3d.Zero);  // seed cache outside of parallel eval
Bitmap3 bmp = new Bitmap3(new Vector3i(numcells+2, numcells+2, numcells+2));
gParallel.ForEach(bmp.Indices(), (idx) => {
    Vector3d v = indexer.FromGrid(idx);
    bmp.SafeSet(idx, spatial.WindingNumber(v) > 0.5);
});

Ok, the main thing is that we are using DMeshAABBTree3.WindingNumber(), which computes a hierarchical evaluation of the Mesh Winding Number. This is a relatively recent invention, published in a SIGGRAPH 2013 paper by Alec Jacobson, Ladislav Kavan, and Olga Sorkine-Hornung. If you have ever used the Polygon Winding Number to check if a point is inside a polygon, this is the same thing, but for 3D meshes. You just sum a simple function over the mesh triangles, and if the mesh is closed, you get a non-zero integer if you are inside the mesh, and 0 if you are outside. Kind of amazing.

The function DMesh3.WindingNumber() will do this computation, and for small meshes or single queries (eg checking if a single 3D point is inside a mesh in a VR user interface), it is sufficient. However, if the mesh is large and/or you are doing lots of queries (like filling a 64^3 voxel grid), it is too slow. Luckily, there is a neat trick to do a hierarchical computation based on a bounding-box tree, which is implemented in DMeshAABBTree3, and that's what I'm using above (see the paper for more details).
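To make the idea concrete, here is a self-contained brute-force sketch (this is not the g3Sharp implementation): each triangle contributes its signed solid angle as seen from the query point, via the Van Oosterom-Strackee formula, and the sum divided by 4*pi is the winding number.

```csharp
using System;

// Brute-force Mesh Winding Number sketch: sum the signed solid angle of
// each triangle as seen from point p, then divide by 4*pi. For a closed,
// outward-oriented mesh this is 1 inside and 0 outside.
static class WindingNumberSketch
{
    // signed solid angle of triangle (a,b,c) seen from p
    // (Van Oosterom & Strackee formula)
    public static double SolidAngle(double[] a, double[] b, double[] c, double[] p)
    {
        double[] va = Sub(a, p), vb = Sub(b, p), vc = Sub(c, p);
        double la = Len(va), lb = Len(vb), lc = Len(vc);
        double det = Dot(va, Cross(vb, vc));
        double denom = la * lb * lc + Dot(va, vb) * lc + Dot(va, vc) * lb + Dot(vb, vc) * la;
        return 2.0 * Math.Atan2(det, denom);
    }

    public static double WindingNumber(double[][] verts, int[][] tris, double[] p)
    {
        double sum = 0;
        foreach (int[] t in tris)
            sum += SolidAngle(verts[t[0]], verts[t[1]], verts[t[2]], p);
        return sum / (4.0 * Math.PI);
    }

    static double[] Sub(double[] x, double[] y) => new[] { x[0]-y[0], x[1]-y[1], x[2]-y[2] };
    static double Dot(double[] x, double[] y) => x[0]*y[0] + x[1]*y[1] + x[2]*y[2];
    static double Len(double[] x) => Math.Sqrt(Dot(x, x));
    static double[] Cross(double[] x, double[] y) => new[] {
        x[1]*y[2] - x[2]*y[1], x[2]*y[0] - x[0]*y[2], x[0]*y[1] - x[1]*y[0] };
}
```

On a closed tetrahedron this returns 1 for interior points and 0 for exterior points; the hierarchical version in DMeshAABBTree3 computes the same value, just much faster.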

Since each voxel is independent, we can trivially parallelize this evaluation. The WindingNumber() function in DMeshAABBTree3 does a precomputation the first time it is called, so the first line does one evaluation to seed this cache. Then we use gParallel.ForEach, which does a multi-threaded iteration over the IEnumerable we provide (here the indices of the grid), calling the lambda for each index. Note that we also use Bitmap3.SafeSet() (instead of Set), which internally uses a SpinLock to make sure we aren't writing to the Bitmap3 from multiple threads. This is necessary because a BitArray is used internally, which stores the bits packed into 32-bit integers. Since we cannot read-and-flip bits in a single atomic operation, we could end up with race conditions without the lock.
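I haven't shown the internals of SafeSet(), but the pattern is simple. Here is a hypothetical sketch of a lock-guarded bit write (the real implementation may differ in detail):

```csharp
using System;
using System.Collections;
using System.Threading;

// Hypothetical sketch of a SafeSet-style guarded write to a BitArray.
// BitArray packs bits into 32-bit ints, so concurrent writes to nearby
// indices can race; the SpinLock serializes the read-modify-write.
class SafeBitmapSketch
{
    BitArray bits;
    SpinLock slock = new SpinLock();

    public SafeBitmapSketch(int count) { bits = new BitArray(count); }

    public void SafeSet(int i, bool value)
    {
        bool taken = false;
        slock.Enter(ref taken);
        bits[i] = value;          // read-modify-write on the packed int
        if (taken) slock.Exit();
    }

    public bool Get(int i) => bits[i];
}
```

A SpinLock is appropriate here because the critical section is a handful of instructions; a full Monitor lock would cost more than the work it protects.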

Ok, enough of that - why bother with this much more expensive computation? Because it is magic. Below is another example, where I have cut a few large holes in our venerable bunny. Also, that sphere stuck on its back is in fact only half a sphere that is just overlapping - it is not connected to the bunny mesh. Look what happens when we use IsInside() (middle) or the SDF version (right). In both cases, those open boundaries are a disaster. 



Now, the interesting thing about the Mesh Winding Number (MWN) is that unlike the binary IsInside(), or the distance-field-based SDF, it is a real-valued computation that is well-defined over space. If the mesh is closed the MWN is an integer, but when the mesh contains holes, the winding number smoothly diffuses around the open boundaries. As a result, we can use a non-integer threshold to determine the range of MWN values we consider to be "inside". In the code above I used > 0.5, which produces a great result shown below-left. This value is important though - if I use 0.8, I get the result on the right - not a catastrophic failure, but there are still clearly big chunks missing. 


The Mesh Winding Number is a very powerful tool, and is also extremely simple to implement - my brute-force implementation is about 20 lines of code (the hierarchical version is harder, but not /that/ hard). The hierarchical evaluation is critical if you want to fill a grid. On my 6-core/12-thread machine, in Release build, computing the 64^3 voxelizations above takes a second or two. With the non-hierarchical version, it took well over a minute.

So, go forth and voxelize!

[Update] A reader asked about splitting up a mesh into 64k chunks, which is a hard constraint in the Unity Mesh class. There is a way to do this for any DMesh3, but that's a topic for a future tutorial. The VoxelSurfaceGenerator has a field MaxMeshElementCount, if you set this to 65535, then once the growing mesh hits that many vertices or triangles, a new mesh will be started. The resulting meshes will be accumulated in the VoxelSurfaceGenerator.Meshes List.

Merging Meshes with Signed Distance Fields


Left: lots of overlapping spheres, YOUR SLICER's NIGHTMARE
Right: a single continuous surface OF ULTIMATE PRINTABILITY

In this tutorial I'll show you how to use a few of the tools in geometry3Sharp to create solid approximations to an input mesh. We'll create a Signed Distance Field (SDF) approximation for an input DMesh3, and then use MarchingCubes to extract a new mesh from the SDF. These basic techniques can be used for an enormous number of cool things, from voxelization and mesh repair (similar to the Make Solid tool in Autodesk Meshmixer), to data structures for spatial queries, and even 3D modeling operations like Mesh Booleans.

As a teaser, by the end of this tutorial, I'll have explained how to turn that mess of overlapping spheres on the left into a closed surface that you could safely send to any 3D printing software.

By the way, if you would like to experiment with g3Sharp, it's now available as an official NuGet package - you can find it by searching for "geometry3Sharp" in your favorite package manager.

I'll start with a basic code sample and then we'll step through it:

DMesh3 mesh = StandardMeshReader.ReadMesh("c:\\demo\\bunny_solid.obj");

int num_cells = 128;
double cell_size = mesh.CachedBounds.MaxDim / num_cells;

MeshSignedDistanceGrid sdf = new MeshSignedDistanceGrid(mesh, cell_size);
sdf.Compute();

var iso = new DenseGridTrilinearImplicit(sdf.Grid, sdf.GridOrigin, sdf.CellSize);

MarchingCubes c = new MarchingCubes();
c.Implicit = iso;
c.Bounds = mesh.CachedBounds;
c.CubeSize = c.Bounds.MaxDim / 128;
c.Bounds.Expand(3 * c.CubeSize);
c.Generate();

DMesh3 outputMesh = c.Mesh;

StandardMeshWriter.WriteMesh("c:\\demo\\output_mesh.obj", c.Mesh, WriteOptions.Defaults);

That's it. If you run this code on this solid bunny mesh, then open the input and output meshes, you'll see that they look quite similar. The SDF version is a bit smoother in some places, and it has more triangles so the shading is different. But if you overlay the two, you'll see that the result is a very close approximation (right image).

Let's step through the code. After loading in the mesh, I first decide on num_cells. This defines the density of the grid we will compute the SDF on. Larger numbers mean better shape approximation, but also more memory and computation time. Internally, the SDF will be based on a dense 3D grid of floats - think of a solid block of Minecraft voxels. Each block is a cube cell_size wide.

Next we create the MeshSignedDistanceGrid object for the mesh and compute the result, which comes out in sdf.Grid. Then we create a DenseGridTrilinearImplicit based on this grid. This class will use trilinear interpolation to turn the discrete grid values (ie the blocky cubes / voxels) into a continuously-varying 3D scalar field. So, based on our Signed Distance Grid, we have created a Signed Distance Field.
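For intuition, here is what trilinear interpolation does inside a single cell - a minimal self-contained sketch, independent of the DenseGridTrilinearImplicit class: blend the 8 corner values by the fractional (x,y,z) position within the cell.

```csharp
using System;

static class TrilerpSketch
{
    // Trilinear interpolation of the 8 corner values of a cell.
    // c[ix, iy, iz] holds the corner values; (tx,ty,tz) in [0,1]^3 is the
    // fractional position inside the cell.
    public static double Trilerp(double[,,] c, double tx, double ty, double tz)
    {
        // interpolate along x on the four cell edges...
        double c00 = c[0,0,0] * (1-tx) + c[1,0,0] * tx;
        double c10 = c[0,1,0] * (1-tx) + c[1,1,0] * tx;
        double c01 = c[0,0,1] * (1-tx) + c[1,0,1] * tx;
        double c11 = c[0,1,1] * (1-tx) + c[1,1,1] * tx;
        // ...then along y on the two faces...
        double c0 = c00 * (1-ty) + c10 * ty;
        double c1 = c01 * (1-ty) + c11 * ty;
        // ...then along z
        return c0 * (1-tz) + c1 * tz;
    }
}
```

Inside each cell this is smooth, but across cell boundaries it is only C0 continuous - which is exactly the faceting you will see later in the low-resolution-grid images.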

We can call iso.Value(Vector3d) at any point in space, and it will return the (approximate) signed distance to the (approximated) mesh surface. If the distance is negative, then that point is inside the surface. Positive is outside, and if the value is zero, then the point is on the surface (this surface is known as the iso-surface). Of course we rarely get to exactly zero, but if we had point A just inside the surface (ie negative), and point B just outside (positive), then we know that at some point on this line, the function will evaluate to zero. So, we can do root-finding along this line, using something like Bisection or Newton's method, to converge on the zero value (wikipedia). 
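Here is a minimal sketch of that bisection step, with a generic scalar function standing in for iso.Value() evaluated along the A-B segment (parameterized by t in [0,1]):

```csharp
using System;

static class BisectionSketch
{
    // Bisect on parameter t in [tA, tB]. f stands in for the implicit
    // function evaluated along the segment; f(tA) and f(tB) must have
    // opposite signs so a zero crossing is guaranteed in between.
    public static double FindZero(Func<double, double> f, double tA, double tB,
                                  int iterations = 40)
    {
        double fA = f(tA);
        for (int i = 0; i < iterations; ++i) {
            double tMid = 0.5 * (tA + tB);
            double fMid = f(tMid);
            if (fA * fMid <= 0) {
                tB = tMid;              // zero crossing is in [tA, tMid]
            } else {
                tA = tMid; fA = fMid;   // zero crossing is in [tMid, tB]
            }
        }
        return 0.5 * (tA + tB);
    }
}
```

Each iteration halves the interval, so 40 iterations is far more precision than a mesh vertex needs; Marching Cubes implementations often use just a handful of steps, or a single linear interpolation of the two corner values.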

This is exactly how we will get a mesh back out of our SDF. Remember, at this point our SDF is completely decoupled from the original mesh. All we have is the function iso.Value(). So what we are going to do is fill a bounding-box with smaller boxes, and evaluate the SDF at the corners of all the boxes. Then when we find a box where some corners are inside and some are outside, we know the surface cuts through that box. We'll do root-finding along the box edges to (approximately) find the zeros, and then make a patch of triangles. This is the famous Marching Cubes algorithm. Paul Bourke has an excellent page with more information, and you can find a PDF of the original paper here - compared to a modern SIGGRAPH paper it is incredibly readable.

In g3Sharp the MarchingCubes class implements this method, and you can give it an arbitrary function to surface via the Implicit member. This way of representing a 3D shape - as a scalar function over space - is often called an Implicit Surface, hence the naming. We also have to provide a bounding box to search inside of (ie to fill with smaller cubes), and the CubeSize we wish to use. Note the small expansion of the bounds - if you don't do this, the most extreme edges of the shape might be clipped off (3 cells is probably overkill).

After calling Generate(), the mesh is built and we can write it out. Easy!

You're not impressed? Ok, how about this one. Here's the same bunny with an intersecting sphere stuck on it, and I did a bit of sculpting to create a self-intersection (download). If you wanted to turn this into a solid non-self-intersecting shell (say, to 3D print it), well, you're in trouble. These are hard problems to solve. But not for an SDF - run this file through the code above, and the output mesh is a closed, manifold shell. The trade-off for this simplicity is that we have to accept some resampling artifacts around the sharp edges.




In the examples above, what is "inside" vs "outside" is defined by something called the winding number. We'll explore this concept more in a future post. But, basically, in this context it means that for any given point in space, we can count up how many "times" the point is inside the input surface. So, points inside both the sphere and the bunny, have winding number 2. Similarly, points inside that bit that is self-intersecting (which you can't see) also have winding number 2 - they are inside the bunny "twice". Points outside have winding number 0.

But what about cavities on the inside of the surface? Well, the winding number depends on the orientation of the mesh. If we flip a mesh inside-out, then the points inside it have negative winding numbers. In the SDF mesher, we will define any point with positive winding number as inside. This means we can use inside-out meshes to define holes, and also to do Boolean subtraction. Here's an example below. I am using Meshmixer to view the output meshes, and in Meshmixer the red/white stripey pattern means you are looking at the "back-side" of the mesh surface. I cut away in the SDF+MC mesh to show that there is also a fully-enclosed hole.




In the description above, I mentioned two parameters - the cell_size we passed into MeshSignedDistanceGrid, and the MarchingCubes.CubeSize. The cell_size of the Signed Distance Grid defines how accurate our approximation of the input mesh can be. Even if we mesh at a very high resolution, if the grid doesn't capture the shape well, we'll have approximation error.

In the image below, I set num_cells to 16, 32, 64, 128, and 256, and used a CubeSize of 128. At the lower resolutions we clearly don't have enough values in the grid to capture the shape. However it is quite interesting to mesh a low-resolution grid at high resolution. The trilinear interpolation produces a smoothly-varying patch inside each cell, but you can clearly see the discontinuities at the cell boundaries, where the interpolation is only C0 continuous. At the higher 128 and 256 resolutions, the grid is as or more accurate than the Marching Cubes triangulation, so you don't see any difference.


Varying the CubeSize is not so nice at lower resolutions - again I went from 16-256 below, this time with num_cells=256 for the SDF grid. The only reason you might use lower-resolution marching cubes is to reduce memory (Marching Cubes will produce huge meshes very quickly!). At the highest resolution below, you can start to see the original triangles - clearly our SDF approximation is quite accurate in this case! However even at this resolution, the sharp edge around the border of our intersection is not particularly clean. This is caused by both the SDF and the Marching Cubes, but even if it were a nice, smooth sharp edge in the SDF, Marching Cubes will not be able to capture it.


So, to capture shape details we want to use high MC resolutions, but then we end up with huge meshes. What to do? Use another tool in our mesh processing toolbox, of course. The Reducer class, which I described in a previous tutorial, can help us get back to reasonable mesh resolutions. In this case you only need a few lines:

Reducer r = new Reducer(outputMesh);
r.ReduceToTriangleCount(50000);

In the 256-SDF-grid, 256-MC-grid case above, the initial MC mesh has about 450k triangles. The code above will reduce this mesh to 50k triangles. This takes about 3 seconds on my 4-core desktop computer - actually more time than it takes to generate the 450k mesh in the first place! The result, shown in the middle on the right, is much lower-density, but clearly there is still a lot of redundant geometry. If we go further, down to 10k triangles (far right), the mesh starts to get better.

You'll also notice that in the 10k-triangles version, the sharp edge around the intersection has started to get a bit cleaner. I have (experimentally) found that using a slight modification to the Reducer setup will do an even better job at recovering these sharp edges. Instead of reducing to a specific triangle count, the Reducer also supports reducing to a target edge length. The following code:


r.ReduceToEdgeLength(2 * cell_size);

resulted in the mesh on the right, which has a very crisp sharp edge around where the sphere was subtracted. (The target edge length here - twice the SDF cell size - is a value I found experimentally.) This doesn't always work, but when it does, it works quite well.

So, we can combine multiple overlapping meshes into a single SDF solid, mesh it at a crazy high resolution, and then use Mesh Simplification to get it back to a closed shell we could actually use in other software. And maybe even get sharp edges out. What to do with this power? Generate some shapes, of course! Geometry3Sharp has lots of different kinds of Mesh Generators built in - Spheres, Cylinders, Boxes, Tubes, Surfaces of Revolution, and more. Look in the mesh_generators/ folder to find all the generator classes. You can do procedural shape generation in two steps - first generate a bunch of small closed meshes, then combine them all and send the result through the code above. It literally is that easy.

As an example, here is a bit of code that generates a new mesh from an input mesh, by adding a sphere for each vertex, and a rectangular box along each edge:

Sphere3Generator_NormalizedCube gen = new Sphere3Generator_NormalizedCube() { Radius = sphere_radius, EdgeVertices = 5 };
DMesh3 sphereMesh = gen.Generate().MakeDMesh();

DMesh3 latticeMesh = new DMesh3();
MeshEditor editor = new MeshEditor(latticeMesh);
foreach ( int vid in mesh.VertexIndices() ) {
    DMesh3 copy = new DMesh3(sphereMesh);
    MeshTransforms.Translate(copy, mesh.GetVertex(vid));
    editor.AppendMesh(copy);
}
foreach ( Index4i edge_info in mesh.Edges() ) {
    Vector3d a = mesh.GetVertex(edge_info.a), b = mesh.GetVertex(edge_info.b);
    Frame3f f = new Frame3f((a + b) * 0.5, (b - a).Normalized);
    editor.AppendBox(f, new Vector3f(box_width, box_width, (b - a).Length * 0.5));
}

I set the sphere_radius and box_width to something appropriate for the scale of my mesh, and ran it on a bunny reduced to 512 triangles, with the SDF-grid cell count and Marching Cubes resolution both set to 512. This crazy resolution is required to capture the fine details. After about 30 seconds, the set of overlapping meshes on the left is turned into the single solid on the right:




Hopefully this gives you some ideas =)

One caveat about Marching Cubes should be mentioned, and is illustrated in the low-res MC image further up the page. I mentioned that we figure out the mesh patch that should be inside each "cube" based on whether the corners of the cube are inside or outside the surface. Most of the time, this works well, but there are cases where it is ambiguous. Currently, the Marching Cubes implementation in g3Sharp does not properly handle these cases. It is mostly a problem when the shape varies much more rapidly than the MC mesh. In our case these failures will leave small holes in the mesh, as certain triangles will be discarded (in other implementations these triangles would produce non-manifold geometry, but DMesh3 doesn't allow that). We'll be fixing this in the future.

To give credit where credit is due, the super-fast Mesh to SDF conversion I have implemented in g3Sharp is based on a C++ implementation by Christopher Batty, which is on github. And he tells me this code was a cleaned-up version of an initial implementation by Robert Bridson. My C# version has been extensively refactored and modified so that some steps can be multi-threaded. I also added parity-counting to properly handle overlapping shells. 

Mesh Simplification with g3Sharp

Recently a user posted a github issue asking for a mesh simplification example. It just so happened that I had recently finished writing an implementation of Garland and Heckbert's Quadric Error Metric (QEM) Simplification algorithm. If you want to learn more about this technique, the original papers and several later articles are available on Michael Garland's Website, and are very readable. I will give the broad strokes below, but first here is an example of what we are talking about - automatic reduction of a bunny mesh from 13,000 to 500 triangles:


An easy way to Simplify or Reduce a mesh (which is the terminology I use, because... well, I don't have a good reason, but it's how I think of it!) is to iteratively collapse edges in the mesh. Each collapse removes one edge, the two (or one, at the boundary) triangles connected to that edge, and one vertex. Just repeat until you hit your target triangle count, and you're done. Easy! 

Except, which edge should you collapse first? And, when you collapse the edge, should you just keep the existing vertex position? In most cases it would at least make more sense to move the remaining vertex to the edge midpoint. But could we do better? This is where QEM comes in. Basically, the Quadric Error gives us a way to (1) "score" edges, so we know which edge collapses will have the smallest impact on the shape, and (2) predict a position for the new vertex that will minimize the score after the collapse. 

Ultimately, the Quadric Error is a measurement of the sum of squared distances from a point to a set of planes. If you think of the ring of triangles around a vertex, each triangle defines a plane. The distance from the vertex to each of those planes is zero. But if we start moving the vertex, those distances increase. So we can "score" a vertex movement (like an edge collapse) by summing the squared distances from the new vertex position to all the input planes (of the original triangles). Each squared point-plane distance can be written as a quadratic form - a matrix sandwiched between two copies of the (homogeneous) point - and since v'Av + v'Bv = v'(A+B)v, we can combine all the error measurements into a single matrix!

Even better, after an edge collapse, we can think of that vertex as having a total error measured relative to the input planes of both vertices (still just one matrix). So as we do sequential collapses, we accumulate all the plane-distance-functions of the original mesh triangles that were (at some point in the past) connected to the remaining vertex. And it's still just one matrix. In the '99 paper linked from the site above [PDF], Garland showed how the QEM error is in some sense a measure of surface curvature, and produces "optimal" triangulations, under a reasonable definition of optimal. Amazing!
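To make that concrete, here is a self-contained sketch of the quadric machinery (illustrative only - g3Sharp's internal representation may differ): a plane with unit normal (nx,ny,nz) and offset d contributes Q = pp' for p = (nx,ny,nz,d), the error of a homogeneous point v = (x,y,z,1) is v'Qv, and quadrics combine by simple matrix addition.

```csharp
using System;

// Sketch of the core QEM bookkeeping: one 4x4 matrix per plane, summed
// matrices accumulate total squared point-plane error.
static class QuadricSketch
{
    // quadric for the plane n.v + d = 0, with unit normal (nx,ny,nz)
    public static double[,] PlaneQuadric(double nx, double ny, double nz, double d)
    {
        double[] p = { nx, ny, nz, d };
        var Q = new double[4, 4];
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                Q[i, j] = p[i] * p[j];     // outer product pp'
        return Q;
    }

    // accumulate B into A (combining error terms: A += B)
    public static void AddInto(double[,] A, double[,] B)
    {
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                A[i, j] += B[i, j];
    }

    // evaluate v'Qv with homogeneous v = (x,y,z,1): total squared distance
    public static double Error(double[,] Q, double x, double y, double z)
    {
        double[] v = { x, y, z, 1.0 };
        double sum = 0;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                sum += v[i] * Q[i, j] * v[j];
        return sum;
    }
}
```

The "optimal" collapse position is then the point minimizing this quadratic form, which just requires solving a small linear system built from Q.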

But, ok, you're probably just here for the code, right? To use the g3sharp implementation you need to get your mesh to be a DMesh3, see my previous tutorial for how to do that. Then you just need a couple lines to create a Reducer object and run the simplification:

DMesh3 mesh = load_my_mesh_somehow();
Reducer r = new Reducer(mesh);
r.ReduceToTriangleCount(500);
In the code above, 500 is the target triangle count. This takes a fraction of a second to run. I have not done extensive profiling, but on a relatively fast machine I can reduce a 350k mesh to 100k in 2 to 3 seconds. So, it's pretty fast.

Mesh Validity Checking

Most of the mesh processing algorithms in g3Sharp require that the input meshes be manifold. This term has lots of meanings; in this context it means that, at minimum, each mesh edge is connected to 1 or 2 triangles. Inside DMesh3, edge triangles are stored in an Index2i object, so DMesh3 can't even represent a non-manifold edge. In most cases we also require that there are no bowtie vertices, which are vertices connected to disjoint sets of triangles. The simplest example would be two triangles connected only at one vertex (hence the name "bowtie"). 

If these conditions are not met, then some mesh operations - like an edge collapse - can produce undefined results, result in exceptions, and so on. So, to test that your mesh is internally consistent, you can use the DMesh3.CheckValidity() function. You can configure this function to have different behavior if the mesh is not valid, ie it can assert, throw an exception, just return false, and so on. By default it will consider bowtie vertices to be invalid, but you can also configure this behavior with the bAllowNonManifoldVertices argument. 

Note that this function also does very thorough testing of the internal mesh data structures to make sure everything is consistent. So, it is relatively expensive, and you probably do not want to be calling it all the time in production code. One exception is when loading meshes from a file - in that case you really should check the mesh after reading. It can save you a lot of headaches trying to track down garbage-in/garbage-out problems!

Preserving Boundaries

In some cases, for meshes with boundaries you might need to preserve the boundary loops exactly. For example if you are actually reducing a sub-region of a larger mesh, or a mesh split along UV boundaries. This just takes a few more lines, before your call to ReduceToTriangleCount():

r.SetExternalConstraints(new MeshConstraints());
MeshConstraintUtil.FixAllBoundaryEdges(r.Constraints, mesh);

The MeshConstraints object in the code above is in fact a very powerful facility, that allows you to constrain much more than just boundary edges. But that is a topic for a future tutorial. You can poke around the data structure, or the MeshConstraintUtil helper class, to find out more. The images below compare reducing a bunny-with-boundary to 500 triangles without (left) and with (right) a preserved boundary. 


Here is a closeup of the boundary around the bunny's front paws. You see on the left that there are significantly shorter edges along the boundary loop, because it has been exactly preserved. However, you might also note if you look closely (or click to enlarge) that on the front-left paw that there are some thin sliver triangles. This is a current limitation of boundary preservation - it may result in a bit of ugly stuff at the border. This will hopefully be improved in future updates.


Project to Target

Finally, one last thing that you might want to do when Simplifying a mesh. By default, the "new" vertex position after an edge collapse is computed by minimizing the QEM error for that vertex. Compared to something like edge midpoints, this produces nicer shapes and actually results in the algorithm running much faster. However in some cases you may require that the vertex positions lie on the original mesh surface. This is also supported. First you build a spatial data structure, like the DMeshAABBTree3 we built in the Spatial Query tutorial, and then set that as a "Projection Target" for the Reducer. This causes the vertex positions to be mapped to the nearest points on the input mesh surface. Here is the code:

DMeshAABBTree3 tree = new DMeshAABBTree3(new DMesh3(mesh), autoBuild: true);
MeshProjectionTarget target = new MeshProjectionTarget(tree.Mesh, tree);
r.SetProjectionTarget(target);
r.ProjectionMode = Reducer.TargetProjectionMode.Inline;

The last line is optional. This causes the Reducer to compute the projection each time it wants to evaluate the QEM error for that vertex. This is "more correct" but many of these vertices will eventually be discarded, so the work is in some sense wasted (projections are expensive). If you leave this line out, then the projection is computed after the reducer is finished, for just the vertices that were ultimately kept. The image below compares no projection (left) and with inline projection (right), overlaid on the original surface (dark grey). The simplified meshes don't actually look very different, but you can see that on the right, most of the mesh is "inside" the original surface, while on the left it is roughly half-and-half inside/outside.


One thing to keep in mind with Projection is that for thin parts, the projection can easily end up on the "wrong" side. So, for most simplification problems you probably don't need it. Which is great because it's quite a bit slower!

Now, go save some triangles! Next up is Remeshing, which works in much the same way, only it's also much more complicated...

Mesh Creation and Spatial Queries with g3Sharp

Welcome to the first post on the gradientspace blog! The first of many, I hope. The posts here will mainly be tutorials on how to use the various gradientspace open-source libraries, and, eventually, interactive 3D tools. 

In this first entry, I will answer a user question, which was filed as Issue # 2 in the geometry3SharpDemos project. The gist is that the user would like to use geometry3Sharp to construct a mesh and do some raycasting against the mesh surface. 

The first problem is how to construct a DMesh3 object from lists of vertex x/y/z coordinates, triangle indices, and in this case also normals (which might not be necessary for the user's problem). This is not very hard but it comes up so often in my own coding that I decided to add a new utility function that makes this construction a one-liner:

DMesh3 mesh = DMesh3Builder.Build(vertices, triangles, normals);

This DMesh3Builder.Build() function is written using C# generics, and internally it does type interrogation to figure out what the input buffers are and cast them to the correct types. So, the vertices and normals arguments could be a float[] array, a List<Vector3f>, or any other generic IEnumerable of <float>,<double>,<Vector3f> or <Vector3d> type. Similarly triangles can be an int[] array or any other IEnumerable<int> or <Index3i>.

This uber-function is not necessarily the most efficient. Internally it basically does this:

    DMesh3 mesh = new DMesh3(MeshComponents.VertexNormals);
    for ( int i = 0; i < NumVertices; ++i )
        mesh.AppendVertex(new NewVertexInfo(vertices[i], normals[i]));
    foreach ( Index3i tri in triangles )
        mesh.AppendTriangle(tri);
The NewVertexInfo type has additional constructors for other cases, such as vertex colors and UVs. Note that you need to bitwise-or in additional flags (eg MeshComponents.VertexColors) in the constructor, or use the functions like DMesh3.EnableVertexColors(), to allocate these other internal data structures before you can add colors.

After you create a mesh like this, it is a good idea to check that all the internal data structures are consistent. In some cases AppendTriangle() will throw Exceptions if there is a problem, but we do not exhaustively check that the mesh is well-formed on construction because those checks are expensive. Instead, you can call DMesh3.CheckValidity() to do this. This function takes a FailMode argument which determines whether it throws, asserts, or returns false when a problem is found.

(If you do find problems, fixing them might be difficult - I recommend trying Autodesk Meshmixer for now...)

Basic Mesh File I/O

After you have constructed a mesh as above, you might want to see what it looks like. You can do this by exporting the mesh to disk and opening it in a mesh viewer, like the aforementioned Meshmixer. The code to write out a single mesh is a somewhat-convoluted one-liner:

    IOWriteResult result = StandardMeshWriter.WriteFile(path,
            new List<WriteMesh>() { new WriteMesh(mesh) }, WriteOptions.Defaults);

OBJ, STL, and OFF formats are supported. For STL the default is ASCII, but if you want a smaller binary STL you can configure this in the WriteOptions data structure, along with many other standard and format-specific options. 

If you would like to read a mesh from disk, you can use the StandardMeshReader class. This currently can read OBJ, STL, and OFF formats. It is possible to register additional readers yourself using the MeshFormatReader interface. The simplest way to read a mesh is a one-liner:

DMesh3 mesh = StandardMeshReader.ReadMesh(path);

This works for most cases but if your file contains multiple meshes, or you want to get error feedback, or configure read options, you have to use the more verbose method:

    DMesh3Builder builder = new DMesh3Builder();
    StandardMeshReader reader = new StandardMeshReader() { MeshBuilder = builder };
    IOReadResult result = reader.Read(path, ReadOptions.Defaults);
    if (result.code == IOCode.Ok) {
        List<DMesh3> meshes = builder.Meshes;
    }

For OBJ format we can also read the materials, but you have to load the texture images yourself. This is somewhat complicated, perhaps a topic for a future post.

Spatial Data Structure Queries

The next part of the Issue asks how to make a spatial data structure, to do efficient ray-intersection queries. Currently g3Sharp only supports Axis-Aligned Bounding Box (AABB) trees. It just takes two lines to set one up:

DMeshAABBTree3 spatial = new DMeshAABBTree3(mesh);
spatial.Build();

If the mesh is large this might take a few seconds, but the result is a spatial data structure that has many query functions. For example we can compute a ray-cast like so:

Ray3d ray = new Ray3d(origin, direction);
int hit_tid = spatial.FindNearestHitTriangle(ray);

Of course the ray might miss, so we have to check the resulting triangle ID:

    if (hit_tid != DMesh3.InvalidID) {
        IntrRay3Triangle3 intr = MeshQueries.TriangleIntersection(mesh, hit_tid, ray);
        double hit_dist = origin.Distance(ray.PointAt(intr.RayParameter));
    }

Generally when a query returns a vertex, triangle, or edge index, you should test it against DMesh3.InvalidID to check if the query actually found anything. 

DMeshAABBTree3 also supports nearest-point queries, which are very useful in lots of applications. Here is the standard code to find the nearest point on a mesh to an input point:

    int near_tid = spatial.FindNearestTriangle(point);
    if (near_tid != DMesh3.InvalidID) {
        DistPoint3Triangle3 dist = MeshQueries.TriangleDistance(mesh, near_tid, point);
        Vector3d nearest_pt = dist.TriangleClosest;
    }

Those are the two queries I use most often, but there are a few others, like FindAllHitTriangles(Ray3d), which finds all ray/triangle intersections, and TestIntersection(Triangle3d), which tests a triangle for intersection with the mesh.

DMeshAABBTree3 also supports a point inside/outside query, using IsInside(point). This function only works if the mesh is closed. In addition, the current implementation is not the most efficient (it uses FindAllHitTriangles() and then counts crossings).
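Usage is a one-liner (assuming the spatial tree above was built from a closed mesh):

    bool is_inside = spatial.IsInside(new Vector3d(0, 0, 0));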

To check for intersections between two meshes, you can use TestIntersection(DMeshAABBTree3 otherTree). This is more efficient than testing each triangle separately because it descends the bounding-box hierarchies recursively. This function also takes an optional Func<Vector3d, Vector3d> TransformF argument, which allows you to apply a transformation to the second mesh without actually modifying its vertex positions. If your meshes are in the same coordinate system you can just pass null for this argument. However if you are, for example, trying to compute intersections between meshes in Unity that have hierarchical transforms above them, then sending in a suitable transform function that maps one into the space of the other can simplify your code.
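A sketch of a mesh/mesh intersection test with a transform (the translation here is a hypothetical stand-in for whatever mapping takes meshB's coordinates into meshA's space):

    DMeshAABBTree3 treeA = new DMeshAABBTree3(meshA, true);   // autoBuild
    DMeshAABBTree3 treeB = new DMeshAABBTree3(meshB, true);

    // hypothetical: meshB is offset by (1,0,0) relative to meshA's coordinate system
    Func<Vector3d, Vector3d> xform = (v) => v + new Vector3d(1, 0, 0);
    bool intersects = treeA.TestIntersection(treeB, xform);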

Finally, if you would like to implement your own spatial queries that can take advantage of the DMeshAABBTree3 spatial decomposition, you can use the internal TreeTraversal class by replacing the box and triangle test functions, and passing your instance to DoTraversal(). See the code comments for more info about how this works.
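As a sketch, a custom traversal that visits only the triangles whose bounding boxes contain a query point might look like this (the NextBoxF/NextTriangleF member names and types are my reading of the traversal interface; check the code comments in DMeshAABBTree3):

    Vector3f query_point = new Vector3f(0, 0, 0);
    int count = 0;
    DMeshAABBTree3.TreeTraversal traversal = new DMeshAABBTree3.TreeTraversal() {
        // descend into a box only if it contains the query point
        NextBoxF = (box, depth) => { return box.Contains(query_point); },
        // called for each triangle in a box that passed the test
        NextTriangleF = (tid) => { count++; }
    };
    spatial.DoTraversal(traversal);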

So, now you know how to load a DMesh3 or construct one from scratch, create an AABB Tree for it, and compute things like raycasts and nearest-points. This is enough to do some pretty interesting procedural geometry generation, if you think about it...