Category: Unity

Mobile VR: Wrong Choices in Art Pipeline That Caused Problems

There are a number of decisions I regret that could have been avoided if I had known better ahead of time. Here I share the worst decisions I made for Project Voyage.

The consequences of not keeping the same units across different software

The intention was good: because we are making an educational experience, we originally wanted everything to be real-life size, even though that meant working with enormous scale values in Maya. I also felt that if everything was built that way, it would be easier to manage the size of the assets. So I made the whole scene about 100m by 100m.

However, it created a lot of problems in our pipeline. When imported into Unity, the scene was HUGE, and we didn't notice that at first. (Maya works in centimeters by default, while Unity treats one unit as one meter, so it is easy for the scale to drift when assets move between the two.)

1. Clipping problem: far plane and near plane

When we rendered the scene, we found that on the Pixel, overlapping faces behaved strangely: they flickered, as if Unity could not tell which plane was in front of the other. This is Z-fighting.

The reason, we found, is that Unity's depth calculations depend on the ratio of the far plane to the near plane: the available depth precision is divided across the space between them.


Because we needed to see far into the distance, our far plane was about 10000, and because we also needed to see the Google Daydream controller up close, our near plane had to be about 0.02. Depth precision is distributed non-linearly between the two planes, so with a far/near ratio of roughly 500,000, distant surfaces end up with nearly identical depth values and cannot be rendered properly.

So our kind programmer scaled the scene down (by a factor of about 0.1, if I remember correctly) and kept working.
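If I were doing this again, I would add a small safety check for this. Below is a minimal sketch (the ratio threshold of 20000 is an arbitrary example value I picked, not a Unity constant) that warns when a camera's far/near ratio gets large enough to risk Z-fighting:

```csharp
using UnityEngine;

// Minimal sketch: warn when a camera's far/near clip ratio is large enough
// to risk Z-fighting on mobile. The 20000 threshold is an example value.
[RequireComponent(typeof(Camera))]
public class ClipPlaneRatioCheck : MonoBehaviour
{
    public float maxFarToNearRatio = 20000f;

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        float ratio = cam.farClipPlane / cam.nearClipPlane;

        if (ratio > maxFarToNearRatio)
        {
            Debug.LogWarning(
                $"Far/near clip ratio is {ratio:F0}; expect Z-fighting on mobile. " +
                "Consider pulling the far plane in or pushing the near plane out.");
        }
    }
}
```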

2. Unity crashes every time I try to bake the lightmap

But it seems that 0.1 wasn't small enough. When we finally chose forward rendering and were ready to bake the lightmap in Unity, the editor crashed every time during the bake. I found that if I scaled the scene down further, it could bake the lightmap (though the result still had strange black patches), so the scale was the problem.

Also, because by that time everything had already been set up at that scale, and adjusting it all would have taken a long time, we decided to bake the lightmaps in Maya instead. The shadows Maya baked were beautiful, and I had full control over every aspect, but it added about two hours of extra work every time we changed the scene, and it did not cover the small assets (the trees, animals and plants were only baked with ambient occlusion). And worse: if our programmers move assets around in the Unity scene, the lightmap is no longer correct. This is painful. To get things right, we found a way to export the Unity scene assets into an FBX, and we are still experimenting with it; hopefully it will work.
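Since the oversized scale kept biting us, a cheap sanity check could have saved time. Here is a rough editor sketch (the menu path is made up for this example; the file belongs in an Editor folder) that logs the combined bounds of everything in the open scene before you start a long bake:

```csharp
using UnityEngine;
using UnityEditor;

// Rough editor sketch: log the combined bounds of every renderer in the open
// scene, so an accidentally oversized (or undersized) world is obvious before
// you spend hours on a lightmap bake. Menu path is an arbitrary choice.
public static class SceneBoundsReport
{
    [MenuItem("Tools/Report Scene Bounds")]
    private static void Report()
    {
        Renderer[] renderers = Object.FindObjectsOfType<Renderer>();
        if (renderers.Length == 0)
        {
            Debug.Log("No renderers found in the open scene.");
            return;
        }

        Bounds total = renderers[0].bounds;
        foreach (Renderer r in renderers)
        {
            total.Encapsulate(r.bounds);
        }

        Debug.Log("Combined scene bounds (world units): " + total.size);
    }
}
```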

3. Affecting the Doppler Effect of sound

Unity's audio engine effectively assumes that one world unit is one meter when it computes the Doppler shift from the relative velocity of an AudioSource and the AudioListener. Because our scene was not at real-world scale, the apparent velocities were off, and the pitch shift of moving sound sources did not sound right.
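One possible stopgap is to scale each source's dopplerLevel to compensate. The sketch below assumes a hypothetical sceneScaleFactor that you tune by ear; it is not something Unity provides, and the right value depends on how far your scene is from real-world scale:

```csharp
using UnityEngine;

// Sketch of a possible workaround: adjust every AudioSource's dopplerLevel to
// compensate for a scene that is not built at 1 unit = 1 meter.
// "sceneScaleFactor" is a made-up parameter for this example; tune it by ear.
public class DopplerScaleCompensation : MonoBehaviour
{
    public float sceneScaleFactor = 0.1f;

    void Start()
    {
        foreach (AudioSource source in FindObjectsOfType<AudioSource>())
        {
            // Unity clamps dopplerLevel to the 0..5 range.
            source.dopplerLevel = Mathf.Clamp(source.dopplerLevel / sceneScaleFactor, 0f, 5f);
        }
    }
}
```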

Accidentally Froze the Transformations of the Whole SCENE

By the time I realized what I had done, I had already pressed the button. When everything was finally placed inside the scene, I selected the whole scene in Maya and chose Freeze Transformations. That cleared all the TRANSFORMATION information on every object: how much each object had been rotated, scaled and translated from its original model was lost…

Be careful about small things like this; they matter a lot for scene management. We wasted extra time because of it.

1. No longer easy to replace a model with one click

When our other artist wanted to make a small modification to the tree models, we could have clicked a "replace A with B" button and the magic would have happened. However, because I had frozen the transformations, we instead had to import the new tree, move it into position, and then adjust its scale and rotation by hand.

To mend that, we tried the snap tools and some simple scripts, which helped somewhat, but none of that time should have been necessary. It could have been easy.

 

Mobile VR Art Development of Trees, Project Voyage

Introduction

This semester I am working as a 3D artist on Project Voyage. (Website: http://www.etc.cmu.edu/projects/voyage/)

Our project explores collaborative VR in a classroom setting. We are putting a whole classroom into virtual reality, with students on Google Daydream and the teacher on an iPad, and observing and trying to solve all the problems that come up along the way.

The school we work with, Cornell High School in Pittsburgh, has about 15 Google Daydreams, which makes our project possible. We are working with two teachers, a Social Studies teacher and a Science teacher, around the topic of the deciduous forest biome. We chose this topic after discussion with the teachers: it is relevant to both subjects, and Pittsburgh lies within the deciduous forest biome, so we would like the students to connect what they learn in school to their real lives.

This blog is about the problems the art side encountered while working with Google Daydream, and about our experience with the pipeline between Maya and Unity.

 

Google Pixel Capability

To be clear, the devices we use are the Google Pixel and Daydream 1.

Before we went into development, we ran several tests on Google Daydream. Here are some observations:

1. 75K: this is the number of TRIANGLES that the Google Pixel can run smoothly in a single field of view. We can push up to 90K triangles, but 75K is the safe amount; above that, it starts to lag. That is understandable: this is not just mobile, it is mobile VR.

We chose to stay in the safe zone instead of pushing to the limit, to leave headroom for other things that might lower performance.

2. Bad shadows: real-time shadow rendering on the device looks bad overall, so it is better to use baked shadows.

3. Anti-aliasing and rendering path: There are two rendering paths, Forward Rendering and Deferred Rendering. Forward Rendering gets more expensive as the number of lights grows but supports hardware anti-aliasing (MSAA), while Deferred Rendering handles multiple lights more cheaply but does not support MSAA, which is very important for making the scene look clean. We chose Forward Rendering with a single directional light in the scene (a script sketch of these settings follows after this list).

4. Alpha cutout has white borders: We haven't solved this problem yet. For some assets we use Unity's Standard shader in Cutout mode. With Forward Rendering and 4x anti-aliasing, these models look good up close, but far away they have white borders. (When we were using Deferred Rendering, all of the alpha-cutout models had white borders, so I suspect anti-aliasing is part of the answer.)
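For reference, here is a rough sketch of how these settings can be applied from script (they can equally be set in the Quality settings and on the camera in the Inspector). The 4x MSAA level matches what we use; the _Cutoff tweak at the end is only an experiment for the white-border issue (_Cutoff is the alpha threshold of Unity's Standard shader in Cutout mode), and 0.6 is a guessed value, not a verified fix:

```csharp
using UnityEngine;

// Rough sketch of the rendering setup discussed above. Values are examples,
// not tested recommendations; most of this can also be configured in the
// Inspector and in the Project/Quality settings.
public class MobileVrRenderSetup : MonoBehaviour
{
    void Start()
    {
        // Forward rendering so MSAA is available.
        // Assumes the VR camera is tagged MainCamera.
        Camera.main.renderingPath = RenderingPath.Forward;

        // 4x MSAA.
        QualitySettings.antiAliasing = 4;

        // Experiment for the white borders on cutout materials: nudge the
        // Standard shader's alpha cutoff up a little. 0.6 is a guess.
        foreach (Renderer r in FindObjectsOfType<Renderer>())
        {
            foreach (Material m in r.materials)
            {
                if (m.HasProperty("_Cutoff"))
                {
                    m.SetFloat("_Cutoff", 0.6f);
                }
            }
        }
    }
}
```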

 

Software and Plugins for Art:

We use Maya for modeling and Substance Painter for texturing. We are planning to use Akeytsu for rigging and animation, and if that fails, we will fall back on Maya's rigging and animation tools.

A useful tool we used for placing the plants and trees is called spPaint3D. You can use it to paint models onto a surface; in our case, we painted the assets onto the terrain. Download link: https://www.highend3d.com/maya/script/sppaint3d-for-maya

 

Development Problem Solving

Deciding on the Art Style: Making Low-poly Art Work in Google Pixel VR, Especially for TREES

The Google Pixel is not a powerful device, and we are building a forest, so we need to keep the polygon count of each asset as low as possible. Here I will take you along the journey I went through.

Because we also want to preserve a certain educational value, we did some tests on the textures. (I also didn't have any experience making low-poly trees, so in the early stages a lot of things didn't work.) And because I was thinking too much about making the art style low-poly, I tried triangulating the textures. Here is the first test I did of an oak tree:

As you can see, among A-F, only A has a shape similar to a real oak tree. And if you apply the same texturing approach to the sphere-like leaves, you can easily see through them and realize the canopy is hollow. So I picked A and E and combined them.

However, the scene didn't look clean, so I decided to keep only the edges, to add a little more detail to the model.

However, we ignored the fact that a low-poly look like this works well from far away but not when the trees are large and close. From far away it looks fine:

But it doesn't look good up close. For example, when you are inside the forest and raise your head, all you can see is plain green above you. We need good textures. On top of that, we also need to preserve a certain educational value: trees come in all kinds of shapes, and the trees we chose, oak and maple, cannot be distinguished by shape alone.

To be more specific, here's one of the screenshots of our early prototype:

Let's ignore all the other factors and only look at the green blob. It looks very plain, and it certainly doesn't look good from underneath. From underneath a tree, you should see this:

(Image from arbtech.co.uk)

So we decided to go back to our texture study. Referencing tricks that have been used in games, we tested this:

We gave up on this mainly because of its poly count. This tree, which looks fairly leafy, takes about 3,000 triangles. 75K / 3K = 25, and 25 trees in the viewport is a very small number for a forest.

One of the reasons the triangle count was high is that the way I placed the branches was not clever or efficient enough. Around this time, SpeedTree was recommended to us. It is a great tool for making trees for games and film, but we chose not to use it, for several reasons: less control over triangle counts, a look that is more realistic than our style, and cost, since we need specific species of trees and would have to purchase them from the SpeedTree store. (Images from the SpeedTree asset store)

These are assets from SpeedTree. As you can see, the triangle count on the left is nice and low, but the tree is not leafy enough for a forest. The one on the right looks great, but it has a lot of triangles.

Also, because I had no experience, there were several things I felt I lacked control over, which made me give up on this design:

  1. If the style of the trees moves toward the realistic end, the animals and plants need to match that style. Making the animals fairly realistic without falling into the uncanny valley requires more work on modeling, texturing, rigging and animation, and with only three months, I am not sure we could finish on time.
  2. We tested this tree on the Google Pixel, and at the time, the areas where the leaves overlapped looked strange. It also had white borders, and I was not sure we could solve that problem. Was it a limitation of the Pixel? I did not know. One thing I did notice was that none of the Daydream apps I checked used this method, so I suspect the Pixel's rendering is simply not good enough for it.

Those are the reasons I gave up. Because we were developing very quickly, I didn't dig deeper into the texture rendering problem, which I would encourage others to try.

 

After failing many times, I finally found a solution for the trees. Here's the model I made:

From observation, I found that it is usually hard to identify a tree from a distance, but from underneath you can see more details. So I decided to put the detailed texture on the underside of the canopy, so that when you look up you can see the branches and the shape of the leaves; this is the angle from which you usually see a tree in a forest. The underside texture is also fairly easy to make, since I want the textured branches to connect to the modeled branches, as you can see above.

We also made animations of leaves falling from the trees, so you can make the connection between a leaf and its tree, which is usually how people identify a tree in real life.

The way we made the trees ended up defining our art style.