Category: VR

Unreal Week 4: Interactive Lighting Prototype with Blueprint

Weekly Goal:

Build a prototype of interactive lighting and get familiar with Blueprint scripting.

 

Final Result Explanation:

I made Blueprints to model three different types of lights: point light, directional light, and spot light. All the lights are interactable: you can pick a light up, move and rotate it, and change its parameters. The lights can be turned on and off with a button, also built in Blueprint.
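The on/off button boils down to toggling the light component's visibility. My version is a Blueprint node graph; this is just a minimal C++ sketch of the same idea, with `ToggleLamp` as an illustrative name:

```cpp
#include "Components/LightComponent.h"

// Sketch of the on/off button logic: flip the light component's visibility.
// My actual version is built in Blueprint; ToggleLamp is an illustrative name.
void ToggleLamp(ULightComponent* Light)
{
    Light->ToggleVisibility(); // visible = on, hidden = off
}
```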

Point Light

Directional Light

Spot Light

 

Things I’ve learnt & Problems I’ve met

1. Fake mesh light

If you make the material unlit and set the mesh not to cast shadows, the light placed inside will pass through the object.

Also, I found that making the material unlit does not turn off the ‘cast shadow’ option. In the picture below, the spot light I made still casts a shadow on the desk, because there is a directional light in the scene.

To turn the shadows off completely, you need to deselect the dynamic & static shadow options in the Details panel.
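In C++ the same Details-panel settings look roughly like this (a minimal sketch; `MakeFakeLightShell` and `MeshComp` are illustrative names, and the unlit emissive material is assumed to be set up on the mesh already):

```cpp
#include "Components/StaticMeshComponent.h"

// Sketch: make a mesh into a "fake light" shell that the real light
// inside can shine through, matching the Details-panel options above.
void MakeFakeLightShell(UStaticMeshComponent* MeshComp)
{
    MeshComp->SetCastShadow(false);        // master "Cast Shadow" flag
    MeshComp->bCastDynamicShadow = false;  // "Dynamic Shadow" in the Details panel
    MeshComp->bCastStaticShadow = false;   // "Static Shadow" in the Details panel
    MeshComp->MarkRenderStateDirty();      // push the changes to the renderer
}
```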

2. Using Interfaces in Blueprint

A Blueprint Interface is a collection of one or more functions – name only, no implementation – that can be added to other Blueprints. Any Blueprint that has the interface added is guaranteed to have those functions. This means your Blueprint can call the interface on any object, as long as that object implements it, while each object's implementation can differ.

For example, in my case, I made an interface called Interact, and all the lights and buttons implement it. When my controllers touch an object, they call the Interact interface on that object, and each object responds in its own way.
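For reference, here is roughly what that interface and its call site would look like in C++ (a sketch; my actual interface is authored in Blueprint, and the file and function names are illustrative):

```cpp
#include "UObject/Interface.h"
#include "GameFramework/Actor.h"
#include "Interact.generated.h" // hypothetical generated header

// Roughly the C++ form of a Blueprint Interface named "Interact".
UINTERFACE(Blueprintable)
class UInteract : public UInterface
{
    GENERATED_BODY()
};

class IInteract
{
    GENERATED_BODY()

public:
    // Name only, no implementation here: each light or button supplies its own.
    UFUNCTION(BlueprintCallable, BlueprintImplementableEvent, Category = "Interaction")
    void Interact();
};

// Calling side, e.g. in the motion controller's touch handler:
static void NotifyTouched(AActor* TouchedActor)
{
    if (TouchedActor &&
        TouchedActor->GetClass()->ImplementsInterface(UInteract::StaticClass()))
    {
        // Dispatches to whatever implementation TouchedActor provides.
        IInteract::Execute_Interact(TouchedActor);
    }
}
```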

3. Get input from Vive controllers

This is the Blueprint I made at first. It is part of my HMD Pawn class; I enabled input during Event BeginPlay.

It does not work, because for a Pawn you need to use possession instead. Also, in Simulate mode you cannot get input until you press Possess in the toolbar.
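A minimal C++ sketch of the working setup: auto-possess the pawn so it receives input, and bind actions in SetupPlayerInputComponent (the class name and the "GripLeft" action mapping are illustrative):

```cpp
#include "GameFramework/Pawn.h"
#include "Components/InputComponent.h"
#include "HMDPawn.generated.h" // hypothetical generated header

UCLASS()
class AHMDPawn : public APawn
{
    GENERATED_BODY()

public:
    AHMDPawn()
    {
        // The fix: make sure the pawn is possessed so it receives input,
        // instead of calling Enable Input on BeginPlay.
        AutoPossessPlayer = EAutoReceiveInput::Player0;
    }

    virtual void SetupPlayerInputComponent(UInputComponent* InputComp) override
    {
        Super::SetupPlayerInputComponent(InputComp);
        // "GripLeft" is an assumed action mapping from Project Settings > Input.
        InputComp->BindAction("GripLeft", IE_Pressed, this, &AHMDPawn::OnGripPressed);
    }

private:
    void OnGripPressed()
    {
        // e.g. pick up the light the controller is touching
    }
};
```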

4. Map the experience to reality

When I first ran my prototype in VR, the room axes looked wrong: they were not aligned with Unreal's X, Y, and Z. The way to fix it is to run Vive room setup again.

I remeasured my playable area and marked it in my scene. It is 2.9 m × 1.6 m.

If the tracking origin is set to eye level: since my HMD Blueprint was built for a seated experience, you need to reset the camera height. When I put a controller on the real floor, it did not rest on the virtual floor. Unreal recommends a character height of 160 cm in this documentation: https://docs.unrealengine.com/latest/INT/Platforms/VR/ContentSetup/#vrcharactersettings. So if you set the Vive camera height to 80 (note: it is half the value you want), it will look right.

If the tracking origin is set to floor level: everything seems to work well. The headset orientation is different, though; this needs more observation.
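Here is a small C++ sketch of the two options above (the function names are mine; the 80 cm offset is half the 160 cm character height from the docs):

```cpp
#include "HeadMountedDisplayFunctionLibrary.h"
#include "Components/SceneComponent.h"

// Floor level: standing experience, no manual camera offset needed.
void ConfigureStandingVR(USceneComponent* CameraRoot)
{
    UHeadMountedDisplayFunctionLibrary::SetTrackingOrigin(EHMDTrackingOrigin::Floor);
    CameraRoot->SetRelativeLocation(FVector::ZeroVector);
}

// Eye level: raise the camera root by half the target character height
// (80 cm for the 160 cm recommended in the Unreal VR docs).
void ConfigureSeatedVR(USceneComponent* CameraRoot)
{
    UHeadMountedDisplayFunctionLibrary::SetTrackingOrigin(EHMDTrackingOrigin::Eye);
    CameraRoot->SetRelativeLocation(FVector(0.f, 0.f, 80.f));
}
```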

5. Very Stupid problem: cannot get controller input

I debugged this issue for a whole day… It turns out that if you don't turn on the controllers before the app starts, you cannot get their input. (Weirdly, the controllers were still being tracked.)

Always turn on the Vive before opening Unreal.

 

6. One last thing to note: Unreal has its own VR template. It implements teleportation, pick-up, and physics, and seems like a good template to use. I am not using it right now so that I learn the basics myself, but I may in the future.

Here’s a guide to it: http://www.tomlooman.com/vrtemplate/

Unreal Week 2: Day Scene, HDR, Post Processing

Weekly Goal:

Get familiar with lighting in Unreal. Learn about its structure and make a day scene with foliage.

Final Result Explanation:

This is a standing experience. The player can walk closer to the small houses, and doing so triggers a post-processing effect, like a memory.

During my daily noon walk I saw this shadow on a wall and decided to make something similar. The scene is a small European town. It's not the final product yet; I will keep working on this scene later. The time of day is around noon, which creates the sharp shadows.
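One possible way to wire the proximity trigger, sketched in C++ (mine is done in Blueprint; the class and property names here are illustrative):

```cpp
#include "Engine/TriggerBox.h"
#include "Engine/PostProcessVolume.h"
#include "MemoryTrigger.generated.h" // hypothetical generated header

// Sketch: an unbound post-process volume whose blend weight is raised
// while the player stands near a house, giving the "memory" look.
UCLASS()
class AMemoryTrigger : public ATriggerBox
{
    GENERATED_BODY()

public:
    // The unbound APostProcessVolume holding the memory look, set in the editor.
    UPROPERTY(EditAnywhere)
    APostProcessVolume* MemoryPostProcess = nullptr;

    virtual void NotifyActorBeginOverlap(AActor* OtherActor) override
    {
        Super::NotifyActorBeginOverlap(OtherActor);
        if (MemoryPostProcess)
        {
            MemoryPostProcess->BlendWeight = 1.f; // snap on; a Timeline could fade instead
        }
    }

    virtual void NotifyActorEndOverlap(AActor* OtherActor) override
    {
        Super::NotifyActorEndOverlap(OtherActor);
        if (MemoryPostProcess)
        {
            MemoryPostProcess->BlendWeight = 0.f; // back to the normal look
        }
    }
};
```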

Parts Breakdown:

Importing models created with Maya into Unreal

I modeled the house myself and tried importing it into Unreal. I wanted each house to be a little different, so I didn't combine the model into one mesh, which created a mess in the Content Browser.

So I combined parts into groups, while keeping some power to customize each house:

However, when copy-pasting the houses in the scene I found them hard to organize. I tried to put them into folders, but it seems you cannot copy folders.

Also, when importing into Unreal, the model's pivot is not kept. You have to place your model at the origin to get the right transformation, especially for rotation.

So I suggest assembling the models outside Unreal. Since I found some problems with the model I made, I am thinking of replacing it in the future; we'll see what trouble that causes. (As you can see below, the shadow behaves strangely.) By comparison, Unity's prefab system is really easy to use.

Using HDRi to light the scene

Unreal supports HDRi lighting in both its Sky Light and its post-processing.

The left one uses the Sky Light, and the right one is lit through post-processing (a setting called Ambient Cubemap; I also added ambient occlusion, since there are no shadows with the ambient cubemap alone). The first difference is that the Sky Light creates shadows (not very obvious in the picture above, but I noticed it below the roof and in the pavement shading). Another difference I found: the reflections are layered differently.

With the Sky Light, the reflection renders behind the scene; with post-processing, you can see it rendered in front of the scene. See the blurry highlight? (Left: Sky Light. Right: post-processing.)
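For reference, pointing a Sky Light at an HDRi cubemap from C++ looks roughly like this (in my scene it is set in the Details panel instead; the function name is illustrative):

```cpp
#include "Components/SkyLightComponent.h"
#include "Engine/TextureCube.h"

// Sketch: feed an HDRi cubemap to a Sky Light from code.
void UseHDRiSkyLight(USkyLightComponent* SkyLight, UTextureCube* HDRi)
{
    SkyLight->SourceType = SLS_SpecifiedCubemap; // use the cubemap, not the captured scene
    SkyLight->SetCubemap(HDRi);
    SkyLight->RecaptureSky(); // refresh the captured lighting
}
```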

 

Post-Process Volume:

In Unreal, you simply drag a Post Process Volume into the scene, and then you can add a lot of cool effects. It gives artists a lot of control to fine-tune the scene.

I put two post-process volumes in the scene. The smaller one has fancy chromatic aberration, vignette, grain, bloom, and lens flares. I only set small values, since high intensities make it look fake. I haven't tested how it feels inside VR.

The larger one only has auto exposure. I find it interesting: it keeps adjusting the exposure with a very natural transition. To demonstrate this better, I turned down the light in the scene; here's the result of entering and exiting the volume:
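Roughly what the smaller volume's overrides look like in C++ (a sketch; the intensity values are placeholders for the "small values" I mentioned, not my exact numbers):

```cpp
#include "Engine/PostProcessVolume.h"

// Sketch: enable and set the same overrides I ticked in the Details panel.
void ConfigureMemoryVolume(APostProcessVolume* Volume)
{
    FPostProcessSettings& S = Volume->Settings;

    S.bOverride_SceneFringeIntensity = true;  // chromatic aberration
    S.SceneFringeIntensity = 0.5f;

    S.bOverride_VignetteIntensity = true;
    S.VignetteIntensity = 0.4f;

    S.bOverride_GrainIntensity = true;        // film grain (UE4 naming)
    S.GrainIntensity = 0.2f;

    S.bOverride_BloomIntensity = true;
    S.BloomIntensity = 1.0f;

    S.bOverride_LensFlareIntensity = true;
    S.LensFlareIntensity = 0.5f;
}
```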

 

Other tricks I learned

  • If the master material is too complicated and you only want to use some of its texture parameters, you can plug a plain white texture into the unused ones instead of creating a new material.
  • Keep your naming convention consistent, since Unreal is strict about repeated names.

Unsolved Problem:

  • When I built the lighting after importing the trees, it gave me errors complaining about overlapping UVs. By default, Unreal imports a model with its lighting set to static. In Unity, you can generate lightmap UVs in the engine without authoring another UV channel. I'm still trying to figure out what to do.

 

Credits:

HDRi from HDRiHaven: https://hdrihaven.com/

Low-poly tree by AnthonyVD: https://skfb.ly/6sSUr

Street by 3DMaesen: https://skfb.ly/6wtQu

Textures from Quixel Megascans: https://megascans.se/

 

I found a page that lists sources of HDRi maps:

https://www.daz3d.com/forums/discussion/57531/list-of-sites-with-free-hdri

The creator of HDRiHaven has an old post about how to correctly use HDRi:

There seems to be a bug in Unreal: the assets from HDRiHaven cannot be imported directly. The import generates an asset error, but no message is shown in the message log. Resaving the file with Photoshop works.

 

Mobile VR: Wrong Choices in Art Pipeline That Caused Problems

There are a bunch of decisions that I regret and that could have been avoided if I had known better ahead of time. Here I share with you by far the worst decisions I made for Project Voyage.

The consequences of not keeping the same unit across different software

The intention was good: because we are making an educational experience, we originally wanted everything to be real-life size. I thought, OK, if that's the case, the scale numbers in Maya will be enormous, but I also felt that doing everything that way would make the asset sizes easier to manage. So I made the whole scene about 100 m by 100 m.

However, it created a lot of problems in our pipeline. Basically, when imported into Unity, the scene was HUGE, and we didn't notice at first.

  1. Clipping problems: far plane vs. near plane

When we rendered the scene, we found that on the Pixel, overlapping faces behaved strangely: they flickered, as if Unity could not tell which face was in front of the other. It was Z-fighting on the Pixel.

The reason, we found, is that depth precision depends on the ratio between the far plane and the near plane: the depth buffer divides the visible range according to that ratio.

(Image Illustration)

Because we needed to see far away, our far plane was about 10000. And because we needed to see the Google Daydream controller up close, our near plane needed to be about 0.02. Because that ratio is so large, the scene could not render properly.
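Here is a rough sketch of the math, assuming a standard perspective projection with a [0, 1] depth buffer (my general understanding of depth buffers, not Unity-specific documentation):

```latex
% Eye-space depth z in [n, f] maps to a buffer value z_b in [0, 1]:
z_b = \frac{f}{f-n}\left(1 - \frac{n}{z}\right)
% Most of the buffer's precision lands near the near plane, and what is left
% for distant geometry shrinks as f/n grows. In our case:
\frac{f}{n} = \frac{10000}{0.02} = 500{,}000
```

With a ratio like that, distant faces that are close together get quantized to the same depth value, which is exactly the flickering we saw.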

So our kind programmer scaled the scene down to 0.1 (I need to check the exact value) and kept working.

2. Unity crashes every time I try to bake the lightmap

But it seems 0.1 wasn't small enough. When we finally chose forward rendering and were ready to bake lightmaps in Unity, it always crashed during the bake. I found that if I scaled the scene down further, it could bake the lightmap (though it still had a weird black tint), so the scale was the problem.

Also, because by that time everything had been set up at that scale, and it would have taken a long time to adjust everything, we decided to bake the lightmaps in Maya instead. Though the shadows baked by Maya were beautiful and I had full control over every aspect, it created about 2 hours of extra work every time we changed the scene, and it was not applied to the small assets (trees, animals, and plants were only baked with ambient occlusion). And also: if our programmers move the assets around in the scene, the lightmap is no longer correct. This is painful. To make things right, we found a way to export Unity scene assets into an FBX; we are still experimenting with it, and hopefully it will work.

3. Affecting the Doppler Effect of sound

Because the scene was built larger than life, objects have to move faster than their real-life counterparts to cover the same relative distance, which exaggerates the Doppler shift Unity applies to moving audio sources.
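As a sketch of why scale matters here (the standard Doppler formula, with c the speed of sound and v_r the source's speed toward the listener; this is general physics, not Unity documentation):

```latex
f' = f \cdot \frac{c}{c - v_r}
% If the scene is built s times larger than real life, a source must move
% s times faster to cross it, so v_r and the resulting pitch shift are
% exaggerated by the same factor.
```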

Accidentally Froze the Transformations of the Whole SCENE

By the time I realized, I had already pressed the button. When I finally put everything in the scene, I selected the whole scene and chose Freeze Transformations. It cleared all the TRANSFORMATION information on every object: how much each object had been rotated, scaled, and translated from the original model was lost…

Be careful about these small things; they matter a lot for scene management. Extra time was wasted.

  1. No longer easy to replace models with one click

When our other artist wanted to make small modifications to the tree models, we could have clicked a ‘replace A with B’ button and magic would have happened. However, because I froze the transformations, we instead had to import each new tree, move it into position, and then adjust its scale and rotation.

To mend that, we tried the snap tool and some simple scripts, which kind of helped, but all this time was unnecessary. It could have been easy.

 

Mobile VR Art Development of Trees, Project Voyage

Introduction

This semester I am working as a 3D artist on Project Voyage. (Website link: http://www.etc.cmu.edu/projects/voyage/)

Our project explores collaborative VR in a classroom setting. We are putting a whole classroom into virtual reality, with students on Google Daydream and the teacher on an iPad, observing and trying to solve all the problems that come up along the way.

The school we work with, Cornell High School in Pittsburgh, has about 15 Google Daydreams, which makes our project possible. We are working with two teachers, one Social Studies teacher and one Science teacher, around the topic of the deciduous forest biome. We chose this topic after discussion with the teachers: it is relevant to both subjects, and Pittsburgh lies within the deciduous forest biome, so we would like the students to connect what they learn in school to their real lives.

This blog is about the problems the art side encountered when working with Google Daydream, and some experience with the pipeline between Maya and Unity.

 

Google Pixel Capability

To be clear, the device we use is Google Pixel & Daydream 1.

Before we went into development, we did several tests on Google Daydream. Here are some observations:

1. 75K: this is the number of TRIANGLES that the Google Pixel can render smoothly in a single field of view. We can push up to 90K triangles, but 75K is the safe amount; above that, it starts to lag. That is understandable: this is not just mobile, it is mobile VR.

We chose to stay in the safe zone instead of pushing to the boundary, to leave headroom for other things that may lower performance.

2. Bad shadows: real-time shadow rendering is poor overall, so it is better to use baked shadows.

3. Anti-aliasing and rendering: there are two rendering paths, forward rendering and deferred rendering. Forward rendering needs more calculation power per light but supports MSAA anti-aliasing, while deferred rendering is faster with multiple lights but does not support MSAA, which is very important for making the scene look clean. We chose forward rendering with one directional light in the scene.

4. Alpha cutout has white borders: we haven't solved this problem yet. For some assets we use the Unity Standard shader in cutout mode. With forward rendering and 4x anti-aliasing, these models look good up close, but the far-away ones have white borders. (When we were using deferred rendering, all the alpha-cutout models had white borders, so I think anti-aliasing is part of the answer.)

 

Software and Plugins for Art:

We use Maya for modeling and Substance Painter for texturing. We are planning to use Akeytsu for rigging and animation, and if that fails, we will use Maya's rigging & animation system.

A useful tool we used to place the plants & trees is called spPaint3D. Basically, you can use it to paint models onto a surface; in our case, we painted the assets onto the terrain. Download link: https://www.highend3d.com/maya/script/sppaint3d-for-maya

 

Development Problem Solving

Deciding on the Art Style: Making Low-Poly Art Work on Google Pixel VR, Especially for TREES

The Google Pixel is not very capable, and we are building a forest, so we need to keep each asset's polygon count as low as possible. Here I will take you along the journey I went on.

Because we also want to preserve a certain educational value, we did some tests on the textures. (Since I didn't have any experience making low-poly trees, a lot of things didn't work in the early stage.) Also, because I was thinking too much about making the art style low-poly, I tried triangulating the textures. Here is the first test I did on an oak tree:

As you can see, among A–F, only A has a shape similar to a real oak tree. And if you apply the same texturing to the sphere-looking leaves, you can easily see through them and realize the tree is hollow. So I picked A and E and combined them.

However, the scene didn't look clean, so I decided to keep only the edges to add a little more detail to the model.

However, we ignored the fact that a low-poly look like this works well from far away, but it doesn't work when the trees are big and close. Far away it looks fine:

But it doesn't look good up close. For example, you are inside the forest and you raise your head, but all you can see is plain green above. We need good textures. On top of that, we also need to preserve educational value: trees have all kinds of shapes, and the trees we chose, oak and maple, cannot be distinguished by shape alone.

To be more specific, here's a screenshot of our early prototype:

Let's ignore all the other factors and only look at the green blob. It looks very plain, and it certainly doesn't look good from underneath. From underneath a tree, you should see this:

(image from arbtech.co.uk)

So we decided to go back to our texture study. Referencing tricks that have been used in games, we tested this:

We gave up on this mainly because of its poly count. This tree, which looks fairly leafy, takes about 3,000 triangles. 75K / 3K = 25, and 25 trees in the viewport is a very small number for a forest.

One reason the triangle count is high is that the way I placed the branches was not clever and accurate enough. Around this time, SpeedTree was recommended to us. It is a great tool for making trees for games & cinema, but we chose not to use it for several reasons: less control over triangle counts, an overly realistic look, and cost, because we need specific tree species and would have to purchase them from the SpeedTree store. (Images from the SpeedTree asset store.)

These are assets from SpeedTree. As you can see, the triangle count on the left is nice, but it's not leafy enough for a forest tree. The one on the right looks great, but it has a lot of triangles.

Also, because I had no experience, there were several things I felt I lacked control over, which made me give up on this design:

  1. If the style of the trees moves toward the realistic end, the animals and plants need to be made in the same style. For the animals, making them fairly realistic without being uncanny-valley requires more work on modeling, texturing, rigging, and animation. Given the amount of work we need to finish within 3 months, I was not sure we could finish on time.
  2. We tested this tree on the Google Pixel, and at that time the areas where the leaves overlapped looked weird. It also had white borders, and at that time I was not sure we could solve this problem. Was it a limitation of the Pixel? I was not sure. One thing I did notice was that, of all the Daydream apps I checked, none of them were using this method. So I suppose the Pixel's rendering was not good enough for this method.

These are the reasons I gave up. Because we were developing very fast, I didn't dig deeper into the texture rendering problem, which I would encourage others to try.

 

After failing many times, I finally found a solution for the trees. Here's the model I made:

From observation, I found that it is usually hard to identify a tree from a distance, but from underneath you can see more details. So I decided to put detailed textures at the bottom of the tree: when you look up, you can see the branches and the shape of the leaves, which is the angle you usually see a forest tree from. The texture underneath the tree is fairly easy to make, since I wanted the textured branches to connect to the modeled branches, as you can see above.

We also made animations of leaves falling from the tree, so you can make the connection between the leaves and the tree, which is usually how people identify trees in real life.

The way we made the trees defined our art style.