Author: nickyh5_wp

Unreal Week 4: Interactive Lighting Prototype with Blueprint

Weekly Goal:

Build a prototype of interactive lighting and get familiar with Blueprint scripting.

 

Final Result Explanation:

I made Blueprints to model three different types of lights: point light, directional light, and spot light. All the lights are interactable: you can pick up a light, move and rotate it, and change its parameters. The lights can also be turned on and off with a button, which is likewise built with Blueprint.

Point Light

Directional Light

Spot Light

 

Things I’ve learnt & Problems I’ve met

1. Fake mesh light

If you make the material unlit and set the mesh to not cast shadows, the light placed inside will pass through the object.

I also found that even with the material unlit, the mesh can still cast shadows unless the shadow options are turned off. In the picture below, the spot light mesh I made still casts a shadow on the desk, because there is a directional light in the scene.

To turn the shadow off completely, you need to deselect the dynamic and static shadow options in the Details panel.
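For reference, the same switches exist on the mesh component in C++; a minimal sketch (MeshComponent here stands for whichever static mesh component the fake light mesh uses):

    // Equivalent of unchecking the shadow options in the Details panel (flags live on UPrimitiveComponent).
    MeshComponent->SetCastShadow(false);        // master "Cast Shadow" flag
    MeshComponent->bCastDynamicShadow = false;  // dynamic shadows
    MeshComponent->bCastStaticShadow = false;   // static (baked) shadows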

2. Using Interface in Blueprint

A Blueprint Interface is a collection of one or more functions – name only, no implementation – that can be added to other Blueprints. Any Blueprint that has the Interface added is guaranteed to have those functions, which means your Blueprint can call the interface on any object that implements it; the actual implementation can vary from object to object.

For example, in my case I made an interface called Interact, and all the lights and buttons implement it. When my controller touches an object, it calls the Interact interface on that object, which triggers the right behavior for that object.
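For readers who prefer text to node graphs, here is a rough C++ equivalent of the same idea; my project does this purely in Blueprint, so the class and function names below are only an illustrative sketch:

    // InteractInterface.h -- hypothetical C++ version of the Blueprint "Interact" interface.
    #pragma once

    #include "CoreMinimal.h"
    #include "UObject/Interface.h"
    #include "InteractInterface.generated.h"

    UINTERFACE(BlueprintType)
    class UInteractInterface : public UInterface
    {
        GENERATED_BODY()
    };

    class IInteractInterface
    {
        GENERATED_BODY()

    public:
        // Declaration only; each light or button supplies its own implementation.
        UFUNCTION(BlueprintNativeEvent, BlueprintCallable, Category = "Interaction")
        void Interact();
    };

    // Caller side, e.g. in the motion controller's touch/overlap handler:
    //     if (OtherActor->GetClass()->ImplementsInterface(UInteractInterface::StaticClass()))
    //     {
    //         IInteractInterface::Execute_Interact(OtherActor);
    //     }

The nice part is that the caller never needs to know whether it touched a light or a button; it only asks whether the object implements the interface.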

3. Get input for Vive Controllers

This is the blueprint I made at first. It is part of my HMD pawn class. I enabled input during Event BeginPlay.

It did not work, because in a Pawn you need to rely on possession instead. Also, in simulation mode you cannot get input until you press Possess in the toolbar.
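In C++ terms, the fix amounts to letting the player controller possess the Pawn and binding the input there; a hedged sketch (the pawn class and the "TriggerPress" action mapping are made-up names):

    // MyHMDPawn.cpp -- input only arrives once a player controller possesses this Pawn.
    AMyHMDPawn::AMyHMDPawn()
    {
        // Same as setting Auto Possess Player = Player 0 in the Details panel.
        AutoPossessPlayer = EAutoReceiveInput::Player0;
    }

    void AMyHMDPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
    {
        Super::SetupPlayerInputComponent(PlayerInputComponent);
        // "TriggerPress" is an example action mapping defined in Project Settings > Input.
        PlayerInputComponent->BindAction("TriggerPress", IE_Pressed, this, &AMyHMDPawn::OnTriggerPressed);
    }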

4. Map the experience to reality

When I first ran my prototype in VR, the room axes looked wrong: they were not aligned with Unreal's X, Y, and Z. The fix was to run the Vive room setup again.

I re-measured my playable area and marked it in my scene; it is 2.9 m × 1.6 m.

If the tracking origin is set to eye level: since my HMD Blueprint was built for a seated experience, the camera height needs to be adjusted. When I put a controller on the real floor, it did not rest on the virtual floor. Unreal recommends a character height of 160 cm in this documentation: https://docs.unrealengine.com/latest/INT/Platforms/VR/ContentSetup/#vrcharactersettings. So if you set the VR camera height to 80 (note that it is half the value you want), it will look OK.

If the tracking origin is set to floor level: everything seems to work well. The headset orientation is different, though; this needs more observation.
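If you set the origin from code instead of the project settings, the call looks roughly like this (the pawn class name is hypothetical; I set mine up in Blueprint):

    #include "HeadMountedDisplayFunctionLibrary.h"

    void AMyHMDPawn::BeginPlay()
    {
        Super::BeginPlay();
        // Floor-level tracking: the virtual floor matches the real floor, no manual camera offset needed.
        UHeadMountedDisplayFunctionLibrary::SetTrackingOrigin(EHMDTrackingOrigin::Floor);
        // For a seated experience, use EHMDTrackingOrigin::Eye instead and offset the camera by
        // roughly half the character height (about 80 cm for a 160 cm character, per the docs above).
    }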

5. Very Stupid problem: cannot get controller input

I debugged this issue for a whole day… It turns out that if you don't turn on the controllers before the app starts, the app cannot get their input. (Weirdly, the controllers were still being tracked.)

Always turn on the Vive before opening Unreal.

 

6. One last thing to note: Unreal's own VR template. It implements teleportation, pick-up, and physics, and seems like a good template to use. I am not using it right now, for learning purposes, but may do so in the future.

Here’s a guide to it: http://www.tomlooman.com/vrtemplate/

Unreal Week 3: Material Network Shaders and PBR Knowledge Review

Cel Shader

Water Shader

Weekly Goal:

Original: Write HLSL shader using custom node in Unreal.

Current: Learn more about basic concepts and get familiar with composing shaders with Unreal’s Material Network.

 

Final Result Explanation:

I followed the two tutorials below from the Unreal wiki as a starting point for getting familiar with Unreal shaders.

The two tutorials:

https://wiki.unrealengine.com/Cel_Shading_Post_Process

https://wiki.unrealengine.com/Water_Shader_Tutorial

The cel shader breaks the lighting down into a series of bands during post-processing. The water shader uses the time to pan the normal maps, creating animation on the water surface.
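The banding itself boils down to quantizing the diffuse lighting term; one common way to write it, with $n$ the number of bands (my paraphrase of the idea, not the exact node graph from the tutorial):

    L_{\text{banded}} = \frac{\lfloor\, n \cdot \mathrm{saturate}(N \cdot L) \,\rfloor}{n}

so a smooth 0-to-1 lighting gradient collapses into flat steps.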

 

Learnings:

Switching the plan halfway

I changed my plan in the middle of the week. At first, I was thinking of writing an HLSL shader in Unreal. However, it is not a good first step:

  1. You need to monitor performance yourself, since Unreal doesn't optimize the code in a Custom node, so use a profiler to help you.
  2. It is hard to debug.

It is recommended that you create shaders with Unreal's Material Editor. You can build most shaders using Unreal's material expressions.

The tutorial I watched part of was:  https://www.youtube.com/watch?v=HaUAfgrZjlU

For future use, here's a Microsoft tutorial on debugging a shader: https://msdn.microsoft.com/en-us/library/dn217886.aspx

 

Review PBR & Linear Workflow

Here’s a great explanation about the science behind PBR.

 And here’s another tutorial about why we have to use Linear Workflow from Nvidia

https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch24.html

Something to Keep in mind:

  • By convention, all JPEG files are precorrected for a gamma of 2.2 (see the conversion below).
  • For Substance Painter, all maps are to be interpreted as linear except for base color, diffuse, and specular.
  • Do the gamma correction as the last step of the last post-processing pass.
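As a reminder of what the 2.2 actually does, the usual approximate conversions between gamma (sRGB) space and linear space are:

    C_{\text{linear}} \approx C_{\text{sRGB}}^{\,2.2}, \qquad C_{\text{sRGB}} \approx C_{\text{linear}}^{\,1/2.2}

(the real sRGB curve is piecewise, but the 2.2 power is the common approximation). Lighting math should happen on the linear values; the encode back to gamma space is the correction done at the very end.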

Unreal Week 2: Day Scene, HDR, Post Processing

Weekly Goal:

Get Familiar with the lighting in Unreal. Learn about its structure and make a day scene with foliage.

Final Result Explanation:

This is a standing experience. The player can walk closer to the small houses, which triggers a post-processing effect that feels like a memory.

During my daily noon walk I saw this shadow on a wall and decided to make something similar. The scene is a small European town. It's not the final product yet; I will keep working on this scene later in the plan. The time of day is around noon, which creates the sharp shadows.

Parts Breakdown:

Importing models created with Maya into Unreal

I made the house model myself and imported it into Unreal. I wanted each house to be a little different, so I didn't combine the model into one mesh, which created a mess in the Content Browser.

So I combined parts together in a way that still leaves me some power to customize each house:

However, when copy-pasting houses in the scene, I found them hard to organize. I tried to put them into folders, but it seems that you cannot copy folders.

Also, when importing into Unreal, the model's pivot is not kept. You have to place your model at the origin to get the right transformation, especially rotation.

So I suggest assembling the models outside Unreal. Since I found some problems with the model I made, I am thinking of replacing it in the future, and we'll see what trouble that causes. (As you can see below, the shadow behaves strangely.) By comparison, Unity's prefab system is really easy to use.

Using HDRi to light the scene

Unreal supports HDRI lighting in both its skylight and its post-processing.

The left one uses the skylight, and the right one uses the post-process volume (called an ambient cubemap; I also added ambient occlusion, since there's no shadow with the ambient cubemap alone). The first difference is that the skylight creates shadows (not very obvious in the picture above, but I noticed it in the shading below the roof and on the pavement). Another difference I found is that the reflections are layered differently.

With the skylight, the cubemap renders behind the scene. With the post-process version, you can see it rendered in front of the scene – see the blurry highlight? (Left: skylight. Right: post-processing.)

 

Post-Process Volume:

In Unreal, you simply drag a post-process volume into the scene, and then you can add a lot of cool effects. It gives artists a lot of controls to better tune the scene.

I put two post-process volumes in the scene. The smaller one has fancy chromatic aberration, vignette, grain, bloom, and lens flares. I only set small values, since high intensities make it look fake. I haven't tested how it feels inside VR yet.

The larger one only has auto exposure. I find it interesting, since it adjusts the exposure with a very natural transition. To demonstrate, I turned down the light in the scene; here's the result of entering and exiting the post-process volume:

 

Other tricks I learned

  • If the master material is too complicated and you only want to use some of its texture parameters, you can plug in a plain white texture instead of creating a new material.
  • Keep your naming convention consistent, since Unreal is strict about repeated names.

Problem Unsolved:

  • When I built the lighting after importing the trees, it gave errors complaining about overlapping UVs. By default, Unreal imports a model with its lighting set to static. In Unity, you can generate lightmap UVs in the engine without authoring another UV channel. I'm still trying to figure out what to do.

 

Credits:

HDRi: from HDRiHaven  https://hdrihaven.com/

Low poly tree by AnthonyVD: https://skfb.ly/6sSUr

Street by 3DMaesen: https://skfb.ly/6wtQu

Textures from quixel megascan: https://megascans.se/

 

Found a place which lists resources of HDRi maps:

https://www.daz3d.com/forums/discussion/57531/list-of-sites-with-free-hdri

The creator of HDRiHaven has an old post about how to correctly use HDRi:

There seems to be a bug in Unreal: assets from HDRiHaven cannot be imported. The import generates an asset error, but no message is shown in the message log. Resaving the file with Photoshop works.

 

Unreal Week 1: Getting Familiar with Unreal Engine

Weekly Goal:

Go through the interface. Understand the setup of a level and of VR. Build a program for the HTC Vive. Make the player interact with the scene using the controller.

 

Final Result Explanation:

This is a seated experience. The player sits on a chair, and there's an alien cube in front of them. Touching it with the controller turns on the lights outside.
The two images below show the scene before and after the lights turn on.

Parts Breakdown:

Vive Setup:

It is very easy to set up the Vive: keep SteamVR running and you can access the headset and controllers. No special settings are needed; however, you do need to write your own Blueprint to use them.

I created one according to the official tutorial. I also imported the Vive controller model. One problem I encountered: I scaled the controller model because it wasn't the right size inside Unreal, and as a result the controller did not move correctly.

The fix is to scale the model to the right size outside Unreal, then import it at scale 1.

There’s more Vive-related control that I haven’t explored.

 

Material Network:

The material network is similar to Maya's. Compared to Unity, it is more powerful and also more complicated, and it allows a lot of artistic freedom. (During my study, I found that Unity is also adding something similar in its newest beta version.)

If you install the Substance plugin, you can directly import .sbsar files into Unreal, and they will be extracted into texture files. Remember to set the textures to the right color space (and flip the normal map's green channel).

One thing I really like about Unreal is its master/instance material hierarchy. When creating a master material you expose parameters, and an instance of that material can only modify those parameters. It's a great way to manage repeated shaders: when the master changes, all the instances change too.
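Instances are normally created and edited in the editor, but the same parameter idea is also exposed at runtime through dynamic material instances; a small hedged C++ sketch (the "Roughness" parameter name is only an example and has to match a parameter exposed by the master material):

    // Create a runtime instance of a master material and override one exposed parameter.
    UMaterialInstanceDynamic* Instance =
        UMaterialInstanceDynamic::Create(MasterMaterial, this);   // MasterMaterial: UMaterialInterface*
    Instance->SetScalarParameterValue(TEXT("Roughness"), 0.25f);  // only exposed parameters can be changed
    MeshComponent->SetMaterial(0, Instance);                      // assign to the mesh's first material slot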

I created different kinds of materials this week:

Glass – translucent

Plants & Leaves – masked, translucent & two sided

Lamp – emissive

Landscape – using special nodes to blend two textures together (needs further study)

 

Simple Blueprint:

Blueprint is another powerful tool in Unreal. Without writing any code you can create interactions in the game:

This is a simple implementation of touching the cube and triggering the lights to toggle visibility.
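For comparison, the same logic written as C++ looks roughly like this (class, component, and function names are placeholders for what the Blueprint does with an overlap event and a Toggle Visibility node):

    // Bind the overlap event at startup, then toggle the lights when the controller touches the cube.
    void AAlienCube::BeginPlay()
    {
        Super::BeginPlay();
        // OnTouched must be declared as a UFUNCTION() in the header for AddDynamic to bind it.
        TriggerBox->OnComponentBeginOverlap.AddDynamic(this, &AAlienCube::OnTouched);
    }

    void AAlienCube::OnTouched(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                               UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                               bool bFromSweep, const FHitResult& SweepResult)
    {
        // TargetLight is a ULightComponent* referencing the outside lights.
        TargetLight->ToggleVisibility();
    }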

 

Creating Landscape:

I made a simple landscape with Unreal's built-in tool. It's similar to Unity's, except for the material part: you need to set up the textures in the landscape's material before you can paint with them.

 

Lighting Overview:

I went over the different lighting types and their parameters. I also did some research on Unreal's advantages over Unity for lighting:

What Unreal is better at:

  • Linear color space
  • Distance field (signed distance) shadows
  • PBR
  • Post-processing

Unreal is very good for creating realistic environments. Unity is easier to use; to learn Unreal you need to learn its workflow.

 

Credits:
Models:

forest by RubenBuchholz644c9d600cf24bcb on sketchfab – https://sketchfab.com/models/b1aec792d03a4d11a91cc4c0d7c8fb7e

room by Anex on sketchfab – https://sketchfab.com/models/6417cbc1870a4a1691cca06912ae0369

Textures & Materials:

textures.com

substance share forum

quixel megascan

Mobile VR: Wrong Choices in Art Pipeline That Caused Problems

There are a bunch of decisions that I regret and that could have been avoided if I had known better ahead of time. Here I share, by far, the worst decisions I made for Project Voyage.

The consequences of not keeping the same units across different software

The intention was good: because we are making an educational experience, we originally wanted everything to be real-life size. I thought, OK, if that's the case, the scale in Maya will just be a big number, and it would also make it easier to manage the sizes of the assets. So I made the whole scene about 100 m by 100 m.

However, it created a lot of problems in our pipeline. Basically, when imported into Unity, the scene is HUGE, and we didn't notice that at first.

  1. Clipping problem: Far plane, near plane.

When we rendered the scene, we found that on the Pixel, overlapping faces behaved strangely: they flickered, as if Unity could not tell which face was in front of the other. It was Z-fighting on the Pixel.

The reason we found was that Unity distributes depth precision according to the ratio of the far plane to the near plane; the depth range is divided up based on that ratio.


Because we needed to see far away, our far plane was about 10000. And because we needed to see the Google Daydream controller up close, our near plane needed to be about 0.02. Because the ratio was too large, the scene could not render properly.
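A rough way to see why (writing $f$ for the far plane and $n$ for the near plane): with a standard perspective depth buffer, precision gets worse roughly in proportion to the far/near ratio, because most of the depth values are spent very close to the near plane. In our case

    \frac{f}{n} = \frac{10000}{0.02} = 500{,}000

while a more typical camera (say $f = 1000$, $n = 0.3$) has a ratio of only a few thousand. Shrinking the scene, or raising the near plane, attacks exactly this ratio.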

So our kind programmer scaled the scene down (to roughly 0.1, if I remember correctly) and we kept working.

2. Unity crashes every time I try to bake a lightmap

But it seems that 0.1 isn't small enough. When we finally chose forward rendering and were ready to try baking lightmaps in Unity, it crashed every time during the bake. I found that if I scaled the scene down further, it could bake the lightmap (though it still had weird black artifacts), so the scale was the problem.

Also, because by that time everything had been set up at that scale, and it would have taken a long time to adjust everything, we decided to bake the lightmaps in Maya instead. Though the shadows baked in Maya were beautiful and gave me full control over every aspect, it created about two hours of extra work every time we changed the scene, and it was not applied to the small assets (trees, animals, and plants were only baked with ambient occlusion). And also: if our programmers move the assets around in the scene, the lightmap will no longer be correct. This is painful. To make things work, we found a way to export Unity scene assets into an FBX; we are still experimenting with it, and hopefully it will work.

3. Affecting the Doppler Effect of sound

(I haven't dug into the details, but Unity's Doppler calculation presumably assumes one unit is one meter, so in an oversized scene the apparent velocities, and therefore the pitch shifts, come out exaggerated.)

Accidentally Froze the Transformations of the Whole SCENE

By the time I realized what I'd done, I had already pressed the button. When I finally put everything into the scene, I selected the whole scene and chose Freeze Transformations. It cleared all the TRANSFORMATION information on every object, which means the record of how much each object had been rotated, scaled, and translated from the original model was lost…

Be careful about these small things; they are very important for scene management. Extra time was wasted.

  1. No longer easy to replace the models by a click

When our other artist wanted to make small modifications to the tree models, we could have clicked a "replace A with B" button and magic would have happened. However, because I froze the transformations, we instead needed to import the new tree, move it into position, and then adjust its scale and rotation.

To mend that, we tried the snap tool and some simple scripts, which kind of helped, but all of this time was unnecessary. It could have been easy.

 

Mobile VR Art Development of Trees, Project Voyage

Introduction

This semester I am working as a 3D artist on Project Voyage (website: http://www.etc.cmu.edu/projects/voyage/).

Our project explores collaborative VR in a classroom setting. We are putting a whole classroom into virtual reality, with students on Google Daydream and the teacher on an iPad, and observing and trying to solve all the problems that come up along the way.

The school we work with, Cornell High School in Pittsburgh, has about 15 Google Daydreams, which makes our project possible. We are working with two teachers, one social studies teacher and one science teacher, around the topic of the deciduous forest biome. We chose this topic after discussion with the teachers: it is relevant to both subjects, and Pittsburgh lies within the deciduous forest biome, so we would like the students to connect what they learn in school to their real lives.

This blog is about the problems the art side encountered when working with Google Daydream, plus some experience with the pipeline between Maya and Unity.

 

Google Pixel Capability

To be clear, the devices we use are the Google Pixel and the first-generation Daydream.

Before we went into the development, we did several tests on Google Daydream. Here are some observations:

1. 75K: this is the number of TRIANGLES that the Google Pixel can render smoothly in a single field of view. We can push it up to 90K triangles, but 75K is the safe amount; above that, there will be lag. It is understandable: this is not just mobile, but mobile VR.

We chose to stay in the safe zone instead of pushing the boundary, to leave headroom for other things that may lower performance.

2. Bad shadows: the overall real-time shadow rendering is poor. It is better to use baked shadows.

3. Anti-aliasing and rendering path: there are two rendering paths, forward rendering and deferred rendering. Forward rendering costs more shading work as lights are added, but it supports MSAA; deferred rendering is faster with multiple lights but doesn't support MSAA, and anti-aliasing is very important for making the scene look nice. We chose forward rendering with a single directional light in the scene.

4. Alpha cutout has white borders: we haven't solved this problem yet. For some assets we use the Unity Standard shader in cutout mode. With forward rendering and 4x anti-aliasing, these models look good up close, but the far-away ones have white borders. (When we were using deferred rendering, all the alpha-cutout models had white borders, so I think anti-aliasing is part of the answer.)

 

Software and Plugins for Art:

We use Maya for modeling and Substance Painter for texturing. We are planning to use Akeytsu for rigging and animation, and if that fails, we will fall back to Maya's rigging and animation system.

A useful tool we used to place the plants and trees is called spPaint3D. Basically, you can use it to paint models onto a surface; in our case, we painted the assets onto the terrain. Download link: https://www.highend3d.com/maya/script/sppaint3d-for-maya

 

Development Problem Solving

Deciding on the Art Style — Make Low-poly Art work in Google Pixel VR, especially for TREES

The Google Pixel's rendering capability is not wonderful, and we are building a forest, so we need to keep the polygon count of each asset as low as possible. Here I will take you along the journey I went through.

Because we also want to preserve a certain educational value, we did some tests on the textures. (Because I didn't have any experience making low-poly trees, a lot of things didn't work in the early stage.) Also, because I was thinking too much about making the art style low-poly, I tried triangulating the textures. Here is the first test I did of an oak tree:

As you can see, among A–F, only A has a shape similar to a real oak tree. And if you apply the same kind of texturing to the sphere-looking leaves, you can easily see through them and realize the canopy is hollow. So I picked A and E and combined them.

However, the scene didn't look clean, so I decided to keep only the edges, to add a little more detail to the model.

However, we ignored the fact that a low-poly look like this works well from far away, but it doesn't work well when the trees are large and close. From far away it looks fine:

But it doesn't look good up close. For example, when you are inside the forest and you raise your head, all you can see is plain green color above. We need good textures. On top of that, we also need to preserve a certain educational value: trees have all kinds of shapes, and the trees we chose, oak and maple, cannot be distinguished by shape alone.

To be more specific, here's one of the screenshots of our early prototype:

Let’s ignore all the other factors and only look at the green blob.  It looks very plain and it certainly doesn’t look good from underneath. From underneath the tree, you should see this:

(image from arbtech.co.uk)

So we decided to go back to our texture study. Referencing tricks that have been used in games, we tested this:

We gave up on this mainly because of its poly count. This tree, which looks fairly leafy, takes about 3,000 triangles. 75K / 3K = 25, and 25 trees in the viewport is a very small number.

One reason the triangle count is so high is that the way I placed the branches was not clever or efficient enough. At this point, SpeedTree was recommended to us. SpeedTree is a great tool for making trees for games and film, but we chose not to use it for several reasons: less control over triangle counts, a realistic look that doesn't match our style, and cost, since we need specific types of trees and would have to purchase them from the SpeedTree store. (Images from the SpeedTree asset store.)

These are assets from SpeedTree. As you can see, the triangle count of the one on the left is nice, but it isn't leafy enough for a forest tree. The one on the right looks great, but it has a lot of triangles.

Also, because I had no experience, there were several things I felt I lacked control over, which made me give up on this design:

  1. If the style of the trees moves toward the realistic end, the animals and plants need to be made in the same style. For the animals, making them fairly realistic without falling into the uncanny valley requires more work on modeling, texturing, rigging, and animation. Given the amount of work we need to finish within three months, I am not sure we could finish on time.
  2. We tested this tree on the Google Pixel, and at that time the areas where the leaves overlapped looked weird. It also had white borders, and I was not sure we could solve that problem. Was it a limitation of the Pixel? I wasn't sure. One thing I did notice was that, of all the Daydream apps I checked, none of them were using this method, so I suppose the Pixel's rendering is not good enough for it.

These are the reasons I gave up. Because we were developing at a very fast pace, I didn't dig deeper into the texture rendering problem, which I would encourage others to try.

 

After failing many times, I finally found a solution for the trees. Here's the model I made:

From observation, I found that it is usually hard to identify which tree you are looking at from a distance, but from underneath you can see more details. So I decided to put the detailed texture on the underside of the canopy, so that when you look up you can see the branches and the shapes of the leaves; this is usually the angle from which you see a tree inside a forest. The underside texture is also fairly easy to make, since I want the textured branches to connect to the modeled branches, as you can see above.

We also made animations of leaves falling from the trees, so you can make the connection between a leaf and its tree, which is usually how people identify trees in real life.

The way we made the trees ended up defining our art style.

Technical Animation Pieces 5: Rigid Body Simulation

The key idea of rigid body simulation is that you can use a single point and its state (orientation and movement) to represent the whole object. I used Houdini to dig further into this idea and made a destruction simulation.

The tutorial I used is: https://www.sidefx.com/tutorials/applied-rigid-body-destruction/

and my result looks like this:

Because these are just simple primitive shapes instead of a cool model, some of the effort in there cannot be seen without careful observation. Now I will explain step by step (without digging into the Houdini nodes) how to make this simulation look real.

  1. Pack Object

It is very natural to think of packing an object. Packing means storing all the mesh information in memory once and using a point to represent its state, with a reference back to that mesh. Packing makes preview runs faster, but it doesn't save real rendering time.

2. Give It a Bullet Solver

Here is the link explaining the Bullet solver in Houdini: http://www.sidefx.com/ja/docs/houdini/nodes/dop/bulletrbdsolver

In a word, the Bullet solver can represent the object with a simpler shape: it can use convex shapes with fewer points to stand in for the object, which is especially useful for collision detection.

3. Voronoi: Divide the object in space

Voronoi fracturing is very fast, and the pieces it divides the object into are very simple. Because of that, the simulation looks a little fake when you look closely; with fog and motion blur it might look OK.

The way to do this is to give the Voronoi fracture the original mesh and some control points around the mesh. Each point of the mesh is assigned to its nearest control point; that is, the mesh is divided along the bisecting planes between control points.
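Formally, each control point $p_i$ owns the Voronoi cell of everything closer to it than to any other control point:

    V_i = \{\, x \;:\; \lVert x - p_i \rVert \le \lVert x - p_j \rVert \ \text{for all } j \,\}

and the boundary between two adjacent cells is the bisecting plane between their control points, which is why the fragments come out flat-faced and convex.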

At this point, if you drop the object onto the ground from a height, it will break into pieces. However, if you give it an initial angular velocity, it breaks apart immediately. The reason is that the pieces are just separate bodies that start at their positions with the same velocity; they are not bound together.

Also, the pieces it generates are too regular. To fix that, you can use the Voronoi node's own clustering, but it is not good at all: it can create interpenetration between pieces, so it's not recommended.

4. Add constraints (glue constraints in houdini)

To bind the pieces together, after the Voronoi fracture you can create connections that link adjacent pieces. With an appropriate search radius and maximum number of connections, you can create a beautiful link network. I deleted the connections that fell outside the mesh. After that, you can give the connections strengths; in Houdini you can write expressions to give connections a random chance of being extremely strong.

Connections inside the model

After that, you feed the constraints into the simulation. This improves both problems: the pieces no longer fly apart when given an angular velocity, and the part that hits the ground breaks the most, while the other parts stay as bigger chunks thanks to the constraints' strength.

5. Break it again with smaller pieces

By now we have regular pieces and big chunks, but there should be more small pieces falling off. To get this, the tutorial fractures random pieces again before they are assembled into another packed object.

The result looks like this:

It looks quite nice with all the details. The breaking time looks reasonable, and the pieces generated contain all kinds of shapes. It will look nicer with dust and smoke.

Technical Animation Pieces 4 — Skinning

Feb.8

Class Summary — Three Techniques of Skinning

LBS Linear Blend Skinning ( = SSD, skeletal subspace deformation)

The logic of LBS is very simple. Here is a reference I found that explains the method in detail: http://graphics.ucsd.edu/courses/cse169_w05/3-Skin.htm

Each joint has a weight on every vertex, reflecting its influence; the vertex's movement is the weighted blend of the joint transforms.
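Written out, the skinned position of a vertex $v$ is the weighted sum of where each bone would carry it, where $M_j$ is joint $j$'s current transform, $B_j$ its bind-pose transform, and the weights $w_j$ sum to 1:

    v' = \sum_j w_j \, M_j \, B_j^{-1} \, v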

MIT developed Pinocchio to demonstrate LBS. It first fits spheres inside the mesh to determine the best position for the skeleton.

LBS has artifacts. One of the biggest problems is the candy-wrapper problem, shown below.

[Image]

DQB is one way to solve this problem.

Dual Quaternion Blending

DQB represents each joint transform as a dual quaternion (a rotation plus a translation) and interpolates in that space. It is not much slower than LBS.

One problem is that it cannot handle extreme cases like twisting a joint 360 degrees. This can be solved by dividing the rotation into equal parts and blending them one step at a time.
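The blend itself looks almost like LBS, except that it averages unit dual quaternions and renormalizes instead of averaging matrices ($\hat{q}_j$ is the dual quaternion for joint $j$):

    \hat{q} = \frac{\sum_j w_j \, \hat{q}_j}{\bigl\lVert \sum_j w_j \, \hat{q}_j \bigr\rVert}

The normalization keeps the blended transform a rigid motion, which is what avoids the candy-wrapper collapse.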

Differential Blending (Disney)

Cages

Anatomy Transfer (a good way to think about this problem)

 

 

 

 

Reference website: www.skinning.org