Here are some useful references about traditional Japanese architecture, in particular roofing. First, there are nowadays three standard tiled roof types in Japan:
J-shaped tiles: these are Japanese-style tiles; the main tile is called 桟瓦 (sangawara). These tiles are the evolution of the traditional 本瓦葺 (hongawara) roof, which is still used for temples and shrines; J-shaped tiles are used for traditional houses;
This is, in my opinion, one of the trickiest parts of the process. In theory, one could think that, thanks to Nanite, it’s now possible to directly import the assets produced by Metashape into UE5. Although this is certainly an option, I don’t use it, for several reasons. First of all, I don’t like the way Metashape produces UVs, and I prefer to re-UV my meshes. Second, even if Nanite supports very high-poly meshes, one shouldn’t go too high, so I try to keep my high-poly assets at a count that is still very high but reasonable for the Nanite scene I’m building. In theory I could reduce the number of polygons in Metashape, but since I need new UVs too, I prefer to do all of this in ZBrush. I don’t know if this is the best way to proceed, and I’m still trying a few variations, so if you have some ideas, please write them in the comments.
In ZBrush I have experimented with different approaches; the one shown here is the one that works for me in most cases. First, I import the mesh into ZBrush and check that its topology is OK (no flying polys). Then I duplicate the mesh, so I can work on the copy and project from the original. Even though my target is around 1 million points in order to import the asset as a Nanite asset in Unreal, I need something low-poly to start with to do the UVs in a simple way. Reducing directly from the very high-poly, triangulated mesh that Metashape produces to a low-poly quad mesh with ZRemesher often causes ZBrush to crash (at least in my experience). I have also noticed that ZRemesher sometimes causes a loss of sharpness in the object’s shape. For this reason, I use the Decimation Master plugin to reduce the mesh to around 10-15k ActivePoints. The mesh produced by Decimation Master is triangulated and its density isn’t uniform, but it follows the shape of the mesh as closely as possible. Then I use ZRemesher to produce a quad mesh with around the same number of ActivePoints, and I use the UV Master plugin to generate some nice UVs. Once this is done, I project from the original mesh onto the new one, subdividing the latter a few times until I reach the level of detail I want. In a few cases, I’ve skipped the retopology with ZRemesher and done the UVs, subdivisions, and projections directly on the triangulated mesh.
First, create a new project and add the photos to it. There is no such thing as “too many photos” in photogrammetry: the more, the better.
Once the photos are added, we proceed to align them. I’ve found that the default values proposed by Metashape are suboptimal in most cases. For this reason, I raise the key point limit and the tie point limit to something like 40k. I set the accuracy to “High” and check “Generic preselection” to reduce the processing time. In some cases, I use masks too and apply them to tie points. In this case, it’s enough to set up the mask on only a few photos; Metashape is able to use it for all of them. If, instead, the mask is applied to key points, each photo needs its own mask for it to work. I usually use masks on tie points when I scan an object by rotating it on an almost uniform background. In my limited experience, masking is not useful, and can even be counterproductive, when scanning a big rock out in nature.
Once the photos are aligned, we see a cloud of points, and we can already redefine the region of the scan, reducing it to the object of interest without, however, making it too tight. After that, we need to build the dense cloud. I usually set the quality to “High”. If, for some reason, it’s not the first time I build it, I check “Reuse depth maps” to save some time.
Once the dense cloud is built, we can finally build the mesh. This is the step in the whole process where I change the default parameters the most. As source data, I set “Depth maps”, since I built them in the previous step; this saves processing time. I set the quality and the face count to high, and I check “Reuse depth maps” again.
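The Metashape steps above can also be scripted with the Metashape Pro Python API. This is only a sketch: the `Metashape` module is available only inside Metashape Pro, and call signatures vary between API versions, so treat the names and the downscale-to-quality mapping as assumptions to verify against your version’s reference.

```python
# Sketch of the align -> depth maps -> mesh steps via the Metashape Pro
# Python API; parameter values are the ones discussed in the text.
try:
    import Metashape
except ImportError:
    Metashape = None  # the module only exists inside Metashape Pro

# In matchPhotos()'s convention (recent API versions), downscale=1 is "High".
ALIGN_PARAMS = {
    "downscale": 1,
    "generic_preselection": True,  # speeds up matching
    "keypoint_limit": 40_000,
    "tiepoint_limit": 40_000,
}

def build_mesh(project_path, photo_paths):
    doc = Metashape.Document()
    doc.save(project_path)
    chunk = doc.addChunk()
    chunk.addPhotos(photo_paths)
    chunk.matchPhotos(**ALIGN_PARAMS)
    chunk.alignCameras()
    # For buildDepthMaps(), downscale=2 corresponds to "High" quality.
    chunk.buildDepthMaps(downscale=2)
    # Build the mesh from depth maps, as in the text, with a high face count.
    chunk.buildModel(source_data=Metashape.DepthMapsData,
                     face_count=Metashape.HighFaceCount)
    doc.save()
```

The whole point of collecting the values in `ALIGN_PARAMS` is that these are exactly the knobs the text says to change from the defaults.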
After some waiting, I have the mesh.
At this point, I select and delete all the parts I don’t need, paying special attention to eliminating all flying polygons (they can cause a lot of problems in the retopology and UVing process needed later).
Once the cleaning is roughly done, I check the mesh statistics and fix any issues. I usually refine my cleaning and recheck the Mesh Statistics a few times before I’m really satisfied with the result.
Once I’m done with cleaning, I use the “Close Holes” tool. I set the level quite high, but not 100%, so that any small holes present in the mesh get closed without closing off the bottom of the rock.
Once I think I’m done with editing in Metashape, I build the texture. I usually set a very high resolution, because Metashape usually generates a lot of small UV islands. Later, in ZBrush, I generate a more acceptable UV map and then transfer the texture using xNormal. The final texture that I import into Unreal Engine can definitely be smaller than the one generated in Metashape.
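The texture build can also be run from Metashape’s built-in Python console, where `Metashape.app.document` is the currently open project. Again a sketch, not the author’s script: the call signatures are from recent API versions and should be checked against yours.

```python
# Build an oversized texture (it will be rebaked smaller later via xNormal).
try:
    import Metashape
except ImportError:
    Metashape = None  # only available inside Metashape Pro

TEXTURE_SIZE = 16_384  # deliberately very high, as discussed in the text

if Metashape is not None:
    chunk = Metashape.app.document.chunk  # active chunk in the GUI
    chunk.buildUV(page_count=1)
    chunk.buildTexture(texture_size=TEXTURE_SIZE)
```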
To prepare photos for use in photogrammetry software like Metashape, we follow these steps:
As our camera stores raw photos in CR3 format, the first thing we do is convert them to DNG using Adobe’s free DNG Converter tool;
Create the color checker profile using the “ColorChecker Camera Calibration” program, then save it in “C:\Users\UserName\AppData\Roaming\Adobe\CameraRaw\CameraProfiles”, which is actually the default option;
Import photos in Lightroom;
Edit the color checker photo: in develop mode, apply the profile and set up the white balance;
Sync these edited properties to all concerned photos;
Bulk export the edited photos as DNG in order to use them in Metashape.
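The CR3-to-DNG conversion step can be batched by shelling out to Adobe DNG Converter’s command-line interface. A sketch under assumptions: the install path below is hypothetical (adjust for your machine), and the `-c` (compressed DNG) and `-d` (output directory) flags should be verified against your converter version’s documentation.

```python
# Batch-convert a folder of CR3 raws to DNG via the Adobe DNG Converter CLI.
import pathlib
import subprocess

# Hypothetical default install path on Windows -- adjust as needed.
CONVERTER = r"C:\Program Files\Adobe\Adobe DNG Converter\Adobe DNG Converter.exe"

def convert_cr3_folder(src_dir: str, out_dir: str) -> list[str]:
    """Convert every .CR3 in src_dir to DNG; returns the files handled."""
    photos = sorted(str(p) for p in pathlib.Path(src_dir).glob("*.[cC][rR]3"))
    if photos:  # skip the subprocess call entirely when there is nothing to do
        pathlib.Path(out_dir).mkdir(parents=True, exist_ok=True)
        # -c: write compressed DNGs; -d: put the results in out_dir
        subprocess.run([CONVERTER, "-c", "-d", out_dir, *photos], check=True)
    return photos
```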
Strong sunlight and water are not good for photogrammetry, so we try to shoot on a cloudy, dry day. Once we’ve found the subject, let’s say a rock or a wall, we first put the color checker (we use the X-Rite ColorChecker Passport) near the subject and take a good, big photo of it; that photo will be used later for calibrating color and white balance.
Then we take many photos of the subject from every side, both overviews and details. We use a fixed 50mm lens or a zoom lens, but we don’t change the zoom during shooting. We take pictures in RAW format; if you use JPEG, make sure your white balance doesn’t change during shooting (not Auto). The subject should be sharp, so you may want to adjust the aperture accordingly. More photos are better: with too few photos, the program may fail to work or produce a blurry scan. Typically you want every point of the subject to be visible in many photos, so something like 100 pictures is not too much; some people even take thousands! Anyway, once your scan is complete you can delete them all.
That’s the last step of our photogrammetry workflow. At this point we have the mesh to import, SM_Mesh.fbx, and either its base color texture T_Mesh_D.tga, or its combined color-roughness texture and the normal map (T_Mesh_DR.tga and T_Mesh_N.tga).
First we’ll import the mesh. Don’t forget to check Build Nanite and uncheck Build Lightmap UVs.
Mesh import dialog
We will then use the Modeling Tools plugin to modify the pivot and sometimes the scale, but remember: the Nanite mesh should be pretty big. If your mesh is too small, it will turn partly black, because Unreal can’t correctly calculate the normals for the micro triangles.
The next step is to import the textures. This is pretty straightforward: just drag them into the content browser. Check that Unreal recognized the normal map correctly.
The final step is to create the material. We have a master material (inspired by Epic’s examples) which allows us to manipulate color, normals, and roughness. We’ll create a new instance, set it up with our imported textures, and modify various parameters. Then we set that material as the default material of the mesh, and voilà, it’s done!
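If you import many scans, the FBX import step can be scripted with Unreal’s editor Python API. This is a sketch, not our actual pipeline: the `unreal` module only exists inside the editor, `/Game/PhotoScans` is a hypothetical destination folder, and the editor-property names (notably `build_nanite`) should be verified against your UE5 version’s Python API reference.

```python
# Sketch: import an FBX with Build Nanite checked and lightmap UVs unchecked.
try:
    import unreal
except ImportError:
    unreal = None  # only importable inside the Unreal editor

DEST_PATH = "/Game/PhotoScans"  # hypothetical content-browser folder

def import_scan(fbx_path: str) -> None:
    options = unreal.FbxImportUI()
    mesh_data = options.static_mesh_import_data
    # Property names are assumptions -- check them in your engine version.
    mesh_data.set_editor_property("build_nanite", True)             # Build Nanite
    mesh_data.set_editor_property("generate_lightmap_u_vs", False)  # Build Lightmap UVs
    task = unreal.AssetImportTask()
    task.filename = fbx_path
    task.destination_path = DEST_PATH
    task.automated = True
    task.options = options
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```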
“De-lighting” means removing shadows from the texture, producing a good base color texture to use in Unreal. We use the free Agisoft De-Lighter tool (official tutorial).
You start by importing the fbx produced by Metashape (or another program, but there should be a texture).
On the right side of the UI there are two main tools for removing shadows: “Remove cast shadows” and “Remove shading”. Remove shading is an automatic tool suitable for simple situations. We try it first and, if it works, use the result.
“Remove cast shadows” requires manual markup of lit and shadowed areas. You basically show the program examples of lit and shadowed areas in your picture, and it then tries to remove the shading. To start, we paint some yellow strokes in lit areas and blue strokes in the shadows (you don’t need to paint them all; just try to cover all the materials, like here, where we try to mark all the different stone colors).
If the brushes (2) are grayed out, double-click on the “Illumination map” (1) to activate them. That also works with processed models. Check the other icons on the toolbar: you can erase the marks, hide them, and change the size of the brush. The space bar switches between paint and rotate modes.
Once the model is annotated, use the “Remove cast shadows” button to run the algorithm. The Preview button uses 1/4 resolution to speed things up. The process can create some unnatural color and light variation; in that case, try different Highlight and Color suppression parameters to see what works better.
The de-lighting process can remove some dark colors that are not actually shadows (like dark spots on the rocks, paint, etc.). To fix that, you need to create another mask, called the “Shadow scale map”, and annotate those places with a third color (light blue). To create this mask, right-click on the model on the left and choose “Add shadow scale mask”. Then paint with the light blue color over the lit areas where you want to preserve dark details. To return to the illumination mask or another mask, double-click on it.
Run the preview again, and once you are satisfied, run “Remove cast shadows” at full resolution.
Sometimes this first pass won’t remove all the shadows. In that case, you can repeat the procedure on the processed texture. First, double-click on the Illumination map of the processed texture to activate it. Your original light/shadow markup from the previous step will be copied there. For this wall, we left all the yellow marks but removed the blue ones (using the blue cross on the toolbar), then repainted them in the places that remained too dark (between the stones). Finally, we also created a new Shadow scale map and protected the lit areas using it.
When it’s done, use the “Remove cast shadows” button again and, if it works well, export the final result by right-clicking on it and choosing Export model. This will export a new fbx (which we don’t use) and a texture, which we’ll use in the next step. We usually export in TIFF or TGA format.
Read more: check the official tutorial; at the end there are links to downloadable examples.
In this step we will prepare the textures to import into Unreal. Right now we have the original photo-scanned mesh, let’s call it Mesh.fbx, its de-lit base color texture Mesh_delit.tif, and the retopologized mesh SM_Mesh.fbx with new UVs prepared for import into Unreal. We will first create (bake) the base color texture that matches the new mesh, then, optionally, create normal and roughness textures.
Before the baking begins, we open our two meshes together in Maya and check two things: they should be in the same place in the world (otherwise the bake will fail, of course), and they should be at least 1 m across, or more. If the meshes are too small (a few centimeters across), the tiny high-poly triangles become microscopic, and that causes issues with normals, both for baking and for Unreal. Even if the mesh is small in real life, you need to scale it up (you can scale it back down in your level in Unreal). Finally, we freeze the transforms so that rotation is 0 and scale is 1.
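These Maya checks can be sketched in a few lines of `maya.cmds` (available only inside Maya). The size check and freeze are real `cmds` calls; `check_and_freeze` and its 100 cm threshold (1 m in Maya’s default centimeter units) are our own hypothetical helper, not a Maya feature.

```python
# Pre-bake sanity check: ensure the mesh is big enough, then freeze transforms.
try:
    from maya import cmds
except ImportError:
    cmds = None  # maya.cmds only exists inside Maya

def largest_extent(bbox) -> float:
    """Largest side of a [xmin, ymin, zmin, xmax, ymax, zmax] bounding box,
    as returned by cmds.exactWorldBoundingBox()."""
    xmin, ymin, zmin, xmax, ymax, zmax = bbox
    return max(xmax - xmin, ymax - ymin, zmax - zmin)

def check_and_freeze(mesh: str, min_size: float = 100.0) -> None:
    """Scale the mesh up if smaller than min_size (Maya cm), then freeze."""
    size = largest_extent(cmds.exactWorldBoundingBox(mesh))
    if size < min_size:
        factor = min_size / size
        cmds.scale(factor, factor, factor, mesh, relative=True)
    # Freeze transforms: rotation back to 0, scale back to 1.
    cmds.makeIdentity(mesh, apply=True, translate=True, rotate=True, scale=True)
```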
For baking the base color texture onto the new mesh and its UVs, we use xNormal. Specify Mesh.fbx as the “High definition mesh”, Mesh_delit.tif as its “base texture to bake”, and SM_Mesh.fbx as the “Low definition mesh”. Then, in the Baking options tab, specify the output file, here we’ll call it Mesh_xn.tga, check “Bake base texture”, and hit the Generate maps button. This will give you the base color texture you can import into Unreal; we usually rename it to something like T_Mesh_D.tga (D for Diffuse).
For creating the optional roughness and normal maps, we use Substance Alchemist. Simply drag T_Mesh_D.tga into the program and it will generate various textures using its AI-based algorithms. We export the roughness and normal maps; then, for our Unreal material, we add the roughness as the alpha channel of the base color texture in Photoshop. In this case we save the combined texture as a new file called T_Mesh_DR.tga (R for Roughness). We call the normal map T_Mesh_N.tga.
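The Photoshop step of packing the roughness into the alpha channel can also be done with a few lines of numpy; a small sketch, with the file I/O via Pillow shown only as a comment (the function names are ours):

```python
# Pack a roughness map into the alpha channel of the base color texture,
# producing the RGBA "DR" texture described in the text.
import numpy as np

def pack_dr(base_rgb: np.ndarray, roughness: np.ndarray) -> np.ndarray:
    """base_rgb: (H, W, 3) uint8; roughness: (H, W) uint8 -> (H, W, 4) uint8."""
    assert base_rgb.shape[:2] == roughness.shape, "textures must match in size"
    return np.dstack([base_rgb, roughness])

# Usage with Pillow (not required for the function itself):
#   from PIL import Image
#   rgb = np.asarray(Image.open("T_Mesh_D.tga").convert("RGB"))
#   r   = np.asarray(Image.open("roughness.png").convert("L"))
#   Image.fromarray(pack_dr(rgb, r)).save("T_Mesh_DR.tga")
```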
Alchemist also allows you to do the de-lighting, but it doesn’t take the mesh shape into account. It’s totally possible to use it for simple cases, for example when the mesh is very flat, but with more complex meshes and lighting scenarios, the Agisoft De-Lighter works better. When we want to use Alchemist for de-lighting, we skip the De-Lighter step, use xNormal to bake the original photo-scanned texture, then feed it to Alchemist and produce the base color there as well.