If, like me, you can't compile the GUI either, you can open the point cloud files (.ply) or the finished product (.obj) in MeshLab for inspection.
GEF
PS: The steps of photogrammetry are, loosely: Matching (the computer plans its approach), Sparse Cloud (which is to the finished product what an artist's sketch is to the eventual painting), filling in to a Dense Cloud (like impressionist pointillism), and finally Texturing.
MVE handles steps 1-3, while Tex(ture)Recon(struction) handles the last one. By contrast, openMVG handles steps 1-2, and openMVS steps 3-4. That means the output from openMVG is only a sparse cloud, which is why I don't see much, though there is something vaguely recognizable there, and openMVS is the part I need to troubleshoot.
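For reference, here's roughly the command sequence I ran for the MVE+TexRecon side, taken from the TexRecon walkthrough; the scale and cleanup values are just the ones from that page, nothing I've tuned, so treat it as a sketch rather than gospel:

makescene -i images scene                          # import the photos (no camera info yet)
sfmrecon scene                                     # steps 1-2: matching and the sparse cloud
dmrecon -s2 scene                                  # step 3: per-view depth maps
scene2pset -F2 scene scene/pset-L2.ply             # merge the depth maps into a dense point set
fssrecon scene/pset-L2.ply scene/surface-L2.ply    # turn the point set into a surface
meshclean -t10 scene/surface-L2.ply scene/surface-L2-clean.ply
texrecon scene::undistorted scene/surface-L2-clean.ply textured   # step 4: texturing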
I have now installed hundreds of packages with the letters "qt" in them and I still error out when compiling UMVE with "QOpenGLWidget: No such file or directory". As nearly as I can determine, QOpenGLWidget isn't exactly an exotic component of Qt, but somehow I can't find the package it's in. -GEF
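In case it helps anyone searching later, here's what I'm going to try next; the package name is my best guess (QOpenGLWidget has been part of the QtWidgets module since Qt 5.4 as far as I can tell, and on openSUSE those headers should come from the qtbase devel package), so double-check before trusting it:

sudo zypper install libqt5-qtbase-devel
# then confirm the header actually landed where the build can see it
rpm -ql libqt5-qtbase-devel | grep -i qopenglwidget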
Since it wants to use external packages via git and download a tarball, I added a script for this as well as a patch to stop it trying, because on OBS you can't access external repos while building…
When it's finished, download it from the same x86_64 directory used for the other packages…
If you're happy with your test results, then I will clean up a little further, as it needs to be compiled with the additional OBS rpm flags…
Hey Malcolm, mostly looking good. I followed the steps here (www.gcc.tu-darmstadt.de/home/proj/texrecon) on the image collection of Sceaux Castle posted with openMVG. All the steps went smoothly until the last one, invoking the GUI to see the output:
gef@purplebox:~/datapics/castle> umve textured.obj
MTL Loader: Skipping unimplemented material property Ka
MTL Loader: Skipping unimplemented material property Kd
MTL Loader: Skipping unimplemented material property Ks
MTL Loader: Skipping unimplemented material property Tr
MTL Loader: Skipping unimplemented material property illum
MTL Loader: Skipping unimplemented material property Ns
MTL Loader: Skipping unimplemented material property Ka
MTL Loader: Skipping unimplemented material property Kd
MTL Loader: Skipping unimplemented material property Ks
MTL Loader: Skipping unimplemented material property Tr
MTL Loader: Skipping unimplemented material property illum
MTL Loader: Skipping unimplemented material property Ns
Resizing GL from 0x0 to 584x613
Using OpenGL 4.5 ...
Skipping shaders from /usr/bin/shaders/surface_330.*
Skipping shaders from /usr/bin/shaders/wireframe_330.*
Skipping shaders from /usr/bin/shaders/texture_330.*
Skipping shaders from /usr/bin/shaders/overlay_330.*
Skipping shaders from /home/gef/.local/share/umve/shaderssurface_330.*
Skipping shaders from /home/gef/.local/share/umve/shaderswireframe_330.*
Skipping shaders from /home/gef/.local/share/umve/shaderstexture_330.*
Skipping shaders from /home/gef/.local/share/umve/shadersoverlay_330.*
Skipping shaders from /usr/local/share/umve/shaders/surface_330.*
Skipping shaders from /usr/local/share/umve/shaders/wireframe_330.*
Skipping shaders from /usr/local/share/umve/shaders/texture_330.*
Skipping shaders from /usr/local/share/umve/shaders/overlay_330.*
Skipping shaders from /usr/share/umve/shaders/surface_330.*
Skipping shaders from /usr/share/umve/shaders/wireframe_330.*
Skipping shaders from /usr/share/umve/shaders/texture_330.*
Skipping shaders from /usr/share/umve/shaders/overlay_330.*
Using built-in surface shader.
QIODevice::read (QFile, ":/shaders/surface_330.geom"): device not open
terminate called after throwing an instance of 'std::runtime_error'
what(): GL error: 1281
Aborted (core dumped)
So I checked the output in MeshLab, and it looked OK. There was some noise, there were some holes, and foliage got mapped as a texture, but I suspect I could improve the output with better knowledge of the command switches, or just by manual cleaning in MeshLab.
I tried launching UMVE without parameters and it worked; I imported a scene, then clicked the tab to inspect it, and it crashed with similar output:
gef@purplebox:~/datapics/castle> umve
Initializing scene with 11 views...
Initialized 11 views (max ID is 10), took 1ms.
Resizing GL from 0x0 to 584x613
Using OpenGL 4.5 ...
Skipping shaders from /usr/bin/shaders/surface_330.*
Skipping shaders from /usr/bin/shaders/wireframe_330.*
Skipping shaders from /usr/bin/shaders/texture_330.*
Skipping shaders from /usr/bin/shaders/overlay_330.*
Skipping shaders from /home/gef/.local/share/umve/shaderssurface_330.*
Skipping shaders from /home/gef/.local/share/umve/shaderswireframe_330.*
Skipping shaders from /home/gef/.local/share/umve/shaderstexture_330.*
Skipping shaders from /home/gef/.local/share/umve/shadersoverlay_330.*
Skipping shaders from /usr/local/share/umve/shaders/surface_330.*
Skipping shaders from /usr/local/share/umve/shaders/wireframe_330.*
Skipping shaders from /usr/local/share/umve/shaders/texture_330.*
Skipping shaders from /usr/local/share/umve/shaders/overlay_330.*
Skipping shaders from /usr/share/umve/shaders/surface_330.*
Skipping shaders from /usr/share/umve/shaders/wireframe_330.*
Skipping shaders from /usr/share/umve/shaders/texture_330.*
Skipping shaders from /usr/share/umve/shaders/overlay_330.*
Using built-in surface shader.
QIODevice::read (QFile, ":/shaders/surface_330.geom"): device not open
terminate called after throwing an instance of 'std::runtime_error'
what(): GL error: 1281
Aborted (core dumped)
So, on the whole, it looks like everything works except umve, the graphical interface. However, I also tried the pipeline on a different set of pics (one of the examples for the Python Photogrammetry Toolbox), and it failed halfway through. I say failed, not crashed or aborted, so this might be an issue with the images themselves, which are much lower resolution. -GEF
So, I have a set of pics where the process chokes early: scene2pset comes up with no vertices, and the prior step, dmrecon, seems to run too fast, so the problem is probably in the sfmrecon stage. This is one of the sample datasets for PPT, found here: https://github.com/steve-vincent/photogrammetry/tree/master/models/examples/ET.
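One rough check I'm using (assuming the usual MVE scene layout of views/view_XXXX.mve/meta.ini, and that unregistered views keep a zero focal length - both are my reading of the scene files, not documented fact): count how many views sfmrecon failed to register and compare it with the total.

grep -l "focal_length = 0" scene/views/*/meta.ini | wc -l   # views without a reconstructed camera
ls -d scene/views/*.mve | wc -l                             # total views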
I tried the GUI, and as long as I didn't click scene inspect, it didn't crash. When I started a new project and opened the images, it generated the MVE-formatted views. However, when I tried to export a PLY file, it asked for a depth map but didn't offer a drop-down list or let me type in the field.
As nearly as I can tell, the MVE+TexRecon pipeline is equivalent to openMVG+openMVS. With this same set of images, openMVG produces the sparse cloud, but openMVS still says it can't read JPEG format even though I do have the JPEG library installed.
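A quick sanity check I'm planning (DensifyPointCloud is the openMVS tool I'd expect to touch the images first; if image reading goes through OpenCV it may show up as an opencv library rather than libjpeg directly, so this is only suggestive):

ldd $(which DensifyPointCloud) | grep -i -E 'jpeg|opencv'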
So if I load the castle scene (after processing) after starting umve, I see the images, and the drop-downs are now populated, as is the one on export PLY… so I think we are a bit further along now. I can scene inspect; if I click in the window, I see:
Hey Malcolm, UMVE loads, and when I click on the scene tab, it doesn’t crash!
However, I'm stymied getting it to do anything beyond loading images and converting them to its own format, which would be makescene on the command line. The next command would be sfmrecon, which has to run before the depth maps can be created. Everything I can find in the graphical interface wants me to specify a depth map from a dropdown menu, but nothing is listed in that menu, meaning the sfmrecon process hasn't run yet. If there's any online documentation, I haven't found it yet, so I'll keep poking around.
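In the meantime, the workaround I want to try is to let UMVE do the import, run the missing steps from the shell on the scene directory it created, and then reopen it in the GUI (the paths here are just placeholders for wherever UMVE put the scene):

sfmrecon ~/datapics/ET/scene      # structure-from-motion: cameras plus the sparse cloud
dmrecon -s2 ~/datapics/ET/scene   # depth maps, which should then show up in that dropdown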
Frustrated with starting the process from scratch in UMVE, I loaded a scene that I had already prepared with the command-line tools. Then I selected "open mesh". I thought that referred to one of the .ply files, but when I selected one nothing happened, so I tried again with the .obj, and that crashed it. Here's the terminal output during the steps described:
Initializing scene with 11 views...
Initialized 11 views (max ID is 10), took 1ms.
PLY Loader: comment Export generated by libmve
Reading PLY: 34236 verts... 67026 faces... done.
Warning: Zero-length normals detected: 0 face normals, 1 vertex normals
MTL Loader: Skipping unimplemented material property Ka
MTL Loader: Skipping unimplemented material property Kd
MTL Loader: Skipping unimplemented material property Ks
MTL Loader: Skipping unimplemented material property Tr
MTL Loader: Skipping unimplemented material property illum
MTL Loader: Skipping unimplemented material property Ns
MTL Loader: Skipping unimplemented material property Ka
MTL Loader: Skipping unimplemented material property Kd
MTL Loader: Skipping unimplemented material property Ks
MTL Loader: Skipping unimplemented material property Tr
MTL Loader: Skipping unimplemented material property illum
MTL Loader: Skipping unimplemented material property Ns
terminate called after throwing an instance of 'std::runtime_error'
what(): GL error: 1280
Aborted (core dumped)
I tried again from the command line, typing "umve textured.obj", which is supposed to work as a way to inspect the finished product per the limited documentation I did find. The result was the same (except that loading the shaders preceded the output above). -GEF
Oops, I missed this post. What are you saying, that MVE absolutely needs CUDA? I thought it could benefit from it for acceleration but would also perform CPU-based reconstruction. Well, obviously the pipeline works end-to-end from the command line; are you saying the GUI needs NVIDIA?
Yes, I agree we're farther along. I'm actually getting excited about MVE after reading this site: OpenDroneMap. It's nice to see a performance comparison by somebody besides the program's author. It looks like MVE is pretty good at detail that other apps miss, and it leaves empty holes where it can't be sure, whereas another app (CMP-MVS) would draw a smooth texture over the hole. Whether that's a bug or a feature might depend on your use case, but I like it because I'm pretty sure there must be a way to cover the hole manually if you want to. So, if there's to be just one graphical photogrammetry app in the repo, UMVE looks like a fine candidate.
The MVE-only pipeline starts with the command makescene -i, where the -i switch means images only; if I understand correctly, that means MVE will have to estimate the camera info itself. The alternative is to start with the output of another app, and one of the candidates is openMVG, which you've already packaged, so my next test is to pre-process the castle with openMVG and see if the completed model is any better. Some of the frames have trees in the way, the first has a person in the way, and in the model I made with MVE alone, the trees and the person became part of the texture map. In particular, on the right side of the manor, the green texture map looked like ivy, not a tree, so there's plenty of room for improvement.
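For the handoff itself, the exporter I've spotted in the openMVG sources is openMVG_main_openMVG2MVE2; I haven't run it yet, so the options and the output layout below are guesses (check --help) rather than anything I've verified:

openMVG_main_openMVG2MVE2 -i reconstruction/sfm_data.bin -o castle_mve
# then pick up the MVE pipeline at the dense steps, on whatever scene
# directory the exporter writes under castle_mve
dmrecon -s2 castle_mve/MVE
scene2pset -F2 castle_mve/MVE castle_mve/MVE/pset-L2.ply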
And the way to avoid things like the tree artifact seems to be to take the photos carefully in the first place, so I'll be trying this with a home-grown set too; hopefully I'll have some useful feedback by this evening.
This also works on the *.ply files output by the intermediate steps, but you'll see they look like pointillisms (which they are - point clouds). TexRecon outputs the *.obj, and also the catalog of texture maps as *.png files - check 'em out and you can see how the textures fit in the finished product.
MeshLab is an important part of ANY photogrammetry pipeline, as a viewer if nothing else, but it has tools to clean up the pipeline output, too. For instance, you can try reducing the density of the mesh, making for a smaller file, and underneath the textures it may look just as good. How do you use these features? I don't know yet - still learning, myself. -GEF
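One route I want to try for the mesh-thinning part, completely untested so far, is MeshLab's batch tool: build the filter script in the GUI (Filters > Remeshing, Simplification and Reconstruction > Quadric Edge Collapse Decimation, then save the current filter script as decimate.mlx) and replay it from the shell, something like:

meshlabserver -i surface-L2-clean.ply -o surface-small.ply -s decimate.mlx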
So, working with downloaded sample images, just a dozen at a time, I got recognizable models with holes. Then I took nigh a hundred photos from all sides of a painted figurine used for wargaming, fairly high-resolution, with a "prosumer" camera. I thought about placing it on a lazy Susan with the camera on a tripod, but instead I followed advice I found on the CMU website, which said to move around the object because the photogrammetry process uses common elements from the background to match the photos. After throwing out the bad ones, I still had 84 pics. An hour later I had my model, and I wish I could show it to you. Where I expected better detail, what I got were bits of minotaur in a fog of noise.
So, I'll try again with the approach I originally intended: figurine on a lazy Susan in front of a featureless white screen, camera steady on a tripod. But first, I'll see if processing the images with openMVG helps at all.