Understanding performance bottlenecks when rendering a video file

I am running TW on a KDE desktop and have just started using Kdenlive for the first time, to do some very modest video editing. (There is a lot to learn!!!)
My first attempt seemed OK, so to wrap up my edits I rendered the whole clip. That worked, which was a good start, but it threw up some questions.
My hardware has two Xeon processors, each with 6 cores and 12 threads, and 64GB of memory, so the system is reasonably well resourced, and although it is old it has an NVMe drive, so I was expecting performance to be good. My rendering task, however, took an age, as in an hour or so, yet the tray CPU monitor showed it never peaked above 19%.
My graphics card is at present an Nvidia GK107, which is old but I believe working. My question is to understand where the speed is being constrained: why is only 19% of the processor power being used, and would a change of graphics adaptor to a newer card speed things up?
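As a sanity check on that 19% figure (assuming the tray monitor averages across all logical CPUs), it corresponds to only a handful of busy threads:

```shell
# 19% average load across 24 logical CPUs (2 sockets x 6 cores x 2 threads)
# is roughly equivalent to 4-5 fully busy threads
awk 'BEGIN { printf "%.1f busy threads\n", 0.19 * 2 * 6 * 2 }'
# prints: 4.6 busy threads
```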

@Budgie2 I’ve not used Kdenlive; are there options to set CPU usage?

Maybe just try ffmpeg direct from the command line?
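For example, a minimal CPU-only transcode with ffmpeg might look like the sketch below (it generates its own short test input so it can be tried safely; the file names are placeholders):

```shell
# Generate a short synthetic test clip, then re-encode it with libx264.
# -threads 0 tells x264 to use all available cores; watch the CPU monitor
# while the second command runs to see how many cores a software encoder uses.
command -v ffmpeg >/dev/null || { echo "ffmpeg not installed"; exit 0; }
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=640x360:rate=30 clip.mp4
ffmpeg -y -loglevel error -i clip.mp4 -c:v libx264 -preset medium -crf 23 -threads 0 out.mp4
ls -l out.mp4
```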

Side note: both my T400 and Tesla P4 have hardware encoders/decoders for this task… Even the Beelink device I got with an Intel N100 has low-power GuC/HuC firmware built in…

Two things to check.
I haven’t used Kdenlive in a while and I don’t have it installed to test.
1. See in the settings whether it is using hardware acceleration; if yes, since you are using an older card, that might be the culprit.
2. If it is using the CPU, check whether all cores are set to be used for rendering/encoding.
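Both things can also be checked from the command line rather than the Kdenlive GUI. A minimal sketch, assuming ffmpeg is installed (the encoder list depends on how ffmpeg was built and on the GPU driver):

```shell
# How many logical CPUs the kernel sees (should be 24 on a
# dual 6-core/12-thread Xeon system)
nproc

# Hardware-accelerated encoders in this ffmpeg build;
# h264_nvenc needs an NVENC-capable NVIDIA card and driver
command -v ffmpeg >/dev/null && ffmpeg -hide_banner -encoders | grep -Ei 'nvenc|vaapi|qsv' || true
```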

This video stuff is a new world for me; so many options, formats, containers. I am not sure what you are suggesting by using ffmpeg directly. I assume you mean rendering the clips to one file, but that would mean more reading and learning, and I think Kdenlive provides the tools to make this easier once learned.

You have already mentioned your T400 and Tesla P4, but I have committed to the WX7100, which I believe will give me what I need.

FYI I ran phoronix-test-suite with a graphics test suite:-

phoronix-test-suite benchmark gputest

In a couple of the runs I made, in order to understand what I was looking at, I was offered a basket of random comparable runs on different graphics devices, given for comparison purposes.

My existing card gave a score of about 10,000. The WX7100 was also shown, with a score of 71,000, one of the lowest in the comparison set, while the rest went up to 160,000+. I think I will be fine with the WX7100, but out of interest I looked at what some of these high-scoring devices were. In short: power hungry and very, very expensive. I am told they are used for coin mining.

Now I need to look at the settings I have on my machine to check if I am using my existing resources correctly. Many thanks for the help.

Hi conram,
I have been looking into the settings as you suggested, and yes, the default for my Kdenlive installation as installed is to use only 2 threads. I think I can change this to 12 without any adverse effect, and no doubt there are other settings I need to investigate. I will keep reading and testing; thanks for the suggestion.

OK, so what I need now is to know how to change the settings for my graphics card. I understand there may be a variable in my environment, which makes sense, but am I using the NVIDIA driver or an open-source equivalent, and where should I look for the config file?
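In case it helps, which driver is actually bound to the card can be read from the kernel rather than from a config file (a quick sketch; "nvidia" means the proprietary driver, "nouveau" the open-source one):

```shell
# Show the GPU and the kernel driver currently in use for it
command -v lspci >/dev/null && lspci -k | grep -iA 3 'vga\|3d' || true

# Or check which of the two modules is loaded
command -v lsmod >/dev/null && lsmod | grep -E '^(nvidia|nouveau)' \
    || echo "neither nvidia nor nouveau module found"
```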

Read the hidden message under:
for ffmpeg recompile instructions.

Hi Budgie2.
I installed Kdenlive just now, and I am a bit rusty at the moment regarding the settings.
Here is a clue on how to use graphics acceleration. First you open the config wizard, as shown in my first image:
config wizard

Then when you open the config wizard you will see this:

Hope this will help you a little bit.

Excellent, and yes it does. I had been struggling with my OS environment without any luck, but the switch you have shown me confirms I have NVIDIA hardware, and I have been able to use it, so thanks again.

Hi and many thanks for the links.
I am not using the laptop at the moment, but I will check out the implications for what I am still trying to set up correctly on my workstation, and whether, once the settings are correct, NVENC will already be invoked. Meanwhile, many thanks for the help.
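When you get to that point, one way to confirm NVENC is reachable independently of Kdenlive is a short smoke-test encode (this assumes an ffmpeg build that includes the h264_nvenc encoder; on hardware or drivers without NVENC it will simply report that it is not available):

```shell
# Try to encode one second of a generated test pattern with the GPU encoder.
# Success here means ffmpeg can open an NVENC session on this card/driver.
command -v ffmpeg >/dev/null || { echo "ffmpeg not installed"; exit 0; }
ffmpeg -hide_banner -loglevel error -f lavfi -i testsrc=duration=1:size=1280x720:rate=30 \
       -c:v h264_nvenc -f null - && echo "NVENC OK" || echo "NVENC not available"
```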