There is now a DCP-o-matic benchmark entry for the new AMD Ryzen 9 5950X CPU.
https://dcpomatic.com/benchmarks/input.php?id=1
That is an impressive result for a single CPU. The CPU configurations that have come close to it so far cost a lot more.
https://www.cpubenchmark.net/high_end_cpus.html
There were some issues recently getting decent performance figures out of the AMD Threadripper server CPUs with very high core counts. I don't know whether Carl has had time to dig into them yet.
At least the Ryzen 9 5950X numbers show that this CPU architecture works very well with DoM's encoding, and that there is no AMD penalty compared to Intel CPUs.
The Ryzen 9 5950X is a 1200 US$ CPU, though. From the PassMark figures you can see that its lower-grade brother, the Ryzen 9 5900X, and its predecessor, the Ryzen 9 3950X, go for a lot less and perform close enough to the more costly 5950X.
- Carsten
Benchmarking/encoding speed on fast CPUs
Re: Benchmarking/encoding speed on fast CPUs
I would recommend that DCP-o-matic consider asking Kakadu Software to offer a paid (minimal-cost) add-on that would let encoding speed jump to 4-8 times the current rate. That would allow real-time encoding on fairly stock kit, and 2-3 times real time on faster machines that have the I/O needed to sustain it.
Also, the DCP-o-matic player would then become far more usable.
Is that an option the community would be open to?
If so, I can contact Kakadu and discuss it; the GM/Director is a friend.
James
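To put the requested speed-up into numbers, here is a minimal back-of-the-envelope sketch. It assumes a 24 fps DCP and a purely illustrative baseline of about 6 fps for J2K encoding on a stock machine; the baseline is an assumption for illustration, not a measured DCP-o-matic figure.

    // Rough arithmetic behind the "4-8x" request: how much faster does the
    // encoder need to be before a DCP encodes in real time (i.e. at least as
    // fast as it plays back)?  baseline_fps is a hypothetical illustration.
    #include <cstdio>
    #include <initializer_list>

    int main()
    {
        const double playback_fps = 24.0;  // typical DCP frame rate
        const double baseline_fps = 6.0;   // assumed stock-machine encode rate (illustration only)

        for (double speedup : {1.0, 2.0, 4.0, 6.0, 8.0}) {
            const double encode_fps = baseline_fps * speedup;
            std::printf("%.0fx speed-up -> %.0f fps (%s)\n",
                        speedup, encode_fps,
                        encode_fps >= playback_fps ? "real time or better" : "below real time");
        }
    }

On that assumed baseline, the 4x end of the range reaches real time and the 8x end gives roughly twice real time, which lines up with the claim above.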
Re: Benchmarking/encoding speed on fast CPUs
It's something I've thought about on and off for years. The last time I made enquiries, the cost was definitely not minimal, which put me off a bit (I can't remember the details now, though, and that was many years ago). Also for a long time I've been keen to keep things 100% open-source.
My feelings are softening a bit on this, now, though. I've done maybe 70% of the work to integrate a closed-source GPU-based {en,de}coder, which looks promising, and which I am planning to release at some point after 2.16.0. The main snag with that is that it only supports Windows and Linux, which is a bit of a shame when there seem to be quite a few macOS users of DoM. Also it is nvidia-only, so a bit restrictive in that sense.
I have some practical concerns, e.g. who's administering licences and who's doing technical support, things like that, but I would be interested to hear what Kakadu people think about it. So, if you have a contact there, that would be great.
Re: Benchmarking/encoding speed on fast CPUs
I'm wondering whether you could build just a Kakadu encoding server. Now, it doesn't make much sense to run a very fast encoding server remotely, because of network saturation: a fast networked encoder cannot receive enough frames over gigabit ethernet to show its full performance.
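A rough back-of-the-envelope check of that saturation point, assuming 2K DCI frames (2048x1080), three XYZ components carried as 16-bit words each, and 24 fps; the exact wire format DCP-o-matic uses may pack things differently:

    // Bandwidth needed to feed uncompressed 2K frames to a remote encoder at
    // 24 fps, versus what gigabit ethernet can carry.  Assumes 2048x1080 pixels
    // and 3 colour components sent as 16-bit words each (an assumption).
    #include <cstdio>

    int main()
    {
        const double width = 2048, height = 1080;
        const double bytes_per_pixel = 3 * 2;   // XYZ, 16 bits per component
        const double frame_bytes = width * height * bytes_per_pixel;
        const double fps = 24.0;

        const double needed_bits_per_s = frame_bytes * 8 * fps;
        const double gigabit = 1e9;             // gigabit ethernet, ignoring protocol overhead

        std::printf("Frame size:        %.1f MB\n", frame_bytes / 1e6);
        std::printf("Needed at 24 fps:  %.2f Gbit/s\n", needed_bits_per_s / 1e9);
        std::printf("Gigabit can feed:  %.1f fps at most\n", gigabit / (frame_bytes * 8));
    }

On those assumptions a single gigabit link can deliver only around 9-10 uncompressed frames per second, well short of 24 fps, which is why a very fast remote encoder ends up starved.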
However, it IS possible to run the Kakadu-optimised encoding server locally and disable the J2K encoding in DCP-o-matic main. That way you could offer/license 'just' the Kakadu encoding server on its own. The same could work at some point with the GPU encoder. And for anyone on a 10G network, those servers could also be used as actual network encoders. That approach would also make it easier to keep the licensed Kakadu encoder more or less static and proceed with general DCP-o-matic development as usual.
- Carsten
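A minimal sketch of the split described above, using hypothetical class and function names rather than the real DCP-o-matic ones: the main application hands raw frames to whatever encode servers are configured (a licensed Kakadu or GPU server on localhost, or remote machines on a 10G network) and only falls back to its built-in open-source encoder if local encoding is still enabled.

    #include <memory>
    #include <stdexcept>
    #include <utility>
    #include <vector>

    struct RawFrame {};   // uncompressed XYZ image data (placeholder)
    struct J2KFrame {};   // compressed JPEG2000 codestream (placeholder)

    // One connection to an encode-server process.  "Local" here simply means a
    // server listening on localhost -- e.g. a licensed Kakadu or GPU build.
    class EncodeServerConnection {
    public:
        virtual ~EncodeServerConnection() = default;
        virtual J2KFrame encode(RawFrame const& frame) = 0;
    };

    class Encoder {
    public:
        Encoder(std::vector<std::unique_ptr<EncodeServerConnection>> servers, bool use_local_threads)
            : _servers(std::move(servers)), _use_local_threads(use_local_threads) {}

        J2KFrame encode(RawFrame const& frame)
        {
            // Prefer configured encode servers (localhost Kakadu/GPU server, or 10G hosts).
            if (!_servers.empty()) {
                // A real scheduler would round-robin and handle failures; this just
                // shows that the main application never does J2K compression itself.
                return _servers.front()->encode(frame);
            }
            // Built-in open-source encoder, only if local encoding is still enabled.
            if (_use_local_threads) {
                return encode_with_builtin(frame);
            }
            throw std::runtime_error("no encode servers configured and local encoding is disabled");
        }

    private:
        J2KFrame encode_with_builtin(RawFrame const&) { return {}; }  // placeholder for the OpenJPEG path

        std::vector<std::unique_ptr<EncodeServerConnection>> _servers;
        bool _use_local_threads;
    };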
Re: Benchmarking/encoding speed on fast CPUs
That's an interesting idea, thanks... it might be the easiest way to handle things. I think I could just dynamically link (at runtime) the Kakadu libraries from DoM; and if they're there, use them... but that is kind of fiddly.
I do like the idea of just "download this binary and run it and your DoM encodes get faster".
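On the 'dynamically link at runtime' idea, a minimal sketch of the probe using dlopen/dlsym (Windows would use LoadLibrary/GetProcAddress instead). The library name and the dcpomatic_kakadu_encode symbol are hypothetical placeholders for whatever shim would wrap the real Kakadu API; they are not actual DoM or Kakadu names.

    // "Link the Kakadu libraries at runtime and use them if they're there."
    // The library and symbol names below are hypothetical placeholders.
    #include <dlfcn.h>
    #include <cstdio>

    // Hypothetical shim: compress one raw frame, return compressed size (< 0 on error).
    using kakadu_encode_fn = int (*)(const unsigned char* raw, int width, int height,
                                     unsigned char* out, int out_capacity);

    kakadu_encode_fn load_optional_kakadu_encoder()
    {
        void* handle = dlopen("libdcpomatic_kakadu.so", RTLD_NOW);
        if (!handle) {
            return nullptr;   // library not installed: caller falls back to the built-in encoder
        }
        return reinterpret_cast<kakadu_encode_fn>(dlsym(handle, "dcpomatic_kakadu_encode"));
    }

    int main()
    {
        if (auto encode = load_optional_kakadu_encoder()) {
            std::printf("Optional Kakadu encoder found; using it.\n");
            (void)encode;   // real code would route J2K encoding through this
        } else {
            std::printf("No Kakadu library present; using the built-in encoder.\n");
        }
    }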
Re: Benchmarking/encoding speed on fast CPUs
That approach would also make it possible to use a networked NVIDIA GPU encoder running on Windows from a Mac, once it is available.
Re: Benchmarking/encoding speed on fast CPUs
Another good point!