Hi
It depends on what you're compiling. I use CUDA/cuDNN here with (cheap) NVIDIA cards and the Intel GPU for the display… As for virtualization, I use QEMU here and can allocate one of the NVIDIA GPUs (and a second SATA card) to my QEMU machines.
Interesting questions.
And one I had not thought about before… compiling on a GPU.
A quick Google search confirmed my suspicion: it's not typically done, because of the nature of what happens when you compile.
You have to understand that today's processing technologies (CPU and GPU) have been running up against a wall: manufacturing limitations that are hard to overcome and that prevent clock speeds from continuing to increase in line with Moore's Law.
For a long time now, unable to run at faster clock speeds, manufacturers have instead made great progress adding more processors (typically cores on one die rather than multiple dies) and improving the parallelism of the running processors/cores.
What this means is that compiling, which today is still largely serial and only modestly parallel, can't take full advantage of parallelism, so you'd want to concentrate on processors that run faster rather than on more cores.
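The serial-vs-parallel trade-off above is what Amdahl's law formalizes: if only a fraction p of a job can run in parallel, the best possible speedup on n cores is 1/((1-p) + p/n). A quick sketch (the fractions here are illustrative, not measured from any real build):

```python
# Amdahl's law: upper bound on speedup from n cores when only a
# fraction p of the work can run in parallel.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative numbers, not measurements:
# a mostly-serial build (p = 0.3) barely benefits from 8 cores...
print(round(amdahl_speedup(0.3, 8), 2))   # ~1.36
# ...while a highly parallel workload, like many VMs (p = 0.95), scales well.
print(round(amdahl_speedup(0.95, 8), 2))  # ~5.93
```

This is why a mostly-serial workload is better served by higher per-core clock speed, while the parallel workload keeps gaining from extra cores.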
Virtualization, on the other hand, can make excellent use of parallelism; running multiple virtual machines is a prime example of multiple parallel processes, so more processors and cores would be beneficial.
Bottom line, from your description I don't know that Intel or AMD CPUs will make that much of a difference, particularly if your workload doesn't push your machine's resources to the maximum. When I looked at both not that long ago, I found the cost of computing power was approximately the same. Benchmarks suggest that Ryzen is the more powerful architecture at the moment, but it costs a lot more than an Intel processor. Both are pretty powerful for running maybe 3-4 heavy tasks simultaneously.
As for the GPU, it's hard to say that one is better than another.
You haven't described a workload that could run on the GPU, but if you have one, GPU computing makes a world of difference. Also, some projects support one vendor's GPU and not the other… so YMMV. If you don't have a use case and don't plan on one, then maybe you shouldn't pay too much attention, particularly if you can upgrade later when something specific comes around.
One thing you didn’t mention is what I consider extremely important nowadays…
Provided you have the budget, you should deploy as much of your actively used disk storage as M.2 NVMe as possible, not regular SATA or SAS. Even compared to SATA/SAS SSDs, M.2 NVMe is roughly 5x faster using first-generation technology, and much faster still with the third-generation technology most of the major manufacturers have already announced this year (supposedly shipping in the second half of this year). In other words: store old, archived data on HDDs; use SATA/SAS SSDs for relatively inexpensive active storage; and use M.2 NVMe for your frequently used data. Do that, and it'll make a world of difference on your system, maybe more than your choice of CPU/GPU and on par with the amount of physical RAM installed.
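As a rough sanity check on that 5x figure, you can compare the theoretical per-direction ceilings of the interfaces themselves (real drives land below these; the encoding overheads are the published ones for each link type):

```python
def sata3_mb_s():
    # SATA 3.0: 6 Gbit/s line rate with 8b/10b encoding -> 80% usable
    return 6e9 * 0.8 / 8 / 1e6

def pcie_mb_s(gen, lanes):
    # Per-lane transfer rates in GT/s; gens 1-2 use 8b/10b encoding,
    # gen 3 onward uses the much more efficient 128b/130b.
    rate = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0}[gen]
    eff = 0.8 if gen <= 2 else 128 / 130
    return rate * 1e9 * eff / 8 / 1e6 * lanes

print(round(sata3_mb_s()))                        # 600 MB/s (SATA SSD ceiling)
print(round(pcie_mb_s(3, 4)))                     # ~3938 MB/s (gen3 x4 NVMe link)
print(round(pcie_mb_s(3, 4) / sata3_mb_s(), 1))   # ~6.6x at the link level
```

So the bus alone allows a 6-7x gap; after protocol and flash overheads, a real-world 5x advantage for first-generation NVMe drives is plausible.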
If you can't afford M.2 NVMe, consider 16GB Intel Optane modules. They take up an M.2 slot you'd rather use for something else, but they can cache the I/O for HDDs and SSDs so you can access data on those slower drives roughly as fast as if they were SSDs (maybe even approaching M.2 NVMe). But if you can afford and install even first-generation M.2 NVMe, I'd recommend that instead.
Went back to AMD after a decade of using Intel - I haven’t found a single bad thing to say about my Ryzen.
They even gifted a nice 5-10% boost just a while back when they released a new AGESA library. Compare that to Intel mainly slowing down their CPUs with firmware patches trying to fix their leaky tub.
Intel Optane 16 GB is about 2 times slower than Optane 32 GB, and they use a PCIe 3.0 x2 link.
If you have the money, use an NVMe drive on a PCIe 4.0 x4 bus.
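The lane count and generation are what set the ceiling here; the arithmetic (theoretical per-direction link bandwidth, which real drives don't quite reach):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~985 MB/s per lane.
lane_gen3 = 8e9 * 128 / 130 / 8 / 1e6
# PCIe 4.0 doubles the per-lane transfer rate.
lane_gen4 = 2 * lane_gen3

# A small Optane module on a PCIe 3.0 x2 link tops out near:
print(round(2 * lane_gen3))  # ~1969 MB/s
# versus the link ceiling of an NVMe drive on PCIe 4.0 x4:
print(round(4 * lane_gen4))  # ~7877 MB/s
```

That's roughly a 4x difference in raw link bandwidth before you even consider the media behind it.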