50x slower /dev/urandom compared to Ubuntu

Hi, what is the difference between /dev/urandom in Tumbleweed and Ubuntu 18.04? When I run a simple dd, I get something like this:

me@tumbleweed:~> dd if=/dev/urandom of=/dev/zero bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 21.3013 s, 4.9 MB/s

My Tumbleweed machine is an AMD Ryzen 5; Ubuntu is running under VirtualBox on an older Core i3 (about 50% of the AMD's power in all other tests), and /dev/urandom does 230 MB/s there, which is nearly 50x faster :open_mouth:

I thought urandom was a kernel thing rather than a distro thing, so how is this possible? Or maybe Intel is simply that much faster at /dev/urandom, or VirtualBox is responsible? Any ideas? I cannot run any distro other than openSUSE on the Ryzen, because most distros fail to boot, so I cannot test Ubuntu on bare metal here, but I can try to install VirtualBox and run Ubuntu in it.

I ran a test: I booted Ubuntu on the AMD Ryzen in VirtualBox and got the same 4.9 MB/s when reading from /dev/urandom. But I still find it hard to believe that an old Intel i3-4170@3.7GHz (2C/4T) is 50x faster than a fairly new AMD Ryzen 5 2400G@3.6GHz (4C/8T) at generating random numbers… Does anyone have an idea what the catch could be?
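Since /dev/urandom is implemented entirely in the kernel, one way to narrow this down is to compare the kernel versions on both boxes and measure the raw read throughput the same way on each. A minimal sketch (the 16 MiB size is just an illustrative amount, not from the thread):

```shell
# /dev/urandom lives in the kernel, so first compare kernel versions
# across the two systems.
uname -r

# Measure raw urandom read throughput: read 16 MiB in 1 MiB blocks,
# discard the data, and let dd print the transfer rate (it reports on
# stderr, so redirect it to see just the summary line).
dd if=/dev/urandom of=/dev/null bs=1M count=16 2>&1 | tail -n 1
```

Running this identically on both machines separates "the kernel's generator is slower" from "dd was invoked differently".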

It could be better to move the topic to hardware, but I cannot do that…

Don’t know about your specific results, but it’s not like there isn’t a simple and well-known workaround…
It might not make much of a difference for a small file like the one you’re creating, but generally speaking:

You can split your file creation across multiple dd operations, with all but the last writing a much larger block at once.
In other words, a major reason your dd takes so long is that it creates the file 1 MiB at a time.
You could, for instance, create 3/4 or more of the full file size in one step, then write the remainder 1 MiB at a time…
The result would then be a fraction of a second instead of many seconds to create your file with randomized content.

Depending on what you are creating the file for, you could even create it in one 100 MiB step, which should be nearly instantaneous.

This works for almost any file size, even relatively enormous files.
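The two-step approach above can be sketched as follows, using the 3/4-plus-remainder split applied to the 100 MiB file from the question (the filename random.bin is just an example). Note that a single read() from /dev/urandom may return fewer bytes than requested for very large blocks, so iflag=fullblock is needed to make dd keep reading until the block is complete:

```shell
# Step 1: write 3/4 of the file (75 MiB) in one large block.
# iflag=fullblock forces dd to retry short reads from urandom until
# the full 75 MiB block is filled.
dd if=/dev/urandom of=random.bin bs=75M count=1 iflag=fullblock

# Step 2: append the remaining 25 MiB, 1 MiB at a time.
# conv=notrunc keeps the existing data; oflag=append adds to the end.
dd if=/dev/urandom of=random.bin bs=1M count=25 oflag=append conv=notrunc

# The result is a full 100 MiB (104857600 bytes) of random data.
ls -l random.bin
```

Whether this is actually faster than one bs=1M pass depends mostly on the kernel's urandom throughput, not on dd; the larger block mainly reduces syscall overhead.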