No idea what Windows et al. do for this, or whether that's still true, but I believe the description above is how the argument was originally settled.
Also, tbh, if you can patch arbitrary instruction behavior, just replacing rdrand seems like far too ham-fisted a tool given the level of versatility in your hands...
I agree that worrying about RDRAND probably shouldn't be at the top of anyone's priorities, but there is some cause to at least use it defensively: for example, using it to pre-fill buffers with random values in a different context, so that subverting it would at the very least require memory references, instead of just special-casing RDRAND executed in the context of your RNG.
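Very roughly, the pattern I have in mind (a sketch only; the names are illustrative and not a real API, compile with gcc -mrdrnd):

    /* Pre-fill a buffer with RDRAND output in one context, then later
       fold it into the RNG state by XOR, so targeted subversion would
       have to trace memory rather than just special-case an RDRAND
       executed inside the RNG itself. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <string.h>

    #define POOL_WORDS 64

    static uint64_t prefill[POOL_WORDS]; /* filled early, elsewhere */

    /* Called from some unrelated context, e.g. during startup. */
    void prefill_rdrand(void) {
        for (int i = 0; i < POOL_WORDS; i++) {
            unsigned long long v = 0;
            /* RDRAND can transiently fail (CF=0); retry a few times. */
            for (int tries = 0; tries < 10 && !_rdrand64_step(&v); tries++)
                ;
            prefill[i] = v;
        }
    }

    /* Later, in the RNG proper: mix by XOR instead of calling RDRAND. */
    void mix_into_state(uint64_t state[POOL_WORDS]) {
        for (int i = 0; i < POOL_WORDS; i++)
            state[i] ^= prefill[i];
        memset(prefill, 0, sizeof prefill); /* don't reuse the buffer */
    }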
Shouldn't you be able to see it from the timing alone being wrong?
It also persists the pool across reboots so it doesn't start empty.
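If anyone wants the shape of that persistence trick, here's a hedged sketch of the classic seed-file pattern (the path and sizes are made up; assumes OpenSSL for SHA-512). The on-disk seed is hashed into the pool at boot and immediately replaced with a derived value, never the pool itself, so a stale file can't replay state:

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    #define SEED_PATH "/var/lib/rng/seed" /* hypothetical location */

    int reseed_from_disk(unsigned char pool[SHA512_DIGEST_LENGTH],
                         const unsigned char *fresh, size_t fresh_len) {
        unsigned char in[1 + 64 + 256] = {0x00}; /* 0x00 = pool domain tag */
        size_t n = 1;

        FILE *f = fopen(SEED_PATH, "rb");
        if (f) { n += fread(in + n, 1, 64, f); fclose(f); }
        if (fresh_len > 256) fresh_len = 256;
        memcpy(in + n, fresh, fresh_len);
        SHA512(in, n + fresh_len, pool); /* pool = H(0x00 || old || fresh) */

        /* Derive the next on-disk seed under a different domain tag. */
        unsigned char out[1 + SHA512_DIGEST_LENGTH];
        unsigned char next[SHA512_DIGEST_LENGTH];
        out[0] = 0x01;
        memcpy(out + 1, pool, SHA512_DIGEST_LENGTH);
        SHA512(out, sizeof out, next);

        f = fopen(SEED_PATH, "wb");
        if (!f) return -1;
        fwrite(next, 1, sizeof next, f);
        fclose(f);
        return 0;
    }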
(I'm not insisting that workflow or example represents an uptick, just that it's a concrete example of how having an additional source back then could have caused noticeable performance differences. I'm aware you're not calling rdrand for your /dev/urandom or /dev/random accesses, now or then.)
https://www.reddit.com/r/archlinux/comments/1d9aazw/rdrand_n...
https://web.archive.org/web/20011027002011/http://www.dilber...
E.g. if a CPU had a microscopic lava lamp and camera inside it and used a hash of pictures taken of the lava lamp as the result of RDRAND: would that be compliant?
I know a microscopic lava lamp in a CPU is not physically feasible; what I'm asking is whether RDRAND is broken by design, or whether a proper physics-based RNG could be used to implement it if a CPU maker wanted to.
> Interrupt Timings
> The primary entropy source in Windows 10 is the interrupt timings. On each interrupt to a CPU the interrupt handler gets the Time Stamp Count (TSC) from the CPU. This is typically a counter that runs on the CPU clock frequency; on X86 and X64 CPUs this is done using the RDTSC instruction.
> ...
> The Intel RDRAND instruction is an on-demand high quality source of random data.
> If the RDRAND instruction is present, Winload gathers 256 bits of entropy from the RDRAND instruction. Similarly, our kernel-mode code creates a high-pull source that provides 512 bits of entropy from the RDRAND instruction for each reseed. (As a high source, the first 256 bits are always put in pool 0; providing 512 bits ensures that the other pools also get entropy from this source.)
> Due to some unfortunate design decisions in the internal RDRAND logic, the RDRAND instruction only provides random numbers with a 128-bit security level. The Win10 code tries to work around this limitation by gathering a large amount of output from RDRAND which should trigger a reseed of the RDRAND-internal PRNG to get more entropy. Whilst this solves the problem in most cases, it is possible for another thread to gather similar outputs from RDRAND which means that a 256-bit security level cannot be guaranteed.
> Based on our feedback about this problem, Intel implemented the RDSEED instruction that gives direct access to the internal entropy source. When the RDSEED instruction is present, it is used in preference to RDRAND instruction which avoids the problem and provides the full desired guarantees. For each reseed, we gather 128 output bytes from RDSEED, hash them with SHA-512 to produce 64 output bytes. As explained before, 32 of these go into pool 0 and the others into the ‘next’ pool for this entropy source.
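For the curious, that RDSEED gathering step is easy to mimic in userland; a minimal sketch of "128 bytes of RDSEED, hashed with SHA-512 down to 64" (assumes OpenSSL; compile with gcc -mrdseed ... -lcrypto):

    #include <immintrin.h>
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <stdio.h>

    /* RDSEED fails transiently when the entropy source is drained
       (CF=0), so retry with a bound rather than spinning forever. */
    static int rdseed64(uint64_t *out) {
        for (int i = 0; i < 1024; i++) {
            unsigned long long v;
            if (_rdseed64_step(&v)) { *out = v; return 1; }
        }
        return 0;
    }

    int main(void) {
        uint64_t raw[16]; /* 128 bytes of raw RDSEED output */
        unsigned char digest[SHA512_DIGEST_LENGTH]; /* 64 bytes */

        for (int i = 0; i < 16; i++) {
            if (!rdseed64(&raw[i])) {
                fputs("rdseed exhausted\n", stderr);
                return 1;
            }
        }
        SHA512((const unsigned char *)raw, sizeof raw, digest);
        /* Per the quote: 32 of these bytes would go to pool 0 and the
           other 32 to the 'next' pool for this entropy source. */
        return 0;
    }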
> This vulnerability could be used by an adversary to compromise confidential computing workloads protected by the newest version of AMD Secure Encrypted Virtualization, SEV-SNP or to compromise Dynamic Root of Trust Measurement.
I don't know whether the people who write this upside-down corpo newspeak are so coked up on the authoritarian paradigm that they've lost touch with reality, or if they're just paid well enough by the corpos not to care about making society worse, or what. But I'll translate:
This "vulnerability" might be used by the owner of a computer to inspect what their computer is actually doing or to defend themselves against coercion aiming to control the software they're running.
https://community.amd.com/t5/business/amd-and-microsoft-secu...
This argument (or the individualist approach in general) no longer works in the context of remote attestation ("Dynamic Root of Trust Measurement"). As soon as the "average person" has a computer that can be counted on to betray what software they're running, remote parties will start turning the screws of coercion and making everyone use such a machine.
Knowing nothing about you, it is safe to assume that you use the dollar or a similar currency, either directly or indirectly. Why would you choose to do that? Such currencies come with lots of rules and regulations about what you can and cannot do. Society must keep track of who has how much in banks, and individuals are held liable should there be an overdraft. Remote attestation may certainly put a damper on doing your own thing with your computer, but you can choose whether you wish to pay the price of participating in the economy.

Not sure if the above is a great analogy, probably a terrible one, but the bygone ideal of having your computer act solely in your interest is simply not very realistic. At the very least, controlling the microcode will not give you the control you seek, because there are too many other layers in the stack that are routinely broken, and you are always one exploit away from joining a botnet. No matter how you wish to spin it, your computer is not yours and obeys many masters. If you feel otherwise, please offer a counterexample to the preceding statement.
In the context of remote attestation, they can revoke the key, and vulnerabilities like these won't help.
You're talking about the first order reaction.
The reaction to that reaction is that the noose of remote attestation tightens more slowly. As things currently stand, they can't revoke the attestation keys of all the affected processors, with websites (etc.) just telling the large number of people with those computers that they need to buy new ones. Rather, the technological authoritarians have to keep waiting for a working scheme before they can push the expectation that remote attestation is something that's required.
E.g. throw away all Spectre mitigations, find all the hacks to get each instruction's timing down, etc.
But I would be pleased to be proven wrong.
You can read about how this works here: https://www.amd.com/content/dam/amd/en/documents/epyc-techni...
If you aren't using SEV-SNP / attested compute, you have bigger fish to fry anyway since you have no actual trust in your hypervisor.
I personally evaluated this technology a few years ago, and even just moving a small part of our team's workload to SEV-SNP faced so much resistance from the VP level. I'm not sure if that's more of a problem with bureaucracy at my old employer or a general problem.
[0] https://duckduckgo.com/?q=dilbert+random+generator+nine+nine... (couldn't find a good link to the comic)
[0] - https://www.amd.com/content/dam/amd/en/documents/epyc-techni...
The running microcode's revision ID?
Or the running microcode's ROM version plus loaded patch lines plus active match registers plus whatever settings were adjusted in config registers during the act of loading?
That is, attest the actual and complete config that is running, or some pointless subset that instills a false sense of security?
It would be good for AMD (and Intel etc.) to provide better details here.
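For what it's worth, the bare revision ID half of that question is already readable from software, which is exactly why attesting only it would be the pointless subset; a Linux sketch, assuming root and a modprobe'd msr module (0x0000008b is MSR_AMD64_PATCH_LEVEL in the kernel's msr-index.h):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t val;
        /* The msr device maps pread offsets to MSR numbers. */
        if (pread(fd, &val, sizeof val, 0x8b) != sizeof val) {
            perror("rdmsr 0x8b");
            close(fd);
            return 1;
        }
        printf("patch_level=0x%08lx\n", (unsigned long)(uint32_t)val);
        close(fd);
        return 0;
    }

(grep microcode /proc/cpuinfo reports the same value.) The loaded patch lines, match registers, and config-register tweaks have, as far as I know, no comparable architectural window, which is the crux of the question.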
As long as the vulnerability doesn't let them actually extract the secrets necessary to simulate completely arbitrary operations including with any future keys, I _think_ you can trust the new attestation chain afterward?
I've not been paid to work on this, though, and it would be pretty easy to have accidentally built it in a way where this is a world-ending event, and truly paranoid workloads in the future are going to insist on only using silicon that can't have ever been compromised by this either way.
"Vulnerability"
These restrictions should never have been in place in the first place.
This vulnerability breaks that assumption.
https://wiki.archlinux.org/title/Microcode#Late_loading
https://docs.kernel.org/arch/x86/microcode.html#late-loading
although quotes from this article claim that it's fine specifically on AMD systems:
(I believe this example would also still break on AMD-based systems, AMD just hasn't killswitched a CPUID feature flag yet AFAIR...)
index of linux-firmware, 41 CPUs supported: https://github.com/divestedcg/real-ucode/blob/master/index-a...
index of my real-ucode project, 106 CPUs supported: https://github.com/divestedcg/real-ucode/blob/master/index-a...
sadly, unless you have this recent agesa update, you can no longer load recent microcode due to this fix
which very well means that a substantial number of models whose vendors don't provide a bios update for this (since it goes back to zen1) will not be able to load any future fixes via microcode
I would be tickled pink if the 4 was in reference to https://xkcd.com/221/
I would speculate that the problem is less that the hash function is inherently weak (otherwise we'd have a really complicated horizon of needing to chain microcode updates, since we'd eventually want to e.g. go from MD5 to SHA-1 or something), and more that the implementation has a flaw (similar to Nintendo using strcmp rather than memcmp for their hash comparisons on the Wii, so you only had to collide the function up to the first \0).
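To make that pitfall concrete (toy digests, obviously not the actual Wii code):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Two different 8-byte digests sharing only a short prefix
           before an embedded NUL byte. */
        unsigned char good[8] = {0xde,0xad,0x00,0x01,0x02,0x03,0x04,0x05};
        unsigned char evil[8] = {0xde,0xad,0x00,0xff,0xee,0xdd,0xcc,0xbb};

        /* strcmp treats them as C strings, stops at the 0x00 byte,
           and declares a match... */
        printf("strcmp: %s\n",
               strcmp((char *)good, (char *)evil) == 0 ? "match" : "differ");
        /* ...while memcmp compares all 8 bytes and catches it. */
        printf("memcmp: %s\n",
               memcmp(good, evil, sizeof good) == 0 ? "match" : "differ");
        return 0;
    }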
under agesa 1.2.0.2b
> microcode: CPU1: update failed for patch_level=0x0a60120c
under agesa 1.2.0.3a PatchA (which asus leaked, and which fixes this issue)
> microcode: Updated early from: 0x0a60120c
> Minimum MilanPI_1.0.0.F is required to allow for hot-loading future microcode versions higher than those listed in the PI.
Now that runtime loading of microcode patches cannot be implicitly trusted, the machine should not attempt to prove AMD's authorship of a newly loaded patch without a concrete guarantee that the currently running microcode patch is trustworthy.
*Presumably* (load-bearing italics), the contents of an AGESA release (which contains the patch applied by your BIOS at boot time) can be verified in a different way that isn't broken.
[^1]: https://www.amd.com/en/resources/product-security/bulletin/a...
I suppose a sufficiently old agesa may actually still load the newer microcode then, if that check was a recent addition in preparation for this
What a load of shit! Confidence is earned; it does not grow back like a weed you stepped on!