There's a register in most FireWire controllers where you can set the address bounds within which that kind of remote physical DMA is allowed. I once noted that the hard-coded default values for Linux were 0 .. 2^32-1, that is, the first 4 GB. I reported this as a security bug and was told it was needed for the kernel debugger.
Sigh.
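For the curious, a sketch of what programming such a bound might look like; the register name, offset, and encoding below are hypothetical stand-ins, not the actual OHCI-1394 definition:

```c
#include <stdint.h>

/* Hypothetical FireWire (OHCI-1394-style) controller register that bounds
 * the address range remote nodes may reach via physical DMA. The offset and
 * encoding are made up for illustration; check the spec / your controller. */
#define FW_PHYS_DMA_BOUND_REG  0x120u

/* bound_hi32 holds the upper 32 bits of the 64-bit bound:
 * 0x00000001 -> allow the first 4 GB (the hard-coded default described above),
 * 0x00000000 -> allow remote physical DMA to nothing at all. */
static inline void fw_set_phys_dma_bound(volatile uint8_t *mmio,
                                         uint32_t bound_hi32)
{
    *(volatile uint32_t *)(mmio + FW_PHYS_DMA_BOUND_REG) = bound_hi32;
}
```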
> Wait a second. USB3 doesn’t do Bus Mastering. Either there’s something wrong with the device description, or there’s some hardcore multiplexing of lines going on. But the reality was less exciting — it uses a JMicron JMS581LT host controller chip, which implements PCIe root/switch/something at least partially, and communicates with the card over PCIe. But it doesn’t pass PCIe through to the host, and communicates with the host over 10 Gbit/s USB. Interesting chip overall, but not interesting as a DMA target.
However, there are also Thunderbolt CFX readers. And those do actually hook up the SSD to the host directly.
> By the way, the photo camera probably doesn’t need the speed of PCIe
"need" is a curious question, if you're inclined to shoot RAW + JPG and let 'er rip at 20 frames per second (no shutter means no wear, after all!) you're producing around 1.5 gigabytes of photos... per second. (In practice, card write speeds seem to tap out at around 850 MB/s).
Shame we aren't living in that kind of world.
This sort of thing is why QubesOS tends to put hardware controllers in isolated VMs and only pass access through. With a working IOMMU (any modern hardware has this), all you can get is DMA access into a VM that doesn't actually have much of interest in it, and no access into other VMs...
//EDIT: Though at a closer read, there's some of it that... isn't quite right, in how terms and examples are handled. I'd expect better from someone doing low-level security work - INB copies to a general-purpose register, not a memory address, a DMA controller is a "discrete" bit of hardware, it's not very "discreet," etc. I'm not sure. This is starting to feel very AI-assisted to me. The overall concepts are fine, but a lot of the background section doesn't read right, or goes off into weird weeds and... never explores them. The Intel Xeon is not a less exotic example of a DMA controller. The PC/AT platform did not have a PCI bus.
Eh. I remain convinced it's a decent enough overview of the matter, but a lot of the details just read really weird to me in the background sections. To the point that this could be an interview discussion question. "What does this get subtly wrong?"
Some Xeon chips have additional DMA controllers "onboard".
No AI was used; each mistake here is handmade with love and 100% organic :) We wanted to give a decent (but not too deep) historical overview, but first and foremost we're introducing a new vector for conducting the attack.
I write long-form text posts as well, so I appreciate the format. It just had a number of things that didn't seem quite right to me, having been in similar deep technical weeds myself.
Now try the attack on Qubes. ;)
(Both ISA and the PS/2's Micro Channel would allow exactly the kind of bus mastering your article describes, though, so the point might as well stand - for that matter, so would other buses like EISA and the VESA Local Bus.)
"discreet" looks like translation error, in russian version word "special" is used. PC/AT is still there, as well as Xeon example (latter does not seem "not quite right" to me)
Anyway, I'd also like to see some of their source, or hardware diagrams, but... it'll come out eventually, I suppose.
Proper IOMMU configuration and assigning anything with DMA to a disposable service VM still solve a lot, though at least these attacks require physical access. So far. I'm sure someone, at some point, will release an SD Express card with firmware awful enough that you can pivot through it for a software-only attack on this sort of system.
I agree with the "IOMMU" part, but my experience with the "working" part is more hit-or-miss.
The full context is:
> The DMA controller is just used as an “memcpy() hardware accelerator”. And this is not a joke. Sometimes those blocks are used in microcontrollers to copy large swathes of data inside RAM. A less exotic example of this we can mention are Intel Xeon platforms.
I interpreted this as a reference to the Data Streaming Accelerator (DSA) [1], which is a programmable DMA peripheral on the SoC that can be used to accelerate data movement to and from I/O devices (amongst other things).
[1] : https://www.intel.com/content/www/us/en/products/docs/accele...
I agree, that's probably what they're referring to, but it was neither needed for the points they were trying to make, nor expanded into anything that would strengthen them.
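For what it's worth, the generic pattern the quoted passage describes - a DMA block used as a memcpy() offload on a microcontroller-class part - looks roughly like this. The register layout and base address here are hypothetical, not DSA's or any particular vendor's:

```c
#include <stdint.h>

/* Hypothetical memory-mapped DMA engine; real parts differ. */
typedef struct {
    volatile uint32_t src;   /* source physical address      */
    volatile uint32_t dst;   /* destination physical address */
    volatile uint32_t len;   /* transfer length in bytes     */
    volatile uint32_t ctrl;  /* bit 0: start, bit 1: busy    */
} dma_regs_t;

#define DMA_CTRL_START  (1u << 0)
#define DMA_CTRL_BUSY   (1u << 1)
#define DMA0            ((dma_regs_t *)0x40001000u)  /* made-up base address */

/* Copy a block of RAM without the CPU core touching the data path. */
void dma_memcpy(uint32_t dst, uint32_t src, uint32_t len)
{
    DMA0->src  = src;
    DMA0->dst  = dst;
    DMA0->len  = len;
    DMA0->ctrl = DMA_CTRL_START;

    while (DMA0->ctrl & DMA_CTRL_BUSY) {
        /* spin; a real driver would sleep or take a completion interrupt */
    }
}
```

(On Xeon, DSA plays the same role, but as far as I know it's fed work descriptors through the Linux idxd driver rather than poked through a bare register block like this.)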
Of course you have to ensure that you harden the interface between that VM and the host sufficiently.
God bless the blitter.
Fat Agnus be fat.