And also set up a Russian keyboard: https://krebsonsecurity.com/2021/05/try-this-one-weird-trick...
# dmidecode 3.6
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.

Handle 0x002C, DMI type 27, 15 bytes
Cooling Device
	Temperature Probe Handle: 0x0029
	Type: <OUT OF SPEC>
	Status: <OUT OF SPEC>
	Cooling Unit Group: 1
	OEM-specific Information: 0x00000000
	Nominal Speed: Unknown Or Non-rotating
	Description: Cooling Dev 1

Handle 0x002F, DMI type 27, 15 bytes
Cooling Device
	Temperature Probe Handle: 0x0029
	Type: <OUT OF SPEC>
	Status: <OUT OF SPEC>
	Cooling Unit Group: 1
	OEM-specific Information: 0x00000000
	Nominal Speed: Unknown Or Non-rotating
	Description: Not Specified

Handle 0x0037, DMI type 27, 15 bytes
Cooling Device
	Temperature Probe Handle: 0x0036
	Type: Power Supply Fan
	Status: OK
	Cooling Unit Group: 1
	OEM-specific Information: 0x00000000
	Nominal Speed: Unknown Or Non-rotating
	Description: Cooling Dev 1
So a cooling device is still present.

Sensor data:
iwlwifi_1-virtual-0
Adapter: Virtual device
temp1: +59.0°C
acpitz-acpi-0 # Fake, always reports these temperatures
Adapter: ACPI interface
temp1: +27.8°C
temp2: +29.8°C
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +51.0°C (high = +86.0°C, crit = +92.0°C)
Core 0: +51.0°C (high = +86.0°C, crit = +92.0°C)
Core 1: +47.0°C (high = +86.0°C, crit = +92.0°C)
Core 2: +49.0°C (high = +86.0°C, crit = +92.0°C)
Core 3: +49.0°C (high = +86.0°C, crit = +92.0°C)
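If you're wondering how software would act on output like this, here is a toy Python sketch (the function name and heuristic are mine, not from any real sample) that parses `sensors`-style text and checks for plausible per-core CPU temperatures, whose absence is a classic VM tell:

```python
import re

def has_cpu_thermal_data(sensors_output: str) -> bool:
    """Heuristic check: real hardware almost always exposes per-core
    CPU temperatures (e.g. via coretemp); many VMs expose none."""
    # Match lines like "Core 0: +51.0°C" or "Package id 0: +51.0°C"
    pattern = re.compile(
        r"^(Core \d+|Package id \d+):\s+\+?-?\d+(\.\d+)?", re.MULTILINE
    )
    return bool(pattern.search(sensors_output))

sample = """\
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +51.0°C (high = +86.0°C, crit = +92.0°C)
Core 0: +51.0°C (high = +86.0°C, crit = +92.0°C)
"""
print(has_cpu_thermal_data(sample))              # True
print(has_cpu_thermal_data("Adapter: Virtual"))  # False
```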
I normally think PC cases are gaudy and boring even when trying to evoke some style. The stuff on Streacom's website, however, makes me want to build something with it.
If you've got a little Node-RED box reading serial data from your bar code reader, doing lookups in your SAP database, and then sending Modbus commands to your PLC to redirect a box down a different conveyor line, it's probably an industrial PC.
There are far better ways to do this, but they require software engineering, not €3 and 15 minutes.
How does the computer know that? You mean the parts that can measure temperature will measure where it gets warmer, or where it doesn't get warmer although it should?
How does the system know it's not a local heat pipe transferring the heat away?
This way, malware authors would have to choose between making things easier for researchers or targeting far fewer people.
Either way, everyone except the malware creators wins.
It's a pretty neat system; runs Doom, so we know it's production ready; and the source is meticulously organized.
The docs try to be overly general, IMHO, clouding the core ideas. If you're interested, I recommend just spinning up a VM and mucking about, along with the user guide.
Or perhaps the other way around?
That is, making VMs totally unaware they've been virtualised, as I believe IBM's LPARs work…
The solution really does seem like implementing those same hooks in non-VM environments, but gating their actual usage behind permissions. In a VM, the permissions could genuinely be granted or denied. On bare metal they would always be denied. But malware would never be able to tell why it was denied permission.
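A toy sketch of that idea (all names here are hypothetical): the probe API exists in both environments, and a caller cannot tell which reason produced the denial:

```python
class PermissionDenied(Exception):
    pass

class SensorApi:
    """Toy model: the hook exists everywhere, but policy (decided
    outside the caller's view) determines whether it works."""
    def __init__(self, granted: bool):
        self._granted = granted

    def read_fan_rpm(self) -> int:
        if not self._granted:
            # Identical failure whether we're in a VM that denied the
            # permission or on bare metal where it's always denied.
            raise PermissionDenied("access denied")
        return 1200  # made-up reading

bare_metal = SensorApi(granted=False)  # always denied
vm_guest = SensorApi(granted=False)    # hypervisor chose to deny
for env in (bare_metal, vm_guest):
    try:
        env.read_fan_rpm()
    except PermissionDenied as e:
        print(e)  # same error text either way
```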
This is a huge, huge, huge amount of work. Even the most obvious things -- like "can you run a VM?" -- can require huge support, in that case even from the hardware, when you want to do them within a VM.
But if these assumptions are true then I'd presume malware authors would do timing checks rather than the trivially "emulable" SMBIOS.
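As a toy illustration of the timing-check idea (a stand-in only: real checks typically wrap rdtsc around trapped instructions like cpuid, which Python can't express directly):

```python
import statistics
import time

def timing_jitter_ns(samples: int = 10_000) -> float:
    """Measure the jitter of back-to-back clock reads. Instructions
    that trap to a hypervisor tend to inflate both the mean and the
    variance, which is what timing-based VM checks look for."""
    deltas = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        t1 = time.perf_counter_ns()
        deltas.append(t1 - t0)
    return statistics.pstdev(deltas)

print(f"clock-read jitter: {timing_jitter_ns():.1f} ns")
```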
This seems to be especially true for cheap Chinese boxes. If I had a dollar for every time I saw "to be filled in by OEM" strings in "live/production" BIOS images ... I'd be retired :).
Triple-points if the vendor includes a sticker telling you to complete Windows OOBE without connecting it to the Internet to avoid this.
# Manufacturer: Micro-Star International Co., Ltd.
# Product Name: PRO Z790-A WIFI (MS-7E07)
$ sudo cat /sys/firmware/dmi/tables/DMI | strings | grep -i filled | wc -l
10
Sigh...

There was a substantially effective virus years ago that made it around the world in 90 minutes, and it turns out a bug in its networking code caused it to spread half as fast as it should have. Meaning it should have been everywhere in 45 minutes. You can still do a lot of damage without hitting every machine in existence.
The legit programs interested in these APIs are almost always binaries signed by well known (and trusted) CAs - making it sensible for the analysis to report sus behavior.
I worked as a junior in this field, and one of my tasks was to implement regex pattern matching to detect usages of similar APIs. Surprisingly effective at catching low hanging fruit distributed en masse.
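A toy sketch of that kind of signature matching (the API watchlist here is my own illustrative set, not any product's rules):

```python
import re

# Hypothetical watchlist: APIs and WMI queries commonly used for
# sandbox/VM probing.
SUSPICIOUS_APIS = re.compile(
    rb"(IsDebuggerPresent|CheckRemoteDebuggerPresent|"
    rb"NtQueryInformationProcess|"
    rb"SELECT \* FROM Win32_(BIOS|ComputerSystem|Fan|TemperatureProbe))"
)

def scan_binary(blob: bytes) -> set[bytes]:
    """Return the watchlisted API names/queries found in a raw binary."""
    return {m.group(1) for m in SUSPICIOUS_APIS.finditer(blob)}

sample = b"\x00\x01IsDebuggerPresent\x00SELECT * FROM Win32_Fan\x00"
print(scan_binary(sample))  # finds both markers in this sample
```

Crude, but as noted above, it catches the low-hanging fruit that gets distributed en masse.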
Same goes for the common vulnerable drivers that malware likes to load so they can get into the kernel. A weird tiny binary making WMI calls may stand out, but a five year old overclocking utility full of vulnerabilities doing the same queries wouldn't.
From the research I've read, this doesn't seem to be about avoiding detection as much as it's about not detonating the real payload on a malware analyst's machine. If the AV flags the binary or the detection trips, the second stage isn't downloaded and the malware that does stuff that makes the news doesn't execute (yet).
AFAIK most (all?) code signing CAs are cracking down on this (or maybe Microsoft is pushing them) by mandating that signing keys be on physical or cloud hosted HSMs. For instance if you try to buy a digicert code signing certificate, all the delivery options are either cloud or physical HSMs.
Just push untested code/releases on production machines across all of your customers. Then watch the world burn, flights get delayed, critical infrastructure gets hammered, _real_ people get impacted.
_Legitimate_ companies have done more damage to American companies than black hat hackers or state actors can ever dream of.
The folks behind the xz-utils (liblzma) backdoor aspire to cause the amount of damage companies like ClownStrike and SolarWinds have caused.
That said, plenty of malware will stop downloading additional modules or even erase itself when it detects things that could indicate it's being analysed, like VirtualBox drivers, VMWare hardware IDs, and in the case of some Russian malware relying on the "as long as we don't hack Russians the government won't care" tactic, a Russian keyboard layout.
It won't stop less sophisticated malware, but running stuff inside of a VM can definitely have viruses kill themselves out of fear of being analysed.
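A toy sketch of the same class of checks from the malware's point of view (the marker list is illustrative, not exhaustive; on Linux the inputs would come from files under /sys/class/dmi/id/):

```python
# Substrings a sample might look for in SMBIOS/DMI identity strings.
VM_MARKERS = ("virtualbox", "innotek", "vmware", "qemu", "kvm",
              "xen", "hyper-v", "bochs")

def looks_like_vm(dmi_strings: list[str]) -> bool:
    """Return True if any DMI string matches a known hypervisor marker."""
    joined = " ".join(dmi_strings).lower()
    return any(marker in joined for marker in VM_MARKERS)

print(looks_like_vm(["VirtualBox", "innotek GmbH"]))          # True
print(looks_like_vm(["PRO Z790-A WIFI (MS-7E07)", "MSI"]))    # False
```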
This is less and less true. SR-IOV and S-IOV are becoming common even in consumer hardware, and OS manufacturers increasingly lean on virtualisation as a means to protect users or provide conveniences.
WSL has helped with virtualisation support quite a bit as a means of getting hardware manufacturers to finally play nice with consumer virtualisation.
And Microsoft now even provides full ephemeral Windows VM "sandboxes". The feature that came with them that surprised me is that they support enabling proper GPU virtualisation as well.
You're now at the mercy of the hardware manufacturer on whether there's isolation between the different "partitions" or ... nothing at all. Your attack surface expands in a way that's difficult to imagine.
> You're now at the mercy of the hardware manufacturer
No!
Read up on SR-IOV before you continue posting more misleading nonsense.
https://en.wikipedia.org/wiki/Single-root_input/output_virtu...
Your one link literally says the same thing I have said (a way to multiplex access to the bus). This is ALL about giving VMs direct access to hardware. It makes no sense to even discuss features like this otherwise. What do you think this is for, if not real hardware access? Giving VM hosts an easier time emulating Intel PRO/1000 ethernet cards?
SR-IOV: https://cdrdv2-public.intel.com/321211/pci-sig-sr-iov-primer...
S-IOV: https://cdrdv2-public.intel.com/671403/intel-scalable-io-vir...
What they are doing is "technically" giving direct bus access; however, that access is restricted: the VM's accesses are all tagged, and if it accesses anything outside the bounds it is permitted (as defined by access controls set on the hardware during configuration), you get a fault instead of the VM successfully touching anything.
This is similar to how VT-d and other CPU virt extensions allow direct access to RAM but with permissioning and access control through the IOMMU.
And then the other major component of SR-IOV and S-IOV is that they virtualise the interface on the PCI-E hardware itself (called virtual functions) and all of the context associated, the registers, the BAR, etc. This is akin to how VT-x and similar instructions virtualise the CPU (and registers, etc). And notably these virtual functions can be restricted via access controls, quotas, etc in hardware.
So your existing VT-x extension virtualises the CPU, your existing VT-d extension virtualises the IOMMU and RAM, your existing VT-c virtualises network interfaces (but not PCI-E in general). Now SR-IOV and S-IOV virtualise the PCI-E bus w/ access control over the lanes. And now SR-IOV and S-IOV virtualise the PCI-E device hardware and their functions/interface on the bus (akin to VT-x and VT-d).
Now notably S-IOV should be seen as a "SR-IOV 2.0" rather than an accompanying feature. It essentially moves the virtual function to physical function translation from the CPU or hardware in the chipset directly into the PCI-E device itself.
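On Linux, the host-side view of this machinery is visible from sysfs. A sketch (the eth0 path is an assumption; this requires root and an SR-IOV-capable device):

```shell
# VFs enumerate as their own PCI functions once created
lspci | grep -i "virtual function"

# How many VFs this physical function supports, then spawn 4 of them
cat /sys/class/net/eth0/device/sriov_totalvfs
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
```

Each resulting VF can then be handed to a guest via VFIO passthrough, which is exactly the "direct but access-controlled" hand-off being discussed here.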
> What they are doing is "technically" giving direct bus access, however the bus access they are giving is restricted such that the VM's accesses are all tagged
This is exactly what I know and what I said in my original post: a way to identify which VM is accessing what. For... giving that VM access to the hardware.
> and if they access anything outside the bounds they are permitted (as defined by access controls on the hardware during configuration), then you get a fault instead of the VM successfully touching anything.
Again, this is exactly what I said: you are now at the mercy of the hardware manufacturer whether there is any partitioning whatsoever. To think otherwise is wishful thinking that I do not know where it comes from.
This is entirely the definition of giving the VM direct access to the hardware. There is no software-controlled emulation whatsoever going on, so you explicitly lose containment and increase your attack surface.
For everything except the simplest of ethernet cards, your hardware is likely implementing this multiplexing in closed source firmware done by hardware engineers. Very likely the worst type of code ever written security-wise.
> This is similar to how VT-d and other CPU virt extensions allow direct access to RAM but with permissioning and access control through the IOMMU.
Not at all. Usually IOMMU is for constraining hardware that already has direct access to the RAM in the first place.
> And then the other major component of SR-IOV and S-IOV is that they virtualise the interface on the PCI-E hardware itself (called virtual functions)
Is this the source of the confusion? That because it is called "virtual" you think it is virtualised somehow? That is why I call it partitioning, because that is much closer to what it is (from a hardware point of view).
> your existing VT-x extension virtualises the CPU, your existing VT-d extension virtualises the IOMMU and RAM, your existing VT-c virtualises network interfaces (but not PCI-E in general
This is meaningless because it mixes and matches everything. What does it mean to "virtualize the RAM"? RAM is already virtualized by the normal MMU; no VT-d needed at all. It is hardware that may also need its RAM accesses virtualized, so that its idea of memory matches that of the VM directly accessing it (instead of going through a software emulation layer), and that is what benefits from an IOMMU (but does not generally require one; see GART and VT-c).
But the entire point of this is again to give the VM direct access to hardware! What is it exactly that you want to refute from this?
Yes, but the whole point is that it moves the isolation of the VM's access from software to hardware. You are giving direct access to a subset of the hardware, but that subset is configured from outside the VM to restrict what the VM can touch.
> Again, this is exactly what I said: you are now at the mercy of the hardware manufacturer whether there is any partitioning whatsoever. To think otherwise is wishful thinking that I do not know where it comes from.
That's not actually true to my knowledge. S-IOV and SR-IOV require hardware support. Sure the manufacturer can do a shit job at implementing it but both S-IOV and SR-IOV require partitioning. But if you are granting your VMs S-IOV or SR-IOV access to hardware, you are at minimum implicitly trusting that the hardware manufacturer implemented the spec correctly.
> There is no software-controlled emulation whatsoever going on, so you explicitly lose containment and increase your attack surface.
This is true but the same is true of VT-x, VT-d, etc (i.e. the commonplace virtualisation extensions). It is no less true with S-IOV or SR-IOV other than by them being newer and less "battletested". If you use virtualisation extensions you are no longer doing pure software virtualisation anyways.
> For everything except the simplest of ethernet cards, your hardware is likely implementing this multiplexing in closed source firmware done by hardware engineers. Very likely the worst type of code ever written security-wise.
The exact same applies to the microcode and internal firmware on modern CPUs and the associated chipset.
> Not at all. Usually IOMMU is for constraining hardware that already had direct access to the RAM in the first place.
Yes. And VT-d extends this for VMs by introducing hardware level IO, interrupt, and DMA remapping so that the host doesn't need to do software level remapping instead.
> Is this the source of the confusion? That because it is called virtual you think this virtualized somehow? It is the reason I call it partition because it is much closer to what it is (from a hw point of view).
I call it virtualisation because it is virtualisation. In SR-IOV it is still virtualisation but yes it is architecturally similar to partitioning with access controls however that is still virtualisation, it just prevents nesting. With S-IOV however it is full on-hardware virtualisation and supports nesting virtual devices.
> What does it mean to "virtualize the RAM"? RAM is already virtualized by the normal MMU, no VT-d needed at all. Hardware is the one who may require to also have its RAM access virtualized so its idea of memory matches that of the VM directly accessing hardware (instead of through a software emulation layer), and that is what benefits from an IOMMU (but does not generally require it, see GART and VT-c).
Yes I was playing loose with the terminology. Yes RAM is already virtualised (to a certain degree) but VT-d extends that completely and allows arbitrary nesting. And yes VT-d is not required for virtualisation but it is important in accelerating virtualisation by moving it from software virt to hardware virt.
> But the entire point of this is again to give the VM direct access to hardware! What is it exactly that you want to refute from this?
I think the disconnect here is that I (and I assume others) are operating under the assumption that giving the VM access to an access controlled and permissioned subset of the hardware through hardware virtualisation extensions/frameworks wouldn't fall under "giving the VM direct access to the hardware" any more than CPU virtualisation extensions do (which are essentially always enabled).
----------
Edit: Oh I should also add in that another commenter was in our comment chain. I just realised they were the one arguing that SR-IOV/S-IOV wouldn't make you at the mercy of the HW manufacturer to implement the isolation and virtualisation functionality correctly. That may help clear up some misunderstanding because I 100% get that you are reliant on the HW manufacturer implementing the feature correctly for it to be secure.
But who is actually gating access to this "subset" (which normally isn't a subset of functionality anyway) ? Answer: the hardware.
Before, it was software that was emulating the hardware and implementing whatever checks you wanted. Now, the VM OS is directly accessing the hardware, banging its registers, and you literally depend on the hardware to enforce any kind of isolation between accesses from different VMs.
> This is true but the same is true of VT-x, VT-d, etc (i.e. the commonplace virtualisation extensions). It is no less true with S-IOV or SR-IOV other than by them being newer and less "battletested". If you use virtualisation extensions you are no longer doing pure software virtualisation anyways.
No, this is not the correct analogy. Even without VT-x, CPUs since the 386 era have been designed to execute untrusted code. Adding VT-x changes the picture a bit, but it is an almost irrelevant change in the global architecture overall, since the CPU is in any case directly executing VM guest code (see early virtualizers, which did plenty well without VT-x).
Here, you are allowing untrusted code direct access to hardware that has never even imagined the idea of being accessed by untrusted software, or even by user-level code to begin with for most of it (very few exceptions, such as GPUs).
The difference in the size of the security boundary is gigantic, even hard to visualize.
The correct analogy would be if you were switching from, say, a JavaScript VM generating native CPU code to directly executing native CPU code downloaded straight from the internet. On an 8086-level CPU with a haphazardly added MMU on top of it. Sure, it works in theory. In practice, it will make everyone shiver (and with reason). That is the proper analogy.
The discussion about SRIOV is a red herring because these technologies are about allowing this direct hardware access. It is not that SRIOV is a firewall between the hardware and the VM (or whatever it is that you envision). They are technologies entirely designed to facilitate this direct hardware access, not prevent or constrain it in any way.
This hasn't been true for decades. Virtualisation has been left almost entirely to the hardware. For the most part, all the software was doing was configuring the hardware and injecting a bit of glue here and there, unless you were fully emulating another architecture.
> Here, you are allowing untrusted code direct access to hardware that has never even imagined the idea of being accessed by untrusted software, or even by user-level code to begin with for most of it (very few exceptions, such as GPUs).
If the device supports SR-IOV or S-IOV then they had to engineer the product to meet the spec. It's not like this is just a switch being enabled on old hardware. Every device on the stack has to support the standard and therefore is designed to at least attempt to respect the security boundaries those specs impose.
> The correct analogy would be if you were switching from, say, a JavaScript VM generating native CPU code to directly executing native CPU code downloaded straight from the internet.
This is exactly what every modern browser does. Chrome's V8 JS engine parses JS and generates V8 bytecode. Then at runtime V8 JIT compiles that bytecode into native machine code and executes that native code on the hardware. That's not interpreting the JS, it's actually compiling the JS into native code running on the CPU (using prediction to make sure the compilation is done before the codepaths are expected to be executed).
> On an 8086-level CPU with a haphazardly added MMU on top of it. Sure, it works in theory. In practice, it will make everyone shiver (and with reason). That is the proper analogy.
This also isn't true. Peer to Peer DMA support has been commonplace in consumer PCI-E devices (mainly NVME, network HBAs, and GPUs) for years now and has been available in datacenter, etc for a decade at least.
> On an 8086-level CPU with a haphazardly added MMU on top of it.
Also minor nit but the 80286 (the 3rd gen of 8086 CPUs, released less than 4 years after the original 8086) had an integrated MMU with proper segmentation support. Additionally, MMUs long predate the 8086; it just didn't initially include an integrated one because it didn't need to for the market segment it was targeting.
> The discussion about SRIOV is a red herring because these technologies are about allowing this direct hardware access. It is not that SRIOV is a firewall between the hardware and the VM (or whatever it is that you envision). They are technologies entirely designed to facilitate this direct hardware access, not prevent or constrain it in any way.
Again this is just not true. They provide a framework for segmentation of hardware and enforcing isolation of those segments. That is absolutely intended for "preventing and constraining" access to hardware outside of what the host configures.
------
If you can provide some citations of how SR-IOV or S-IOV doesn't do what it claims to, I'm happy to continue this conversation.
That's entirely wrong. VMs contain a lot of emulation, and that is still _the primary way guest OSes_ access real hardware. "CPU-assisted virtualization" changes almost _nothing_ in the grand scheme. CPUs were executing the guest code before VT-x and are executing guest code afterwards. Your pure software virtualizer contains an entire x86 PC emulator; your "hardware based virtualizer" contains an entire x86 PC emulator, and if anything a much more complex one than those of non-VT-x virtualizers, because of all the extra virtual hardware they offer guests these days. Did people already forget so much about Popek and Goldberg that they have come to believe some magical properties about "CPU virtualization"?
(before anyone nitpicks, x86 sans VT-x _is_ Popek virtualizable but only for usermode code; non-user mode is a bit more complicated to manage, but still falls short compared to what VMs do in terms of hardware emulation these days).
Even if you are assuming a state-of-the-art virtualizer with hyperdrivers and hyperbuses and whatever... it's still literally the same concept. The VM host is _emulating_ the hardware shown to the guest OS. It just emulates hardware that is much simpler and much more efficient to emulate because it was designed with VMs in mind. And, guess what, you can also apply the same idea to a purely software-based virtualizer to simplify it in the same way, too! (what laymen call paravirtualization).
Obviously if you assume a virtualizer doing passthrough of any kind.... then the VM is directly accessing the hardware... but that is my point! It is now directly accessing hardware that it could not access before.
As a summary: "CPU virtualization" is not even remotely in the same order of magnitude of headache-inducing-paranoia as allowing direct access to hardware is. The CPU running the VM guest's code is kind of an indisputable fact of virtualization at all, hardware-based or software-based. The VM guest's code directly accessing the host hardware ... is simply not.
> If the device supports SR-IOV or S-IOV then they had to engineer the product to meet the spec.
Are you claiming here that A) the spec defines how hardware should internally multiplex itself (not true), and B) that because the spec says hardware must be secure, all hardware is therefore secure (not true, and a very strange argument to make anyway)?
In any case, happy to see you are now accepting my thesis that this is about giving VMs direct access to hardware, and that therefore it is now up to the hardware to really enforce this isolation. Or not.
What else is there left to discuss?
> This is exactly what every modern browser does
You quoted my sentence fully so I know you read it yet you totally miss the point again. I said: JavaScript VM generating native CPU code _vs_ directly executing native CPU code directly from the internet.
Your argument summarizes to: "This is what V8 does, which is to generate native CPU code". I know. That's why I put it as the baseline. You have made no counterargument whatsoever.
> This also isn't true. Peer to Peer DMA support has been commonplace in consumer PCI-E devices (mainly NVME, network HBAs, and GPUs) for years now and has been available in datacenter, etc for a decade at least.
I did admit that there is some hardware that is already used to interfacing with more or less user-level code (like GPUs), but this is the _exception_ rather than the rule. And even if it is true, it still doesn't contradict my argument, which is that this is still about VMs having direct access to hardware that they didn't have before! No matter how you frame it, it increases the attack surface by an order of magnitude. Even for GPUs, your GPU now also requires protection from the guest driver, where before it was the same as the host's.
("peer to peer" DMA commonplace in consumer hardware??? I don't know what you're talking about. DirectStorage developers would like a word with you...)
> Also minor nit but the 80286 (the 3rd gen of 8086 CPUs, released less than 4 years after the original 8086) had an integrated MMU with proper segmentation support.
Which is exactly why I mentioned the 8086: it has no MMU and no protected mode... so I really don't see what your argument is here.
> Again this is just not true. They provide a framework for segmentation of hardware and enforcing isolation of those segments. That is absolutely intended for "preventing and constraining" access to hardware outside of what the host configures.
This is absolutely ridiculous. Do you think that guest VMs can communicate directly with hardware and that SR-IOV is about "preventing and constraining" it?
What virtualization actually is: https://github.com/tpn/pdfs/blob/master/A%20Comparison%20of%....
Violate..? Relax, dude.
You only had to read the very first sentence, but let me paraphrase:
> In virtualization, single root input/output virtualization (SR-IOV) is a specification that allows the isolation of PCI Express resources for manageability and performance reasons.
Counterexample #1: a SRIOV ethernet card that still allows multiple domains (partitions, virtual functions, whatever) to access the same PHY (aka ethernet port). Who is doing the "bridging" here? The PCIe bus? How do you think that even remotely works? Explain to me like a 5 year old, please.
Counterexample #2: a GPU with SRIOV. Each domain can still access a portion of the VRAM from the GPU. How do you think that works, if it is not the GPU itself who is doing the multiplexing? What do you think a PCIe standard even _has anything to do_ with this. How could it even have something to do?
The GPU is not necessarily even exposing its entire VRAM through PCIe at all. At most, it is exposing the registers that allow you to tell how much VRAM to give each partition through a PCIe BAR. And you can tag the one for each partition with a different VF in the same way you could tag them with a different base address or literally ANYTHING.
I do not understand why you (and the sibling guy) seem to think a standard for a _bus_ is even relevant to counter the argument that all of this is for VMs to make direct access to the hardware. You quote a communications standard for this hardware to be accessed by a host with multiple VMs running concurrently. This is, if anything, _even more evidence_ that this is for VMs directly accessing hardware.
Again: I claim that what you're doing here is directly connecting your VMs to the hardware, where before they were not, only through a software emulation layer. You claim that this is not true, and that I couldn't be more wrong, because there is this magical interface that makes the hardware appear as if it actually was several instances. You totally miss the point: if anything, this makes your VMs _easier to directly connect to hardware_, not less.
In fact, the very second sentence:
> SR-IOV is commonly used in conjunction with an SR-IOV enabled hypervisor to provide virtual machines direct hardware access to network resources hence increasing its performance.
And how is SR-IOV hardware going to magically appear as several interfaces, we leave that for the reader as an exercise, because you will not like the response: closed source firmware, likely an order of magnitude less reviewed than even the worst VM hardware emulator you can think of.
Much more efficient, though.
I am sorry, but you are not making the right argument.
I've been gaming through a VM for the last few years now, and hw acceleration is not an issue.
You would passthrough a GPU and then enjoy near native performance.
I use iGPU for my Linux desktop and a dGPU passed through to my gaming vm.
I also passthrough the whole bluetooth device to the VM as I don't use bluetooth on my host anyway. That way I can use gamepads and headset in the vm, too.
> That said [...]
Now you're just riffing.
It happened on mobile because Android (I don't know iOS's permission model well enough) is more on the developers' side than the users' side, or at least more concerned with everything just working (for some values of "just work") than with giving users a chance to make sure that things they don't want to work, don't work. A fine-grained capability system where users were given the option to lie to the software about what capabilities it has wouldn't be perfect either, but it would remove a lot of the user-focused pain points of Android's permission model.
Well, after we send a copy of the program to Microsoft, of course
In comparison, a lutron switch is $70 and the hub is $50.
You could certainly bodge together a similar system for less money, but the controls won't be as nice and it'll be nowhere near as hassle-free long term. HomeAssistant and competitors have really been catching up in the past few years though; I'm excited to see competition in the market. I wish they could all play nice together with reasonable APIs :/
A next step to making the VM look real is having simulated temperature sensors that actually change in response to CPU load.
Or maybe just increments to absurd numbers or negative values. Or locks up when probed. Either way could be fun.
Unironically, that would mimic a bunch of existing hardware out there. I owned a PC motherboard that always reported -65535°C on a nonexistent sensor.
My guess is the sensor was described but not populated, probably reading an infinite resistance on some unused pin...
Not just malware, but some apps are known to do this too, e.g. WeChat.
There needs to be a better virtual machine that tries to emulate everything, including random walks for GPS, IMU noise, barometric noise, temperature fluctuations etc.
If you want to fuck up surveillance capitalism, you send plausible but wrong information to the trackers. There are a zillion ways to do this: let one through now and again and replay it, do a P2P browser extension that proxies you and someone near you through each other, subtly corrupt it, bounce it off a mullvad node. The possibilities are endless.
If you got a fair number of people doing it, you could even have some collective bargaining, like let some of the extreme value conversion stuff through in return for concessions on the more egregious tracking-for-the-sake-of-tracking.
Sure they'll checksum and shit, but that's a cat-and-mouse game they lose: the typical tracker cookie fire isn't worth shit, it's Superman 2 fractions of a basis point, so even modest effort playing smart against it drives the effective CPM negative.
Using it, you can also modify the model name and serial number of your Supermicro motherboard. Which can be useful when your idiot system integrator can't be assed to set them correctly themselves.
1) With that level of expertise, would it be as easy, or easier, to modify the check in the malware itself?
2) How much work would it be for something like KVM to fake absolutely everything about a PC so it was impossible to tell it was a VM?
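On question 2, a lot of sandbox checks are just string matching against those DMI/SMBIOS fields. A toy version of the heuristic (the marker list is illustrative, not exhaustive) — which is exactly what SMBIOS spoofing, e.g. QEMU's `-smbios type=1,manufacturer=...` option, defeats:

```python
# Fingerprint substrings commonly found in hypervisor SMBIOS strings
# (illustrative list; real samples carry far longer ones).
VM_MARKERS = ("qemu", "kvm", "bochs", "vmware", "virtualbox",
              "innotek gmbh", "xen", "virtual machine")

def looks_like_vm(sys_vendor: str, product_name: str) -> bool:
    """Crude sandbox check: scan SMBIOS strings for hypervisor fingerprints."""
    haystack = f"{sys_vendor} {product_name}".lower()
    return any(marker in haystack for marker in VM_MARKERS)
```

On Linux the inputs would come from /sys/class/dmi/id/sys_vendor and product_name; change those strings at the hypervisor level and this whole class of check passes.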
What's wrong with DLL hooking though?
> Because Xen (or rather hvmloader) does not define it.
> So, before defining it myself, I tried to find out if there was any other poor soul who tried to do the same thing before me. And to my disappointment, there was. Right in the xen-devel patch archive.
> Why it was my disappointment, you may ask? Because after reading the response to the patch, I felt the frustration of the author.
Specifically, the patch is annotated "SMBIOS tables like 7,8,9,26,27,28 are ne[c]essary to prevent sandbox detection by malware using WMI-queries."
And the rejection is in two points:
(1) Why is that valuable?
(2) What if there were other tables that also helped with that goal? Your patch doesn't include them.
If there's anything I've painfully learned in my career, it's to not let perfect get in the way of good enough.
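For the curious, those rejected tables are tiny. Here's a sketch that packs a type-27 (Cooling Device) record like the ones in the dmidecode dump, following the SMBIOS 2.8 structure layout (the handles and description are placeholder values mirroring the dump):

```python
import struct

def smbios_type27(handle: int = 0x0037, probe_handle: int = 0x0036,
                  desc: str = "Cooling Dev 1") -> bytes:
    """Pack a 15-byte SMBIOS type-27 Cooling Device structure plus its strings."""
    body = struct.pack(
        "<BBHHBBIHB",
        27,                 # structure type: Cooling Device
        15,                 # formatted-area length
        handle,             # this structure's handle
        probe_handle,       # associated temperature probe handle
        (3 << 5) | 7,       # status=OK (3) in bits 7:5, type=Power Supply Fan (7)
        1,                  # cooling unit group
        0,                  # OEM-specific information
        0x8000,             # nominal speed: unknown / non-rotating
        1,                  # description is string #1 in the string set
    )
    # string set: description, then double NUL terminator
    return body + desc.encode("ascii") + b"\x00" + b"\x00"
```

Emit a handful of these and point the hypervisor's SMBIOS blob at them and the "missing table" tell goes away — which is all the rejected patch was trying to do.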
i did one little experiment on faking a VM's power supply. done it with 'HotReplaceable=Yes' and 'Status=OK', and you suddenly look like a $5k bare-metal server.
cmd used:

  pip install dmigen

  dmigen -o smbios.bin \
    --type0 vendor="American Megatrends",version="F.1" \
    --type1 manufacturer="Dell Inc.",product="PowerEdge T630" \
    --type39 name="PSU1",location="Bay 1",status=3,hotreplaceable=1
I personally found that venting about it did achieve something — disagreeing with folks in the comments prompted me to look up concrete numbers on how many men are likely to experience shame around their penis size.
I also think it helps nudge social norms towards making that kind of language less acceptable (both here on HN and elsewhere), and in a best case may have prompted some folks to reflect on how they speak and write.
Is that speculative benefit worth the conflict it created? I think so — when managed appropriately, conflict is a normal and healthy part of most human relationships. And IMO the wellbeing of ~2 million men is worth stirring the pot a little.
But you might disagree and you're the mod, so for better or worse your opinion is the one that matters here.
The large majority of humans adapt our language to the context and the audience every time we open our mouths or put our hands on the keyboard. I’d like the author to do a little more of that.
Men are routinely shamed for their bodies, especially penis size, and I think it makes their lives worse. So I’d like people to stop doing it.
Let’s assume those studies are off by an order of magnitude and it’s only 1% of men who feel insecure about their penis. In the US, that’s still 1.7 million men.
If I had to choose between vulgar jokes and two million people having a better relationship with their bodies, I know where my priorities lie.
[1]: https://www.issm.info/sexual-health-qa/what-percentage-of-me...
You're going to burn out very quickly if this is the level of attention and engagement you desire in the world of the internet.
And it's clear that the people, as they really are, are all despicable and horrible inside.
thanks for the pep talk, coach, but you're not my coach and i didn't ask for any coaching. i know what i'm dealing with. i've probably been on the internet longer than you've been alive, so i've watched the internet go from a fairly healthy place to just pile after pile of shit everywhere people interact with each other online. i've watched more and more people show up solely so they can be themselves, and more and more places appear solely for people to be unrestrained asses to each other.
There is no humor allowed on this platform; real life is much more colorful and fun.
if you think making fun of people is colorful and fun you are again making my point for me better than i ever could. please continue.
> But that’s smol pp way of thinking
Again... no?
If you read an article where the author says "that's training bra thinking", the author is female.
If you read one where the author says "that's smol pp thinking", the author is male.
apt install laugh
When were we different?
I wish for our entire species to go extinct, not individual people. Why? We are just inherently destructive to each other. We are super flawed in that way, and I don't see us lasting the amount of time it would take for that to evolve out of us. I do see our awful instincts lasting long enough for some future world war three to reduce the population to a small enough number that being assholes to each other again becomes a survival tactic that works, so this likely won't ever evolve out of us naturally.
Also, in order for it to evolve out of us, we would have to select it out and not allow those who are regularly assholes to each other to breed, and that won't work for a number of reasons. I'm not for society selectively neutering people for any reason, anyway. We are competitive to a point that it is well past anything that the word "flaw" could cover. we are self-destructive, and we let tiny disagreements get us to the point of war.
we are just a garbage species. deep down we all know it, i just happen to mention it for some reason.
You have made up some arbitrary rules, and adjudicated humanity to extinction.
You certainly can do better.