I work in CPU security and it's the same with microarchitecture. You wanna know if a machine is vulnerable to a certain issue?

- The technical experts (including Intel engineers) will say something like "it affects Blizzard Creek and Windy Bluff models"

- Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this; you can only find it out by actually booting one up.)

- The spec sheet for the hardware calls it a "Xeon Osmiridium X36667-IA"

There is no way to correlate between any of these forms of naming. They also have different names for the same shit depending on whether it's a consumer or server chip.
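The "CPUID leaf asserts bit N" style of check ultimately boils down to a single bit test on a value you can only obtain by running CPUID on the actual machine. A minimal sketch (the leaf 0x3aa / bit 63 example above is hypothetical, so the values here are too):

```python
def cpu_affected(leaf_value: int, bit: int = 63) -> bool:
    """Return True if the given bit is set in a 64-bit CPUID leaf value.

    leaf_value has to come from executing CPUID on the machine itself;
    as noted above, there is no database to look it up in.
    """
    return (leaf_value >> bit) & 1 == 1

# Hypothetical readings of leaf 0x3aa from two machines:
assert cpu_affected(0x8000_0000_0000_0000)      # bit 63 set: affected
assert not cpu_affected(0x7FFF_FFFF_FFFF_FFFF)  # bit 63 clear: not affected
```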

Meanwhile, AMD's part numbers contain a digit that increments with each year but is off-by-one with regard to the "Zen" brand version.

Usually I just ask the LLM and accept that it's wrong 20% of the time.

> - Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this; you can only find it out by actually booting one up.)

I'm doing some OS work at the moment and running into this. I'm really surprised there's no caniuse.com for CPU features. I'm planning on requiring support for all the features that have been in every CPU that shipped in the last 10+ years, but it's basically impossible to figure that out, especially across Intel and AMD. Can I assume APIC? IOMMU stuff? Is ACPI 2 actually available on all CPUs, or do I need to support the old version as well? It's very annoying.
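For one machine at a time, the Linux kernel's decoded view of CPUID is the `flags` line in `/proc/cpuinfo`, which is often the quickest way to answer a "can I assume X" question. A sketch (the sample flags string is illustrative, not from any particular CPU):

```python
# Illustrative flags line; on a real Linux machine it would come from:
#   flags_line = next(l for l in open("/proc/cpuinfo") if l.startswith("flags"))
SAMPLE_FLAGS = "fpu vme de pse tsc msr pae mce apic sep pat clflush"

def has_feature(flags_line: str, feature: str) -> bool:
    """Check whether a feature name appears in a /proc/cpuinfo flags line."""
    return feature in flags_line.split()

assert has_feature(SAMPLE_FLAGS, "apic")
assert not has_feature(SAMPLE_FLAGS, "la57")
```

This only tells you about the machine in front of you, though, which is exactly the problem: there's no aggregate view across every CPU shipped in the last decade.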

Even more fun is that some of those (IOMMU and ACPI version) depend on motherboard/firmware support. Inevitably there is some bargain-bin board for each processor generation that doesn’t support anything that isn’t literally required for the CPU/chipset to POST. For userspace CPU features the new x86_64-v3/v4 profiles that Clang/LLVM support are good Schelling points, but they don’t cover e.g. page table features.

Windows has specific platform requirements they spell out for each version - those are generally your best bet on x86. ARM devs have it way worse so I guess we shouldn’t complain.

baq · 5 hours ago
I'm pretty sure the number of people at Intel who can tell you offhand the answer to your questions about only Intel processors is approximately zero, give or take a couple. Digging would be required.

If you were willing to accept only the relatively high power variants it’d be easier.

I'd be happy to support the low-power variants as well, but without spending a bunch of money I have no idea what features they have and what they're missing. It's very annoying.

For anyone not familiar with caniuse, it's indispensable for modern web development. Say you want to put images on a web page. You've heard of WebP. Can you use it?

https://caniuse.com/webp

At a glance you see the answer: 95% of global web users use a browser with WebP support. It's available in all the major browsers, and has been for several years. You can query basically any browser feature like this to see its support status.

jdiff · 41 minutes ago
That initial percentage is a little misleading. It includes everything that caniuse isn't sure about. Really it should be something like 97.5±2.5 but the issue's been stalled for years.

Even the absolute most basic features that have been well supported for 30 years, like the HTML "div" element, cap out at 96%. Change the drop-down from "all users" to "all tracked" and you'll get a more representative answer.

I hear you.

Coincidentally, if anyone knows how to figure out which Intel CPUs actually support 5-level paging / the CPUID flag known as la57, please tell me.
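For what it's worth, the flag itself is reported in CPUID leaf 7 (subleaf 0), ECX bit 16, per Intel's SDM; the hard part, as noted, is knowing which shipping SKUs actually set it. A sketch of decoding that bit from a raw ECX value (the sample values here are made up):

```python
LA57_BIT = 16  # CPUID.(EAX=07H, ECX=0):ECX[16] = 5-level paging (la57)

def supports_la57(leaf7_ecx: int) -> bool:
    """True if the la57 feature bit is set in a CPUID leaf-7 ECX value."""
    return bool((leaf7_ecx >> LA57_BIT) & 1)

assert supports_la57(1 << 16)     # made-up ECX with only the la57 bit set
assert not supports_la57(0xFFFF)  # made-up ECX with only bits 0-15 set
```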

> AMD's part numbers contain a digit that increments with each year

Aha, but which digit? Sure, that's easy for server, HEDT and desktop (it's the first one), but if you look at their line of laptop chips then it all breaks down.

zrm · 2 hours ago
These have been my go-to for a while now:

https://en.wikipedia.org/wiki/List_of_Intel_Core_processors

https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors

It doesn't have the CPUID, but it's a pretty good mapping of model numbers to code names, and on top of that it has the rest of the specs.

7bees · 5 hours ago
You can correlate microarchitecture to product SKUs using the Intel site that the article links. AMD has a similar site with similar functionality (except that AFAIK it won't let you easily get a list of products with a given uarch). These both have their faults, but I'd certainly pick them over an LLM.

But you're correct that for anything buried in the guts of CPUID, your life is pain. And Intel's product branding has been a disaster for years.

> You can correlate microarchitecture to product SKUs using the Intel site that the article links.

Intel removed most things older than Sandy Bridge in late 2024 (a few Xeons remain, but AFAIK anything consumer was wiped with no warning). It's virtually guaranteed that Intel will remove more stuff in the future.

I also found the same thing a decade ago: apparently lots of features (e.g. specific instructions, the iGPU) are broadly advertised as belonging to a specific arch, but Pentium/Celeron (or, for the premium stuff, non-Xeon) models often lack them entirely, and the only way to detect that is lscpu / feature bits / digging in UEFI settings.
> Meanwhile, AMD's part numbers contain a digit that increments with each year but is off-by-one with regard to the "Zen" brand version.

Under the Ryzen 7000 series (https://en.wikipedia.org/wiki/Ryzen#Mobile_6) you could get Zen 2, Zen 3, Zen 3+, or Zen 4.

7bit · 4 hours ago
I have three Ubuntu servers and the naming pisses me off so much. Why can't they just stick with their YY.MM naming scheme everywhere? Instead, they mostly use code names, and I never know what codename I'm currently on or what the latest codename is. When I have to upgrade or find a specific Python PPA for whatever OS I'm running, I need 30 minutes of research to correlate all these dumb codenames to the actual version numbers.

Same with Intel.

STOP USING CODENAMES. USE NUMBERS!

As an Apple user, the macOS code names stopped being cute once they ran out of felines, and now I can't remember which of Sonoma or Sequoia was first.

Android has done this right: when they used codenames they did them in alphabetical order, and at version 10 they just stopped being clever and went to numbers.

Ubuntu has alphabetical order too, but that's only useful if you want to know whether "noble" is newer than "jammy", and useless if you know you have 24.04 but have no idea what its codename is.

Android also sucks for developers because there are the public-facing version numbers and then the API levels, which are different and don't always scale linearly (sometimes there's something like "Android 8.1" or "Android 12L" with a newer API). As a developer you always deal with the API numbers (you specify a minimum API level in your code, not a minimum "OS version"), and then have to map that back to the version numbers users and managers know when you're raising the minimum requirements...

Protip, if you have access to the computer: `lsb_release -a` should list both release and codename. This command is not specific to Ubuntu.

Finding the latest release and codename is indeed a research task. I use Wikipedia[1] for that, but I feel like this should be more readily available from the system itself. Perhaps it is, and I just don't know how?

[1] https://en.wikipedia.org/wiki/Ubuntu#Releases
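For the running system, at least, the mapping is also in `/etc/os-release` (present on Ubuntu and most other distros): `VERSION_ID` is the number and `VERSION_CODENAME` is the codename. A sketch using a sample file body in the standard KEY=value format:

```python
# Sample body; on a real system: text = open("/etc/os-release").read()
SAMPLE_OS_RELEASE = '''NAME="Ubuntu"
VERSION_ID="24.04"
VERSION_CODENAME=noble
'''

def parse_os_release(text: str) -> dict:
    """Parse the KEY=value lines of an os-release file into a dict."""
    fields = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    return fields

info = parse_os_release(SAMPLE_OS_RELEASE)
assert info["VERSION_CODENAME"] == "noble"
assert info["VERSION_ID"] == "24.04"
```

That solves the "what am I running" half; "what's the latest" still needs an external source.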

They can't. They used to, until they tried to patent 586...
Trademark.
Yes, I agree, codenames are stupid, they are not funny or clever.

I want a version number that I can compare to other versions, to be able to easily see which one is newer or older, to know what I can or should install.

I don't want to figure out and remember your product's clever nicknames.

> "it affects Blizzard Creek and Windy Bluff models"

"Products formerly Blizzard Creek"

WTF does that even mean?

7bees · 5 hours ago
Intel doesn't like to officially use codenames for products once they have shipped, but those codenames are used widely to delineate different families (even by Intel!), so they compromise with the awkward "products formerly x" wording. They have done so for a long time.

I wouldn't mind them coming up with better codenames anyway. "Some lower-end SKUs branded as Raptor Lake are based on Alder Lake, with Golden Cove P-cores and Alder Lake-equivalent cache and memory configurations." How can anyone memorize this endless churn of Lakes, Coves and Monts? They could've at least named them in alphabetical order.
jorvi · 2 hours ago
AMD does this subterfuge as well: put Zen 2 cores from 2019 (!) in some new chip packaging and sell them as Ryzen 10 / 100. Suddenly these chips seem as fresh as Zen 5.

It's fraud, plain and simple.

The entire point of code names is that you can delay coming up with a marketing name. If the end user sees the code name, then what is even the point? Using the code name in external communication is really, really dumb. They need to decide whether it should be printed on the box or kept strictly internal, and not do anything in between.
baq · 5 hours ago
Product lines are in design and development for years (two years is lightning fast), and code names can be found for things five or more years before they are released, so everyone who works with them knows them better (much better) than the retail names.

It means Intel M14 and M15 base designs. Except they don't use numbers.

Do you just have banks of old CPUs from every generation to test against?

I feel like it's a cultural thing with the designers. Ceragon were exactly the same when I used to do microwave links. Happy to provide demo kit, happy to provide sales support, happy to actually come up and go through their product range.

But if you want any deep and complex technical info out of them, like oh maybe how to configure it to fit UK/EU regulatory domain RF rules? Haha no chance.

We ended up hiring a guy fluent in Hebrew just to talk to their support guys.

Super nice kit, but I guess no-one was prepared to pay for an interface layer between the developers and the outside world.

  - sSpec S0ABC                   = "Blizzard Creek" Xeon type 8 version 5 grade 6 getConfig(HT=off, NX=off, ECC=on, VT-x=off, VT-d=on)=4X Stepping B0  
  - "Blizzard Creek" Xeon type 8 -> V3 of Socket FCBGA12345 -> chipset "Pleiades Mounds"   
  - CPUID leaf 0x3aa              = Model specific feature set checks for "Blizzard Creek" and "Windy Bluff(aka Blizzard Creek V2)"  
  - asserts bit 63                = that buggy VT-d circuit is not off  
  - "Xeon Osmiridium X36667-IA"   = marketing name to confuse specifically you(but also IA-36-667 = (S0ABC|S9DFG|S9QWE|QA45P))  
disclaimer: the above is all made up and I don't work at any of the relevant companies
Do you think Intel names things poorly?

NVidia has these very different GPUs:

Quadro 6000, Quadro RTX 6000, RTX A6000, RTX 6000 Ada, RTX 6000 Workstation Edition, RTX 6000 Max-Q Workstation Edition, RTX 6000 Server Edition

less worse.

It would be like having the Quadro 6000 and 6050 be completely different generations.

The GeForce 700 series came in 3 different microarchitectures. Most were Kepler, but there were several Fermi parts (the previous uarch), and a few mobile chips used Maxwell (the following architecture).

Lest anyone think AMD is any better, the Radeon 200 series came in everything from TeraScale 2 (4 years old at that point) to GCN 3.

The GPU manufacturers have also engaged in incredible amounts of rebadging to pad their ranges; some cores first released in the GeForce 8000 series got rebadged all the way until the 300 series.

There are GPUs from 3 different generations in that list... the Quadro 6000 is an old Fermi from 2010, the Quadro RTX 6000 is Turing from 2018, the RTX 6000 Ada is Ada from 2022...

Oh and there's also RTX PRO 6000 Blackwell which is Blackwell from 2025...

g947o · 2 hours ago
Ah, I see who you are insinuating
That reminds me of when I got a server-grade Xeon E5472 (LGA771) and, after some very minor tinkering (knife, sticker mod), fit it into a cheap consumer-grade LGA775 socket. Same microarchitecture, power delivery class, all that.

LGA2011-0 and LGA2011-1 are very different, from the memory controller to vast pin rearrangement.

So not only do they call two different sockets almost the same thing (per the post), they also call essentially the same sockets by different names to artificially segment the market.

I don't know why, but most tech companies are horrible at naming products.

At least with CPUs, I believe the retail product names are deliberately confusing by design, so that you as a consumer get confused (and misled) into buying older models, whose sales tend to stagnate when newer models are released. (Newer models are, of course, obscenely priced to differentiate them.) A somewhat aware tech consumer would like to buy the latest affordable model they can. But if you can't easily identify the latest model, or the next best one after it, you will often end up purchasing some older model with a similar name.

This is too forgiving of Intel in this case. It has a name; they just don't use it: "Sockets Supported: FCLGA2011". It's not that this is poorly named. It's not even true.
agos · 3 hours ago
you know, there are two hard problems in computer science...
mcny · 3 hours ago
For today's lucky ten thousand, the joke is that

> There are only two hard things in Computer Science: cache invalidation, naming things, off-by-one errors.

tmtvl · 4 minutes ago
I thought there were 3 difficult problems: naming things, cache invalidation, , and off by one errors. concurrency
Why do people say that, when the number one hardest problem is making good abstractions?
Because it’s a “famous” (in our circles) quote. You might prefer this one:

> There’s two hard problems in computer science: We only have one joke and it's not funny.

There is at least one more joke:

"There are 10 kinds of people: those who can read binary and those who can't."

Personally I prefer the cache invalidation one.

Names abstract things.
You explained one thing but introduced another needing explanation.

https://xkcd.com/1053/

I recall standing in CEX one day, perusing the cabinet of random electronics (as you do) and wondering why the Intel CPUs were so cheap compared to the AMD ones. I eventually concluded that the cross-generation compatibility of Zen CPUs meant they had better resale value, whereas if you experienced the more common mobo failure with an Intel chip you were likely looking at replacing both.
7bees · 5 hours ago
It has pretty much always been the case that you need to make sure the motherboard supports the specific chip you want to use, and that you can't rely on just the physical socket as an indicator of compatibility (true for AMD as well). For motherboards sold at retail the manufacturer's site will normally have a list, and they may provide some BIOS updates over time that extend compatibility to newer chips. OEM stuff like this can be more of a crapshoot.

All things considered I actually kind of respect the relatively straightforward naming of this and several of Intel's other sockets. LGA to indicate it's land grid array (CPU has flat "lands" on it, pins are on the motherboard), 2011 because it has 2011 pins. FC because it's flip chip packaging.

> All things considered I actually kind of respect the relatively straightforward naming of this and several of Intel's other sockets.

That's an industry-wide standard across all IC manufacturing - Intel doesn't really get to take credit for it.

It's fascinating how 'Naming Schemes' are supposed to clarify hierarchy but end up creating more chaos. When the signifier (FCLGA2011) detaches from the signified (physical compatibility), the system is officially broken. Feels like a hardware version of a bureaucratic loop.
Yeah, Intel has had some crazies in the naming department ever since they abandoned NetBurst, which had a clear generation number and frequency in the name. I remember having two CPUs with the exact same name, E6300, for the exact same socket, LGA775, but the difference was 1 GHz and the cache size. Like, OK, I can understand that they were close enough, but at least add something to the model number to distinguish them.

This reminds me of my ASRock motherboard, though this was over a decade ago now. The actual board was one piece of hardware, but the manual it shipped with was for a different piece of hardware. Very similar, but not identical (and worse, not identical where I needed them to be, which, naturally, is both the only reason I noticed and how these things get noticed…), but yet both manual and motherboard had the same model number. ASRock themselves appeared utterly unaware that they had two separate models wandering around bearing the same name, even after it was pointed out to them.

The next motherboard (should RAM ever cease being the tulip du jour) will not be an ASRock, for that and other reasons.

For the love of everything though, just increment the model number.

Wow $15 for that CPU sounds great.
Yea, old server hardware can be super cheap! In my opinion, though, the core counts are misleading. Those 24 cores are not comparable to the cores of today, and IPC and power usage are wildly different. YMMV on whether those tradeoffs are worth it.
They have to make "shit creek" to put an end to all those water bodies.
Sounds like a great candidate for a Cybersecurity Knowledge Graph.
How dare they accuse Intel of any kind of naming scheme at all. Everyone who’s anyone knows it’s an act of stochastic terrorism.
LGA2011 was an especially cursed era of processors and motherboards.

In addition to all of the slightly different sockets there was ddr3, ddr3 low voltage, the server/ecc counterparts, and then ddr4 came out but it was so expensive (almost more expensive than 4/5 is now compared to what it should be) that there were goofy boards that had DDR3 & DDR4 slots.

By the way, it is _never_ worth attempting to use or upgrade anything from this era. Throw it in the fucking dumpster (at the e-waste recycling center). The onboard SATA controllers are rife with data-corruption bugs, and the caps from that period have a terrible reputation; anything that has made it this long without popping most likely did so by sitting around powered off. They will also silently drop PCIe lanes, even at standard BCLK, under certain utilization patterns that cause too much of a vdrop.

This is part of why Intel went damn near scorched-earth on the motherboard partners that released boards which broke the contractual agreement and allowed you to increase the multipliers on non-K processors. The lack of validation under those conditions contributed to the aforementioned issues.

>and allowed you to increase the multipliers on non-K processors

Wasn't this the other way around, allowing you to increase multipliers on K processors on the lower-end chipsets? Or were both possible at some point? I remember getting baited into buying an H87 board that could overclock a 4670K, until a BIOS update removed the functionality completely.

In fairness, the author should've known something was up when they thought they could put a chip multiple years newer into an Intel board. That sort of cross-generational compatibility may exist in AMD land, but never with Intel.
I mean sure, that would seem suspicious. But not suspicious enough that I'd likely have caught the problem. It's not that far fetched that Intel may occasionally make new CPUs for older sockets, and when Intel's documentation for the motherboard says "uses socket FCLGA2011" and Intel's documentation for the CPU says "uses socket FCLGA2011", I too would have assumed that they use the same socket.
The author would likely be able to put a v3 generation processor in the motherboard, they just didn't do the necessary research to find that out before pulling the trigger.
It sounds like you've never heard of Socket 370 or Slot 1.
It sounds like you've successfully inserted a Tualeron into a BP6 and it worked out of the box.
johng · 7 hours ago
This isn't that bad if you compare it to the USB naming fiasco... but yeah, definitely a problem in the tech industry for a long time.
Not really comparable.

With Intel's confusing socket naming, you can buy a CPU that doesn't fit the socket.

With USB, the physical connection is very clearly the first part of the name. You cannot get it wrong. Yeah, the names aren't the most logical or consistent, but USB C or A or Micro USB all mean specific things and are clearly visibly different. The worst possible scenario is that the data/power standard supported by the physical connection isn't optimal. But it will always work.

I don't think the port names are what they were referring to.

The actual names for each data transfer level are an absolute mess.

- 1.x has Low Speed and Full Speed
- 2.0 added High Speed
- 3.0 is SuperSpeed (yes, no space this time)
- 3.1 renamed 3.0 to 3.1 Gen 1 and added SuperSpeedPlus
- 3.2 bumped the 3.1 version numbers again and renamed all the SuperSpeeds to SuperSpeed USB xxGbps
- And finally they renamed them again, removing the SuperSpeed and making them just USB xxGbps
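Collating those renames into one lookup, as I understand the lineage (treat this as a summary of the list above, not an authoritative spec reference):

```python
# Successive names for the same signalling rate, oldest first.
USB_RENAMES = {
    5:  ["USB 3.0 SuperSpeed", "USB 3.1 Gen 1", "USB 3.2 Gen 1",
         "SuperSpeed USB 5Gbps", "USB 5Gbps"],
    10: ["USB 3.1 Gen 2 (SuperSpeedPlus)", "USB 3.2 Gen 2",
         "SuperSpeed USB 10Gbps", "USB 10Gbps"],
    20: ["USB 3.2 Gen 2x2", "SuperSpeed USB 20Gbps", "USB 20Gbps"],
}

# Five different names over the years for the exact same 5 Gbps link:
assert len(USB_RENAMES[5]) == 5
assert USB_RENAMES[10][-1] == "USB 10Gbps"
```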

USB-IF are the prime examples of "don't let engineers name things, they can't"

> USB-IF are the prime examples of "don't let engineers name things, they can't"

While not disagreeing, I'd ask for proof that it's not the marketing department's fun. Just to be sure.

Engineers love consistency. Marketing is on the opposite end of that spectrum.

> USB-IF are the prime examples of "don't let engineers name things, they can't"

Engineers don't make names that are nice for the marketing team.

But they absolutely do make consistent ones. An engineer wouldn't name it SuperSpeed; an engineer would encode the speed in the name.

> But it will always work

Not at all. If you want to charge your phone, it might "always work", but if you want to use your monitor's USB hub and pass power through to your MacBook, you're gonna have a hard time.

Look for the USB hub that costs several times more than the rest, and that’s the correct one for your use case.
You're missing the point. Of course "the most expensive one" will cover it, but price alone should not be a differentiator.
> The worst possible scenario is that the data/power standard supported by the physical connection isn't optimal. But it will always work.

I don't know what "always work" means here, but I feel like I've had USB cables that transmit zero data because they're only for power, as well as ones that don't charge the device at all when the device expects more power than they can provide. The only thing I haven't seen is cables that transmit zero data on some devices but nonzero data on others.

dtech · 5 hours ago
I don't think those cables are in spec, and there are a lot of faulty devices and chargers that don't conform to the spec, creating these kinds of problems (e.g. the Nintendo Switch 1). This is especially a problem with USB-C.

You can maybe blame USB consortium for creating a hard spec, but usually it's just people saving $0.0001 on the BOM by omitting a resistor.

> the data/power standard supported by the physical connection isn't optimal

How polite. It can be useless, not just "not optimal". Especially since USB-C can burn you on a combination of power and speed, not only speed.

> But it will always work.

I can't find a USB-C PD adapter for a laptop that uses less than 100W. As a result, I can't charge a 65W laptop from a 65W port because the adapter doesn't even work unless the port is at least 100W.

It does not always work.

I've noticed that GaN PD 100W and 65W adapters actually output less (neither charges my laptop) than a Lenovo 65W charger (the one with a non-detachable USB-C cable). The cable does not matter; I tried many of them, including ones that deliver power fine from other chargers.

It seems totally random, and you cannot rely on the watt ratings anymore.

There's a fair number of misleading or outright wrong specs if you're buying from Amazon or the like. And even if you're buying brand-name, the specs can be misleading: they often refer to the maximum combined output of all the ports, not the maximum output of a single port.

So a 100-watt GaN charger might be able to deliver only 65 watts from its main "laptop" port, while it has two other ports that can do 25 and 10 watts each. Still 100 watts in total, but your laptop will never get its 100 watts.

Not every brand is transparent about this; sometimes it's only visible in product marketing images instead of real specs. Real shady.

SEMW · 2 hours ago
> Cable does not matter, tried with many of them including ones providing power from other chargers.

That might not necessarily be the right conclusion. My understanding is: almost all USB-C power cables you will encounter day to day support a max current of at most 3A (the most that a cable can signal support for without an e-marker). That means that, technically, the highest-power USB-PD profile they support is 60W (3A at 20V), and the charger should detect that and not offer the 65W profile, which requires 3.25A.

Maybe some chargers ignore that and offer it anyway, since 3.25A isn't that much more than 3A. For ones that don't and degrade to offering 60W, if a laptop strictly wants 65W, it won't charge off of them.

So it's worth acquiring a cable that specifically supports 5A to try, which is needed for every profile above 60W (and such a cable should support all profiles up to the 240W one, which is 5A*48V).

(I might be mistaken about some of that, it's just what I cobbled together while trying to figure out what chargers work with my extremely-picky-about-power lenovo x1e)
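The arithmetic behind the 3A/5A distinction above is simple; each profile's ceiling is just voltage times current:

```python
def max_watts(volts: float, amps: float) -> float:
    """Maximum power for a USB-PD fixed voltage/current combination."""
    return volts * amps

# Plain (no e-marker) cable at 3 A: why a strict 65 W sink may refuse it.
assert max_watts(20, 3) == 60
# E-marked 5 A cable:
assert max_watts(20, 5) == 100
# PD 3.1 EPR ceiling (48 V at 5 A):
assert max_watts(48, 5) == 240
```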

I have a Dell laptop that charges through a USB-C port but doesn't actually use the PD specification; it uses a custom one, so my 65W GaN charger falls back to 5V 0.5A and isn't useful at all. I'd bet dollars to donuts that your Lenovo is doing similar shit.
No. It can charge from my monitor PD just fine.

And wow, I'll keep away from Dell, thanks.

For this specific issue I'm surprised; I have used all kinds of USB PD chargers for my laptops, and all but one of them are less than 100W, with no problems at all.

The ones I use most are 20W and 40W, just stuff I ordered from AliExpress (Baseus brand I think).

5 hours ago
How did the title end up wrong on HN (schemes vs scenes) and what's the mechanism to get a mod to fix it?
rob74 · 5 hours ago
I assume someone typed it in (possibly on a mobile device with autocorrect) rather than copy & pasting it (which you would have to do twice, for the URL and for the title).
> and what's the mechanism to get a mod to fix it?

Email them, address is in the guidelines.

tlb · 4 hours ago
Fixed, thanks
Well, if you buy Intel you should expect incompatible sockets at every step, so that's on you.

On the other hand: AMD, with the legendary AM4.