For example, my GitHub user [2] has the node ID "U_kgDOAAhEkg". User IDs start with "U_", and the remaining data decodes to [0, 541842], which matches the numeric ID the REST API accepts for my user [3].
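A minimal sketch of that decode in Python, assuming the msgpack package (the payload layout is an internal detail and could change at any time):

```python
import base64

import msgpack  # third-party: pip install msgpack

node_id = "U_kgDOAAhEkg"
prefix, payload = node_id.split("_", 1)

# New-style IDs use unpadded base64url; restore the padding before decoding.
padded = payload + "=" * (-len(payload) % 4)
print(prefix, msgpack.unpackb(base64.urlsafe_b64decode(padded)))
# U [0, 541842]
```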
You shouldn't rely on any of this implementation, of course; instead, directly query the "databaseId" field from the GraphQL API wherever you need interoperability. In the other direction, the REST API returns a "node_id" field for use with the GraphQL API.
For folks who find this interesting, you might also like [4], which details GitHub's ETag implementation for the REST API.
[1] https://docs.github.com/en/graphql/guides/migrating-graphql-...
[2] https://api.github.com/user/541842
[3] https://gchq.github.io/CyberChef/#recipe=Find_/_Replace(%7B'...
[4] https://github.com/bored-engineer/github-conditional-http-tr...
I see that GitHub exposes a `databaseId` field on many of their types (like PullRequest) - is that what you're looking for? [1]
Most GraphQL APIs that serve objects implementing the Node interface just base64-encode the type name and the database ID, but I definitely wouldn't rely on that always being the case. You can read more about global object identification in GraphQL in [2].
[1] https://docs.github.com/en/graphql/reference/objects#pullreq... [2] https://graphql.org/learn/global-object-identification/
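To illustrate the classic Relay-style scheme mentioned above (a sketch, not GitHub's actual format):

```python
import base64

def to_global_id(type_name: str, db_id: int) -> str:
    # Classic Relay-style global ID: base64("TypeName:databaseId").
    return base64.b64encode(f"{type_name}:{db_id}".encode()).decode()

def from_global_id(global_id: str) -> tuple[str, int]:
    type_name, _, db_id = base64.b64decode(global_id).decode().partition(":")
    return type_name, int(db_id)

print(to_global_id("PullRequest", 123))        # UHVsbFJlcXVlc3Q6MTIz
print(from_global_id("UHVsbFJlcXVlc3Q6MTIz"))  # ('PullRequest', 123)
```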
It's a classic length prefix: "Repository" has 10 characters, "Tree" has 4.
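You can see it by decoding a legacy-format ID, e.g. the octocat/Hello-World repository from GitHub's docs:

```python
import base64

print(base64.b64decode("MDEwOlJlcG9zaXRvcnkxMjk2MjY5").decode())
# 010:Repository1296269
# The digits before the type name encode its length:
# "Repository" is 10 characters, "Tree" is 4.
```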
GitHub has quietly changed node ID internals before. If they add a field to the MessagePack array, switch encodings, encrypt payloads, or introduce UUID-backed IDs, every system relying on this will break instantly.
Not everything has to be forced through some normalizing layer. You can maintain coarse rows at the grain of each issue/PR and keep everything else in the blob; JSON parsing is super fast. Unless you're making cross-cutting queries along comment dimensions, I don't think this would ever show up in a profiler.
They are not immutable, because repositories can change URLs (renamed or moved to a different org).
What I do is store 2 tuples:
Repository: (Id, Org, Repo)
Issue/PR: (Repository.Id, #)
Transferring or renaming a repository is an update to 1 row in this schema.
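A minimal sketch of that layout, using SQLite and hypothetical table/column names:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Repository identity lives in exactly one row.
    CREATE TABLE repository (
        id   INTEGER PRIMARY KEY,  -- stable GitHub repository ID
        org  TEXT NOT NULL,
        repo TEXT NOT NULL
    );
    -- Issues/PRs reference the stable ID, never the org/repo names.
    CREATE TABLE issue (
        repository_id INTEGER NOT NULL REFERENCES repository(id),
        number        INTEGER NOT NULL,  -- the "#" in the tuple
        PRIMARY KEY (repository_id, number)
    );
""")

# A rename or transfer is a single-row update:
db.execute("UPDATE repository SET org = ?, repo = ? WHERE id = ?",
           ("new-org", "new-name", 1296269))
```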
This is also why I wrote our current user onboarding / repo management code from scratch: the Terraform provider sucks, and without any management you'll get a wave of "x got offboarded but they were the only admin on this repo" requests. Every repo is owned by a team. Access is only ever per-team.
This is indeed the working pattern, and it applies not just to GitHub and organizing teams there; it's useful everywhere. Don't think "give access to this user" but rather "give access to this team, which this user is currently part of," and it solves a lot of bothersome issues.
1. You don't want third parties to know how many objects you have
2. You don't want folks to be able to iterate each object by incrementing the id
But if you have composite IDs like this, that doesn't matter. Every object that belongs to a repository has the repository ID inside it. Incrementing the object ID gives you more objects from the same repo; incrementing the repo ID gives you... a random object, or nothing at all. And if your IDs include a little entropy or a timestamp, you've effectively kneecapped anyone trying to abuse this.
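A sketch of what such a composite ID could look like; the field widths are invented for illustration:

```python
import secrets
import time

def make_object_id(repo_id: int) -> int:
    # Hypothetical layout: 32-bit repo ID | 32-bit unix timestamp | 16 random bits.
    # Incrementing the low bits mostly lands on IDs that were never issued,
    # and neighboring repo IDs leak nothing about each other.
    ts = int(time.time()) & 0xFFFFFFFF
    return (repo_id << 48) | (ts << 16) | secrets.randbits(16)
```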
If you have a lot of public or semi-public data that you don't want people to page through, then I suppose this is true. But it's important to note that separate natural and primary keys are not a replacement for authorization. Random keys may mitigate an IDOR vulnerability, but authorization is the correct solution. A sufficiently long and securely generated random token can be used both as an ID and for authorization, like sharing a Google Doc with "anyone who has the link," but those requirements are important.
UUIDv7 would not have helped GitHub, though, because it doesn't solve the sharding issue.
Exposing a surrogate / generated key that is effectively meaningless seems wise. Maybe internally YouTube has an index number for all their videos, but they expose a reasonably meaningless coded value to consumers.
It looks like a good explanation of the node IDs, though. However, like another comment says, you should not rely on the format of node IDs.
Several companies I’ve worked for have had policies outright blocking the use of nonrandom identifiers in production code.
Here’s how I would think about it: do I want users to depend on the ordering of UUIDv7? Now I’m tied to that implementation and probably have to start worrying about clock skew.
If it’s not a feature you want to support, don’t expose it. Otherwise you’re on the hook for it. If you do explicitly want to provide time ordering as a feature, then UUIDv7 is a great choice and preferable to rolling your own format.
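For reference, UUIDv7 per RFC 9562 is just a millisecond timestamp followed by random bits, which is where the time ordering comes from. A stdlib-only sketch:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    # RFC 9562 UUIDv7: 48-bit unix-ms timestamp, 4 version bits,
    # 2 variant bits, 74 random bits.
    ms = time.time_ns() // 1_000_000
    raw = bytearray(ms.to_bytes(6, "big") + os.urandom(10))
    raw[6] = (raw[6] & 0x0F) | 0x70  # version 7
    raw[8] = (raw[8] & 0x3F) | 0x80  # RFC 4122 variant
    return uuid.UUID(bytes=bytes(raw))

a = uuid7(); time.sleep(0.002); b = uuid7()
print(a < b)  # True: later-millisecond IDs sort after earlier ones (barring clock skew)
```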
If it is possible to figure something out, your customers will eventually figure it out and rely on it.
Hyrum’s Law: all observable behaviors of your system will eventually be depended on by someone.
Even if you tell users not to rely on a specific side effect, once they discover it exists and find it useful, that behavior becomes an implicit part of your system's interface. As a result, engineers often find that "every change breaks someone’s workflow," even when that change is technically a bug fix or a performance improvement.
Reliance on unpromised behavior is something I was also introduced to as Kranz’s Law (or Scrappy's Law*), which asserts that things eventually get used for their inherent properties and effects, without regard for their intended purpose.
"I insisted SIGUSR1 and SIGUSR2 be invented for BSD. People were grabbing system signals to mean what they needed them to mean for IPC, so that (for example) some programs that segfaulted would not coredump because SIGSEGV had been hijacked. This is a general principle — people will want to hijack any tools you build, so you have to design them to either be un-hijackable or to be hijacked cleanly. Those are your only choices." —Ken Arnold in The Art Of Unix Programming
There is actually a documented way to do it: https://docs.github.com/en/graphql/guides/using-global-node-...
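Following that guide, you can resolve a node ID to its database ID without touching the encoding at all. A sketch assuming the requests package and a made-up node ID and token:

```python
import requests  # third-party: pip install requests

query = """
query ($id: ID!) {
  node(id: $id) {
    ... on PullRequest { databaseId number url }
  }
}
"""
resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": query, "variables": {"id": "PR_kwDOExample"}},  # hypothetical ID
    headers={"Authorization": "Bearer YOUR_TOKEN"},
)
print(resp.json())
```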
Same for URLs: you are supposed to get them directly from GitHub, not construct them yourself, as the format can change and then you find yourself playing a refactor cat-and-mouse game.
Best you can do is an hourly/daily cache for the values.
2. The object identifier is at the end. That should be strictly increasing, so all the resources for the same scope are ordered in the DB. This is one of the benefits of UUIDv7.
3. The first element is almost certainly a version. If you do a migration like this, you don't want to rule out doing it again. If you're packing bits, it's nearly impossible to know what's in the data without an identifier, so without the version you might not be able to tell whether an ID is new or old.
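In other words, a decoder would branch on that first element. A hypothetical sketch (names and layouts invented for illustration):

```python
import base64

import msgpack  # third-party: pip install msgpack

def parse_payload(payload_b64: str) -> dict:
    # Hypothetical version dispatch: the first array element selects the layout.
    raw = base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4))
    version, *rest = msgpack.unpackb(raw)
    if version == 0:
        return {"version": 0, "database_id": rest[0]}
    raise ValueError(f"unknown node ID version: {version}")
```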
Another commenter mentioned that you should encrypt this data. Hard pass! Decrypting each id is decidedly slower than b64 decode. Moreover, if you're picking apart IDs, you're relying on an interface that was never made for you. There's nothing sensitive in there: you're just setting yourself up for a possible (probable?) world of pain in the future. GitHub doesn't have to stop you from shooting your foot off.
Moreover, encrypting the contents of the ID makes them sort randomly. This is to be avoided: it means similar/related objects are not stored near each other, and you can't do simple range scans over your data.
You could decrypt the ids on the way in and store both the unencrypted and encrypted versions in the DB, but why? That's a lot of complexity, effort, and resources to stop randos on the Internet from relying on an internal, non-sensitive data format.
As for the old IDs that are still appearing, they are almost certainly:
1. Sharded by their own ID (i.e., users are sharded by user ID, not repo ID), so you don't need additional information. Use something like rendezvous hashing (sketched after this list) to choose the shard.
2. Sharded before the new ID format was developed, and just not worth the trouble to change.
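A minimal rendezvous (highest-random-weight) hashing sketch:

```python
import hashlib

def choose_shard(key: str, shards: list[str]) -> str:
    # Every client that knows the shard list independently picks the same
    # shard for a given key; no lookup table or extra ID bits required.
    def weight(shard: str) -> int:
        digest = hashlib.sha256(f"{shard}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(shards, key=weight)

print(choose_shard("user:541842", ["db1", "db2", "db3"]))
```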
What you're suggesting is perhaps true in the sense that the throughput is higher, but AES decryption carries a fairly high fixed overhead. And if you're in a language like Ruby (as GitHub is) or Python/Node, you're probably calling out to OpenSSL.
I did try to do my due diligence and find data to support or refute your claim, but I wasn't able to find anything that addresses it directly. That said, I couldn't find any source supporting the idea that AES decryption is faster than base64 decoding in any context (for small plaintext values or in general). With SIMD, base64 often decodes at around 0.2 CPU cycles per byte, while AES manages only 2.5-10.7 cycles per byte. The numbers for AES do improve as the plaintext size grows, though.
Do you happen to have data to support your claim?
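For anyone who wants numbers, a quick microbenchmark sketch (assuming the cryptography package, with AES-GCM standing in for whichever mode you'd actually pick):

```python
import base64
import os
import timeit

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

payload = os.urandom(16)  # a small, ID-sized payload
encoded = base64.b64encode(payload)

aes = AESGCM(AESGCM.generate_key(bit_length=128))
nonce = os.urandom(12)
ciphertext = aes.encrypt(nonce, payload, None)

n = 100_000
print("base64:", timeit.timeit(lambda: base64.b64decode(encoded), number=n))
print("aes:   ", timeit.timeit(lambda: aes.decrypt(nonce, ciphertext, None), number=n))
```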
Great, so now GitHub can't change the structure of their IDs without breaking this person's code. The lesson is that if you're designing an API and want an ID to be opaque you have to literally encrypt it. I find it really demoralizing as an API designer that I have to treat my API's consumers as adversaries who will knowingly and intentionally ignore guidance in the documentation like this.
And that is entirely the fault of the person who treated a value documented as opaque as if it had some specific structure.
> The lesson is that if you're designing an API and want an ID to be opaque you have to literally encrypt it.
The lesson is that you should stop caring about breaking people’s code who go against the documentation this way. When it breaks you shrug. Their code was always buggy and it just happened to be working for them until then. You are not their dad. You are not responsible for their misfortune.
> I find it really demoralizing as an API designer that I have to treat my API's consumers as adversaries who will knowingly and intentionally ignore guidance in the documentation like this.
You don’t have to.
Even in OSS land, you risk alienating the community you’ve built if they’re meaningfully impacted. You only do this if the impact is minimal or you don’t care about alienating people using your software.
What was the saying? When your scale is big enough, even your bugs have users.
VS Code once broke a very popular extension that used a private API. Microsoft (rightly) didn't bother to ask whether the private API had users.
Sure, but good luck running a business with that mindset.
Other than that, I agree with what others are saying. If people rely on some undocumented aspect of your IDs, it's on them if that breaks.
https://docs.github.com/en/graphql/reference/objects#pullreq...
OP’s requirements changed, and they hadn’t stored the database IDs during their crawl.
OP can put the decoded IDs into a new column and ignore the structure from now on. The problem was presumably mass-querying the GitHub API to get the numbers needed for functional URLs.
You don't need encryption; a `global_id` database column with a randomly generated ID will do.
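E.g., a stdlib one-liner at row-creation time:

```python
import secrets

# Opaque, URL-safe external ID stored alongside the internal numeric key.
global_id = secrets.token_urlsafe(16)  # 128 bits of randomness
```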
In this particular instance, Speck would be ideal since it supports a 96-bit block size https://en.wikipedia.org/wiki/Speck_(cipher)
Note that this is not a one-time pad because we are using the same key material many times.
But this is somewhat pedantic on my part, it's a distinction without a difference in this specific case where we don't actually need secrecy. (In most other cases there would be an important difference.)
No, you would call me a moron and tell me to go pound sand.
Weird systems were never supported to begin with.
I doubt it. That's the beauty of GraphQL — each object can store its ID however it wants, and the GraphQL layer encodes it in base64. Then when someone sends a request with a base64-encoded ID, there _might_ be an if-statement (or maybe it just does a lookup on the ID). If anything, the if-statement happens _after_ decoding the ID, not before encoding it.
There was never any if-statement that checked the time — before the migration, IDs were created only in the old format. After the migration, they were created in the new format.
This is one of many reasons why GraphQL sucks. Developers will do anything to avoid reading docs. In the REST API, the database ID and URL fields are easily discoverable just by looking at a response. But in GraphQL there is no way to see every available field in a response: you have to get the list of fields from the docs (or an introspection query) and request them explicitly.
"I was looking at either backfilling millions of records or migrating our entire database, and neither sounded fun."