Google DeepMind and their new startup Isomorphic Labs are expanding into drug discovery. They developed AlphaFold3 as their model to accelerate drug discovery and create demand from big pharma. They have already signed Novartis and Eli Lilly for $3 billion - Google’s becoming a pharma company! (https://www.isomorphiclabs.com/articles/isomorphic-labs-kick...)
AlphaFold3 is a biomolecular structure prediction model that can do three main things: (1) Predict the structure of proteins; (2) Predict the structure of drug-protein interactions; (3) Predict the structure of nucleic acid-protein complexes.
AlphaFold3 is incredibly important for science because it vastly accelerates the mapping of protein structures. Solving a single structure experimentally can take a PhD student their entire PhD; with AlphaFold3, you get a prediction in minutes, on par with experimental accuracy.
There’s just one problem: when DeepMind published AlphaFold3 in May (https://www.nature.com/articles/s41586-024-07487-w), there was no code. This brought up questions about reproducibility (https://www.nature.com/articles/d41586-024-01463-0) as well as complaints from the scientific community (https://undark.org/2024/06/06/opinion-alphafold-3-open-sourc...).
AlphaFold3 is a fundamental advance in structure modeling technology that the entire biotech industry deserves to benefit from. Its applications are vast, including:
- CRISPR gene editing technologies, where scientists can see exactly how the DNA interacts with the "scissor" Cas protein;
- Cancer research - predicting how a potential drug binds to the cancer target. One of the highlights in DeepMind’s paper is the prediction of a clinical KRAS inhibitor in complex with its target.
- Antibody / nanobody-to-target predictions. AlphaFold3 improves accuracy on this class of molecules twofold compared to the next best tool.
Unfortunately, no company can use it, since it is under a non-commercial license!
Today we are releasing the full model trained on single-chain proteins (capability 1 above), with the other two capabilities to be trained and released soon. We also include the training code. Weights will be released once training and benchmarking are complete. We wanted this to be truly open source, so we used the Apache 2.0 license.
DeepMind published the full structure of the model, along with pseudocode for each component, in their paper. We translated this fully into PyTorch, which required more reverse engineering than we expected!
When building the initial version, we discovered multiple issues in DeepMind’s paper that would interfere with training - we think the deep learning community might find these especially interesting. (Diffusion folks, we would love feedback on this!) These include:
- MSE loss scaling differs from Karras et al. (2022). The weighting provided in the paper does not down-weight the loss at high noise levels (see the loss-weighting sketch after this list).
- Omission of residual layers in the paper - we add these back and see benefits in gradient flow and convergence (a minimal block sketch follows below). Does anyone have an idea why DeepMind may have omitted the residual connections in the DiT blocks?
- The MSA module, in its current form, has dead layers: the last pair-weighted averaging and transition layers cannot contribute to the pair representation, so they receive no gradients. We swap the order to the one used in the ExtraMsaStack in AlphaFold2. An alternative solution would be weight sharing, but whether this is done is ambiguous in the paper.
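To make the loss-weighting point concrete, here is a minimal sketch of an EDM-style weighted MSE using the Karras et al. (2022) weighting we compare against - not our actual training code, and the function names and sigma_data value are illustrative:

```python
import torch

def karras_loss_weight(sigma: torch.Tensor, sigma_data: float = 16.0) -> torch.Tensor:
    # Karras et al. (2022) EDM weighting:
    #   lambda(sigma) = (sigma^2 + sigma_data^2) / (sigma * sigma_data)^2
    #                 = 1 / sigma_data^2 + 1 / sigma^2
    # This decreases monotonically with sigma, so high-noise samples
    # (whose raw MSE is large) are down-weighted instead of dominating the batch.
    return (sigma ** 2 + sigma_data ** 2) / (sigma * sigma_data) ** 2

def weighted_mse(denoised: torch.Tensor, target: torch.Tensor, sigma: torch.Tensor,
                 sigma_data: float = 16.0) -> torch.Tensor:
    # denoised, target: [batch, n_atoms, 3] coordinates; sigma: [batch] noise levels
    raw_mse = ((denoised - target) ** 2).mean(dim=(-1, -2))
    return (karras_loss_weight(sigma, sigma_data) * raw_mse).mean()
```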
More about those issues here: https://github.com/Ligo-Biosciences/AlphaFold3
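And on the residual-connection point: here is a minimal sketch of what we mean by a pre-norm transformer block with the residuals restored. This is illustrative only - the module names, dimensions, and attention implementation are placeholders, not the actual AlphaFold3 diffusion transformer:

```python
import torch
from torch import nn

class DiTBlockSketch(nn.Module):
    """Illustrative pre-norm transformer block (placeholder names/sizes).
    The two `x = x + ...` lines are the residual connections we add back
    around the attention and MLP sub-layers."""

    def __init__(self, dim: int = 256, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around attention
        x = x + self.mlp(self.norm2(x))                     # residual around the MLP
        return x
```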
How this came about: we are building Ligo (YC S24), where we are using ideas from AlphaFold3 for enzyme design. We thought open sourcing it was a nice side quest to benefit the community.
For those on Twitter, there was a good thread a few days ago that has more information: https://twitter.com/ArdaGoreci/status/1830744265007480934.
A few shoutouts: a huge thanks to OpenFold for pioneering the previous open source implementation of AlphaFold. We did a lot of our early prototyping with proteinFlow, developed by Lisa at AdaptyvBio, and we look forward to partnering with them to bring you the next versions! We are also partnering with Basecamp Research to supply this model with the best sequence data known to science. And thanks to Matthew Clark (https://batisio.co.uk) for his amazing animations!
We’re around to answer questions and look forward to hearing from you!
DeepMind and AlphaFold are clearly moving in a closed-source direction, since they created Isomorphic Labs as a division of Alphabet essentially focused on doing this stuff closed source. In theory it seems nice for academic tools to have an open source version, although I'm not familiar enough with this field to point to a specific benefit of it.
So what's your plan for the company itself, do you intend to continue working on this open source project as part of your business model, or was it more of a one-off? Your website seems very nonspecific about what exactly you intend to be selling.
Also, for work of the highest art (of which AF3 is an example), publication in Nature really is the fundamental unit of scientific currency, because it ensures all their competitors will get hyped up and work extra hard to disprove it.
> Also, for work of the highest art (of which AF3 is an example), publication in Nature really is the fundamental unit of scientific currency, because it ensures all their competitors will get hyped up and work extra hard to disprove it.
IDK about disproving it; again, nobody is distrusting the work. But let's also not pretend that a prestige journal is necessary to promote AF3. They could publish in the Columbia Undergraduate Science Journal and get the same amount of press. And to be clear, the controversy has largely centered on Nature for allowing AF3 to get away with more than they would most other projects, and on the wasted time and effort it's taking to reimplement the work so people can add to it. FWIW an author did state that they're attempting to release the code, but that's not a binding vow.
Finally, AF3 strictly speaking didn't win CASP (it almost certainly would have), but again this isn't necessarily the point when people talk about validation. The diffusion process does seem to produce notable edge cases (most obviously with IDPs and IDRs, but also non-existent self-interactions), so it's not a straight improvement in that respect.
I never had anything more than a dim intuition of the serious chemistry going on before the bytes got to me.
Haskell (and Nix) people are fond of talking about “constraints as power”.
https://github.com/Ligo-Biosciences/AlphaFold3/blob/ebdf3b12...
There are a number of companies doing innovative things around quantifying proteins and their concentrations in various samples.
I had the privilege to rub elbows with folks working on such cool stuff.
Folding@Home https://en.wikipedia.org/wiki/Folding@home :
> making it the world's first exaflop computing system
[1]: https://foldingathome.org/dig-deeper/#:~:text=employing%20Ro...
Google has access to training compute on a scale perhaps nobody else has.
Keep up the good work!
How much compute does YC give you access to, btw? Is that just things like Azure credits, or does YC have actual hardware?