It's not that the technology is magical or special; it's that we're not. That said, finding new ways to study the nature and limits of cognition and consciousness after 2000+ years of unproductive navel-gazing feels very magical and special.
1. Deep Dive into LLMs like ChatGPT (https://youtube.com/watch?v=7xTGNNLPyMI)
Am I missing something?
The whole post reads like hasty, clumsy grey marketing.
As for the specifics of the model I trained, I'd be hard pressed to recall them off the top of my head. I believe I trained a small model locally, and after completing that as a PoC, I downloaded the GPT-2 model weights and trained / fine-tuned those locally. That is what the book directed. All the steps are in my GitHub repo, which (unsurprisingly) looks a lot like the author's repo. His repo actually has more explanation; mine is more or less just code.
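For a rough idea of what that "small model trained locally" PoC stage looks like, here's a minimal from-memory sketch in PyTorch: a tiny decoder-only language model trained on random token IDs with a next-token objective. The vocabulary size, dimensions, and data are all placeholder assumptions for illustration, not the actual values from the book or my repo.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, D_MODEL, SEQ_LEN = 100, 32, 16  # toy sizes, chosen arbitrarily

class TinyGPT(nn.Module):
    """A minimal decoder-only LM: embeddings + causal transformer + LM head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(SEQ_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=4, dim_feedforward=64, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, ids):
        x = self.embed(ids) + self.pos(torch.arange(ids.size(1)))
        # Causal mask so each position only attends to earlier tokens
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        x = self.blocks(x, mask=mask)
        return self.head(x)

model = TinyGPT()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = torch.randint(0, VOCAB, (8, SEQ_LEN + 1))  # stand-in for a real corpus

for step in range(20):
    # Next-token prediction: input is tokens [0..n-1], target is tokens [1..n]
    logits = model(data[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), data[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```

The second stage the book walks through is the same training loop pointed at downloaded GPT-2 weights instead of a randomly initialized model; the loop itself barely changes.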