I'm always a bit confused by the "training" rhetoric. It's the easiest thing to use. Do people need training to use a calculator?
This isn't like using Excel effectively and learning all the features, functions and so on.
Maybe I overestimate my ability as a technically savvy person to leverage AI tools, but I was just as good at using them on day 1 as I am 2 years later.
Yes? Quite a bit of time was spent in math classes over the years learning to use calculators. Especially the more complicated functions of so-called graphing calculators. They're certainly not self-explanatory.
What does it say about your skill or the depth of this tool that you haven't gotten better at using it after 2 years of practice?
The article comes across as an "AI cannot fail, it can only be failed" argument.
Does your organization have records retention or legal hold requirements that employees must be aware of when using some rando AI service?
Will employees be violating NDAs or other compliance requirements (HIPAA, etc.) when they ask questions or submit data to an AI service?
For an LLM that has access to the company's documents, did the team rolling it out verify that all user access control restrictions remain in place when a user queries the LLM? (There's a sketch of what that check means after this list.)
Is the AI service actually equivalent, better, or even just good enough compared to the employees who were laid off or retasked?
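To make the access-control question concrete, here's a minimal sketch in Python of what "verify the restrictions remain in place" means at retrieval time. Every name in it (Doc, INDEX, search, retrieve_for_user) is hypothetical, not any real product's API, and it assumes documents carry group-based ACLs:

    # Toy example, all names hypothetical; a real system would query the
    # document store and identity provider instead of in-memory dicts.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Doc:
        doc_id: str
        text: str
        allowed_groups: frozenset  # groups permitted to read this doc

    INDEX = [
        Doc("handbook", "PTO policy: ...", frozenset({"all-staff"})),
        Doc("ma-memo", "Acquisition target: ...", frozenset({"execs"})),
    ]
    GROUPS = {"alice": frozenset({"all-staff"}),
              "bob": frozenset({"all-staff", "execs"})}

    def search(query):
        # Stand-in for similarity search: naive substring match.
        return [d for d in INDEX if query.lower() in d.text.lower()]

    def retrieve_for_user(user_id, query):
        # Post-filter hits against the user's groups *before* anything
        # reaches the LLM prompt; production systems usually push this
        # predicate into the index query so restricted docs never leave
        # the store at all.
        groups = GROUPS.get(user_id, frozenset())
        return [d for d in search(query) if d.allowed_groups & groups]

    # alice's query matches the memo by content, but she may not read it:
    assert retrieve_for_user("alice", "acquisition") == []
    assert len(retrieve_for_user("bob", "acquisition")) == 1

The failure mode that checklist item guards against is skipping the filter and indexing everything into one shared store, so any employee can get the M&A memo paraphrased back to them.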
This stuff isn't necessarily specific to AI and LLMs, but the hype train is moving so fast that people are having to relearn very hard lessons.
As for internal stuff like emails/design docs... I think using an AI to generate emails exposes a culture problem, where people aren't comfortable writing and sending concise emails (i.e., the same information they typed into the prompt).
Nowadays, with larger context windows and generally improved performance, I can ask a one-sentence question and iterate to refine the output.
My coworker still gets paid the same for turning in garbage as long as someone fixes it later.