Looking through the problem sets in the link, the majority seems to be asking for just that.
If you're wondering whether someone knows how to transpose a matrix or find the eigenvalues, let them do that on the whiteboard. No need to leetcode-ify such problems, because with 99.99% probability they'll provide you with solutions that are subpar compared to industry-standard packages. There's more to these problems than time and space complexity.
EDIT: Also, you'll potentially lose a lot of high-quality candidates if you suddenly start to test people on methods they haven't worked with or seen in quite a while.
If you ask something like "please show us the equations for a support vector machine, and how you can compute an SVM," you could fail even world-class ML scientists if they haven't touched those for 10 years, which is a very real possibility in the current ML scene.
I'd say that almost every ML interview I've had, or been part of, has been a more big-picture whiteboard interview. Specific programming questions have ranked quite low on the list of things to prioritize.
Secondly, FWIW, when I read the term "exercises" in the HN title, I interpreted it to mean exactly a learning tool and not interview prep. The term "Challenges" in the website title is maybe a little less specific.
> because with 99.99% probability they'll provide you with solutions that are subpar compared to industry standard packages.
Somebody did last week with only a modest amount of effort: https://news.ycombinator.com/item?id=40870345
Edit: I see a lot of people complaining about interviews, but I instead consider this a good resource for checking that you understand fundamental principles.
Personally, I would probably enjoy even more explanations and/or links to good resources (e.g., visualizations), as well as more information in the solutions (e.g., via comments or docstrings). Good job anyway!
(Submitted title was "Leetcode but for ML".)
Everyone copies the FAANG interview process because it looks cool - except that FAANG is just a welfare program for recent graduates, who indulge in peer interview hazing because they are not doing anything else. They don't study for Leetcode because they want to DO something - they study because of the money. But in a real company you have to DO things.
What has Google done in the last decade that is REALLY useful? Gmail and Docs could probably be maintained by 50 people, their search has gotten useless, and all they do is kill their own products because maintenance toil is a total drag.
Like the dumb brain teasers that Google "pioneered" in 2000s. How many golf balls can fit in a 747? I don't know, but I can estimate how many can fit up your a...
This Leetcode nonsense will go the way of THAT, in time.
Just no.
Google iterated to the standard DSA questions that are common now.
And I don’t think they’re entirely without merit. However, people think you should be testing to find the ceiling. That’s impossible. Not only do you have the issue of whether the candidate just got lucky by getting a question they happen to know; if you are hiring for a more junior position, it’s likely you don’t need them to know it in the first place.
Our goal should be to test the floor, not the ceiling. Find questions that can be answered by anyone with the skill set you desire. Sometimes that floor is: can you write runnable code.
We’ve just completed a hiring cycle where several candidates couldn’t transform a simple circuit diagram into a Boolean statement. One candidate professed SQL knowledge but couldn’t write a simple query. And I mean “how many buckets do you have?” level of simple.
On paper, these candidates seemed good. Several even had GitHub repositories. But, end of the day, I’m going to ask you to do a task. I’m going to need it by a date. I’m going to need that completed without having to comb over it and possibly rewrite chunks of it.
I don’t need the next Linus Torvalds, but so many candidates come with greatly exaggerated resumes and we have to winnow somehow.
My point exactly.
Using np.testing.assert_allclose in your asserts would solve this I think (https://numpy.org/doc/stable/reference/generated/numpy.testi...).
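To illustrate why (a minimal sketch): exact equality on floating-point results is brittle, while `assert_allclose` compares within a tolerance.

```python
import numpy as np

# 0.1 + 0.2 is not exactly 0.3 in floating point, so exact
# asserts on numeric results can fail spuriously:
assert 0.1 + 0.2 != 0.3

# assert_allclose passes because the values agree within
# the given relative tolerance:
np.testing.assert_allclose(0.1 + 0.2, 0.3, rtol=1e-12)
```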
Happy to contribute / elaborate if you think it'd be useful! :)
But I dislike siloed websites like Leetcode that ask you to bear with their awful web experience; I want to keep my code and notes offline and close at hand in case I need them in a year or 10 years.
An approach with simple test files and exercises is more appealing to me: https://github.com/dabeaz-course/python-mastery
So what is the goal here: to be like Leetcode, or to spread knowledge? If the latter, put the material up as plain Markdown and .py files in a GitHub repo, and we will say thank you.
Example:
input: a = [[1, 2], [2, 4]], b = [1, 2]
output: [5, 10]
reasoning: 1·1 + 2·2 = 5; 1·2 + 2·4 = 10
Which 1 and 2 correspond to the 1 and 2 from a and b?
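Spelling the example out with explicit indices removes the ambiguity (a minimal sketch): each output entry is one row of `a` dotted with `b`.

```python
a = [[1, 2], [2, 4]]
b = [1, 2]

# output[i] = sum over j of a[i][j] * b[j]
output = [sum(a[i][j] * b[j] for j in range(len(b))) for i in range(len(a))]
# Row 0: 1*1 + 2*2 = 5; row 1: 2*1 + 4*2 = 10
print(output)  # [5, 10]
```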
I like the idea and might try some! But as a warning: leetcode is specifically aimed at prepping for interviews, and I've never seen questions like these in an interview (I'm somewhere between an MLE and ML researcher FWIW). The most common kinds of ML-specific things in my experience are:
- ML system design (basically everyone does this)
- ML knowledge questions ("explain ADAM etc.")
- probability + statistics knowledge
- ML problem solving in a notebook (quite rare, but some do it)
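For instance, an "explain Adam" answer usually boils down to a few lines; a minimal single-parameter sketch (plain Python, no framework assumed, function name illustrative):

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moment estimates
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update scaled by the (corrected) second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = adam_step(theta=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```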
Edit: the sign up works for me, but the spacing is an issue
But thanks for giving Leetcode yet another idea for testing AI engineers who don't know how to write a multi-layer perceptron or a softmax activation function from scratch, with yet another repository of already-solved puzzles making it easier for interviewers. I'd say it's pretty useful myself.
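Softmax from scratch really is only a few lines once you remember the max-subtraction trick for numerical stability (a minimal sketch, assuming NumPy):

```python
import numpy as np

def softmax(z):
    # Subtracting the max leaves the result unchanged but keeps
    # np.exp from overflowing on large logits.
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([1.0, 2.0, 3.0]))
print(probs)  # three probabilities that sum to 1
```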
And so it begins: the complaints of "the AI interview is broken" and "we are the only industry that does this" that are frequently preached here.
Leetcode already ruined so many coding interviews by asking people to do bullshit like
"Output data from a stream in order, make the solution performant"
Why would you ruin ML for us too?
Looking at your site, problem #1 is "Multiply a matrix by a vector"... in no universe is that a legitimate ML interview question.
Also, ML is such a huge field (everything from statistical learning through to transformer neural networks) that I fail to see how you can say your site tests core skills. If I'm hiring for an ASR role, it's going to be very different from a CV role.
Why not? This seems like the ML equivalent of FizzBuzz. If you don't know how matrix multiplication works well enough to implement it, I would argue that you don't know what you're doing at all.
And this is in no way dismissive of the work. I can definitely see the value in this -- I am just saying many people don't wish to see this, with which many apparently agree, based on the number of votes.
I believe you that you intended something more thoughtful, but the rest of us don't have access to your intention (or the real meaning of the comment in your head). We can only go by what you actually post, so if you want to make a more thoughtful point, you need to do so explicitly.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...