Hacker News | eternal_intern's comments

Why not list the pros/cons of each method and let the candidates choose their poison: whichever metric they think they'd do best on given their constraints?

Still requires work and a subjective estimation of candidate skill because you have to make a judgment across metrics instead of within one, but I mean, everybody wins in this case, right?


I like the idea of multiple tracks a lot, but I think you need to be hiring at sufficient scale that each track has a fairly high rate of interviews, which makes it hard to support tracks that aren't commonly chosen.

If you don't have that scale, uncommon tracks will be full of interviewers who are out of practice with your question and poorly calibrated to evaluate the candidate.

Also, some companies might not want to allow any of these methods, since they each have their own tradeoffs/blind spots.


I work as a mechatronics engineer and I have an interest in AI. I've personally gone through a lot of the online resources out there: 1. Andrew Ng's deep learning MOOC

2. Fast AI parts 1 & 2

3. The old Google Machine learning course

But what next? From my experience, this doesn't give you enough credibility to get a job interview at even a small firm, let alone Google.

Don't get me wrong, I really appreciate all the fantastic AI learning resources out there. They're incredibly enabling, but I feel like I'm missing the point. Is it to enable people to start companies using AI-based tech and grow the Google compute ecosystem? If it's to grow the number of AI jobs and of people eligible for those jobs, I have doubts about whether that's actually working. Or am I missing something?


There’s a misconception out there about the data science skills gap. The truth is there’s a huge demand for highly skilled data scientists, a big demand for data- and ML-literate developers, and a moderate demand for entry-level data scientists.

These resources from Google and courses like Fast AI are great for getting devs up to speed so they can meaningfully contribute to data science projects - filling that big demand for data- and ML-literate devs, especially internally. They’re not designed to get people jobs (disclosure: getting people jobs in data science is what we do at thisismetis.com).

Want to go deeper? The Open Source Data Science Masters is a good set of resources[0]. The first few sections of Goodfellow’s deep learning book are a great crash course in ML math/stats theory[1]. Introduction to Statistical Learning is a staple in most people’s library[2]. There’s a glut of intro-level data science content out there on the internet, but intermediate to advanced material usually means putting in serious effort or breaking out your checkbook and going back to school (whether traditional or otherwise).

[0] http://datasciencemasters.org/

[1] https://www.deeplearningbook.org/

[2] http://faculty.marshall.usc.edu/gareth-james/ISL/


I am under the impression that the courses are designed for EE/CS engineers to get familiar with the foundations of modern ML, but it's not sufficient education to work as a full time ML engineer.

I returned to grad school for ML two years ago, and even now I still struggle with some ML job interviews when it comes to the statistics and theory questions I've spent those two years studying. One particularly challenging part of ML interviews is that they cover much more than the typical CS interviews I'm used to. I had an ML engineer internship interview with a famous ML company recently, and I was asked about sorting algorithms, hashing algorithms, non-convex optimization techniques, and Gaussian processes, and had to manually compute the Jacobian of a NN for backprop on the spot.
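For concreteness, here's a rough sketch of that last kind of exercise: hand-deriving the Jacobian of a single sigmoid layer y = sigmoid(Wx + b) via the chain rule and sanity-checking it against finite differences. The layer, shapes, and names here are my own toy choices, not from any particular interview.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def jacobian_wrt_input(W, b, x):
    """Jacobian dy/dx of y = sigmoid(W @ x + b).

    Chain rule: dy/dx = diag(sigmoid'(z)) @ W, where z = W @ x + b
    and sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)).
    """
    s = sigmoid(W @ x + b)
    return (s * (1 - s))[:, None] * W  # row i of W scaled by sigmoid'(z_i)

# Sanity check against central finite differences
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
x = rng.normal(size=4)

J = jacobian_wrt_input(W, b, x)
eps = 1e-6
J_num = np.stack(
    [(sigmoid(W @ (x + eps * e) + b) - sigmoid(W @ (x - eps * e) + b)) / (2 * eps)
     for e in np.eye(4)],
    axis=1,  # column j is dy/dx_j
)
print(np.allclose(J, J_num, atol=1e-6))  # True
```

The diag(sigmoid'(z)) @ W form is worth memorizing: for any elementwise nonlinearity applied after a linear map, the Jacobian is just the weight matrix with each row scaled by the activation's derivative.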


Surely the easy answer is do something with your knowledge. If you feel you can apply it, then I would say it was useful.

I couldn't imagine reading three books on Python and wondering whether I'd get an interview. The question should be: can I write a simple program? Measuring by whether you can get a job interview is asking the reverse question.

I mean, would you hire you? Can you solve a potential company's problems with your AI toolset?


I get what you mean. I've been applying the skillset to Kaggle problems, each of which I imagine contains multiple subproblems companies might face. But Kaggle standings, in my experience, don't seem to be too convincing a metric for job openings.

The problem with the MOOC ecosystem at the moment is that there's no clear path forward with them. I'd have imagined MOOC certifications solving this problem, but I feel networking plays a much bigger role in the job market than credentials do.

The only exception I see is Udacity, which, by its pricing, has created a limited pool of graduates who are therefore valued much more highly.


Stay away from:

- MOOCs

- Udacity

- Kaggle

I'm not being facetious, this is my honest advice.


I'm not an expert, but I don't see academic courses here, e.g. [1], and I don't see books such as [2]. Personally, I would follow those kinds of things the way grads or undergrads would follow the same courses, and on top of that I'd do what you're doing.

[1] http://cs231n.stanford.edu/

[2] http://neuralnetworksanddeeplearning.com


I'm in Ng's course now and also halfway through fast.ai. I'm also interested to learn whether these courses reliably lead to anything. I gather from Jeremy's comments in fast.ai that those who attend in person reliably get good jobs in the field. So networking appears to be the key. I wonder what can be done to improve the networking opportunities for people who take the online courses?


    > but I feel like I'm missing the point of this
There is no overarching point - each of the resources that you've listed have their own reasons to share educational content for free, it's up to you to use it as well as you can.


Well, you're right. The people sharing the resources are doing it for education's sake - for anyone who's interested. I think it's more accurate to say I feel locked out.


I'm not sure if I agree with the author. Hackers do care about political issues. My question is whether political discussions over short form text do more harm than good to a community. To me, HN detoxing politics seems more like the librarian enforcing a rule of silence rather than encouraging ostriching. And polarizing topics like politics, especially over short form text, to me, seems like it would destroy that ideal of HN.

That being said, the Internet sure could use a proper forum for political discussions.


Why is the post flagged, then?


Because enough HN users clicked the "flag" link, most likely.


In my experience, it's never time that's the problem, it's mental attention. Having 4 hours of free time at the end of a tiring day is meaningless if you don't have the mental energy to actually use it.


Isn't it just more succinct to say we have tribal instincts because of our evolutionary origins? I see heated debates in politics, religion, even sports, because people identify with a certain "tribe". And I agree with Paul there: from what I have noticed, tribalism does seem to give you a certain blind spot to opposing arguments.


I wonder how related this is to the mechanics behind confirmation bias. We all have a tendency to seek out evidence that matches our "predictions", so to speak. Maybe what we define as curiosity is actually just searching for the little hit of dopamine you get when you're right.


I came across this concept called "the jar of awesome", which on first glance seems really naive, but I really took a liking to it. It's basically just writing down your victories when they happen and storing them somewhere in a tangible way. A collection of notes, post-its, anything, but it has to be physical and it has to be stored. That way it feels "real", and it's a great reminder of the small victories we tend to forget.


I tried a similar thing at work several years ago, before "scrum" took hold everywhere. I kept my to-do list as a stack of post-it notes, and when they were done, I'd ball them up and throw them into a decorative glass mug someone left in my office.

"Update ownership on parser tests." "Get M3 feature list from Dan." "Finish code coverage slide deck." "Call dentist." "Buy quicklime and duct tape." "Investigate ia64 hang." "Finish mid-year discussions."


I feel like this is good advice, but it presents only one side of the coin. Underestimating how difficult something is, is exactly what leads to the "planning fallacy". I would rather err on the side of caution and overestimate how difficult something is.


The most important time to apply the advice from the article is after you have decided a project is worth your time and before you start planning in earnest.

The examples of difficult subject matter the author gives (particularly math) are so intimidating to some people that they miss out on a lot of great stuff because they conclude "I could never do that".

I don't think the author wants people to make poor time estimates and suffer budget overruns.

To me, the message of the article reads: "If you think something looks worthwhile but difficult, take a stab anyway. Often the hairiest, prickliest parts of the undertaking from an outsider's perspective aren't that tough at all once you're in the thick of it."

I 100% agree with you about how valuable conservatism is when you're planning, but I think the author's encouragement is intended for people who haven't even considered planning yet.


Thanks for stating it more clearly than I did! You've nailed it. It's always bothered me when people dismiss their ability to learn a thing, because they think some part will be difficult. They really have no idea. Yes, the task as a whole will have difficult parts, but we as individual human beings are far more capable than we give ourselves credit for.


My take-away was: never estimate confidently without knowing anything about the task, because the initial assumptions might be very incorrect. Invest maybe 2-5% of the time to start doing something (e.g., begin learning Korean and realize that its writing is phonetic, so reading is easy within just one day). This will allow you to come up with the right estimate. Thoughts?


I have a framework for thinking about learning new things, which I sometimes call the 10-100-10k approach. I wrote about it a year ago[0], and the responses helped me refine it.

I think investing 10 hours in learning a new thing is more than enough to get the initial estimate you describe. Probably even 5 would be enough - that's 10 pomodoros. It's something I'm willing to throw at totally random things to see if they are actually as hard as I think, and whether I actually want to learn them.

Most people don't really spend even 15 minutes, on the clock, on things they think are hard to learn - and so they underestimate just how much you can progress with as little as a few hours, if you actually sit down and do it.

[0] - https://news.ycombinator.com/item?id=8197334


In my experience it's usually not as tricky as predicted, but requires more boilerplate, cleaning and communication with other people, so it takes more time than predicted.

