
genuine question (asking because i have a biased perspective): why don't you consider advances in ai to be a major shift? in the past decade, there has been a rapid increase in capabilities in computer vision, language understanding, robotics, etc.


The technology has advanced a lot, but what major changes have these advances in AI really made to the average person's life? I can't think of any that are anywhere near as dramatic as the invention of cars/air travel/the internet/smartphones.


AI is largely used for surveillance and advertising. Great.


AI researchers have been over-promising and under-delivering for 70 years now.

It seems like AI has settled into making discriminators. Maybe intelligence really is just pattern matching and grouping? I don't think it is, but I could be wrong.


Many cognitive scientists would argue that it's also about building models, which includes statistical/Bayesian models but goes beyond them.

To your broader point, it's not hard to find some specific domains of machine learning where there have been huge advances. But zoom out and look at the broader picture and the results aren't so impressive.


It doesn't seem to me that AI and robotics over-promise. Sure, we're not at AGI, but if you look at the potential (i.e. startups) to replace jobs, there's plenty of it.


The same was said of expert systems in the 1980s. Yes, there's potential, but that potential hasn't been realized yet. At least not to the point where people compare the impact of AI to the impact of semiconductors and integrated circuits.


It will take time. Like every big innovation.

The question is: will the potential be fulfilled?

Maybe. Given the many startups doing real work with machine learning (besides the many that just hype), it's reasonable to guess that the potential will be fulfilled.


a complementary read is sutton's "the bitter lesson" [0], which more or less argues that ai methods/techniques that "leverage computation are ultimately the most effective, and by a large margin."

[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html


i suspect the same. we saw the same exact behavior last year from amazon around minimum wages (https://techcrunch.com/2019/04/11/amazon-ceo-jeff-bezos-chal...)


i don't have any insider insight but i'd be surprised if that's the case. goodfellow's line of research has been adversarial learning [1].

[1] https://scholar.google.ca/citations?hl=en&user=iYN86KEAAAAJ&...


"easy to copy" isn't really a factor here tbh, seeing how tiktok (along with vine, snap, facebook, etc) is a classic example of network effects.


another one!

berkeley rise lab is such a powerhouse - spark, mesos, etc. just in the past few years. it really puts some of these big tech companies to shame.


I think spark and mesos are about a decade old at this point. I'm not really sure anyone is being put to shame either, there are wildly successful alternatives to all of these.


it did! an entire generation of MBAs learned from jack welch, who popularized "be #1 in anything you compete in (and if not, exit)."

for those interested in management fads: https://www.investors.com/news/management/leaders-and-succes...


We also have Jack Welch to thank for stack ranking (ranking the bottom X% of employees in any given team, no matter how elite they are, as "underperforming").

The self-inflicted damage that management fads (and Welch's ilk in particular) have caused to our economy is incalculable.


Wasn't he also the person responsible for "building up" GE Capital, i.e. the division that almost took down GE in late October - early November 2008?


in my experience, most people dismiss cyc as a failed science experiment. this shouldn't be the case! after all, many important deep learning concepts have their roots in the 80s, and it is possible that cyc could be revived too.


Cyc itself probably won't be revived: modern scientists tend to know better than to invest time in proprietary information. Symbolic AI, though, hasn't really died.


CYC effectively died the day OpenCYC died. There is no way that an entity that tries to catalogue human knowledge in this way will thrive on a closed set of data; there are only so many people working there.

Just like the Encyclopedia Britannica found its match in Wikipedia, so CYC will find its match in something open. The engine, if the comments here are to be believed as still currently relevant, is a core that may be relevant plus a huge number of domain-specific hacks. Let's hope sooner or later CYC management comes to its senses and revives OpenCYC.


That is the SUMO Ontology (http://ontologyportal.org)! It is open, on GitHub, and people can contribute.


same here, although i know unity has been staffing up their ml team, e.g. arthur juliani doing good work there. more generally speaking, i think there is a clear need for reliable, robust simulators in ml. (openai released a few interesting environments a few years back, but adoption seems to have plateaued, from my perspective.)

