to stop investing in mathematics research because apparently "symbols" are useless and AI is the future.
lol, imagine thinking his little AI gadgetry can compete against elite mathematicians.
I love Geoffrey Hinton because he's such a massive BS artist:
When I wrote a 2018 essay defending some ongoing role for symbol manipulation, LeCun scorned my entire defense of hybrid AI, dismissing it on Twitter as “mostly wrong.” Around the same time, Hinton likened focusing on symbols to wasting time on gasoline engines when electric engines were obviously the best way forward. Even as recently as November 2020, Hinton told Technology Review, “Deep learning is going to be able to do everything.”
https://www.noemamag.com/deep-learning-alone-isnt-getting-us-to-human-like-ai/
lmao... can you believe this clown? Unbelievable level of hubris.
Once these guys get invited into the billionaires' circle, their IQs drop about 50 points.
Geoffrey Hinton was a serious scientist for a long time. Unfortunately, just as people who have bad friends turn bad themselves, he eventually succumbed to the pull of the billionaires. The fact that he worked at Google didn't help; Google is a hive of militant mediocrity.
Symbolic AI was thoroughly researched for decades and failed to build anything like GPT. Where is the symbolic language generator? Where is the symbolic image recognition system? How about a competitive Go player?
It's funny you link approvingly to an article by Gary Marcus. If he had a working model of the inherent problems with connectionist systems, he should be able to come up with examples of things GPT would simply never be able to answer. Instead, he identified specific questions that GPT-2 couldn't answer which were then answered by GPT-3, and the same thing happened with GPT-3 vs. GPT-4. If his model only predicts that he will be able to find problems of some unspecified nature in future systems, it's basically worthless and strongly implies he doesn't know what the inherent limits of NNs are. It's bizarre that he was calling for a pause on AI scaling given this criticism, too.
It's plausible that NNs learn circuits that implement symbolic operations.
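As a toy illustration of that point (a hypothetical sketch, not evidence about what large models do internally): a tiny neural network trained only on examples, with no symbolic machinery built in, reliably recovers XOR, a purely symbolic boolean operation.

```python
import numpy as np

# Tiny MLP that learns XOR -- a "symbolic" boolean operation -- from
# examples alone, via plain backprop. Toy sketch only; whether large
# models learn analogous circuits is a separate empirical question.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    p = sigmoid(h @ W2 + b2)      # predicted P(output = 1)
    # gradient of binary cross-entropy w.r.t. the pre-sigmoid logits
    dlogits = p - y
    dW2 = h.T @ dlogits;  db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1.0 - h ** 2)
    dW1 = X.T @ dh;       db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(pred.tolist())  # should recover the XOR truth table
```

Gradient descent here discovers hidden units whose signs jointly encode the boolean structure, which is the minimal version of "circuits implementing symbolic operations."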
Nice strawman.