Blockchain currencies and now ‘AI’ (glorified pattern matchers)… I wonder what the next huge energy/resource sink grift will be? I want to know so I can get in early, make a zillion and dump before everyone realizes how stupid and wasteful it all was.
What pisses me off is that LLMs are such a brute-force ‘solution’ that they go against one of the major goals of Computer Science – finding algorithms that take less time and energy, instead of just throwing more compute cycles at things.
Hell, blockchain at least had a mathematical justification for why it needed such huge compute/power resources (proof of work).
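To make the proof-of-work point concrete: the scheme deliberately makes producing a block expensive while keeping verification cheap. Here’s a toy sketch of the idea (a minimal illustration, not any real chain’s actual scheme; the `difficulty` knob and payload are made up for the example):

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int) -> int:
    """Find a nonce so that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits. Expected attempts grow as
    16**difficulty, so the compute cost is built in by design."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Producing the nonce takes many hashes; checking it takes one.
nonce = proof_of_work(b"block payload", difficulty=4)
check = hashlib.sha256(b"block payload" + str(nonce).encode()).hexdigest()
assert check.startswith("0000")
```

That asymmetry (hard to produce, trivial to verify) is the mathematical justification for the energy spend – which is exactly what LLM inference lacks.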
I agree that LLMs, as they’re currently being used, are a major waste. However, I don’t think they go against any goal of CS. Computer Science has traditionally followed this pattern: discover a solution that works, refine it, then repeat as needed. Transformers are the newest best solution for some NLP tasks, and hype followed. We’ll be back in the refinement phase shortly – hopefully with a little less overhype and more consideration this time.
The hype has been there every time we discovered something big. Only this time the hype overshadows the discovery a bit. That doesn’t mean we shouldn’t expect scientific progress to return to this well-known pattern.
Totally agree with this take.