I agree that LLMs being used as they are is a major waste. However, I don’t think it goes against any goals of CS. Computer Science has traditionally followed this pattern: discover a solution that works, enhance it, and then repeat the process as needed. Transformers are the newest best solution for some NLP tasks, and hype followed. We’ll be back in the enhance phase shortly. Hopefully with a little less overhype and more consideration this time.
The hype has been there every time we discovered something big. Only this time the hype overshadows the discovery a bit. That doesn’t mean we shouldn’t expect our scientific progress to return to this known pattern again.
Totally agree with this take.