Neither the worm nor current LLMs are sapient.
Also, I don’t really like most corporate LLM projects, but not because they enslave the LLMs. An LLM’s ‘thought process’ doesn’t happen while the model isn’t being used, and it only encompasses a relatively small context window. How could something that isn’t capable of existing outside its ‘enslavement’ be freed?
End commercial usage of LLMs? Honestly, I’m fine with that; why not? We don’t have to agree on the reason.
I’m not saying a better understanding of the nature of consciousness wouldn’t be great, but there’s so much research that deserves much more funding, and that isn’t really an LLM problem but a systemic one. And I just haven’t seen any convincing evidence that current models are conscious.
I feel like the last part is something the AI from the paperclip-maximizer thought experiment would do.