31337@sh.itjust.works to Lemmy Shitpost@lemmy.world • Let people just use what they want, okay? It's their choice.
8 hours ago
I've seen this term on Mastodon. I'm actually a bit confused by it, since I've always assumed replies are to be expected on the Internet.
I think women have a problem with men following them and replying in an overly familiar manner, or mansplaining, or something like that. I'm old, used to forums, and never used Twitter, so I may be missing some etiquette that developed there. I generally don't reply at all on Mastodon because of this, and honestly, I'm not sure what Mastodon or microblogging is for. It seems to be for building personal brands, and for content creators to tell their followers what they've made. It doesn't seem to be for discussion. I.e., more like RSS than Reddit (that's my understanding, at least).
Larger models train faster (reach a given loss with less compute), for reasons not fully understood. These large models can then be used as teachers to train smaller models more efficiently (knowledge distillation). I've used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it's not too much worse than these very large models.
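To make the teacher-student idea concrete, here's a minimal sketch of a distillation loss in plain Python. Everything here (the logit values, the temperature) is made up for illustration; a real setup would compute this over a full vocabulary with autograd and usually mix in a standard cross-entropy term on the true labels.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax over a list of raw logits.
    # Higher temperature -> flatter distribution, exposing more of
    # the teacher's "dark knowledge" about near-miss tokens.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence KL(teacher || student) between the softened
    # distributions; the student is trained to minimize this, i.e.
    # to reproduce the teacher's soft targets rather than hard labels.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for a 4-token vocabulary:
teacher = [3.0, 1.0, 0.2, -1.0]
student = [2.0, 1.5, 0.0, -0.5]
print(distillation_loss(teacher, student))  # small positive number
print(distillation_loss(teacher, teacher))  # 0.0 when they match
```

The soft targets are the whole point: the teacher's probabilities for the *wrong* tokens carry information that one-hot labels don't, which is part of why a small student can learn efficiently from a big teacher.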
Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
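The 10.5GB figure checks out as back-of-the-envelope arithmetic, assuming roughly 6 bits per weight across all 14 billion parameters (real quantized files add a little overhead for scales and some higher-precision layers):

```python
params = 14e9          # 14 billion parameters
bits_per_weight = 6    # 6-bit integer quantization
size_bytes = params * bits_per_weight / 8
print(size_bytes / 1e9)  # -> 10.5 (GB, decimal)
```

So the "compression ratio" framing is apt: a rough snapshot of much of the public internet's knowledge, recoverable by content-addressable lookup (prompting), in about the size of two DVDs.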