🤷‍♂️ I only use local generators at this point, so I don’t care.
I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it’s going to solve all the world’s problems with no side effects while they get super rich off it.
Of course most don’t actually even believe it, that’s just the pitch to get that VC juice. It’s basically fraud all the way down.
I just want a portable self hosted LLM for specific tasks like programming or language learning.
You can install Ollama in a Docker container and use that to pull models to run locally. Some are really small and still pretty effective — Llama 3.2 is only 3B, and some variants are as little as 1B. It can be accessed through the terminal, or you can use something like Open WebUI for a more “ChatGPT”-like interface.
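For the curious, a minimal sketch of that setup using the official `ollama/ollama` Docker image (CPU-only; volume and container names are just examples, and actual flags may vary with your Docker setup):

```shell
# Start the Ollama server in a container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with the small 3B Llama 3.2 model from the terminal
docker exec -it ollama ollama run llama3.2
```

The server also exposes an HTTP API on port 11434, which is what frontends like Open WebUI talk to.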
Thank fuck. Can we have cheaper graphics cards again please?
I’m sure an RTX 4090 is very impressive, but it’s not £1800 impressive.
Just wait for the 5090 prices…
I just don’t get why they’re so desperate to cripple the low end cards.
Like I’m sure the low RAM and speed is fine at 1080p, but my brother in Christ it is 2024. 4K displays have been standard for a decade. I’m not sure when PC gamers went from “behold thine might from thou potato boxes” to “I guess I’ll play at 1080p with upscaling if I can have a nice reflection”.
4k displays are not at all standard and certainly not for a decade. 1440p is. And it hasn’t been that long since the market share of 1440p overtook that of 1080p according to the Steam Hardware survey IIRC.
Maybe not monitors, but certainly they are standard for TVs (which are now just monitors with Android TV and a tuner built in).
Well, people aren’t sticking 4090s in their Samsung smart TVs, so idk that it matters.