I’ve found that 4o is substantially worse than the previous model at a ton of things. So I run all of my LLMs locally now through Ollama.
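For anyone wondering what “locally through Ollama” looks like in practice, here’s a minimal sketch in Python, assuming Ollama is running on its default port (11434) and you’ve already pulled a model; the model name “llama3” is just a placeholder for whatever you actually use.

    # Minimal sketch: asking a local model a question through Ollama's HTTP API.
    # Assumes Ollama is running on its default port (11434) and a model named
    # "llama3" (a placeholder, use whatever you've pulled) is available locally.
    import json
    import urllib.request

    def ask_local_llm(prompt: str, model: str = "llama3") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # one complete response instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask_local_llm("Explain what a race condition is, in two sentences."))

Nothing fancy, but it’s enough to swap models without touching the rest of the script, and everything stays on your own machine.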
I’m trying out Perplexity and it’s literally LMGTFY, to the point that sometimes I just open google.com to get what I need. Sometimes it’s just me being lazy and searching for the domain instead of typing “.com” at the end.
But here’s the thing: for the longest time, Google was devolving into LMGTFY too. Don’t you think?
It’s worked better for me when I throw complex tech questions at it, instead of wading through mountains of ten-year-old Stack Overflow and Reddit bilge.
You can’t trust 2/3 of what ChatGPT generates or returns, and you still have to know what you’re doing. But it’s a lot easier than clicking on 100 search results and finding 99 of them irrelevant.