OpenAI is a terribly misleading name.
Well, Musk already has one strike against him for retaining himself. He just needs to find two flunky exec yes-men he’s keeping on, and he’d be fired by his own standard.
It’s a major unforced error by Musk – which, to be fair, is true of the majority of his errors.
With the non compete clause void (which should have always been the case), Musk is just creating his own competition.
And non-white liberals? Are they simply self hating, because they disagree with you?
Frankly, at this point, I find the concept of you being a leftist utterly laughable. You’re indistinguishable from a conservative cosplaying as a “leftist” to show how rude and insensible they are.
That kind of has a nice ring to it actually. I’ll be seeing you around, conservative.
Sure, but there is some comfort in knowing that things have been worse, and our perception is colored by having greater information. Doesn’t mean things aren’t shitty. But it’s a bit easier to swallow.
There’s one missing piece here, and it’s startup capital. You don’t usually see new chemicals manufacturers for instance, because you need a lot of money to buy everything to start with.
Let’s also not forget that execs are horrible at estimating work.
“Oh, this’ll just be a copy-paste job, right?” No, you idiot, this is a completely different system, and because of XYZ we can’t just copy everything we did on a different project.
Yeah the 59% in this survey are going to end up pretty successful and buy out the 41%
It’s due to the way it works. A bit like steady-state error in control theory: depending on the application, it may or may not be acceptable. The “I” in PID regulators and all that. IIRC
Oh great, I’m getting horrible flashbacks now to my controls class.
Another way to look at it is if there’s sufficient lag time between your controlled variable and your observed variable, you will never catch up to your target. You’ll always be chasing your tail with basic feedback control.
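For anyone who wants to see the steady-state-error point above in action, here’s a minimal sketch with a made-up first-order plant and hypothetical gains: a proportional-only controller settles short of the setpoint, while adding the “I” term accumulates the leftover error until it’s driven to zero.

```python
# Toy simulation (illustrative values only) of why the integral term
# in a PID controller removes steady-state error. The plant here is
# a simple first-order system x' = -x + u, and the setpoint is 1.0.

def simulate(kp, ki, setpoint=1.0, dt=0.01, steps=5000):
    x = 0.0          # plant state (observed variable)
    integral = 0.0   # accumulated error, used by the "I" term
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral   # control input
        x += (-x + u) * dt               # Euler step of plant dynamics
    return x

p_only = simulate(kp=2.0, ki=0.0)   # settles at kp/(1+kp) = 2/3, not 1.0
with_pi = simulate(kp=2.0, ki=1.0)  # integral term closes the gap to 1.0

print(f"P only: {p_only:.3f}")   # ~0.667 -> persistent steady-state error
print(f"PI:     {with_pi:.3f}")  # ~1.000 -> error integrated away
```

With proportional-only control, the plant needs a nonzero error just to hold a nonzero control input, so it parks below the target forever; the integral term keeps ramping the input until the error itself is zero.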
Dark Souls is a great metaphor for depression. You go hollow when you give up. It’s by persevering and overcoming obstacles that we find not only joy but meaning.
Feel free to contractually agree to pay all their legal fees, in that case.
to an entity with greater legal capabilities?
Someone who has the necessary legal capabilities is going to be a corporation. And that’s exactly why we left Reddit.
Yeah they’re fucked. New moderation will never work if the users don’t support it. And judging from the comments, they definitely don’t support it.
On the one hand, this doesn’t seem like a lot. But on the other, this is just for June. A lot of people left or drastically cut down their usage at the very end of June, and we’re not seeing this reflected in the data yet.
Even so, no company wants to say they’ve lost 3% of their customers. With 1.7 billion total, that’s still 51 million people. It’s a notable loss, especially for a company trying to become profitable and have an IPO.
Do so at your own peril. Because the thing is, a person will learn from their mistakes and grow in knowledge and experience over time. An LLM is unlikely to do the same in a professional environment for two big reasons:
The company using the LLM would have to send data back to the creator of the LLM. This means their proprietary work could be at risk. The AI company could scoop them, or a data leak would be disastrous.
Alternatively, the LLM could self-learn and run entirely in house, without any external data connections. The LLM company will never go for this, because it would mean the model is improving and developing outside their control. The customer’s customized version might end up better than the LLM company’s own future releases. Or something might go terribly wrong with the model as it learns and adapts; even if the LLM company isn’t held legally liable, they’re still going to lose that business going forward.
On top of that, you need your inexperienced noobs to one day become the ones checking the output of an LLM. They can’t do that unless they get experience doing the work. Companies already have proprietary models that just require the right inputs and pressing a button. Engineers are still hired though to interpret the results, know what inputs are the right ones, and understand how the model works.
A company that tries replacing them with LLMs is going to lose in the long run to competitors.