I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
In the sense that a forum I am on has had a huge amount of fun doing very silly things with Godzilla, yes.
https://forums.mst3k.com/t/dall-e-fun-with-an-ai/24697/8237
It’s best to start at the bottom. The thread began in 2022, and we didn’t start out with Godzilla.
8220 posts, the majority Godzilla-related. I haven’t done too many lately, but here’s a few recent ones:
Tits, on an egg-laying reptile?
I’m not completely sure this is a real photo
How dare you mock a widow in mourning!
There’s a handful of actual good use-cases. For example, Spotify has a new playlist generator that’s actually pretty good. You give it a bunch of terms and it creates a playlist of songs from those terms. It’s just crunching a bunch of data to analyze similarities with words. That’s what it’s made for.
It’s not intelligence. It’s a data crunching tool to find correlations. Anyone treating it like intelligence will create nothing more than garbage.
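To illustrate the "data crunching to find correlations" point: a playlist generator like the one described can be sketched as nothing more than similarity matching between the user's terms and each song's descriptive tags. The song names and tags below are made up for illustration, and this is obviously not Spotify's actual algorithm, just the general idea:

```python
import math

# Hypothetical toy catalog: each song is tagged with descriptive words.
SONGS = {
    "Song A": ["chill", "acoustic", "rainy", "mellow"],
    "Song B": ["upbeat", "dance", "electronic", "party"],
    "Song C": ["mellow", "rainy", "piano", "sad"],
}

def cosine(a, b):
    """Cosine similarity between two bags of words (set-based)."""
    va, vb = set(a), set(b)
    if not va or not vb:
        return 0.0
    return len(va & vb) / math.sqrt(len(va) * len(vb))

def playlist(terms, songs=SONGS, k=2):
    """Rank songs by similarity to the user's terms; return the top k."""
    ranked = sorted(songs, key=lambda s: cosine(terms, songs[s]), reverse=True)
    return ranked[:k]

print(playlist(["rainy", "mellow", "piano"]))  # → ['Song C', 'Song A']
```

Real systems use learned vector embeddings instead of literal tag overlap, but the principle is the same: no understanding, just measuring how close two bags of data are to each other.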
An LLM (large language model, a.k.a. an AI whose output is natural language text based on a natural language text prompt) is useful for tasks where you’re okay with 90% accuracy generated at 10% of the cost and 1,000% faster, and where the output will solely be used in-house by yourself and not served to other people. For example, if your goal is to generate an abstract for a paper you’ve written, AI might be the way to go, since it turns a writing problem into a proofreading problem.
The Google Search LLM which summarises search results is good enough for most purposes. I wouldn’t rely on it for in-depth research but like I said, it’s 90% accurate and 1,000% faster. You just have to be mindful of this limitation.
I don’t personally like interacting with customer service LLMs because they can only serve up help articles from the company’s help pages, but they are still remarkably good at that task. I don’t need help pages, because the reason I’m contacting customer service to begin with is that I couldn’t find the solution using the help pages. It doesn’t help me, but it will no doubt help plenty of other people whose first instinct is not to read the f***ing manual. Of course, I’m not going to pretend customer service LLMs are perfect. In fact, the most common problem with them seems to be that they go “off the script” and hallucinate solutions that obviously don’t work, or pretend that they’ve scheduled a callback with a human when you request it, when they actually haven’t. This is a really common problem with any sort of LLM.
At the same time, if you try to serve content generated by an LLM and then present it as anything of higher quality than it actually is, customers immediately detest it. Most LLM writing is of pretty low quality anyway and sounds formulaic, because to an extent, it was generated by a formula.
Consumers don’t like being tricked, and especially when it comes to creative content, I think that most people appreciate the human effort that goes into creating it. In that sense, serving AI content is synonymous with a lack of effort and laziness on the part of whoever decided to put that AI there.
But yeah, for a specific subset of limited use cases, LLMs can indeed be a good tool. They aren’t good enough to replace humans, but they can certainly help humans and reduce the amount of human workload needed.
I created a funny AI voice recording of Ben Shapiro talking about cat girls.
Then it was all worth it.
To me it’s glorified autocomplete. I see LLMs as a potential way of drastically lowering the barrier of entry to coding. But I’m at a skill level where coercing a chatbot into writing code is a hindrance. What I need is good documentation and good IDE static analysis.
I’m still waiting on a good, IDE-integrated, local model that would be capable of more than autocompleting a line of code. I want it to generate the boilerplate parts of code and get out of my way of solving problems.
What I don’t want, is a fucking chatbot.
If AI is for anything it’s for DnD campaign art.
Make your NPCs and towns and monsters!
Or helping to come up with some plot hooks in a pinch.
Same. When I’ve got a session coming up with less than ideal prep time, I’ve used ChatGPT to help figure out some story beats. Or reframe a movie plot into DnD terms. But more often than not I use the Story Engine Deck to help with writer’s block. I’d rather support a small company with a useful product than help Sam Altman boil the oceans.
Lol, beat me to it. For a lot of generic art, even more customized stuff, it works well.
It’s also pretty great at giving stats to homebrew monsters, or making variations of regular monsters.
To copy my own comment from another similar thread:
I’m an idiot with no marketable skills. I put boxes on shelves for a living. I want to be an artist, a musician, a programmer, an author. I am so bad at all of these, and between having a full time job, a significant other, and several neglected hobbies, I don’t have time to learn to get better at something I suck at. So I cheat. If I want art done, I could commission a real artist, or for the cost of one image I could pay for dalle and have as many images as I want (sure, none of them will be quite what I want but they’ll all be at least good). I could hire a programmer, or I could have chatgpt whip up a script for me since I’m already paying for it anyway since I want access to dalle for my art stuff. Since I have chatgpt anyway, I might as well use it to help flesh out my lore for the book I’ll never write. I haven’t found a good solution for music.
I have in my brain a vision for a thing that is so fucking cool (to me), and nobody else can see it. I need to get it out of my brain, and the only way to do that is to actualize it into reality. I don’t have the skills necessary to do it myself, and I don’t have the money to convince anyone else to help me do it. generative AI is the only way I’m going to be able to make this work. Sure, I wish that the creators of the content that were stolen from to train the ai’s were fairly compensated. I’d be ok with my chatgpt subscription cost going up a few dollars if that meant real living artists got paid, I’m poor but I’m not broke.
These are the opinions of an idiot with no marketable skills.
I hate that it monetized general knowledge that used to be easily searchable, then repackaged it as some sort of black-box randomizer.
It tends to make Lemmy people mad for some reason, but I find GitHub copilot to be helpful.
A friend’s wife “makes” and sells AI slop prints. He had to make a twitter account so he could help her deal with the “harassment”. Not sure exactly what she’s dealing with, but my friend and I have slightly different ideas of what harassment is and I’m not interested in hearing more about the situation. The prints I’ve seen look like generic fantasy novel art that you’d see at the checkout line of a grocery store.
It looks impressive on the surface but if you approach it with any genuine scrutiny it falls apart and you can see that it doesn’t know how to draw for shit.
I find it helpful to chat about a topic sometimes as long as it’s not based on pure facts, You can talk about your feelings with it.
I used it the other day to spit out a ~150 line python script. It worked flawlessly on the first try.
I don’t know python.
It might not work so flawlessly on the 2nd, 3rd, or 100th time, though. I use ChatGPT semi-frequently for coding; while it generally does a surprisingly good job, I often find things it overlooks, and I need to keep prompting it for further refinements, or just fix it myself.
Yeah, I’ve had to go back and fix (re-prompt) some things like this in the past.
The headline is that it helps me code things far faster than if I was doing it myself. And sometimes saves me 100% of the work.
Good for rephrasing things when I’m having trouble.
I find ChatGPT useful in getting my server to work (since I’m pretty new with Linux)
Other than that, I check in on how local image models are doing around once every couple of months. I would say you can achieve some cool stuff with it, but not really any unusual stuff.
The applications of what you call AI are absolutely limitless. But to be clear, what you’re calling “AI” isn’t AI in the sense you might want it to be. What you’re referring to are large language models, or LLMs. Which aren’t AI, not yet.
It’s short-sighted statements like this that really get my blood boiling.
If humanity actually achieves artificial intelligence it’ll be the equivalent of the printing press or agriculture. It’ll be like inventing the superconductor or micro transistors all over again. Our world will completely change for the better.
If your interactions with these LLMs have been negative, I can only assume that you have a strong bias against this type of technology and have simply not used them in a way that’s applicable to you.
I personally use LLMs pretty much daily in my life and they have been nothing but an excellent tool.
How could you possibly know that achieving AI will change the world for the better? Change, I believe, but even people running the AI companies talk about how hard alignment is. There’s a chance it has a net positive effect on the world, but I guarantee you there is also a non-zero chance it has a net negative effect. If you have some way of predicting the future, maybe you should get into investing.
There’s no reason to assume that AI will be malevolent.
I also said it would be equivalent to other important events throughout human history.
For example, I believe the discovery of agriculture is one of the most detrimental things that ever happened to humanity. That doesn’t make it any less riveting.
If you do not understand or don’t want to understand the implications of a fully realized artificial intelligence then you are simply willfully ignorant or want to be intentionally contrary.
Either way, when our AI overlords take over the earth, your name won’t be in the protected scrolls. May God have mercy on your soul.
Lol. I think it’s very dangerous to assume something this revolutionary has a 0% chance of being a net negative. I’m not saying AI can’t be good, but I’m saying it never hurts to have some skepticism. Also, remember, history doesn’t repeat, it echoes.
I honestly don’t understand why suggesting there is a >0% chance things could go wrong would make you so angry. Maybe you should ask ChatGPT to coach you on handling opinions that don’t match your own.