• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Laughter helps a lot. But if I’m consuming a ton of media, it’s sometimes better to just take a break and drink water while doing nothing else. I also have mantras about life like: “if I have my family, I’m ok”, “home can be anywhere”, “nothing in life is more important than food, shelter, water”, etc. Sometimes I worry about bills, future costs, etc. But worrying doesn’t always make it easier. A little bit of worry keeps me from ignoring finances altogether. But too much worry isn’t helping. If you can free yourself from worrying about money, you’d be surprised how much weight gets lifted. I’m privileged because I have family and friends that I love. If I ever hit hard times, I know I have a home with them. Reminding myself of that keeps me from staying up all night with worry.


  • I just picture you floating in an endless void 100 billion years after entropy has moved every single subatomic particle away from every other. Somehow you have been sustained. The last sophisticated entity in the universe. Your billions of years of loneliness have already driven you to the point of insanity, enlightenment, insanity again, and finally a state which no one could imagine. Because you don’t consume food or water, you’re in a perpetual state of hunger and thirst. You don’t feel harmed, but you do feel peckish all the time. You could do with a draught. Your wish didn’t allow for pain. “Thank God,” you think.


  • I see. Well, without a command line, I wouldn’t call it a terminal. I think you just want tooling to be available on Android? It would probably look like a button or series of buttons in an app. Maybe you could connect the dots between them to suggest a pipe? E.g., you have a “mv” button and a “file” button. When you drag from mv -> file you could kick off a process that moves the file. Maybe it would prompt you for other arguments, like the destination? I suppose this theoretical app could allow people to install additional tooling and make their own custom commands (something like the sketch after this comment).

    But I just feel like a button UI for these kinds of things will always be awkward. If you don’t have a keyboard/terminal interface, it’s hard to implement anything that would even behave like a terminal in terms of functionality.
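    As a rough sketch of what that drag-to-run idea could mean, here is some hypothetical Kotlin: all the names (CommandButton, onDragCompleted) are made up for illustration, and a real Android app would wire this into the SDK’s drag-and-drop APIs rather than a plain function.

    ```kotlin
    import java.io.File

    // A hypothetical "command" tile the user can drag onto a file entry.
    data class CommandButton(val name: String, val extraArgs: List<String> = emptyList())

    // Called when a drag from a command button lands on a file.
    // Builds the argument list the way a typed command line would look,
    // e.g. dragging "mv" onto "notes.txt" and prompting for a destination.
    fun onDragCompleted(button: CommandButton, target: File, destination: String? = null) {
        val args = buildList {
            add(button.name)
            addAll(button.extraArgs)
            add(target.path)
            if (destination != null) add(destination)
        }
        // Run the command and wait for it to finish; a real app would surface
        // stdout/stderr in the UI instead of inheriting the parent's streams.
        val exit = ProcessBuilder(args).inheritIO().start().waitFor()
        println("${args.joinToString(" ")} exited with $exit")
    }

    fun main() {
        onDragCompleted(CommandButton("mv"), File("notes.txt"), destination = "archive/notes.txt")
    }
    ```

    Even in this toy form you can see the awkwardness: every command needs its own prompts for flags and destinations, which is exactly what a keyboard-driven terminal gives you for free.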



  • I think this article does a good job of asking the question “what are we really measuring when we talk about LLM accuracy?” If you judge an LLM by its hallucinations, its ability to analyze images, its ability to critically analyze text, etc., you’re going to see low scores for all LLMs.

    The only metric an LLM should be expected to excel at is “did it generate human-readable and contextually relevant text?” I think we’ve all forgotten the humble origins of “AI” chat bots. They often struggled to generate anything more than a few sentences of relevant text. They often made syntactical errors. Modern LLMs solved these issues quite well. They can produce long-form content that is coherent and syntactically error-free.

    However, the content comes with no guarantee of being accurate or critically meaningful. Whilst it is often critically meaningful, it is certainly capable of half-assed answers that dodge difficult questions. LLMs are approaching 95% “accuracy” if you think of them as good human text fakers. They are pretty impressive at that. But people keep expecting them to do their math homework, analyze contracts, and generate perfectly valid content. They just aren’t built to do that. We work really hard just to keep them from hallucinating as much as they do.

    I think the desperation to see these things become essentially indistinguishable from humans is causing us to lose sight of the real progress that’s been made. We’re probably going to hit a wall with this method. But this breakthrough has made AI a viable technology for a lot of jobs. So it’s definitely a breakthrough. I just think either infinitely larger models (for which we can’t seem to generate the data) or new kinds of models will be required to leap to the next level.