• 0 Posts
  • 8 Comments
Joined 4 months ago
Cake day: March 13th, 2024



  • This is a difficult issue to deal with, but I think the problem lies with our current acceptance of photographs as an objective truth. If a talented writer places someone in an erotic text, we immediately know that this is a product of imagination. If a talented artist sketches up a nude of someone, we can immediately recognize that this is a product of imagination. We have laws around commercial use of likenesses, but I don’t think we would make those things illegal.

    But now we have photographs that are products of imagination. I don’t have a solution for this specific issue, but we all need to recalibrate how we establish trust with people and information now that photographs, video, speech, etc. can be faked by AI. I can even imagine a scenario in the not-too-distant future where face-to-face conversation cannot be immediately trusted, due to advances in robotics or other technologies.

    Lying and deception are human nature, and we will always employ any new technologies for these purposes along with any good they may bring. We will always have to carefully adjust the line on what is criminal vs artistic vs non-criminal depravity.




  • But you are not reporting the underlying probability, just the guess. There is no way, then, to distinguish a bad guess from a good guess. Let’s take your example, but with a fully occluded shape. Now the most probable guess could still be a full circle, but with a very low probability of being correct. Yet that guess is reported with the same confidence as your example. When you carry out this exercise for all extrapolations, with full transparency of the underlying probabilities, you find yourself right back in the position the original commenter has taken. If the original data does not give you confidence in a particular result, the added extrapolations will not either.
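    The hidden-confidence point can be sketched in code. This is a made-up toy example (the shape labels and probabilities are invented for illustration): two posteriors share the same most-probable label, so reporting only the guess makes them look equally trustworthy, even though one is near-certain and the other is barely better than chance.

    ```python
    def best_guess(posterior):
        """Return the most probable label and its probability."""
        label = max(posterior, key=posterior.get)
        return label, posterior[label]

    # Partially occluded shape: the evidence strongly favors a circle.
    partial = {"circle": 0.90, "ellipse": 0.07, "blob": 0.03}

    # Fully occluded shape: "circle" is still the argmax, but barely.
    occluded = {"circle": 0.34, "ellipse": 0.33, "blob": 0.33}

    for name, posterior in [("partial", partial), ("occluded", occluded)]:
        label, p = best_guess(posterior)
        print(f"{name}: guess={label}, probability={p:.2f}")
    ```

    Both cases print `guess=circle`; only the probability column distinguishes a good guess from a coin flip, which is exactly the information that gets dropped when just the guess is reported.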


  • I use LLMs all the time for work and hobbies, but my work and hobbies are well suited for LLM assistance.

    Writing boilerplate documents. I do this for work. I hate it. LLMs are very good at it.

    Writing boilerplate code. I do not like writing docstrings, making my code more maintainable, enforcing argument types, etc. I do a lot of research code, and I need to spend my time testing and debugging. I can feed my spaghetti into an LLM and it will finish out all the boilerplate for me.


  • I do not want my information filtered through an opaque algorithm. My worldview is much too important to surrender to some corporation. I want to understand and have some control over any feed I use. My media diet includes Lemmy, AP news, PubMed/science journals, and conversations with friends and coworkers.

    I am very happy with Lemmy so far. Some have pointed out there is less content on Lemmy, but that is a bonus in my book. It is not healthy to spend hours scrolling.