A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • Stovetop@lemmy.world

    “Your honor, the evidence shows quite clearly that the defendant was holding a weapon with his third arm.”

  • Downcount@lemmy.world

    If you’ve ever encountered an AI hallucinating things that simply do not exist, you know how bad the idea of AI-enhanced evidence actually is.

  • dual_sport_dork@lemmy.world

    No computer algorithm can accurately reconstruct data that was never there in the first place.

    Ever.

    This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.

    Ever.

    If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, movement interpolation, and AI upscaling. It preemptively also includes any other future technology that aims to try the same thing, regardless of what it’s called.

    • ashok36@lemmy.world

      Digital zoom is just cropping and enlarging. You’re not actually changing any of the data. There may be enhancement applied to the enlarged image afterwards but that’s a separate process.
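
      To make the distinction concrete, here’s a minimal sketch (using Pillow, with a hypothetical file name and crop box) of plain digital zoom versus zoom with interpolation. Cropping and nearest-neighbor enlarging only repeat pixels that were already captured; switching to a smoothing filter is that separate enhancement step, and it’s where computed values first appear:

      ```python
      from PIL import Image

      img = Image.open("frame.png")          # hypothetical input frame
      crop = img.crop((100, 100, 200, 200))  # "digital zoom": select a region

      # Enlarge by repeating existing pixels; no new values are invented.
      plain = crop.resize((400, 400), Image.NEAREST)

      # Enlarge with interpolation: the in-between pixels are computed,
      # not captured. This is the separate "enhancement" step.
      smooth = crop.resize((400, 400), Image.BILINEAR)
      ```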

      • dual_sport_dork@lemmy.world

        But the fact remains that digital zoom cannot create details that were invisible in the first place due to the distance from the camera to the subject. Modern implementations of digital zoom always use some manner of interpolation algorithm, even if it’s just a simple linear blur from one pixel to the next.

        The problem is not in how digital zoom works; it’s in how people think it works. A lot of people (i.e. [l]users, ordinary non-technical people) still labor under the impression that digital zoom somehow brings the picture “closer” to the subject and can enlarge or reveal details that were not captured in the original photo, which is a notion we need to excise from people’s heads.

        • CapeWearingAeroplane@sopuli.xyz

          I 100% agree with your primary point. I still want to point out that a detail in a 4K picture that takes up a few pixels will likely be invisible to the naked eye unless you zoom. “Digital zoom” without interpolation is literally just that: enlarging the picture so that you can see details that take up too few pixels for you to discern them clearly at normal scaling.

    • jeeva@lemmy.world

      Hold up. Digital zoom is, in all the cases I’m currently aware of, just cropping the available data. That’s not reconstruction, it’s just losing data.

      Otherwise, yep, I’m with you there.

        • ioen@lemm.ee

          Also since companies are adding AI to everything, sometimes when you think you’re just doing a digital zoom you’re actually getting AI upscaling.

          There was a court case not long ago where the prosecution wasn’t allowed to pinch-to-zoom evidence photos on an iPad for the jury, because the zoom algorithm creates new information that wasn’t there.

      • Natanael@slrpnk.net

        There’s a specific type of digital zoom which captures multiple frames and takes advantage of motion between frames (plus inertial sensor movement data) to interpolate to get higher detail. This is rather limited because you need a lot of sharp successive frames just to get a solid 2-3x resolution with minimal extra noise.
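
        A toy sketch of the idea (not any vendor’s actual pipeline; the sub-pixel offsets are assumed known here, whereas real implementations estimate them from the frames and inertial data):

        ```python
        import numpy as np

        def stack_frames(frames, offsets, scale=2):
            """Naive multi-frame super-resolution: drop each low-res frame onto
            a finer grid at its known sub-pixel offset, then average. Offsets
            are assumed to be fractions of one pixel, e.g. (0.5, 0.0)."""
            h, w = frames[0].shape
            acc = np.zeros((h * scale, w * scale))
            hits = np.zeros_like(acc)
            for frame, (dy, dx) in zip(frames, offsets):
                oy = int(round(dy * scale)) % scale  # offset -> high-res grid phase
                ox = int(round(dx * scale)) % scale
                acc[oy::scale, ox::scale] += frame
                hits[oy::scale, ox::scale] += 1
            # Grid positions never observed stay zero here; a real pipeline
            # interpolates them, which is why the gains top out around 2-3x.
            return acc / np.maximum(hits, 1)
        ```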

    • rottingleaf@lemmy.zip

      If people don’t get the second law of thermodynamics, explaining this to them is useless too.

    • abhibeckert@lemmy.world

      It preemptively also includes any other future technology that aims to try the same thing

      No it doesn’t. For example you can, with compute power, correct for distortions introduced by camera lenses/sensors/etc. and drastically increase image quality. This photo of Pluto, for instance, was taken from 7,800 miles away - click the link for a version of the image that hasn’t been resized/compressed by Lemmy.

      The unprocessed image would look nothing at all like that. There’s a lot more data in an image than you can see with the naked eye, and algorithms can extract/highlight that data. That’s obviously not what a generative AI algorithm does, and those should never be used, but there are other algorithms which are appropriate.
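
      As a minimal sketch of the kind of non-generative processing meant here (numpy, hypothetical input array): a contrast stretch only remaps values the sensor already recorded, making shadow detail visible without inventing anything.

      ```python
      import numpy as np

      def stretch(img):
          """Reveal detail hidden in shadows/highlights by remapping recorded
          values onto the displayable range. Every output is a function of a
          measured input; nothing is generated."""
          lo, hi = np.percentile(img, (1, 99))
          return np.clip((img.astype(float) - lo) / (hi - lo), 0, 1)
      ```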

      The reality is every modern photo is heavily processed - look at this example by a wedding photographer: even with a professional camera and excellent lighting, the raw image on the left (where all the camera processing features are disabled) looks like garbage compared to exactly the same photo with software processing.

      • CapeWearingAeroplane@sopuli.xyz

        No computer algorithm can accurately reconstruct data that was never there in the first place.

        What you are showing is (presumably) a modified visualisation of existing data. That is: given a photo with known lighting and lens distortion, we can use math to display the data (lighting, lens distortion, and input registered by the camera) in a plethora of different ways. You can invert all the colours if you like. It’s still the same underlying data. Modifying how strongly certain hues are shown, or correcting for known distortion, are just techniques to visualise the data in a clearer way.

        “Generative AI” is essentially just non-predictive extrapolation from some data set, which is a completely different ball game: you’re making a blind guess at what could be there, informed only by an existing data set.
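
        The colour-inversion point is easy to demonstrate: the transform is fully invertible, so it provably carries exactly the same data (a tiny numpy sketch):

        ```python
        import numpy as np

        img = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)
        inverted = 255 - img                        # a different visualisation...
        assert np.array_equal(img, 255 - inverted)  # ...of exactly the same data
        ```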

        • Richard@lemmy.world

          making a blind guess at what could be there, based on an existing data set.

          Here’s your error: you are contradicting the first part of your sentence with the last. The guess is not “blind”, because the prediction is based on an existing data set. Looking at a half-occluded circle with a model and then reconstructing the other half is not a “blind” guess; it is a highly probable extrapolation that can be very useful, because in most situations it will be the second half of the circle. With a certain probability, you have created new valuable data for further analysis.

          • UnpluggedFridge@lemmy.world

            But you are not reporting the underlying probability, just the guess. There is no way, then, to distinguish a bad guess from a good guess. Let’s take your example and place a fully occluded shape. Now the most probable guess could still be a full circle, but with a very low probability of being correct. Yet that guess is reported with the same confidence as your example. When you carry out this exercise for all extrapolations with full transparency of the underlying probabilities, you find yourself right back in the position the original commenter has taken. If the original data does not provide you with confidence in a particular result, the added extrapolations will not either.

            • CheeseNoodle@lemmy.world

              And then circles get convictions. So even if the model did somehow start off completely unbiased, people are going to start feeding it data that weighs towards finding more circles, since a prosecution will be used as a “success” to feed back into the model and “improve” it.

          • CapeWearingAeroplane@sopuli.xyz

            Looking at a half circle and guessing that the “missing part” is a full circle is as much of a blind guess as you can get. You have exactly zero evidence that there is another half circle present. The missing part could be anything, from nothing to any shape that incorporates a half circle. And you would be guessing without any evidence whatsoever as to which of those things it is. That’s blind guessing.

            Extrapolating into regions without prior data with a non-predictive model is blind guessing. If it wasn’t, the model would be predictive, which generative AI is not, is not intended to be, and has not been claimed to be.

      • dual_sport_dork@lemmy.world

        None of your examples are creating new legitimate data out of whole cloth. They’re just making details that were already there visible to the naked eye. We’re not talking about taking a giant image that’s got too many pixels to fit on your display device in one go, and just focusing on a specific portion of it. That’s not the same thing as attempting to interpolate missing image data. In that case the data was there to begin with; it just wasn’t visible due to limitations of the display or the viewer’s retinas.

        The original grid of pixels is all of the meaningful data that will ever be extracted from any image (or video, for that matter).

        Your wedding photographer’s picture actually throws away color data in the interest of contrast and to make it more appealing to the viewer. When you fiddle with the color channels like that and see all those troughs in the histogram that make it look like a comb? Yeah, all those gaps and spikes are actually original color/contrast data that is being lost. There is technically less data in the touched-up image than in the original, and if you are perverse and own a high-bit-depth display device (I do! I am typing this on a machine with a true 32-bit-per-pixel professional graphics workstation monitor.) you can actually stare at it and see the entirety of the detail captured in the raw image before the touchups. A viewer might not think it looks great, but how it looks is irrelevant from the standpoint of data capture.
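
        The comb effect is easy to reproduce: apply a contrast curve to an 8-bit channel and count how many distinct levels survive (a synthetic numpy example, not the photographer’s actual edit):

        ```python
        import numpy as np

        channel = np.random.default_rng(0).integers(0, 256, 100_000)  # stand-in 8-bit channel

        # A contrast "boost": stretch the midtones, clip, and re-quantize to 8 bits.
        curve = np.clip((channel - 64) * 1.6 + 64, 0, 255).astype(np.uint8)

        print(len(np.unique(channel)), "levels before,", len(np.unique(curve)), "after")
        # Fewer distinct levels survive the curve; the unreachable ones are the
        # histogram's comb gaps, i.e. original tonal data that has been thrown away.
        ```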

        • Richard@lemmy.world

          They talked about algorithms used for correcting lens distortions in their first example. That is absolutely a valid use case and extracts new data by making certain assumptions with certain probabilities. Your newly created law of nature is just your own imagination and is not the prevalent understanding in the scientific community. No, quite the opposite: scientific practice runs exactly counter to your statements.

      • Natanael@slrpnk.net

        This is just smarter post-processing, like better noise cancellation, error correction, interpolation, etc.

        But ML tools extrapolate rather than interpolate, which adds things that weren’t there.

      • rottingleaf@lemmy.zip

        Off topic: I like the picture on the left more. It feels more alive. Colder in color, but warmer in expression. Dunno how to say that. And I was in a forest yesterday, so my perception is skewed.

    • AlolanYoda@mander.xyz

      In my first year of university, we had a fun project to make us get used to physics. One of the projects required filming someone throwing a ball upwards, and then using the footage to get the maximum height the ball reached, and doing some simple calculations to get the initial velocity of the ball (if I recall correctly).

      One of the groups that chose that project was having a discussion on a problem they were facing: the ball was clearly moving upwards on one frame, but on the very next frame it was already moving downwards. You couldn’t get the exact apex from any specific frame.

      So one of the guys, bless his heart, gave a suggestion: “what if we played the (already filmed) video in slow motion… And then we filmed the video… And we put that one in slow motion as well? Maybe do that a couple of times?”

      A friend of mine was in that group and he still makes fun of that moment, to this day, over 10 years later. We were studying applied physics.

    • UnderpantsWeevil@lemmy.world

      No computer algorithm can accurately reconstruct data that was never there in the first place.

      Okay, but what if we’ve got a computer program that can just kinda insert red eyes, joints, and plumes of chum smoke on all our suspects?

    • Richard@lemmy.world

      That’s wrong. With a degree of certainty, you will always be able to say that this data was likely there. And because existence is all about probabilities, you can expect specific interpolations to be an accurate reconstruction of the data. We do it all the time with resolution upscaling, for example. But of course, from a certain lack of information onward, the predictions become less and less reliable.

    • JasonDJ@lemmy.zip

      Well, that’s a bit close-minded.

      Perhaps at some point we will conquer quantum mechanics enough to be able to observe particles at every place and time they have ever and will ever exist. Do that with enough particles and you’ve got a de facto time machine, albeit a read-only one.

      • BluesF@lemmy.world

        So many things we believe to be true today suggest this is not going to happen. The uncertainty principle, and the random nature of nuclear decay chief among them. The former prevents you gaining the kind of information you would need to do this, and the latter means that even if you could, it would not provide the kind of omniscience one might assume.

        • dual_sport_dork@lemmy.world

          Limits of quantum observation aside, you also could never physically store the data of the position/momentum/state of every particle in any universe within that universe, because the particles that exist in the universe are the sum total of the materials with which we could ever use to build the data storage. You’ve got yourself a chicken-and-egg scenario where the egg is 93 billion light years wide, there.

      • rottingleaf@lemmy.zip

        Complexity relates nonlinearly to the number of moving parts.

        We might be able to spend an ungodly amount of energy to do that for one particle for an hour of its existence.

        Being able to build a computer (in a wide sense) that can emulate, in a short time (less than a human life), processes comprising more energy than was spent on its creation is something else entirely.

    • Gabu@lemmy.world

      By your argument, nothing is ever real, so let’s all jump into a chasm.

      • dual_sport_dork@lemmy.world

        There’s a grain of truth to that. Everything you see is filtered by the limitations of your eyes and the post-processing applied by your brain which you can’t turn off. That’s why you don’t see the blind spot on your retinas where your optic nerve joins your eyeball, for instance.

        You can argue what objective reality is from within the limitations of human observation in the philosophy department, which is down the hall and to your left. That’s not what we’re talking about, here.

        From a computer science standpoint you can absolutely mathematically prove the amount of data that is captured in an image and, like I said, no matter how hard you try you cannot add any more data to it that can be actually guaranteed or proven to reflect reality by blowing it up, interpolating it, or attempting to fill in patterns you (or your computer) think are there. That’s because you cannot prove, no matter how the question or its alleged solution are rephrased, that any details your algorithm adds are actually there in the real world except by taking a higher resolution/closer/better/wider spectrum image of the subject in question to compare. And at that point it’s rendered moot anyway, because you just took a higher res/closer/better/wider/etc. picture that contains the required detail, and the original (and its interpolation) are unnecessary.
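
        One way to make the “provable amount of data” point concrete is Shannon entropy, which bounds the information the pixel values carry (a sketch that treats pixels as an i.i.d. sample, a simplifying assumption):

        ```python
        import numpy as np

        def entropy_bits_per_pixel(img):
            """Shannon entropy of the pixel-value distribution, in bits per pixel.
            Interpolated or AI-added pixels are functions of (model + existing
            pixels), so they add no information about the actual scene."""
            _, counts = np.unique(img, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())
        ```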

        • Richard@lemmy.world

          You cannot technically prove it, that’s true, but that does not invalidate the interpolated or extrapolated data, because you will be able to have a certain degree of confidence in it and to judge its meaningfulness with a specific probability. And that’s enough, because you are never able to 100% prove something in the physical sciences. Never. Even our most reliable observations, strongest theories and most accurate measurements all have a degree of uncertainty. Even the information and quantum theories you rest your argument on are unproven and unprovable by your standards, because you cannot get to 100% confidence. So, if you find that there’s enough evidence for the science you base your understanding of reality on, then by deductive reasoning you will have to accept that a machine-learning model’s extrapolation, where the probability of validity is just as great as it is for quantum physics, must be equally true.

        • Gabu@lemmy.world

          Unicorns are also real - we created them through our work in fiction.

  • emptyother@programming.dev

    How long until we get upscalers of various sorts built into tech that shouldn’t have them? For bandwidth reduction, for storage compression, or for cost savings. Can we trust what we capture with a digital camera when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled inside the judges’ screens? Cheap security cams with “enhanced night vision” might get somebody jailed.

    I love the AI tech. But its future worries me.

    • GenderNeutralBro@lemmy.sdf.org

      AI-based video codecs are on the way. This isn’t necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That’s a good thing for film and TV, but a bad thing for, say, security cameras.

      The devil’s in the details and “AI” is way too broad a term. There are a lot of ways this could be implemented.

      • jeeva@lemmy.world

        I don’t think loss is what people are worried about, really - more injecting details that fit the training data but don’t exist in the source.

        Given the hoopla Hollywood and directors made about frame-interpolation, do you think generated frames will be any better/more popular?

        • GenderNeutralBro@lemmy.sdf.org

          In the context of video encoding, any manufactured/hallucinated detail would count as “loss”. Loss is anything that’s not in the original source. The loss you see in e.g. MPEG4 video usually looks like squiggly lines, blocky noise, or smearing. But if an AI encoder inserts a bear on a tricycle in the background, that would also be a lossy compression artifact in context.
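
          Measured the usual way, a hallucinated detail is indistinguishable from any other encoding error (a minimal PSNR sketch in numpy):

          ```python
          import numpy as np

          def psnr(original, decoded, peak=255.0):
              """Peak signal-to-noise ratio between source and decoded frames.
              A hallucinated bear on a tricycle contributes to the error term
              exactly the way blocky noise or smearing would."""
              mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
              return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
          ```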

          As for frame interpolation, it could definitely be better, because the current algorithms out there are not good. It will not likely be more popular, since this is generally viewed as an artistic matter rather than a technical matter. For example, a lot of people hated the high frame rate in the Hobbit films despite the fact that it was a naturally high frame rate, filmed with high-frame-rate cameras. It was not the product of a kind-of-shitty algorithm applied after the fact.

    • Jimmycakes@lemmy.world

      It will run wild for the foreseeable future, until the masses stop falling for the gimmicks; then it will be reserved for the actual use cases where it’s beneficial, once the bullshit AI stops making money.

    • elephantium@lemmy.world

      Cheap security cams with “enhanced night vision” might get somebody jailed.

      Might? We’ve been arresting the wrong people based on shitty facial recognition for at least 5 years now. This article has examples from 2019.

      On one hand, the potential of this type of technology is impressive. OTOH, the failures are super disturbing.

    • Dojan@lemmy.world

      Probably not far. Nvidia has had machine-learning-enhanced upscaling of video games for years at this point, and now they’ve implemented similar tech for frame interpolation. The rendered output might be 720p at 20 FPS but will be presented at 1080p 60 FPS.

      It’s not a stretch to assume you could apply similar tech elsewhere. Non-ML enhanced, yet still decently sophisticated frame interpolation and upscaling has been around for ages.

      • MrPoopbutt@lemmy.world

        Nvidia’s game upscaling has access to game data, and also to training data generated by gameplay, to make footage that is appealing to the gamer’s eye and not necessarily accurate. Security (or other) cameras don’t have access to this extra data, and the use case for video in courts is to be accurate, not pleasing.

        Your comparison is apples to oranges.

        • Dojan@lemmy.world

          No, I think you misunderstood what I’m trying to say. We already have tech that uses machine learning to upscale stuff in real time, but I’m not saying it’s accurate enough for things like court videos. I don’t think we’ll ever get to a point where it can be accurate as evidence, because by the very nature of the tech it’s making up detail, not enhancing it. You can’t enhance what isn’t there. It’s not turning nothing into accurate data; it’s guessing based on input and what it’s been trained on.

          Prime example right here, this is the objectively best version of Alice in Wonderland, produced by BBC in 1999, and released on VHS. As far as I can tell there was never a high quality version available. Someone used machine learning to upscale it, and overall it looks great, but there are scenes (such as the one that’s linked) where you can clearly see the flaws. Tina Majorino has no face, because in the original data, there wasn’t enough detail to discern a face.

          Now we could obviously train a model to recognise “criminal activity”, like stabbing, shooting, what have you. Then, however, you end up with models that mistake one thing for another, like scratching your temple being read as driving while on the phone. And if, instead of detecting something, the model’s job is to fill in missing data, we have a recipe for disaster.

          Any evidence that has had machine learning involved should be treated with at least as much scrutiny as a forensic sketch, which, while useful in investigations, generally doesn’t carry much weight as evidence. That said, a forensic sketch is created through collaboration between an artist and a witness, so there is intent behind it. Machine-generated artwork lacks intent; you can tweak the parameters until it generates roughly what you want, but it’s honestly better to just hire an artist and get exactly what you want.

    • Whirling_Cloudburst@lemmy.world

      Unfortunately it does need pointing out. Back when I was in college, professors would need to repeatedly tell their students that real-world forensics don’t work like they do on NCIS. I’m not sure how much things may or may not have changed since then, but based on American literacy levels being what they are, I do not suppose things have changed that much.

        • Whirling_Cloudburst@lemmy.world

          It’s certainly similar, in that CSI played a role in forming unrealistic expectations in students’ minds. But rather than expecting more physical evidence in order to make a prosecution, the students expected magic to happen on computers and in lab work (often faster than physically possible).

          AI enhancement is not uncovering hidden visual data; rather, it generates that information based on previously existing training data and shoehorns it in. It certainly could be useful, but it is not real evidence.

    • Stopthatgirl7@lemmy.worldOP

      Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car, used AI to “enhance it,” and declared it wasn’t her because her mole was gone. It got so much traction that people thought the AI-fixed-up pic WAS her.

      • Mirshe@lemmy.world

        Don’t forget people thinking that scanlines in a news broadcast over Obama’s suit meant that Obama was a HOLOGRAM and ACTUALLY A LIZARD PERSON.

    • Altima NEO@lemmy.zip

      The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it’s almighty. That’s how you wind up with those lawyers who tried using ChatGPT to write a legal brief that was full of bullshit, and didn’t even bother to verify whether it was accurate.

      They don’t understand it; they only know that the results look good.

      • T156@lemmy.world

        The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it’s almighty. That’s how you wind up with those lawyers who tried using ChatGPT to write a legal brief that was full of bullshit, and didn’t even bother to verify whether it was accurate.

        Especially since it gets conflated with pop culture. Someone who hears that an AI app can “enhance” an image might think it works like something out of CSI using technosmarts, rather than just making stuff up out of whole cloth.

    • lole@iusearchlinux.fyi

      I met a student at university last week at lunch who told me he is stressed out about some homework assignment. He told me that he needs to write a report with a minimum number of words so he pasted the text into chatGPT and asked it about the number of words in the text.

      I told him that every common text editor has a word count built in, and that ChatGPT is probably not good at counting words (even though it pretends to be).

      Turns out that his report was already waaaaay above the minimum word count and even needed to be shortened.

      So much for the understanding of AI in the general population.

      I’m studying at a technical university.
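
      (For the record, the count an editor reports is a one-liner, no language model required; the file name here is hypothetical:)

      ```python
      with open("report.txt", encoding="utf-8") as f:  # hypothetical report file
          words = len(f.read().split())  # whitespace-delimited, like an editor's count
      print(words)
      ```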

    • douglasg14b@lemmy.world

      Of course, not everyone is technologically literate enough to understand how it works.

      That should be the default assumption: things should be explained so that others understand them and can make better, informed decisions.

      • ItsMeSpez@lemmy.world

        It’s not only that not everyone is technologically literate enough to understand the limits of this technology; the AI companies are actively over-inflating their capabilities in order to attract investors. When the most accessible information about the topic is designed to get non-technically-proficient investors on board with your company, of course the general public is going to get an overblown idea of what the technology can do.

      • dual_sport_dork@lemmy.world

        And people who believe the Earth is flat, and that Bigfoot and the Loch Ness Monster exist, and that there are reptilians replacing the British royal family…

        People are very good at deluding themselves into all kinds of bullshit. In fact, I posit that they’re even better at it than at learning facts or comprehending empirical reality.

    • melpomenesclevage@lemm.ee

      It’s not actually worse than eyewitness testimony.

      This is not an endorsement of AI, just pointing out that truth has no place in a courtroom, and refusing to lie will get you locked in a cafe.

      Too good, not fixing it.

  • Rob T Firefly@lemmy.world

    According to the evidence, the defendant clearly committed the crime with all 17 of his fingers. His lack of remorse is obvious by the fact that he’s clearly smiling wider than his own face.

  • rottingleaf@lemmy.zip

    The fact that it made it that far is really scary.

    I’m starting to think that yes, we are going to have some new middle ages before going on with all that “per aspera ad astra” space colonization stuff.

    • Meowing Thing@lemmy.world

      Aren’t we already in a kind of dark age?

      People denying science, people scared of diseases and vaccination, people using anything AI or blockchain as if it were magic, people defending power-hungry, all-promising dictators, people divided and calling the other side barbaric. And of course, wars based on religion.

      Seems to me we’re already in the dark.

      • abhibeckert@lemmy.world

        Aren’t we already in a kind of dark age?

        A bit over 150 years ago, slavery was legal (and commonplace) in the United States.

        Sure, there’s lots of shitty stuff in the world today… but you don’t have to go far back to a time when a sheriff with zero evidence, relying on unverified accusations and hearsay, would’ve put up a “wanted dead or alive” poster with a drawing of the guy’s face created by an artist who had never even laid eyes on the alleged murderer.

        • rottingleaf@lemmy.zip

          Well, the dark ages came after late antiquity, where slavery was normal. And it took a few centuries for slavery to die out in European societies, though serfdom remained, which wasn’t too different. And serfdom in England formally existed even into the 19th century. I’m not talking about Russia, of course, where it played the same role as slavery in the US South.

          EDIT: What I meant: this is more about knowledge and civilization, not good and bad. Also, 150 years is too far back; compared to 25 years ago, I think things are worse in many regards.

      • Krauerking@lemy.lol

        Oh for sure. We are already in a period that will have some fancy name in future anthropology studies but the question is how far down do we still have to go before we see any light.

      • rottingleaf@lemmy.zip

        Aren’t we already in a kind of dark age?

        In the sense that actually making the things at the backbone of our civilization has become a process, and a body of knowledge, heavily centralized and removed from most people living their daily lives? Yes.

        Via many small changes we’ve come to the situation where everybody uses Intel and AMD or other very complex hardware, directly or in various mechanisms, which requires infrastructure and knowledge more expensive than most nation-states to produce.

        People can no longer make a computer usable for our daily tasks by soldering something together from TTL logic and parts bought in a radio store; and we could perform many tasks on such computers, if not for the network effect. We depend on something even smart people can’t build on their own, period.

        It’s like tanks or airplanes or ICBMs.

        A decent automatic rifle or grenade or a mortar can well be made in a workshop. Frankly, even an alternative to a piece of ’50s field artillery can be, and the ammunition too.

        What we depend on in daily civilian computing is as complex as ICBMs, and this knowledge is even more sparsely distributed in the society than the knowledge of how ICBMs work.

        And also, of course, the tendency for things to be less repairable (remember the time when everything came with manuals and schematics?) and for people to treat them like magic.

        This is both reminiscent of Asimov’s Foundation (only there Imperial machines were massive, while Foundation’s machines were well miniaturized, but the social mechanisms of the Imperial decay were described similarly) and just psychologically unsettling.

    • fidodo@lemmy.world

      It’s incredibly obvious when you call the current generation of AI by its full name, generative AI. It’s creating data, that’s what it’s generating.

    • TurtleJoe@lemmy.world

      Everything that is labeled “AI” is made up. It’s all just statistically probable guessing, made by a machine that doesn’t know what it is doing.

    • Gabu@lemmy.world

      Society = made up, so I’m not sure what your argument is.

      • hperrin@lemmy.world

        My argument is that a video camera doesn’t make up video; an AI does.

        • Gabu@lemmy.world

          a video camera doesn’t make up video; an AI does.

          What’s that even supposed to mean? Do you even know how a camera works? What about an AI?

          • hperrin@lemmy.world

            Yes, I do. Cameras work by detecting light using a charged coupled device or an active pixel sensor (CMOS). Cameras essentially take a series of pictures, which makes a video. They can have camera or lens artifacts (like rolling shutter illusion or lens flare) or compression artifacts (like DCT blocks) depending on how they save the video stream, but they don’t make up data.

            Generative AI video upscaling works by essentially guessing (generating) what would be there if the frame were larger. I’m using “guessing” colloquially, since it doesn’t have agency to make a guess. It uses a model that has been trained on real data. What it can’t do is show you what was actually there, just its best guess using its diffusion model. It is literally making up data. Like, that’s not an analogy, it actually is making up data.

            • Gabu@lemmy.world

              Ok, you clearly have no fucking idea what you’re talking about. No, reading a few terms on Wikipedia doesn’t count as “knowing”.
              CMOS isn’t the only transducer for cameras - in fact, no one would start the explanation there. Generative AI doesn’t have to be based on diffusion. You’re clearly just repeating words you’ve seen used elsewhere - you are the AI.

              • hperrin@lemmy.world

                Yes, I also mentioned CCDs. Charge Coupled Device is what that stands for. You can tell I didn’t look it up, because I originally called it a “charged coupled device” and not a “charge coupled device”. My bad, I should have checked Wikipedia.

                Can you point me to a generative AI that doesn’t make up data? GANs are still generative, and generative AIs make up data.

  • TheBest@midwest.social

    This actually opens an interesting debate.

    Every photo you take with your phone is post-processed. Saturation can be boosted, light levels adjusted, noise removed, night mode applied, all without you being privy to what’s happening.

    Typically people are okay with it because it makes for a better photo, but is it a true representation of the reality it tried to capture? And where do we draw the line on what counts as an AI-enhanced photo or video?

    We can currently make the judgement call that a phone’s camera is still a fair representation of the truth, but what about when the 4K AI-Powered Night Sight Camera does the same?

    My post is only tangentially related to the original article, but I’m still curious as to what the common consensus is.

    • GamingChairModel@lemmy.world

      Every photo you take with your phone is post-processed.

      Years ago, I remember looking at satellite photos of some city, and there was a rainbow colored airplane trail on one of the photos. It was explained that for a lot of satellites, they just use a black and white imaging sensor, and take 3 photos while rotating a red/green/blue filter over that sensor, then combining the images digitally into RGB data for a color image. For most things, the process worked pretty seamlessly. But for rapidly moving objects, like white airplanes, the delay between the capture of red/green/blue channel created artifacts in the image that weren’t present in the actual truth of the reality being recorded. Is that specific satellite method all that different from how modern camera sensors process color, through tiny physical RGB filters over specific subpixels?
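
      That rainbow-trail artifact is easy to simulate: if one bright object moves between the three sequential mono exposures, the merged color image shows three colored ghosts (numpy, synthetic data):

      ```python
      import numpy as np

      h, w = 5, 16
      r, g, b = (np.zeros((h, w)) for _ in range(3))  # three sequential mono exposures
      r[2, 4] = g[2, 7] = b[2, 10] = 1.0  # the same white object at three moments in time
      rgb = np.stack([r, g, b], axis=-1)  # merged image: a red/green/blue streak, an
                                          # artifact of the capture method, not the scene
      ```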

      Even with conventional photography, even analog film, there’s image artifacts that derive from how the photo is taken, rather than what is true of the subject of the photograph. Bokeh/depth of field, motion blur, rolling shutter, and physical filters change the resulting image in a way that is caused by the camera, not the appearance of the subject. Sometimes it makes for interesting artistic effects. But it isn’t truth in itself, but rather evidence of some truth, that needs to be filtered through an understanding of how the image was captured.

      Like the Mitch Hedberg joke:

      I think Bigfoot is blurry, that’s the problem. It’s not the photographer’s fault. Bigfoot is blurry, and that’s extra scary to me.

      So yeah, at a certain point, for evidentiary proof in court, someone will need to prove some kind of chain of custody: that the image being shown in court is derived from some reliable and truthful method of capturing what actually happened in a particular time and place. For the most part, it’s simple today: I took a picture with a normal camera, and I can testify that it came out of the camera like this, without any further editing. As the chain of image creation starts to include more processing between photons on the sensor and a digital file being displayed on a screen or printed onto paper, we’ll need to remain mindful of the areas where that can be tripped up.

      • NoRodent@lemmy.world

        The crazy part is that your brain is doing similar processing all the time too. Ever heard of the blind spot? Your brain has literally zero data there but uses “content-aware fill” to hide it from you. Or the fact that your eyes are constantly scanning across objects and your brain is merging the views into a panorama on the fly, because only a small part of your field of vision has high enough fidelity. It will also create fake “frames” (look up the stopped-clock illusion) for the time your eyes are moving, where you should see a blur instead. There’s more stuff like this; a lot of it manifests in various optical illusions. So not even our own eyes capture the “truth”. And then of course there’s the (in)accuracy of memory when trying to recall what we’ve seen, which is an entirely different can of worms.

      • TheBest@midwest.social

        Fantastic expansion of my thought. This is something that isn’t going to be answered with an exact scientific value; it will have to be decided based on our human experiences with the tech. Interesting times ahead.

    • fuzzzerd@programming.dev

      This is what I was wondering about as I read the article. At what point does the post processing on the device become too much?

        • fuzzzerd@programming.dev

          What would you classify Google or Apple portrait mode as? It’s definitely doing something. We can probably agree that, at this point, it’s still a reasonably enhanced version of what was really there, while a Snapchat filter that turns you into a dog is obviously too much. The question is: where in that spectrum is the AI or algorithm too much?

          • Natanael@slrpnk.net

            It varies; there are definitely generative pieces involved, but they try not to make it blatant.

            If we’re talking about evidence in court, then practically speaking it’s more important whether the photographer themselves can testify about how accurate they think it is and how well it corresponds to what they saw. Any significantly AI-edited photo effectively becomes as strong evidence as a diary entry written by a person on the scene: it backs up their testimony to a certain degree by checking the witness’s consistency over time, instead of being trusted directly. The photo can lie just as much as the diary entry can, so it’s a test of credibility instead.

            If you use face swap then those photos are likely nearly unusable. Editing for colors and contrast, etc, still usable. Upscaling depends entirely on what the testimony is about. Identifying a person that’s just a pixelated blob? Nope, won’t do. Same with verifying what a scene looked like, such as identifying very pixelated objects, not OK. But upscaling a clear photo which you just wanted to be larger, where the photographer can attest to who the subject is? Still usable.

  • AnUnusualRelic@lemmy.world
    3 months ago

    Why not make it a fully AI court, if they were going to go that way? It would save so much time and money.

    Of course it wouldn’t be very just, but then regular courts aren’t either.

    • Sl00k@programming.dev

      In the same vein, Bloomberg just did a great study on ChatGPT 3.5 ranking resumes, and it found an extremely noticeable bias: it ranked Black names lower than average and Asian/white names far higher, despite similar qualifications.

      Archive source: https://archive.is/MrZIm

    • mojofrododojo@lemmy.world
      3 months ago

      Honestly, an open-source auditable AI Judge/Justice would be preferable to Thomas, Alito, Gorsuch and Barrett any day.

  • Voyajer@lemmy.world
    3 months ago

    You’d think it would be obvious that you can’t submit doctored evidence and expect it to be upheld in court.

    • elshandra@lemmy.world
      3 months ago

      If only it worked that way in practice eh.

      We found these marijuanas in his pocket, they were already in evidence bags for us even.

      The model that can separate fact from fiction and falsehood will be worth far more than any model creating fiction and falsehood.

  • rustyfish@lemmy.world

    For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022. The theory started because people started running screenshots of the slap through image upscalers, believing they could get a better look at what was happening.

    Sometimes I think our ancestors shouldn’t have made it out of the ocean.

  • Chemical Wonka@discuss.tchncs.de
    3 months ago

    Just for now. Soon this practice will be normalized and widely used; after all, we are in the late stage of capitalism, and all violations are relativized.