• mlg@lemmy.world · ↑7 · 2 hours ago

    Saying this like reddit hasn’t been bot-infested for a decade.

    I mean, there were some genuine-looking bots well before LLMs and AI, and even then you could just be lazy and make a copy-post bot that reposted old content for karma farming while the terminally online userbase upvoted your slop for you.

  • NeonNight@lemm.ee · ↑4 · 2 hours ago

    The bots would pretend to be black in order to say “as a black man, I don’t like BLM”, or pretend to be a male rape victim so they could say “the experience wasn’t as traumatic as some would say”. How the fuck do they reconcile this as ethical? They’re actively arguing with real people and acting like it’s all just random data. Social scientists once again dehumanizing their subjects, I guess.

  • Dzso@lemmy.world · ↑7 · 5 hours ago

    My assumption is that a majority of the content we see on all “social” platforms these days is AI-generated.

  • FinishingDutch@lemmy.world · ↑14 · 7 hours ago

    So they figured it was a good idea to use a racially-charged fake profile to provoke online users for an ‘experiment’? And one would assume these responses were subsequently studied without the poster’s consent?

    That’s going to run afoul of a few European privacy rules I imagine. Someone definitely needs to get fired, blackballed and sued for this. At the very least.

    • ArtificialHoldings@lemmy.world · ↑18 ↓1 · 9 hours ago

      Read the article instead of responding to the title. It was a university conducting formal research; they created AI bots that impersonated different identities and made “As a black man…”-style posts in r/ChangeMyView.

      The subreddit mods issued a formal complaint to the university when they learned of it, but the university is choosing not to block publication of the study, on the grounds that it caused no harm.

      • ladfrombrad 🇬🇧@lemdro.id · ↑23 · 10 hours ago

        Indeed, and it’s exactly what we did in r/BotDefense before greedy piggy spez shut down the API and the communities some had built.

        I used to watch in amazement at some of the guys who set up bots of their own just to report other bots to us.

        • Squorlple@lemmy.world · ↑12 · 8 hours ago

          You were on the mod side of r/BotDefense? I was a very avid reporter to it (so much so that people thought I was a bot), and I was eventually added to the secret Bot Defense subreddit that automatically flagged our reports as bots. I jumped ship when the API change came, since I saw how deeply vital access to that info was.

          Do you know of any active analogous systems for Lemmy? Or do you have any ideas as to what we could implement here to abate bad actors?

          I had mentioned this idea some time ago but it’s way beyond me to know how to set something like it up. Would you be willing and/or able to help out? What are your suggestions?

          • ladfrombrad 🇬🇧@lemdro.id · ↑3 · 1 hour ago

            Yeah, like you I was an avid reporter to r/BotBust. When its owner went off the rails, one of the team members set up BotDefense, and I got recruited to resolve the reports coming in from peeps like yourself and from our little counterpart bot, which flagged them at a fair old rate.

            I ain’t seen anything like BD over here on Lemmy. Some of the bot accounts here are at least listed as such, but I have seen numerous ones that aren’t “self-labelled”. It’d take a fair amount of effort, but if you’ve got enough people to review reports (especially the ones from humans) I can’t see why not: it’s basically looking for common markers / traits / flags.
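
            To make the “common markers” idea concrete, here is a minimal, hypothetical sketch of marker-based scoring. The comment records, weights, and thresholds are all made up for illustration; nothing here is a real Lemmy API, and anything real would start from moderator reports like the ones described above.

            ```python
            from collections import Counter
            from datetime import datetime, timedelta

            def bot_score(comments, account_created, now=None):
                """Crude heuristic score from common bot markers.

                `comments` is a list of dicts with hypothetical keys 'body' and
                'posted_at' (datetime). Higher score = more suspicious; nothing is
                actioned automatically, suspicious accounts only get queued for
                human review.
                """
                now = now or datetime.utcnow()
                score = 0

                # Marker 1: verbatim duplicate comment bodies (copy-post behaviour)
                counts = Counter(c["body"].strip() for c in comments)
                score += 2 * sum(n - 1 for n in counts.values() if n > 1)

                # Marker 2: burst posting (consecutive comments < 30 seconds apart)
                times = sorted(c["posted_at"] for c in comments)
                score += sum(1 for a, b in zip(times, times[1:])
                             if (b - a) < timedelta(seconds=30))

                # Marker 3: brand-new account with unusually high comment volume
                if (now - account_created) < timedelta(days=7) and len(comments) > 50:
                    score += 5

                return score

            # Example: anything above an (arbitrary) threshold goes to human reviewers
            if __name__ == "__main__":
                created = datetime(2025, 4, 20)
                sample = [
                    {"body": "Great post!", "posted_at": datetime(2025, 4, 21, 12, 0, 0)},
                    {"body": "Great post!", "posted_at": datetime(2025, 4, 21, 12, 0, 10)},
                ]
                print(bot_score(sample, created))  # e.g. review anything scoring >= 5
            ```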

          • Randomgal@lemmy.ca · ↑10 · 9 hours ago

            Nope. Lemmy is just so small that it doesn’t make sense to do it more than they already do. Lots of bots around already.

    • Shawdow194@fedia.io · ↑15 · 11 hours ago

      Yeah, how do they know they were interacting with real users and not another researcher/troll bot?

  • thedruid@lemmy.world · ↑45 ↓3 · edited · 11 hours ago

    Ummm. They knew, guys.

    Also, this is extremely violating. And this bullshit:

        “We believe the potential benefits of this research substantially outweigh its risks.”

    What kind of psychotic assholes run an experiment on an unsuspecting public because they believe it’s OK to fraudulently engage with others without their potential subjects being aware of it?

    This is something Trump would do.

  • Deconceptualist@lemm.ee · ↑22 ↓2 · 11 hours ago

    Wow. There’s no way this was actually greenlit by an ethics committee unless they’re all corrupt. This is so blatantly wrong and manipulative.

    • NeonNight@lemm.ee · ↑2 · 2 hours ago

      It was reviewed by an ethics board, but they then changed the parameters of the study (shifting more towards using personal info against commenters and making up personal info to use in arguments) without those changes being approved by the board. Essentially, their study wasn’t going anywhere, so they made it more unethical in order to have more data to work with. They’re on Reddit trying to defend themselves, but no one is having it.

  • Empricorn@feddit.nl · ↑6 · 10 hours ago

    Reddit sold user content so it could be used for AI training! If they’re mad or take legal action, it’s only because they didn’t get a piece of the action…

  • P00ptart@lemmy.world · ↑2 · 9 hours ago

    Well, it USED to be a good place to ask a question, because with so many people on it there were likely to be several experts on _______ around at any given time. Then it got more and more left, then more and more right, and after a while it felt like being on Twitter again. What I’m saying is that their value and name were already pretty worthless; this just makes it zero.