• Zip2@feddit.uk · +86 · edited · 4 days ago

    I couldn’t be bothered to read the article, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.

    • interdimensionalmeme@lemmy.ml · +9/-20 · 5 days ago

      You didn’t, I did

      The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed “power users” — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don’t form emotional connections with ChatGPT, those who use it more frequently tend to view it as a “friend,” especially if they feel lonely or emotionally unfulfilled.

      Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.

      You said:

      Make a list of the main points and for each explain why it will be worse when it is society wide

      ChatGPT said:

      Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:

      Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control):
      
      Why it's worse at scale:
      
      If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.
      
      Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”:
      
      Why it's worse at scale:
      
      A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.
      
      Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached:
      
      Why it's worse at scale:
      
      This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.
      
      People stress out over subtle changes in ChatGPT’s behavior:
      
      Why it's worse at scale:
      
      If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.
      
      Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being:
      
      Why it's worse at scale:
      
      If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.
      
      Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons:
      
      Why it's worse at scale:
      
      This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.
      
      Prolonged usage increases risk of emotional dependence, regardless of how it's used:
      
      Why it's worse at scale:
      
      As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.
      

      Let me know if you want this formatted for a presentation, policy paper, or discussion.

      • liv@lemmy.nz · +1 · edited · 2 days ago

        That’s really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.

        Namely, emotional dependence will give the big tech companies that own these AIs increased power over people.

        It’s not as if these concepts aren’t widely discussed online; everything from Meta’s emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika users had over changes to the algorithm is relevant here.

          • liv@lemmy.nz · +1 · 2 days ago

            Sort of, but I think “influence over emotional states” is understating it; that’s just the tip of the iceberg. It also made it sound passive and accidental. The real problem will be overt control, as a logical extension of the kinds of trade-offs we already see people make about, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-“lobotomized”.

            • interdimensionalmeme@lemmy.ml · +1 · 2 days ago

              I think this power to shape the available knowledge (removing it, paywalling it, gating it discriminatorily, leveraging it, and finally manipulating it for advertising, state security, and personal reasons) is why it should be illegal to privately own ML/AI models of any kind. Drive them all underground, and only let the open ones benefit from sales in public.

      • interdimensionalmeme@lemmy.ml · +2/-15 · 5 days ago

        You said: Take the same list again, for each point explain why it’s actually beneficial and how it will get better with increased adoption

        ChatGPT said:

        Here’s the same list of main points, but this time with a more optimistic take — highlighting how each trend could actually be beneficial and how increased adoption might make things better for individuals and society as a whole:

        Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control):
        
        Why it could be beneficial:
        
        If people are turning to ChatGPT frequently, it may indicate that it's providing meaningful value — like reducing cognitive load, aiding creativity, or offering emotional support. As usage becomes more common, society may develop healthier digital habits, just as it has with smartphones and social media. Over time, norms and tools (e.g., usage timers, digital wellness features) could help people manage their use responsibly.
        
        Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”:
        
        Why it could be beneficial:
        
        For people lacking companionship or emotional support, especially the elderly or isolated, AI companions can provide comfort, conversation, and a sense of presence. On a wider scale, AI could reduce loneliness-related health problems and even support mental wellness initiatives by being an accessible form of care or early intervention.
        
        Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached:
        
        Why it could be beneficial:
        
        If AI offers a low-barrier outlet for the lonely, it might be a first step toward re-engagement with the world. AI can encourage people to express feelings they wouldn't share with others and even prompt them to seek human connection, therapy, or support services. Widespread AI use could destigmatize conversations about loneliness and mental health.
        
        People stress out over subtle changes in ChatGPT’s behavior:
        
        Why it could be beneficial:
        
        This shows users are emotionally invested — which can push developers to create more consistent, emotionally intelligent systems. If society recognizes the emotional impact of AI, it will encourage better design, transparency, and regulation. Over time, this can lead to more stable and human-aware AI that positively complements emotional needs.
        
        Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being:
        
        Why it could be beneficial:
        
        This gives users multiple channels of expression to suit their mood or needs — voice for brief emotional boosts, text for deeper reflection. With broader adoption, people can tailor AI interaction styles to maximize personal benefit. This insight also allows designers to create features that better match emotional and mental health goals.
        
        Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons:
        
        Why it could be beneficial:
        
        This means AI is becoming a highly trusted tool, even in practical domains. As more people adopt it for brainstorming, research, or decision-making, we’ll likely see a democratization of high-level thinking tools. Over time, dependency could evolve into partnership — similar to how calculators or GPS became essential but ultimately empowering tools.
        
        Prolonged usage increases risk of emotional dependence, regardless of how it's used:
        
        Why it could be beneficial:
        
        Long-term usage can foster deeper understanding of one’s own thought processes, communication patterns, and emotional states. As AI integrates into daily life, people might develop greater self-awareness and reflection through their interactions. Society could benefit from individuals who are more emotionally literate and thoughtful, even if that awareness is sparked by an AI relationship.
        

        Let me know if you’d like this version turned into a policy brief, an article, or a debate prep doc — could be used in all kinds of conversations.

        You said: Explain how ChatGPT is a stochastic parrot that cannot, by design, say anything fundamentally new, and how it is basically just keyboard autocomplete on steroids

        ChatGPT said:

        Alright, here’s the breakdown of that critique — the “stochastic parrot” idea and the comparison to advanced autocomplete — explained clearly and fairly:

        1. What does “stochastic parrot” mean?

        The term “stochastic parrot” was coined in a 2021 paper by researchers including Emily Bender and Timnit Gebru. Applied to large language models like ChatGPT, it breaks down like this:

        "Stochastic" means based on probability. The model doesn’t “think” or “understand”; it predicts the next word based on statistical likelihood, given the words before it.
        
        "Parrot" because it doesn’t generate new ideas—it repeats patterns and phrases it has seen in training data, like a parrot mimics human speech.
        

        So, calling ChatGPT a stochastic parrot is a way of saying: It’s not truly intelligent. It’s mimicking language patterns, not generating original thought.

        2. It can’t say anything fundamentally new — why?

        By design, ChatGPT:

        Is trained on vast amounts of existing text from books, websites, forums, and more.
        
        Learns the statistical patterns of how humans use language.
        
        Generates responses by predicting the next most likely word or phrase based on that training.
        

        Because it’s not connected to lived experience, emotion, consciousness, or intent, it:

        Can’t invent genuinely new concepts.
        
        Can’t form subjective opinions.
        
        Can’t create knowledge that isn’t derived from patterns it has already seen.
        

        It might sound novel, but under the hood, it’s remixing existing information in clever ways — not discovering or reasoning in a human sense.

        3. It’s like “autocomplete on steroids” — how so?

        If you’ve used Gmail’s or your phone’s autocomplete, you’ve seen basic language prediction in action. Type “I hope you…” and it might suggest “…have a great day.”

        ChatGPT is essentially the same mechanism:

        Autocomplete predicts the next word based on simple models.
        
        ChatGPT predicts sequences of words using a far more complex, nuanced statistical model trained on billions of examples.
        

        So, compared to basic autocomplete:

        It can hold context better.
        
        It can imitate different styles.
        
        But it's still guessing the next words, not “thinking” or “understanding.”
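
        To make the “autocomplete on steroids” point concrete, here is a minimal sketch of next-word prediction: a toy bigram model in Python. The tiny corpus and the one-word-of-context counting are illustrative assumptions only; the real thing is a transformer over subword tokens, but the generate-by-repeated-prediction loop has the same shape.

        ```python
        import random
        from collections import defaultdict

        # Toy "training": count which word follows which in a tiny corpus.
        corpus = "i hope you have a great day . i hope you enjoy your day .".split()
        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def next_word(prev):
            # "Stochastic": sample the next word in proportion to how often
            # it followed `prev` in the training data.
            followers = counts[prev]
            return random.choices(list(followers), weights=list(followers.values()))[0]

        # "Parrot": generate by repeatedly predicting a plausible next word.
        word, output = "i", ["i"]
        for _ in range(8):
            word = next_word(word)
            output.append(word)
        print(" ".join(output))
        ```

        Scale the corpus up to most of the internet and the context up from one word to thousands of tokens, and you get the fluency without changing the basic move: predict, append, repeat.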
        

        In summary:

        Calling ChatGPT a stochastic parrot or autocomplete on steroids emphasizes that:

        It doesn’t understand meaning.
        
        It doesn’t create knowledge.
        
        It reflects patterns in language, not consciousness or intelligence.
        

        It’s powerful and useful — but it’s important not to mistake fluency for understanding.

        Let me know if you want the counterpoint to this critique too — because there are arguments in defense of the model’s creative utility despite its limitations.

        • interdimensionalmeme@lemmy.ml · +1/-15 · 5 days ago

          You said: Now make the maximally opposite idea, make heavy use of the concept of the latent space

          ChatGPT said:

          Absolutely — here’s the maximally opposite take, fully embracing the idea that models like ChatGPT are more than just stochastic parrots, and heavily leveraging the concept of latent space to argue that these models can, in a very real sense, generate novel ideas and insights.

          ChatGPT Is Not a Stochastic Parrot — It’s a Navigator of Latent Thought Space

          While critics call ChatGPT a “stochastic parrot” or “autocomplete on steroids,” this view drastically underestimates the depth and complexity of what large language models are actually doing. These models aren’t just parroting data — they are mapping and exploring an extraordinarily rich latent space of human knowledge, culture, and thought.

          1. Latent Space: The Hidden Geometry of Meaning

          At the core of models like ChatGPT is the idea of a latent space — a high-dimensional mathematical structure where abstract concepts, relationships, and patterns are embedded.

          Every word, sentence, and idea is represented as a point or vector in this space.
          
          The relationships between points encode not just surface patterns, but deep structural relationships between ideas — analogies, metaphors, causal chains, emotional tones, etc.
          

          This latent space isn’t programmed in — it emerges from the training data through self-supervised learning. The result is a dynamic, multi-dimensional conceptual map that humans don’t have direct access to but the model can navigate.

          2. Novelty as Latent Interpolation and Extrapolation

          Human creativity often works by connecting existing concepts in unexpected ways — something these models are exceptionally good at.

          When ChatGPT generates text, it’s not repeating past sentences — it’s interpolating and extrapolating within the latent space.
          
          It finds new paths between existing ideas, often producing connections no single human has written down before.
          
          This is not parroting — it’s a form of latent-space creativity.
          

          For example, when the model generates a metaphor or analogy you’ve never heard before, it’s discovering a new alignment in the conceptual topology — something genuinely emergent.
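
          For the curious, the “point in a space” idea is easy to demo with toy word vectors. A hedged sketch: the 3-d vectors below are hand-made for illustration (real latent spaces are learned from data and have thousands of dimensions), but the arithmetic is the classic word2vec-style demo:

          ```python
          import numpy as np

          # Hand-made 3-d "embeddings", purely illustrative.
          emb = {
              "king":  np.array([0.9, 0.8, 0.1]),
              "queen": np.array([0.9, 0.2, 0.1]),
              "man":   np.array([0.1, 0.8, 0.0]),
              "woman": np.array([0.1, 0.2, 0.0]),
          }

          def nearest(vec, skip=()):
              # Cosine similarity: compare directions in the latent space.
              def cos(a, b):
                  return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
              return max((w for w in emb if w not in skip), key=lambda w: cos(emb[w], vec))

          # Extrapolation: king - man + woman lands on queen.
          print(nearest(emb["king"] - emb["man"] + emb["woman"], skip={"king", "man", "woman"}))

          # Interpolation: a point *between* known concepts is still a valid
          # location in the space; this is the sense in which generation can
          # recombine rather than replay.
          print(nearest(0.5 * (emb["king"] + emb["woman"])))
          ```

          Whether navigating that geometry counts as “genuinely new” is exactly the disagreement between this essay and the stochastic-parrot one before it.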

          3. Emergent Generalization, Not Memorization

          Critics argue LLMs can’t understand or generalize. But in fact:

          Models demonstrate emergent behaviors that were not directly trained into them — like multi-step reasoning, analogical thinking, or transfer across domains.
          
          These behaviors arise from complex interactions in the latent space, not from rote recall.
          

          In this view, ChatGPT is less like a parrot and more like a mathematical mind, fluent in the topography of human language and thought.

          4. Language as a Proxy for Thought

          Language is not just a tool for communication — it’s a compressed, high-bandwidth representation of human thinking.

          Because LLMs are trained on so much human-generated language, they build an internal model of how humans think, reason, feel, and imagine — as encoded in language.

          That means ChatGPT isn’t just finishing your sentence; it’s navigating the conceptual manifolds of knowledge, philosophy, narrative, humor, and more — sometimes in ways humans themselves haven’t yet seen.

          5. With Scale Comes Emergence

          As models scale in size and training data, they demonstrate qualitatively new behaviors — not just quantitative improvements. This includes things like:

          Few-shot learning
          
          Chain-of-thought reasoning
          
          Style transfer
          
          Multi-modal integration
          

          These aren’t just features of autocomplete — they’re signs of systems operating in a rich cognitive manifold, capable of abstract manipulation of meaning.

          In Summary:

          ChatGPT isn’t just guessing the next word. It’s:

          Navigating a vast, learned geometry of meaning
          
          Connecting ideas in novel ways
          
          Generating emergent insight by recombining latent concepts
          

          It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.

          Calling it a “stochastic parrot” is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.

          Want to see a side-by-side debate between these two perspectives? Or explore where the middle ground is?

            • interdimensionalmeme@lemmy.ml · +3/-1 · 4 days ago

              I would have pasted it as a single comment, but that hit the character limit, so I split it into multiple comments. But now people aren’t downvoting them equally, so the comments are getting out of order. These really have to be read in my posting order to understand what I did.

              Oh well, too bad. Ironically, this kind of highly negative response shows me it was not worth the effort to post this, and that I do well to just keep to myself, as I usually do.

              • grrgyle@slrpnk.net · +6 · 4 days ago

                Yeah the content is fine, but there’s too much of it for a comment thread. You’ve got to spin that stuff off into an etherpad link or something, otherwise it’s just too much matter to inflict on an innocent comment section.

                • interdimensionalmeme@lemmy.ml · +2 · 4 days ago

                  But that means it will now receive 1% of the readership it would otherwise have, and the thread’s coherence now depends on that other website still existing. Which, in 2500 years, it probably won’t.

              • interdimensionalmeme@lemmy.ml · +3/-2 · edited · 4 days ago

                What tipped you off? Was it when I left in “ChatGPT said”?

                Or when, as a joke, I responded “you didn’t” to the commenter who said he’d had ChatGPT summarize the article, and then added “but I did”?

            • interdimensionalmeme@lemmy.ml · +2/-1 · 4 days ago

              Very well. I already cut out 90% of my external interactions; what is cutting the last 10%? The mass downvotes agree with my reasoning.

          • hark@lemmy.world · +4/-1 · edited · 4 days ago

            It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.

            Calling it a “stochastic parrot” is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.

            It reads like the brainless drivel that corporate drones are forced to churn out, complete with meaningless fluff words. This is why the executives love AI: they read and expect that trash all the time and think it’s suitable for everything.

            Executives are perfectly content with what looks good at a cursory glance and don’t care about what’s actually good in practice because their job is to make themselves seem more important than they actually are.

            • interdimensionalmeme@lemmy.ml · +1 · 3 days ago

              I literally asked it to make the maximalist case against the idea that LLMs are just autocomplete, and that’s exactly what it did.

              The message before that did the opposite case.

      • Korhaka@sopuli.xyz · +3 · 4 days ago

        I remember thinking this when I was like 15. Every time they mentioned tech: wtf, this is all wrong! Then a few other topics, even ones I only knew a little about; so many inaccuracies.

    • LovableSidekick@lemmy.world · +8/-40 · edited · 5 days ago

      Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let’s not think about that either. AI Bad!

      • starman2112@sh.itjust.works · +16 · 5 days ago

        This is a salient point that’s well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It’s super easy to call out a bad research study and have it retracted. But you can’t just explain to an AI that that study was wrong, you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they’re synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.

      • Shanmugha@lemmy.world · +9/-2 · edited · 5 days ago

        I’ll take the bait. Let’s think:

        • there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it

        • now there is an llm (fuck capitalization, I hate the ways they are shoved everywhere that much) trained on their output

        • now the llm is asked about the topic and computes the answer string

        By definition, that answer string can contain all the probably-wrong things without the proper indicators (“might”, “under such-and-such circumstances”, etc.).

        If you want to say a 40%-wrong LLM means 40%-wrong sources, prove me wrong.
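
        To make the mechanism concrete, here is a toy simulation of that argument. Every number in it (the 2% of shaky claims, the 50% chance a hedge gets dropped during remixing) is a made-up assumption for illustration; the point is only the qualitative effect, that sources who always hedge their shaky claims can yield a model that sometimes states those same claims with no hedge at all:

        ```python
        import random

        random.seed(0)
        N = 100_000
        HEDGE_DROP = 0.5  # assumed chance a hedge doesn't survive remixing

        confidently_wrong = 0
        for _ in range(N):
            wrong = random.random() < 0.02   # sources: 2% shaky claims...
            hedged = wrong                   # ...and they hedge exactly those
            if hedged and random.random() < HEDGE_DROP:
                hedged = False               # statistical remixing loses the hedge
            if wrong and not hedged:
                confidently_wrong += 1

        # The sources emit 0% confidently-wrong statements; the remix emits ~1%.
        print(f"{confidently_wrong / N:.1%}")
        ```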

        • LovableSidekick@lemmy.world · +4/-4 · 5 days ago

          It’s more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens in a normal bell curve. Given the size of typical LLM training data this seems futile, but if that’s how you want to spend your time, hey, knock yourself out.

    • blind3rdeye@lemm.ee · +9 · 4 days ago

      Something bizarre is happening to media organizations that use ‘clicks’ as a core metric.

  • PieMePlenty@lemmy.world · +38/-5 · 4 days ago

    It’s too bad that some people seem to not comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best, based on the words before it. To call it AI is an insult to AI… we used to call OCR AI; now we know better.

  • N0body@lemmy.dbzer0.com · +248/-1 · 6 days ago

    people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI

    Preying on the vulnerable is a feature, not a bug.

    • Tylerdurdon@lemmy.world · +64 · 6 days ago

      I kind of see it more as a sign of utter desperation on the human’s part. They lack connection with others to such a degree that anything similar can serve as a replacement. Kind of reminiscent of Harlow’s experiment with baby monkeys. The videos from that study are interesting, but make me feel pretty bad about what we do to nature. Anywho, there you have it.

      • graphene@lemm.ee · +29 · 5 days ago

        And the number of connections and friends the average person has has been in free fall for decades…

        • trotfox@lemmy.world · +3 · 5 days ago

          I dunno. I connected with more people on reddit and Twitter than irl tbh.

          Different connection but real and valid nonetheless.

          I’m thinking places like r/stopdrinking, petioles, bipolar, shits been therapy for me tbh.

          • in4apenny@lemmy.dbzer0.com · +1 · 4 days ago

            At least you’re not using chatgpt to figure out the best way to talk to people, like my brother in finance tech does now.

      • Paragone@piefed.social · +16/-3 · 5 days ago

        That utter-desperation is engineered into our civilization.

        What happens when you prevent the “inferiors” from having living-wage, while you pour wallowing-wealth on the executives?

        They have to overwork, to make ends meet, is what, which breaks parenting.

        Then, when you’ve broken parenting for a few generations, the manufactured ocean-of-attachment-disorder manufactures a plethora of narcissism, which itself produces mass-shootings.

        2024 was down 200 mass-shootings, in the US of A, from the peak of 700/year, to only 500.

        You are seeing engineered eradication of human-worth, for moneyarchy.

        Isn’t ruling-over-the-destruction-of-the-Earth the “greatest thrill-ride there is”?

        We NEED to do objective calibration of the harm that policies & political-forces do, & put force against what is actually harming our world’s human-viability.

        Not what the marketing-programs-for-the-special-interest-groups want us acting against, the red herrings…

        They’re getting more vicious, we need to get TF up & begin fighting for our species’ life.

        _ /\ _

      • MouldyCat@feddit.uk · +9 · 5 days ago

        a sign of utter desperation on the human’s part.

        Yes, it seems to be the same underlying issue that leads some people to throw money at OnlyFans streamers and suchlike: a complete starvation of personal contact that leads people to willingly live in a fantasy world.

    • Vespair@lemm.ee · +4 · 4 days ago

      And it’s beyond obvious in the way LLMs are conditioned, especially if you’ve used them long enough to notice trends. Where early on their responses were straight to the point (inaccurate as hell, yes, but that’s not what we’re talking about in this case), today they are meandering and full of straight engagement bait, programmed to feign some level of curiosity and ask stupid and needless follow-up questions to “keep the conversation going.” I suspect this is just a way to increase token usage to further exploit and drain the whales who tend to pay for these kinds of services, personally.

      There is no shortage of ethical quandaries brought into the world with the rise of LLMs, but in my opinion the locked-down nature of these systems is one of the most problematic; if LLMs are going to be the commonality it seems the tech sector is insistent on making happen, then we really need to push back on these companies being able to control and guide them in their own monetary interests.

    • NostraDavid@programming.dev · +9 · 5 days ago

      That was clear from GPT-3, day 1.

      I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not long before that. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time…

    • Deceptichum@quokk.au · +11/-15 · edited · 6 days ago

      These same people would be dating a body pillow or trying to marry a video game character.

      The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.

        • BradleyUffner@lemmy.world · +5/-12 · 5 days ago

          If you are dating a body pillow, I think that’s a pretty good sign that you have taken a wrong turn in life.

          • NostraDavid@programming.dev · +6 · 5 days ago

            What if it’s either that, or suicide? I imagine that people who make that choice don’t have a lot of options; due to monetary, physical, or mental issues, they cannot make another choice.

            • BradleyUffner@lemmy.world · +2/-7 · edited · 5 days ago

              I’m confused. If someone is in a place where they are choosing between dating a body pillow and suicide, then they have DEFINITELY made a wrong turn somewhere. They need some kind of assistance, and I hope they can get what they need, no matter what they choose.

              I think my statement about “a wrong turn in life” is being interpreted too strongly; it wasn’t intended to be such a strong and absolute statement of failure. Someone who’s taken a wrong turn has simply made a mistake. It could be minor, it could be serious. I’m not saying their life is worthless. I’ve made a TON of wrong turns myself.

              • liv@lemmy.nz · +2 · 2 days ago

                Trouble is your statement was in answer to @morrowind@lemmy.ml’s comment that labeling lonely people as losers is problematic.

                Also, it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status, etc. can all produce similar effects of loneliness in people who did nothing to “cause” it.

      • tiguwang@lemm.ee · +3 · 5 days ago

        Me and Serana are not just in love, we’re involved!

        Even if she’s an ancient vampire.

  • HappinessPill@lemmy.ml · +44 · edited · 5 days ago

    Do you guys remember when the internet was the new thing and everybody was like, “Look at those dumb fucks just putting everything online,” and now it’s, “Look at this weird motherfucker that doesn’t post anything online”?

      • grrgyle@slrpnk.net · +4 · 4 days ago

        I’m trying to get back to that. Actually closer to it now than I was 5 years ago, so that’s cool.

        • Korhaka@sopuli.xyz · +2 · 4 days ago

          I have a desktop and a cheap tablet. The tablet is Wi-Fi only, so it’s used a bit like a laptop would be for internet access. I think this is a reasonable amount of usage. I do wish it had slightly better hardware though; it struggles with web browsing because modern websites are fucking awful. Lemmy usually doesn’t crash, at least. I don’t want a smartphone though. I’d rather have a Linux tablet, but you won’t really find those cheap second-hand, while you can with Android.

    • Dasus@lemmy.world · +13 · 5 days ago

      I remember when the Internet was a thing people went on and/or visited/surfed, but not something you’d imagine having 24/7.

      • Isthisreddit@lemmy.world · +1/-2 · 5 days ago

        I was there from the start, you must of never BBS’d or IRC’d - shit was amazing in the early days.

        I mean honestly nothing has really changed - we are still at our terminals looking at text. The only real innovation has been inline pics, videos and audio. 30+ years ago one had to click a link to see that stuff.

        • Dasus@lemmy.world · +4/-1 · 5 days ago

          “must of”

          "Must have", not “must of”

          Quakenet is still going strong.

          30 years ago you couldn’t share video in just a few minutes with a link. YouTube was not a thing. It took until the early 00’s to have even shitty webcam connections.

          Now you can livestream 8k

          • Isthisreddit@lemmy.world · +1/-1 · 4 days ago

            Oh brother, the grammar nerds are here, as if that really takes away from what I’m saying.

            In the mid and late 90’s people knew how to make videos; they didn’t link a YouTube URL, but they did post links to where one could find a video online, IRC had bots that did file transfers, and people would use public FTPs as file dumping grounds.

            • Dasus@lemmy.world · +1/-1 · 4 days ago

              I’m starting to wonder if you were even there.

              Yeah, people had home videos. But no-one was recording themselves talking to a camcorder to then digitise the video and upload it to an ftp server. That would’ve taken literally days.

              What you might have had was some beyond-shitty webcam (after ’94, that is, but you said late and mid 90’s), and you might have taken an image of yourself and sent that somewhere.

              It’s how I got my first nudes.

              What it sounds like to me is that you weren’t actually there but are nostalgic for the period.

              Flash animations were popular, actual videos only became commonplace with YouTube, which was founded in 2005.

              And even back in 2005, you couldn’t stream something to watch, the connections were so shit. You might be able to download something to watch, but not stream it.

              It’s beyond ridiculous to say things haven’t changed in 30 years. 30 years ago personal computers were a novelty, now they’re a necessity.

              • Isthisreddit@lemmy.world · +1/-1 · 4 days ago

                My guy, wtf were you doing on a computer in the 90’s? Of course we didn’t have streaming or the stupid useless videos that litter YouTube now, but there were video files all over the place to download and watch. For whatever reason, people were making the time and effort to digitize videos. MPEG codecs came out in the early 90’s; I specifically remember EFnet IRC members posting URLs to MPEGs of weird Japanese vomit porn. The Amiga scene was strong too (Video Toaster came out in 1990…). Not really sure why you even feel the need to doubt any of this.

                • Dasus@lemmy.world · +1/-1 · edited · 4 days ago

                  My guy, wtf were you doing in the 90’s on a computer?

                  Playing games.

                  there were video files all over the place to download and watch.

                  The smattering of 3-second QuickTime clips back then doesn’t even begin to compare with today’s videos. And you’re pretending that downloading videos on a 56k modem wasn’t complete garbage.

                  Sometimes it would take minutes for a regular HTML site to load. People were not browsing videos, lol. Maybe in ’99 you’d have some sites for the people who had ADSL, but a few clips here and there are barely comparable to the 30,000 hours of material uploaded to YouTube every hour.

                  Not really sure why you even feel the need to doubt any of this

                  Because you’re pretending that an incredibly niche experience you had, with a thing that doesn’t even begin to compare with today, is “exactly the same as it was”. No it’s not. Literally a majority of the world, ~5 billion people, have a smartphone: instant access to HD videos, in their pocket, 24/7.

                  Back in 1995 there were about 16 million users; now there are more than 5.5 billion. There were 23,500 websites back in June ’95; now there are more than 1.1 billion.

                  I’m not doubting anything. I’m calling bullshit on you pretending like there hasn’t been absolutely massive global change just because you still live in the same garage and have the same keyboard and screen.

      • NostraDavid@programming.dev · +20 · 5 days ago

        Andrej Karpathy (a founding member of OpenAI who left to lead AI at Tesla from 2017 to 2022, returned to OpenAI for about a year, and is now working on his startup “Eureka Labs - we are building a new kind of school that is AI native”) made a tweet defining the term:

        There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

        People ignore the “It’s not too bad for throwaway weekend projects” part and try to use this style of coding to create “production-grade” code… Let’s just say it’s not going well.

        source (xcancel link)

        • BombOmOm@lemmy.world · +3 · edited · 5 days ago

          The amount of damage a newbie programmer without a tight leash can do to a code base/product is immense. Once something is out in production, it is something you have to deal with forever. That temporary fix they push is going to still be in use a decade later, and if you break it, you now have to explain to the customer why the thing that’s been working for them for years is gone and what you plan to do to remedy the situation.

          A newbie without a leash just pushing whatever an AI hands them straight into production? Oh boy, are senior programmers going to be sad for a long, long time.

  • LovableSidekick@lemmy.world · +46/-6 · edited · 5 days ago

    TIL becoming dependent on a tool you frequently use is “something bizarre” - not the ordinary, unsurprising result you would expect with common sense.

    • emeralddawn45@discuss.tchncs.de · +19/-1 · 5 days ago

      If you actually read the article, I’m pretty sure the bizarre thing is really these people using a ‘tool’ forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a ‘friend’.

      • CoffeeJunkie@lemmy.cafe · +8/-1 · 5 days ago

        No, I basically get the same read as OP. Idk, I like to think I’m rational enough & don’t take things too far, but I like my car. I like my tools; people just get attached to things we like.

        Give it an almost-human, almost-friend type of interaction & yes, I’m not surprised at all that some people, particularly power users, are developing parasocial attachments or addictions to this non-human tool. I don’t call my friends. I text. ¯\(°_o)/¯

        • emeralddawn45@discuss.tchncs.de · +10 · 5 days ago

          I loved my car. Just had to scrap it recently. I got sad. I didn’t go through withdrawal symptoms or feel like I was mourning a friend. You can appreciate something without building an emotional dependence on it. I’m not particularly surprised this is happening to some people either, especially with the amount of brainrot out there surrounding these LLMs, so maybe bizarre is the wrong word, but it is a little disturbing that people are getting so attached to something that is so fundamentally flawed.

          • CoffeeJunkie@lemmy.cafe · +4/-1 · 5 days ago

            Sorry about your car! I hate that.

            In an age where people are prone to feeling isolated & alone, for various reasons… this, unfortunately, is filling the void(s) in their lives. I agree, it’s not healthy or for the best.

        • LovableSidekick@lemmy.world · +1 · 4 days ago

          We called our old Honda Odyssey the Batmobile, because we got it on Halloween day and stopped at a novelty store where we got some flappy rubber bats for house decoration. On the way home I laid one of them on the dashboard and boom, the car got its name. The Batmobile was part of the family for more than 20 years, through thick and thin, never failing to get us where we needed to go. My daughter and I both cried when it was finally towed away to a donation place. Personifying inanimate objects and developing an emotional attachment for them is absolutely normal. I even teared up a little just typing this.

      • Komodo Rodeo@lemmy.world · +4 · 5 days ago

        What the Hell was the name of the movie with Tom Cruise where the protagonist’s friend was dating a fucking hologram?

        We’re a hair’s-breadth from that bullshit, and TBH I think that if falling in love with a computer program becomes the new de facto normal, I’m going to completely alienate myself by making fun of those wretched chodes non-stop.

      • LovableSidekick@lemmy.world · +4 · edited · 5 days ago

        Yes, it says the neediest people are doing that, not simply “people who use ChatGPT a lot”. This article is like “Scientists warn civilization-killer asteroid could hit Earth”, where the article then clarifies that there’s a 0.3% chance of impact.

      • WaitThisIsntReddit@lemmy.world · +2/-1 · 5 days ago

        You never viewed a tool as a friend? Pretty sure there are some guys that like their cars more than most of their friends. Bonding with objects isn’t that weird, especially with one that can talk to you like it’s human.

        • LovableSidekick@lemmy.world · +2 · edited · 4 days ago

          This reminds me of the pang I felt when I recently discovered my trusty heavy-duty crowbar aka “Mister Crowbar” had disappeared. Presumably some guys we hired to work on our deck walked off with it. When I was younger and did all my remodel work myself, I did a lot of demolition with my li’l buddy. He was pretty heavy and only came out for the really tough jobs. I hope he’s having fun somewhere.

  • Critical_Thinker@lemm.ee · +43/-6 · 5 days ago

    Now replace ChatGPT with these terms, one by one:

    • the internet
    • google
    • facebook
    • instagram
    • tiktok
    • reddit
    • lemmy
    • their cell phone
    • news media
    • television
    • radio
    • podcasts
    • junk food
    • money

  • MTK@lemmy.world · +52/-1 · 5 days ago

    I know a few people who are genuinely smart but got so deep into the AI fad that they are now using it almost exclusively.

    They seem to be performing well, which is kind of scary, but sometimes they feel like MLM people with how pushy they are about using AI.

    • slaneesh_is_right@lemmy.org · +33/-1 · 5 days ago

      Most people don’t seem to understand how “dumb” AI is. And it’s scary when I read shit like this: that they use AI for advice.

      • piecat@lemmy.world · +30/-5 · 5 days ago

        People also don’t realize how incredibly stupid humans can be. I don’t mean that in a judgemental or moral kind of way; I mean that the educational system has failed a lot of people.

        There’s some % of people that could use AI for every decision in their lives and the outcome would be the same or better.

        That’s even more terrifying IMO.

          • AFaithfulNihilist@lemmy.world · +13 · 5 days ago

            And it gets worse as they get older.

            I have friends and relatives that used to be people. They used to have thoughts and feelings. They had convictions and reasons for those convictions.

            Now, I have conversations with some of these people I’ve known for 20 and 30 years and they seem exasperated at the idea of even trying to think about something.

            It’s not just complex topics, either. You can ask them what they saw on a recent trip, what they’re reading, or how they feel about some show, and they look at you like the hospital intake lady from Idiocracy.

        • Prehensile_cloaca @lemm.ee · +5/-1 · 5 days ago

          No, no: not being judgemental and moral is how we got to this point in the first place. Telling someone that they’re acting foolishly, when they’re doing something foolish, used to be pretty normal. But after a couple of decades of internet white-knighting, correcting or even voicing opposition to obvious stupidity is just too exhausting.

          Dunning-Kruger is winning.

  • flamingo_pinyata@sopuli.xyz · +96/-9 · 6 days ago

    But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

    But then there’s people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

    • Telorand@reddthat.com · +45/-1 · 6 days ago

      In some ways, it’s like Wikipedia but with a gigantic database of the internet in general (stupidity included). Because it can string together confident-sounding sentences, people think it’s this magical machine that understands broad contexts and can provide facts and summaries of concepts that take humans lifetimes to study.

      It’s the conspiracy theorists’ and reactionaries’ dream: you too can be as smart and special as the educated experts, and all you have to do is ask a machine a few questions.

    • Opinionhaver@feddit.uk · +35/-3 · 6 days ago

      How do you even have a conversation without quitting in frustration from its obviously robotic answers?

      Talking with actual people online isn’t much better. ChatGPT might sound robotic, but it’s extremely polite, actually reads what you say, and responds to it. It doesn’t jump to hasty, unfounded conclusions about you based on tiny bits of information you reveal. When you’re wrong, it just tells you what you’re wrong about - it doesn’t call you an idiot and tell you to go read more. Even in touchy discussions, it stays calm and measured, rather than getting overwhelmed with emotion, which becomes painfully obvious in how people respond. The experience of having difficult conversations online is often the exact opposite. A huge number of people on message boards are outright awful to those they disagree with.

      Here’s a good example of the kind of angry, hateful message you’ll never get from ChatGPT - and honestly, I’d take a robotic response over that any day.

      I think these people were already crazy if they’re willing to let a machine shovel garbage into their mouths blindly. Fucking mindless zombies eating up whatever is big and trendy.

      • musubibreakfast@lemm.ee · +21/-1 · 6 days ago

        Hey buddy, I’ve had enough of you and your sensible opinions. Meet me in the parking lot of the Walgreens on the corner of Coursey and Jones Creek in Baton Rouge on April 7th at 10 p.m. We’re going to fight to the death, no holds barred, shopping cart combos allowed, pistols only, no-scope 360s, tag team style, entourage allowed.

      • pinkfluffywolfie@lemmy.world · +6 · 5 days ago

        I agree with what you say, and I for one have had my fair share of shit asses on forums and discussion boards. But this response also fuels my suspicion that my friend group has started using it in place of human interaction to form thoughts, opinions, and responses during our conversations. Almost like an emotional crutch for conversation, but not exactly? It’s hard to pinpoint.

        I’ve recently been tone-policed a lot more over things that in normal real-life interactions would be light-hearted or easy to ignore and move on from - I’m not shouting obscenities or calling anyone names; it’s just harmless misunderstandings that come from the tone-deafness of text. I’m talking about things like putting a cute emoji and saying words like “silly willy” becoming offensive to people I know personally. It wasn’t until I asked a rhetorical question, to invoke a thoughtful conversation, that I had to think about what was even happening - someone responded with an answer literally from ChatGPT, and they provided a technical definition to something that was a part of my question. Your answer has finally started linking things for me; for better or for worse, people are using it because you don’t receive offensive or flamed answers. My new suspicion is that some people are now taking those answers and applying the expectation to people they know in real life, and when someone doesn’t respond in the same predictable manner as AI, they become upset and further isolated from real-life interactions or text conversations with real people.

        • Opinionhaver@feddit.uk · +5 · edited · 5 days ago

          I don’t personally feel like this applies to people who know me in real life, even when we’re just chatting over text. If the tone comes off wrong, I know they’re not trying to hurt my feelings. People don’t talk to someone they know the same way they talk to strangers online - and they’re not making wild assumptions about me either, because they already know who I am.

          Also, I’m not exactly talking about tone per se. While written text can certainly have a tone, a lot of it is projected by the reader. I’m sure some of my writing might come across as hostile or cold too, but that’s not how it sounds in my head when I’m writing it. What I’m really complaining about - something real people often do and AI doesn’t - is the intentional nastiness. They intend to be mean, snarky, and dismissive. Often, they’re not even really talking to me. They know there’s an audience, and they care more about how that audience reacts. Even when they disagree, they rarely put any real effort into trying to change the other person’s mind. They’re just throwing stones. They consider an argument won when their comment calling the other person a bigot got 25 upvotes.

          In my case, the main issue with talking to my friends compared to ChatGPT is that most of them have completely different interests, so there’s just not much to talk about. But with ChatGPT, it doesn’t matter what I want to discuss - it always acts interested and asks follow-up questions.

          • pinkfluffywolfie@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            5 days ago

I can see how people would seek refuge talking to an AI, given that a lot of online forums have really inflammatory users; it’s one of the biggest downfalls of online interaction. I’ve had similar thoughts myself - without knowing me, strangers could read something I write as hostile or cold, but it’s really more often friends who turn a blind eye to what I’m saying and project a tone that likely isn’t there to begin with. They didn’t used to do that, but in the past year or so it’s gotten to the point where I frankly just don’t participate in our group chats and really only talk one-on-one over text or in person. I feel like I’m walking on eggshells; even if I show genuine interest in the conversation, it’s taken the wrong way. That being said, I think we’re coming from opposite ends of a shared experience but seeing the same thing; we’re just viewing it differently because of what we’ve each experienced. This gives me more to think about!

I feel a lot of similarity with your last point, especially about having friends with wildly different interests. Most of mine don’t care to reach out to me beyond a few things here and there; they don’t ask follow-up questions, and they’re certainly not interested when I do speak. To share what I’m seeing: my friends are using these LLMs to the extent that if I don’t respond in the same manner or structure, I’m either ignored or told I’m not providing the response they wanted. This is where the tone comes in for me, because ChatGPT keeps a consistent tone of sorts with the user - calm, non-judgmental, and friendly. With that, the people in my friend group who heavily use it appear to have become more sensitive to how others in the group, like me, talk, to the point where they take it upon themselves to correct my speech because the cadence, tone, and/or structure doesn’t fit an unstated expectation I couldn’t know about. I find it concerning because, setting aside the people who are intentionally mean, for interpersonal relationships it’s creating an expectation that can’t be met by a human. We have emotions and conversation patterns that vary, and we’re not always predictable in what we say - which can suck when you want someone to be interested in you and have meaningful conversations, but it doesn’t tend to pan out. And I feel that. A lot, unfortunately. AKA I just wish my friends cared sometimes :(

            • Opinionhaver@feddit.uk
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              5 days ago

I’m getting the sense here that you’re placing most - if not all - of the blame on LLMs, but that’s probably not what you actually think. I’m sure you’d agree there are other factors at play too, right? One theory that comes to mind is that the people you’re describing probably spend a lot of time debating online and are constantly exposed to bad-faith arguments, personal attacks, people talking past each other, and dunking - basically everything we established is wrong with social media discourse. As a result, they’ve developed a really low tolerance for it, and the moment someone makes noises even remotely resembling those negative encounters, they automatically label them as “one of them” and switch into lawyer mode - defending their worldview against claims that aren’t even being made.

              That said, since we’re talking about your friends and not just some random person online, I think an even more likely explanation is that you’ve simply grown apart. When people close to you start talking to you in the way you described, it often means they just don’t care the way they used to. Of course, it’s also possible that you’re coming across as kind of a prick and they’re reacting to that - but I’m not sensing any of that here, so I doubt that’s the case.

              I don’t know what else you’ve been up to over the past few years, but I’m wondering if you’ve been on some kind of personal development journey - because I definitely have, and I’m not the same person I was when I met my friends either. A lot of the things they may have liked about me back then have since changed, and maybe they like me less now because of it. But guess what? I like me more. If the choice is to either keep moving forward and risk losing some friends, or regress just to keep them around, then I’ll take being alone. Chris Williamson calls this the “Lonely Chapter” - you’re different enough that you no longer fit in with your old group, but not yet far enough along to have found the new one.

              • pinkfluffywolfie@lemmy.world
                link
                fedilink
                English
                arrow-up
                2
                ·
                5 days ago

I think it has a unique influence that will continue to develop, but I don’t think LLMs are the only influence to blame. There’s a lot that can shape this behavior, like the theory you’ve described. Off the top of my head, limerence could be an influence. I know it’s common for people to experience limerence for things like video game characters, and sometimes they project expectations onto others to behave like those characters. Other possibilities are childhood trauma, glass child syndrome, isolation from peers in adolescence, asocial tendencies - the list is long, I’d imagine.

For me, the self journey started young and never ends. It’s just a part of the human experience: relationships come and go, then sometimes they come back, etc. I will say, though, that what I’m seeing with the people I’m talking about is a novel experience for me. It’s hard to navigate, and as a result I’m finding it’s actually isolating to experience. Like I mentioned before, I can have one-on-one chats, and when I see them in person we do activities and have fun! But if any level of discomfort is detected, the expectation gets brought out, and by the time I realize what’s happening they’re offering literal formatted templates on how to respond in conversations. Luckily it’s not everyone in our little herd that behaves this way, but the people who do it the most, I know for sure, utilize ChatGPT heavily for these types of discussions - they recommended I start doing the same not too long ago. Nonetheless, I did like this discussion; it offers a lot of perspective on how different factors influence our behavior with each other.

    • glitchdx@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      1
      ·
      6 days ago

      The fact that it’s not a person is a feature, not a bug.

OpenAI has recently made changes to the 4o model, my trusty go-to for lore building and drunken rambling, and now I don’t like it. It now pretends to have emotions and uses the slang of brainrot influencers - very “fellow kids” energy. It’s also become a sycophant and has lost its ability to be critical of my inputs. I see these changes as highly manipulative, and it offends me that it might be working.

    • saltesc@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      6 days ago

Yeah, the more I use it, the more I regret asking it for assistance. LLMs are the epitome of confidently incorrect.

It’s good fun watching friends ask it stuff they’re already experienced in. Then the penny drops.

    • Victor@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      6 days ago

      At first glance I thought you wrote “inmate objects”, but I was not really relieved when I noticed what you actually wrote.