The Inventor Behind a Rush of AI Copyright Suits Is Trying to Show His Bot Is Sentient
Stephen Thaler’s series of high-profile copyright cases has made headlines worldwide. He’s done it to demonstrate his AI is capable of independent thought.

  • j4k3@lemmy.world · 1 year ago

    What stupid bullshit. There is nothing remotely close to an artificial general intelligence in a large language model. This person is a crackpot fool. There is no way for an LLM to have persistent memory. Everything outside of the model that pre- and post-processes information is where the smoke and mirrors exist. That part is just databases and standard code.

    The actual model is just a system of categorization and tensor math. It is complex vector math, and that is it; there is nothing else going on inside the model. If you want to modify it, you have to recalculate a bunch of math as it relates to the existing vectors/tensor tables. All of that math is static. It can’t change. It can’t adapt. It can’t plan. It has some surprising features that one might not expect to be embedded in human language alone, but that is all this is.

    Try offline, open source AI. Use Oobabooga, get models from Hugging Face, and start with something like a Llama 2 7B. This is not hard, and you do not need a graphics card; there are lots of models that work great on just a CPU, though you will need a good amount of RAM to run a really good model. A 7B is like talking to a teenager prone to lying, a 13B is like a 20 year old, a 30B at 8-bit quantization is like an inexperienced late-twenty-something, and a 70B at 4-bit quantization is like a 30 year old with a master’s degree. A 70B at 4 bits needs around 14+ logical CPU cores and 64GB of system memory to generate around 2 tokens a second; that is roughly 1-2 words per second and about as slow as is practical.
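
    If you want to see roughly what that looks like in code rather than the Oobabooga UI, here is a minimal sketch using llama-cpp-python to run a quantized GGUF model on CPU only. This is an illustration of the general idea, not my exact setup, and the model filename is just a placeholder for whatever you download from Hugging Face.

    ```python
    # Minimal sketch: CPU-only inference with a quantized Llama 2 model.
    # The model path is a placeholder for a GGUF file from Hugging Face.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file
        n_ctx=4096,      # context window in tokens
        n_threads=14,    # logical CPU cores to use
        n_gpu_layers=0,  # 0 = pure CPU, no graphics card needed
    )

    out = llm("Explain what a tensor is in one paragraph.", max_tokens=200)
    print(out["choices"][0]["text"])
    ```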

    Don’t believe anything you read in the bullshit media about AI right now, and ignore the proprietary stalkerware garbage. The open source offline AI world is the future, and it is yours to do with as you please. Try it! It is fun.

    • CatWhoMustNotBeNamed@geddit.social · 1 year ago

      Wow, that’s the most concrete, down-to-earth explanation of what everyone is calling AI that I’ve seen. Thanks.

      I’m technical, but haven’t found a good article explaining today’s AI in a way I can grasp well enough to help my non-technical friends and family. Any recommendations? Maybe something you’ve written?

        • solstice@lemmy.world · 1 year ago

          I read once we shouldn’t be worried when AI starts passing Turing tests, we should worry when they start failing them again 🤣

        • pewter@lemmy.world · 1 year ago

          I read a physical book about using chatGPT that I’m pretty sure was written by chatGPT.

          Sidenote: you don’t need to read a book about using chatGPT.

      • dave@feddit.uk · 1 year ago

        I’ve had the most success explaining LLM ‘fallibility’ to non-techies using the image-generation examples. Google ‘AI hands’, and ask them if they see anything wrong. Now point out that we’re _extremely_ sensitive to anything wrong with our hands, so these mistakes are very easy for us to spot. But the AI has no concept of what a hand is; it’s just seen a _lot_ of images from different angles, sometimes with fingers hidden, sometimes intertwined, etc. So it will happily generate lots more of those kinds of images, with no regard to whether they could or should actually exist.

        It’s a pretty similar idea with the LLMs. The model has seen a lot of text and can put together words in a convincing-looking way, but it has no concept of what it’s writing, and the equivalent of the ‘hands’ will be there in the text. It’s just that we can’t see them at first glance like we can with the hands.

      • j4k3@lemmy.world · 1 year ago

        Yann LeCun is the main person behind open source offline AI, as far as putting the pieces in place and setting in motion the events that led to where we are now. Maybe think of him as the Dennis Ritchie or Stallman of AI research. https://piped.video/watch?v=OgWaowYiBPM

        I am not the brightest kid in the room. I’m just learning this stuff in practice and sharing some of what I have picked up so far. I hit a wall when it comes to things like understanding rank 3 tensors or greater, and I still can’t figure out exactly how the categorization network is implemented. I think that last part has to do with transformers and some efficient way of rotating vectors, but I haven’t figured it out intuitively yet. Thanks for the compliment, though.

    • oats@110010.win · 1 year ago

      Add to this that any LLM is incapable of critical thinking. It can imitate it to the point where people might think it’s able to, but that’s just because it has seen the answers to the problems people are asking during the training process.

      • fidodo@lemm.ee · 1 year ago

        It’s basically a book you can talk to. A book can contain incredible knowledge, but it’s a preserved artifact of intelligence, not intelligence itself.

    • Mr_Blott@feddit.uk · 1 year ago

      This is the thing: what do you do with it? I can’t imagine it being able to do something a human couldn’t do better.

      • j4k3@lemmy.world · 1 year ago

        It is much faster than Stack Overflow for code snippets. The user really needs basic skepticism about all outputs, even with an excellent model, but a basic 70B Llama 2 can generate decent Python code. When it makes an error, pasting that error into the prompt will almost always generate a fix. This only applies to short, single-operation tasks, but it is super useful if you already know the basics of code like variables, types, and branching constructs. It can explain APIs and libraries too.
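
        As a rough sketch of that error-paste loop (assuming llama-cpp-python as the backend; the model path and the task are placeholders, not a specific working setup):

        ```python
        # Sketch of the workflow: generate code with a local model, run it, and
        # paste any traceback back into the prompt so the model proposes a fix.
        import subprocess
        import sys
        from llama_cpp import Llama

        llm = Llama(model_path="./llama-2-70b.Q4_K_M.gguf", n_ctx=4096)  # placeholder path

        def ask(prompt: str) -> str:
            return llm(prompt, max_tokens=512)["choices"][0]["text"]

        task = "Write a Python function that sums the 'price' column of a CSV file."
        code = ask(task)

        for _ in range(3):  # give it a few attempts at most
            result = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)
            if result.returncode == 0:
                break
            # Feed the error straight back into the prompt, as described above.
            code = ask(f"{task}\n\nThis code:\n{code}\n\nfails with:\n{result.stderr}\n\nFix it.")
        print(code)
        ```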

        The real value comes from integrating databases and other AI models. I currently have a combination I can talk to with a mic, and it can reply as an audio clip, with an LLM generating the reply text. I’m working on integrating a database to help teach myself the computer science curriculum using free materials and a few books. Individualized education is a major application. You can also program a friend, a professional colleague, or a counselor, or ask medical questions. There is a lot of effort going into getting accurate models for stuff like medicine, where they can provide citations.

        Even with sketchy information from basic models, they will still generate terms and hints that you can search in a regular search engine to find new information in many instances. This will help you escape the search engine echo chambers that are so pervasive now. Heck, I even asked the 70B about meat smoker heat and timing settings and it made better suggestions than several YouTube examples I watched and tried. I needed an industrial adhesive a couple of weeks ago and found nothing searching Google and Bing, but after asking the 70B, I got 4 out of 6 valid product suggestions. After plugging those into search, the search engines suddenly knew of thousands of results for what I was looking for. I honestly didn’t expect it to be as useful as it really is. I turn on my computer and start the 70B first thing every day. It unloads itself from memory while idle, but I’m constantly asking it stuff. I go many days without even going online from my workstation.
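
        For the mic setup, here is a very rough sketch of the kind of pipeline involved; I’m not naming my exact stack, so treat the specific libraries (Whisper for speech-to-text, llama-cpp-python for the reply, pyttsx3 for the audio clip) and file names as stand-ins:

        ```python
        # Rough pipeline sketch: speech-to-text -> LLM reply -> audio clip.
        # All components and paths are stand-ins, not a specific working setup.
        import whisper
        import pyttsx3
        from llama_cpp import Llama

        stt = whisper.load_model("base")                                   # speech-to-text
        llm = Llama(model_path="./llama-2-70b.Q4_K_M.gguf", n_ctx=4096)    # placeholder path
        tts = pyttsx3.init()                                               # text-to-speech

        question = stt.transcribe("recording.wav")["text"]                 # mic capture saved to a file
        reply = llm(f"Answer briefly: {question}", max_tokens=256)["choices"][0]["text"]

        tts.save_to_file(reply, "reply.wav")                               # the audio clip reply
        tts.runAndWait()
        ```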

          • j4k3@lemmy.world · 1 year ago

            I do use Oobabooga a lot. I am developing my own scripts and modifying some of Oobabooga too. I also use Koboldcpp. I am on a 12th gen i7 with 20 logical cores and 64GB of system memory, along with a 3080 Ti with 16GB of VRAM. The 70B 4-bit quantized model, with 14 layers offloaded onto the GPU, generates 3 tokens a second, so it is 1.5 times faster than running on the CPU alone.

            If I was putting together another system, I would only get something with AVX-512 instruction support in the CPU. That instruction set is troublesome for CVE issues, so you’ll probably need to look into it depending on your personal privacy/security threat model. The ability to run larger models is really important. You really want all the RAM; the answer to the question of how much is always yes. You are not going to get enough memory using consumer GPUs, since you can only offload a few layers onto a consumer-grade card. I can’t say how well models even larger than the 70B perform, because memory is the bottleneck, and I can’t say how a 30B or larger runs at full precision either, since I can’t add any more memory to my system.

            As a rule of thumb, running the full models requires roughly double the parameter count in gigabytes of RAM, so a 30B needs around 60GB of memory just to load. Most of these models are float-16, so running them at 8-bit cuts the size in half, with penalties in areas like accuracy, and running at 4-bit halves it again. There is tuning, bias, and asymmetry in the way quantization is done to preserve certain aspects, like emergent phenomena in the original data. This is why a larger model with smaller quantization may outperform a smaller model running at full precision.

            For GPUs, if you are at all serious about this, you need at least 16GB of VRAM at a bare minimum. Really, we need a decently priced 40-80GB VRAM consumer option. The thing is that GPU memory is directly tied to the compute hardware; there isn’t the overhead of a memory management system like system memory has. That is what makes GPUs ideal and fast, but it is already the biggest chunk of bleeding-edge silicon in consumer hardware, and we need it to be 4× larger and cheap. That is not going to happen any time soon. This means the most accessible path to larger models is system memory. You’ll never get the parallelism of a GPU, but having CPU instructions that are 512 bits wide is a big performance boost, and you also want the maximum number of logical cores. That is just my take.
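
            To put rough numbers on that rule of thumb (my own back-of-the-envelope math: 2 bytes per parameter for float-16, 1 for 8-bit, 0.5 for 4-bit, ignoring context cache and runtime overhead):

            ```python
            # Back-of-the-envelope weight-memory estimates; real usage is higher
            # because of the context cache and runtime overhead.
            BYTES_PER_PARAM = {"float16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

            def weight_gb(params_billion: float, fmt: str) -> float:
                # billions of params * bytes per param = gigabytes of weights
                return params_billion * BYTES_PER_PARAM[fmt]

            for size in (7, 13, 30, 70):
                row = ", ".join(f"{fmt}: ~{weight_gb(size, fmt):.0f}GB" for fmt in BYTES_PER_PARAM)
                print(f"{size}B  ->  {row}")
            # 30B  ->  float16: ~60GB, 8-bit: ~30GB, 4-bit: ~15GB
            # 70B  ->  float16: ~140GB, 8-bit: ~70GB, 4-bit: ~35GB
            ```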

    • Plibbert@lemmy.ml · 1 year ago

      Yup yup my guy. This is looking like just another ploy for companies and people to be able to patent and copyright everything under the fucking sun.

    • primbin@lemmy.one · 1 year ago

      While I agree that LLMs probably aren’t sentient, “it’s just complex vector math” is not a very convincing argument. Why couldn’t some complex math which emulates thought be sentient? Furthermore, not being able to change, adapt, or plan may not preclude sentience, as all that is required for sentience is the capability to perceive and feel things.

        • primbin@lemmy.one · 1 year ago

          What I’m saying is, we don’t know what physical or computational characteristics are required for something to be sentient.

          • SlikPikker@lemmy.ca · 1 year ago

            Language is not a requirement for sentience, and these models clearly show that you can have language without having sentience.

            As would any text user interface.