…care to contribute a link to their favorite site for an AI activity? I’d be really interested in seeing what’s out there, but the field is moving and growing so fast and search engines suck so hard that I know I’m missing out.

Cure my FOMO please!

  • Zeth0s@lemmy.world · 1 year ago

    There is not a single if/else in a neural network. You are confusing it with decision trees, which are used for classification.

    • EveryMuffinIsNowEncrypted@lemmy.blahaj.zone · 1 year ago

      Could you please explain? I don’t think I understand.

      Isn’t every neural network, even the one(s) in our brain, just complicated “If A, then B” statements? Even just

      “Given Image 1, Image 2, and Image 3, generate Image 4 by mixing them together according to Criteria 1 and 2”

      would be equivalent to saying

      IF ((Image1, Image2, Image3) AND (Criterion1, Criterion2)) THEN (Image4),

      would it not? :/

      Edit: A word.

      • セリャスト@lemmy.blahaj.zone · 1 year ago

        I would like to add that if it were the case that generative image “AIs” were if/else statements, they could not run on graphics cards, which are optimized for the same raw matrix calculations repeated on a lot of variables. If it were just if/else statements, they wouldn’t need to do all the vector calculation stuff.
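        To make that concrete, here is a rough NumPy sketch (the array shapes and random values are invented purely for illustration): one layer of such a model is a single big matrix multiplication applied to many inputs at once, with no branching per value, which is exactly the kind of work a GPU is built for.

        ```python
        import numpy as np

        # Toy illustration: "the same raw matrix calculation repeated on a lot of variables".
        # All sizes and values below are arbitrary placeholders.
        rng = np.random.default_rng(0)

        batch = rng.normal(size=(512, 1024))     # many inputs processed at once
        weights = rng.normal(size=(1024, 1024))  # one big learned weight matrix (random here)

        out = np.maximum(batch @ weights, 0.0)   # one matrix multiply plus a ReLU; no if/else per element
        print(out.shape)                         # (512, 1024)
        ```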

      • Zeth0s@lemmy.world · 1 year ago

        No, what you describe is a basic decision tree. It is arguably the simplest possible ML algorithm, but it is not used as-is in practice anywhere. Usually you find “forests” of more complex trees; they cannot be used for generation, but they are very powerful for labeling or regression (ELI5: predicting some number).
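        (For the sake of illustration, a decision tree really is just nested if/else on input features. The feature names and thresholds below are made up; real trees are learned from data and combined into forests.)

        ```python
        # A literal (tiny) decision tree: nothing but nested if/else on the input.
        def classify_fruit(weight_g: float, is_smooth: bool) -> str:
            if weight_g > 150:
                return "apple" if is_smooth else "orange"
            else:
                return "plum" if is_smooth else "kiwi"

        print(classify_fruit(170, True))   # apple
        print(classify_fruit(90, False))   # kiwi
        ```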

        Generative models are instead based on multiple transformations of images or sentences through extremely complex, nested chains of vector functions that can extract relevant information (such as concepts, conceptual similarities, and so on).

        In practice (ELI5), the input is transformed into a vector and passed through a complex chain of vector multiplications and simple mathematical transformations until you get an output that, in the vast majority of cases, is original, i.e. not present in the training data. Non-original outputs are possible in case of a few “issues” in the training dataset or training process (unless explicitly asked for).
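        As a very rough sketch of that chain (NumPy, with the sizes and the random “weights” made up only to show the mechanics):

        ```python
        import numpy as np

        # The input becomes a vector and flows through a chain of matrix
        # multiplications and simple nonlinear transformations. Real models
        # stack a great many such steps with learned weights.
        rng = np.random.default_rng(0)

        x = rng.normal(size=128)                # the input, already turned into a vector

        W1 = rng.normal(size=(256, 128)) * 0.1  # "learned" weights (random placeholders here)
        W2 = rng.normal(size=(64, 256)) * 0.1

        h = np.tanh(W1 @ x)                     # vector multiplication + simple transformation
        y = np.tanh(W2 @ h)                     # ...and again; no if/else on the data anywhere

        print(y.shape)                          # (64,) -- the output vector
        ```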

        In our brain there are no if/else statements, only electrical signals that are modulated and transformed, which is conceptually more similar to generative models than to a decision tree.

        In practice, however, our brain works very differently from generative models.

        • I’m gonna be honest: I’m still rather confused. While I do now understand that perhaps our brains work differently than typical neural networks (or at least generative neural networks?), I do not yet comprehend how. But your explanation is a starting point. Thanks for that.

          • Zeth0s@lemmy.world · 1 year ago

            In the easiest example of a neuron in an artificial neural network, you take an image, multiply every pixel by some weight, and apply a very simple nonlinear transformation at the end. Any transformation is fine, but usually they are pretty trivial. Then you mix and match these neurons to create a neural network. The more complex the task, the more additional operations are added.
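            In code, that single neuron might look like this (a rough sketch; the image, the weights, and the choice of sigmoid are arbitrary placeholders):

            ```python
            import numpy as np

            # One artificial "neuron": multiply every pixel by a weight, sum the
            # results, and apply a very simple nonlinear transformation at the end.
            rng = np.random.default_rng(0)

            image = rng.random((28, 28))           # pretend grayscale image
            weights = rng.normal(size=(28, 28))    # one weight per pixel (learned in a real network)
            bias = 0.0

            weighted_sum = np.sum(image * weights) + bias
            output = 1.0 / (1.0 + np.exp(-weighted_sum))   # sigmoid: a trivial nonlinearity

            print(output)   # a single number between 0 and 1
            ```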

            In our brain, a neuron binds some neurotransmitters that trigger an electrical signal; this signal is modulated and finally triggers the release of a certain quantity of certain neurotransmitters at the other end of the neuron. The detailed, quantitative mechanisms are still not known. These neurons are put together in an extremely complex neural network, the details of which are still unknown.

            Artificial neural networks started as an extremely coarse simulation of real neural networks, just toy models to explain the concept. Since then, they have diverged, evolving in a direction completely unrelated to real neural networks and becoming their own thing.