Breakthrough Technique: Meta-learning for Compositionality

Original:
https://www.nature.com/articles/s41586-023-06668-3

Popular science coverage:
https://scitechdaily.com/the-future-of-machine-learning-a-new-breakthrough-technique/

How MLC Works
In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally—for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills.
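To make the episode structure concrete, here is a minimal Python sketch of how such "jump twice"-style episodes could be generated and solved for a toy grammar. This is not the authors' code: the real MLC procedure meta-trains a sequence-to-sequence transformer across many episodes, and the word list, the modifier rules, and the placeholder compositional_guess learner below are illustrative assumptions.

```python
# Minimal sketch of an MLC-style episode loop (illustrative, not the paper's code).
# Assumption: a toy SCAN-like grammar where primitive words map to action symbols
# and modifiers such as "twice" compose them.
import random

ACTIONS = ["JUMP", "WALK", "RUN", "LOOK"]
WORDS = ["jump", "walk", "run", "look"]

def make_episode(rng: random.Random):
    """One episode: a fresh word-to-action mapping, a study example that
    introduces the new word, and query compositions the learner must
    produce even though it has never seen them."""
    mapping = dict(zip(WORDS, rng.sample(ACTIONS, len(ACTIONS))))
    new_word = rng.choice(WORDS)
    study = [(new_word, [mapping[new_word]])]             # e.g. "jump" -> [JUMP]
    queries = [
        (f"{new_word} twice", [mapping[new_word]] * 2),   # e.g. "jump twice"
        (f"{new_word} thrice", [mapping[new_word]] * 3),  # e.g. "jump thrice"
    ]
    return study, queries

def compositional_guess(command, study):
    """Placeholder learner: look the new word up in the study examples, then
    apply the modifier rules. MLC meta-trains a neural network to acquire
    this behaviour from the episodes instead of hard-coding it."""
    lexicon = dict(study)
    word, *modifiers = command.split()
    output = list(lexicon[word])
    for m in modifiers:
        output = output * {"twice": 2, "thrice": 3}.get(m, 1)
    return output

rng = random.Random(0)
for episode in range(3):   # the real procedure runs a long series of episodes
    study, queries = make_episode(rng)
    for command, target in queries:
        prediction = compositional_guess(command, study)
        print(episode, command, prediction, prediction == target)
```

The key design point the sketch tries to convey is that the word-to-action mapping is redrawn every episode, so a learner can only succeed by reading the study examples and composing, not by memorizing fixed word meanings.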

  • DigitalMus · 1 year ago

    While I'm not in the field either, I do know that it is quite unusual in computer science academia to publish in actual peer-reviewed journals. This is because it can be a long process, and the field is very fast moving, so your results would be outdated by the time you publish. Thus, a paper is typically synonymous with a conference proceeding, and can be found on arXiv. I found this paper on arXiv from 2017/2018, which seems to be when this work was originally published for the scientific community and presented at a very “good” (if I had to guess) conference. Google Scholar says this paper has 650 citations, so it probably has had quite some impact. However, I would guess this method is well known and is already implemented in many models, if it was truly disruptive.

    • Chobbes@lemmy.world · 1 year ago

      To be clear, the papers at conferences undergo a peer review process as well. There are journal publications in CS, but a lot of publishing is done through conferences. arXiv, while a great resource, has little to do with the conferences, and it is worth noting that papers on arXiv do not go through a peer review process themselves (though many are also published at conferences where the paper has undergone peer review; some papers on arXiv may be preprint versions from before the peer review process).

    • KingRandomGuy@lemmy.world · 1 year ago

      For reference, ICML is one of the most prestigious machine learning conferences alongside ICLR and NeurIPS.

    • A_A@lemmy.world (OP) · 1 year ago (edited)

      Good to know, thanks.
      Yours is the type of comment I was really hoping to read here.

      You are right: it’s the same authors (Brenden M. Lake & Marco Baroni) with mostly the same content.

      But they also write (in Nature) that modern systems such as GPT-4 do not yet incorporate these abilities:

      Preliminary experiments reported in Supplementary Information 3 suggest that systematicity is still a challenge, or at the very least an open question, even for recent large language models such as GPT-4.

      • DigitalMus · 1 year ago

        This certainly could be part of the motivation for publishing it this way, to make themselves more noticed by the big players. Btw, publishing open access in Nature is expensive (something like 6,000–8,000 euros for the big journals), so there definitely is a reason.