I have my own Lemmy instance (Lemmy.emphisia.nl), but I had to take it offline: after running for a while it was using more than half of the system memory (16 GB in total), causing the machine to crash, since a lot of other services also run on it. The problem appears to come from Postgres. In btop I can see it spawning a lot of processes, each taking up about 80 MB at first and rising as high as 250 MB per process over time. I have already tried adjusting the Postgres config to optimize for less memory, but it seems to do nothing.
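For reference, these are the kinds of memory-related settings such tuning usually touches. This is only a sketch with assumed values for a shared 16 GB host (not recommendations for any specific workload); note that `work_mem` is allocated per sort/hash operation per connection, so total usage scales roughly with the number of connections:

```ini
# postgresql.conf -- assumed values for a shared 16 GB host
shared_buffers = 1GB           # Postgres' own page cache; the largest single allocation
work_mem = 4MB                 # per sort/hash per connection; keep small on busy instances
maintenance_work_mem = 128MB   # used by VACUUM, CREATE INDEX, etc.
effective_cache_size = 4GB     # planner hint only; allocates nothing
max_connections = 50           # each backend process carries its own overhead
```

After editing, the server needs a reload (`SELECT pg_reload_conf();`) or restart for most of these to take effect; `shared_buffers` and `max_connections` require a restart.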

Is this normal, and if not, what is going wrong?

  • Jeena@jemmy.jeena.net
    10 months ago

    Since my upgrade to 0.19 I really struggle to keep my server online. It sounds like what is happening to me too. The whole server becomes unresponsive after the load goes to 100. After I kicked Nextcloud from the server it only kept happening every couple of days. Let's see if this workaround helps to fix it. If not, then I'll remove swap.

    • poVoq@slrpnk.net
      10 months ago

      I have been running a cron script to automatically restart the Lemmy backend, which in turn resets Postgres memory use, ever since this problem started happening months ago. For me 0.19.x actually made it less bad, but it is still an annoying issue.
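      For anyone wanting to try the same workaround, a minimal sketch of such a cron entry, assuming a Docker Compose deployment; the compose directory `/srv/lemmy`, the service name `lemmy`, and the schedule are all assumptions to adapt:

      ```shell
      # /etc/cron.d/restart-lemmy -- restart the Lemmy backend nightly at 04:00,
      # which drops its Postgres connections and frees the associated backend memory.
      0 4 * * * root cd /srv/lemmy && /usr/bin/docker compose restart lemmy >> /var/log/lemmy-restart.log 2>&1
      ```

      Restarting only the Lemmy service (not Postgres itself) is usually enough, since the memory growth sits in the Postgres backends serving Lemmy's pooled connections.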

      • redcalcium@lemmy.institute
        10 months ago

        Try limiting the database connection pool size in lemmy.hjson too. It helped a lot on my instance. I set mine to 30 on a small server with 8 GB of RAM. You can set it to an even lower value to reduce Postgres memory consumption further.

        database: {
          host: dbhost
          user: "lemmy"
          password: "secret"
          database: "lemmy"
          pool_size: 30
        }