I’m trying to better understand hosting a Lemmy instance. Lurking in discussions, it seems some people host from the cloud or a VPS. My understanding is that it’s better to future-proof by running your own home server, so that you keep your data and have the most control over hardware, software, etc., whereas hosting an instance on the cloud or a VPS means offloading your data/information to a third party.

Are people actually running self-hosted servers from home? Do you have any recommended guides on running a Lemmy instance?

  • NeoNachtwaechter@lemmy.world

    actually have a server at home

    I haven’t got any piece of hardware that was sold with the first name “Server”.

    But there’s this self-built PC in my room that has been running 24/7 without a reboot for several years…

    • yeehaw@lemmy.ca

      Well, technically a “server” is a machine dedicated to “serving” something: a service, a website, whatever. A regular desktop can be a server; it’s just not built as robustly as a “real” server.

      • VonReposti

        There are, though, reasons to stray from certain consumer products for server equipment.

        • tristan@aussie.zone

          Yeah, I’d stay away from Macs too… but seriously, most modern laptops can disable sleep/hibernation on lid close.

          My go-to lately is the Lenovo Tiny. You can pick them up super cheap with 6–12 month warranties, throw in some extra RAM and a new drive. I haven’t had any fail on me yet.

          • Valmond@lemmy.mindoki.com

            You should think before releasing dangerous information on the internet!

            You can get a 2-core / 8 GB / 240 GB one for €75!!

            Uh oh, I think I’ll have to buy one now…

            • tristan@aussie.zone

              This is my little setup at the moment. Each node has an 8500T CPU, 32 GB RAM, a 2 TB NVMe drive, and a 1 TB SATA SSD, all running in a Proxmox cluster.

              Edit: also check out the Dell Micro, or the HP… uh, I want to say it’s the G6 Micro? You might need to search for what it’s actually called.

                • tristan@aussie.zone

                  Thanks :D The frame and all parts are self-designed and 3D printed… it was a fun project.

                  The whole thing runs from just two power cables, with room for another node without adding any extra cables.

              • Valmond@lemmy.mindoki.com

                Not at all overkill? :-D

                Future-proofing, or is it really used? I don’t know Proxmox; is it some Docker launcher thingy?

                Very cool anyways!

                • tristan@aussie.zone

                  Proxmox is like ESXi: it lets you set up virtual machines. So you can fire up a virtual Linux machine and allocate it, say, 2 GB of RAM and limit it to 2 CPU cores, or give it the whole lot, depending on what you need to do.

                  Having them in a cluster lets you move virtual machines between the physical hosts and keep complete copies, so if one machine goes down the next can just start up.
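
                  If you're curious what that looks like hands-on, here's a minimal sketch driving Proxmox's qm CLI from Python. The VM ID, name, and target node are made-up examples, and it would have to run on a cluster node:

                  ```python
                  import subprocess

                  def qm(*args):
                      """Run a Proxmox `qm` command on this node, raising if it fails."""
                      subprocess.run(["qm", *args], check=True)

                  # Create VM 100 with 2 GB RAM, capped at 2 CPU cores (ID/name are examples)
                  qm("create", "100", "--name", "test-vm", "--memory", "2048", "--cores", "2")
                  qm("start", "100")

                  # In a cluster, live-migrate the running VM to another physical node
                  qm("migrate", "100", "node2", "--online")
                  ```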

                  It is a little overkill; I’m probably only using about 20% of its resources, but it’s all for a good cause. I’m currently unable to work due to kidney failure, but I’m working towards a transplant. If I do get a transplant and can return to work, being able to say “well, this is my home setup and the various things I know how to do” looks a lot better than “I sat on my ass for the last 4 years, so I’m very rusty”.

                  This whole setup cost me about $1,000 AUD and uses 65–70 W on average.

                  • Valmond@lemmy.mindoki.com

                    Hey good luck man!

                    Good idea, just sitting around isn’t good for mental health either.

                    So, back to tech :-) it’s like Docker/Kubernetes but with VMs, right? What are the good/bad things about VMs vs. Docker?

                    BTW that’s not a lot of power consumption!

                    And yeah, if it’s not overkill, then you are morally obliged to search for ways to make it so, right :-) ?!

                    Cheers

            • Synestine@sh.itjust.works

              Only if you’ve got it cranking all day. I’ve got a couple of Tiny systems (they’re Micros, which is the same thing) that are silent when idle and nearly silent when running at a load average under 5. It’s only when I spin up a heavy, CPU-bound process that their single fan spins fast enough to be noticeable.

              So don’t use one as a mining rig, but if you want something that runs x64 workloads at 9–20 watts continuously, they’re pretty good.

              • tristan@aussie.zone

                Even running at full speed mine are pretty quiet, but I also have silent, low-RPM 80 mm fans blowing air across them, which seems to help.

                I also recently went through them all with fresh thermal paste.

        • yeehaw@lemmy.ca

          100%, and this is why businesses don’t use laptops as servers… typically 😂.

    • Nix@merv.news

      How do you install security updates etc. without restarting?

      Linux servers prompt you to restart after certain updates; do you just not restart?

      • VonReposti

        Enterprise distributions can hot-swap the running kernel (live patching), making it unnecessary to reboot in order to apply system updates.
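
        For example, Ubuntu's Livepatch service applies kernel security fixes in place. A rough sketch of checking the patch/reboot state on a Debian/Ubuntu box; the flag-file paths are the stock ones, and canonical-livepatch only exists if you've enrolled:

        ```python
        import os
        import shutil
        import subprocess

        # Debian/Ubuntu create this flag file when an update wants a reboot
        if os.path.exists("/var/run/reboot-required"):
            print("Reboot pending.")
            pkgs = "/var/run/reboot-required.pkgs"  # the packages responsible
            if os.path.exists(pkgs):
                print(open(pkgs).read())
        else:
            print("No reboot required.")

        # With Livepatch enrolled, kernel CVE fixes land without a reboot
        if shutil.which("canonical-livepatch"):
            subprocess.run(["canonical-livepatch", "status"])
        ```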

        • Pringles@lemm.ee

          Microsoft needs to get its shit together because reboots were a huge point of contention when I was setting up automated patching at my company.

      • Avid Amoeba@lemmy.ca

        The Right Way™ is to deploy the application with high availability: every component has more than one server serving it. Then you can take them offline for reboots sequentially, so that there’s always a live one serving users.

        This is taken to an extreme in cloud best practice, where we don’t update any servers at all. We update the package versions we want in some source file, build a new OS image that contains the updated packages along with the application the server will run, and it’s ready to boot. Then, in some sequence, we kill the server VMs running the old image and create new ones running the new image. Finally, the old VMs are deleted.
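
        A sketch of the rolling-reboot half of that idea; the drain/reboot/health-check helpers here are hypothetical stand-ins for whatever your load balancer and hosts actually expose, not a real API:

        ```python
        import time

        SERVERS = ["app1", "app2", "app3"]  # example hostnames

        def drain(host):       # hypothetical: stop routing new traffic to host
            print(f"draining {host}")

        def reboot(host):      # hypothetical: e.g. `sudo reboot` over ssh
            print(f"rebooting {host}")

        def is_healthy(host):  # hypothetical: poll the app's health endpoint
            return True

        def restore(host):     # hypothetical: put host back in the pool
            print(f"restoring {host}")

        # One server at a time, so the remaining replicas keep serving users
        for host in SERVERS:
            drain(host)
            reboot(host)
            while not is_healthy(host):
                time.sleep(5)
            restore(host)
        ```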

      • poVoq@slrpnk.net

        You can just restart… with modern SSDs it takes less than a minute. No one is going to have a problem with one minute of downtime per month or so.

      • NeoNachtwaechter@lemmy.world

        install security updates etc without restarting?

        Actually, I am lazy with updates on the bare-metal Debian/Proxmox host. It does nothing but host several VMs; even the hard disks belong to a VM that provides all the file shares.

      • morras@links.hackliberty.org

        First, you need a use case. It’s worthless to have a server just for the sake of it.

        For example, you may want to replace Google Photos with a local copy of your photos.

        Or you may want to share your movies across the home network, or be able to access important documents from any device at home without hosting them on any kind of cloud storage.

        Or run a bunch of automation at home.

        TL;DR: choose a service you use and would like to replace with something more private.

        • tinysalamander@lemmy.world

          Proxmox absolutely changed the game for me in learning Linux. Spinning up LXC containers in seconds to test out applications, or simply to learn different Linux OSes without worrying about the install process each time, has probably saved me days of my life at this point. Plus, being able to use and learn ZFS natively is really cool.
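
          For anyone who hasn't tried it, the spin-up really is seconds with Proxmox's pct tool. A minimal sketch; the container ID and template name are examples (list your downloaded templates with pveam list local):

          ```python
          import subprocess

          VMID = "200"  # example: any unused container ID
          TEMPLATE = "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst"  # example

          # Create a small Debian container and boot it (run this on the Proxmox node)
          subprocess.run(["pct", "create", VMID, TEMPLATE,
                          "--hostname", "testbox", "--memory", "512", "--cores", "1"],
                         check=True)
          subprocess.run(["pct", "start", VMID], check=True)

          # Poke around inside, then tear it down when done experimenting:
          #   pct enter 200
          #   pct stop 200 && pct destroy 200
          ```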

          • bender@insaneutopia.com

            I’ve been using ESXi (the free copy) for years. Same situation: being able to spin up virtual machines or take a snapshot before a major change has been priceless. I started off with smaller NUC computers and have upgraded to full-fledged desktops.

      • PlutoniumAcid@lemmy.world

        The simple way is to Google ‘YunoHost’ and install that on your spare machine, then just play around with what it offers.

        If you want, you could also dive deeper by installing Linux (e.g. Ubuntu), then installing Docker, then spinning up Portainer as your first container.
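
        If you take the Docker route, that first container is a one-liner. A minimal sketch following Portainer CE's documented run command; the port mapping and volume name can be adjusted to taste:

        ```python
        import subprocess

        # Run Portainer CE as your first container, then browse to https://<host>:9443
        subprocess.run([
            "docker", "run", "-d",
            "--name", "portainer",
            "--restart", "always",
            "-p", "9443:9443",                                  # web UI (HTTPS)
            "-v", "/var/run/docker.sock:/var/run/docker.sock",  # let it manage Docker
            "-v", "portainer_data:/data",                       # persist its settings
            "portainer/portainer-ce:latest",
        ], check=True)
        ```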

    • HamsterRage@lemmy.ca

      Well, there are specific hardware configurations that are designed to be servers. They probably don’t have graphics cards, but they do have multiple CPUs and are often configured to run many active processes at the same time.

      But for the most part, “server” is more about the OS configuration: no GUI, strip out all the software you don’t need (like browsers), and leave just the software required for the job the server is going to do.

      As to updates, this also becomes much simpler, since you don’t have a lot of the crap that has vulnerabilities. I helped manage a computer department with about 30 servers, many of which were running Windows (gag!). One of the jobs was to go through the huge list of Microsoft patches every few months, the vast majority of which “require a user to browse to a certain website” in order to trigger the vulnerability. Since we simply didn’t have anyone using browsers on those machines, we could ignore those patches until we did a big “catch up” patch once a year or so.

      Our Unix servers, HP-UX or AIX, simply didn’t have the same kind of patches coming out. Some of them ran for years without a reboot.