Y’all, this is gonna be super broad, and I apologize for that, but I’m pretty new to all this and am looking for advice and guidance because I’m pretty overwhelmed at the moment. Any help is very, very appreciated.

For the last ~3 years, I’ve been running a basic home server on an old computer. Right now, it is hosting HomeAssistant, Frigate NVR, their various dependencies, and other things I use (such as zigbee2mqtt, zwave-js-ui, node-red, mosquitto, vscode, etc).

This old server has been my “learning playground” for the last few years, as it was my very first home server and my first foray into Linux. That said, it’s obviously got some shortcomings in terms of basic setup (it’s probably not secure, it’s definitely messy, some things don’t work as I’d like, etc.). It’s currently on its way out (the motherboard is slowly kicking the bucket on me), so it’s time to replace it, and I kind of want to start over and do it “right” this time. Not completely from scratch - I’ve got hundreds of automations in Home Assistant and Node-RED, for instance, that I don’t want to completely re-write, so I intend to export/import those as needed. This is where I’m hung up at the moment: paralyzed by a fear of doing it “wrong” and winding up with an inefficient, insecure mess.

I want the new server to be much more robust in terms of capability, and I have a handful of things I’d really love to do: Pi-hole (though I need to buy a new router for this, so that has to come later unless it’d save a bunch of headache to do it from the get-go), NAS, media server (Plex/Jellyfin), *arr stuff, as well as plenty of new things I’d love to self-host like Trilium Notes, Tandoor or Mealie, Grocy, and backups of local PCs/phones/etc. (Nextcloud?)… Obviously this part is impossible to cover completely, but I suspect the hardware (list below) should be capable?

I would love to put all my security cameras on their own subnet or vlan or something to keep them more secure.

I need everything to be fully but securely accessible from outside the network. I’ve recently set up nginx for this on my current server and it works well, though I probably didn’t do it 100% “right.” Is something like Tailscale something I should look to use in conjunction with that? In place of it? Not at all?

I’ve also looked at something like Authelia for SSO, which would probably be convenient but also probably isn’t entirely necessary.

Currently considering Proxmox, but then again, TrueNAS would be helpful for the storage aspect of all this. Can/should you run TrueNAS inside Proxmox? Should I be looking elsewhere entirely?

Here’s the hardware for the recently-retired gaming PC I’ll be using:
https://pcpartpicker.com/list/chV3jH
Also various SSDs and HDDs.

I’m in this weird place where I don’t have too much room to play around because I want to get all my home automation and security stuff back up as quickly as possible, but I don’t want to screw this all up.

Again, any help/advice/input at all is super, super appreciated.

  • LufyCZ@lemmy.world

    Just FYI - running TrueNAS with ZFS as a VM under Proxmox is a recipe for disaster, ask me how I know.

    ZFS wants direct access to the drives; with VMs, the hypervisor virtualizes the storage adapter that the disks sit behind, which can mess things up.

    What you’d need to do is buy a SATA/SAS HBA card and pass the whole card through; then you can use a VM.
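
    In Proxmox that ends up looking roughly like the sketch below (the PCI address and VM ID are placeholders, and IOMMU has to be enabled in the BIOS and on the kernel command line first):

    ```
    # Find the PCI address of the SATA/SAS HBA (address below is hypothetical)
    lspci -nn | grep -iE 'sata|sas|raid'

    # Pass the whole card through to VM 100 (pcie=1 assumes the q35 machine type)
    qm set 100 --hostpci0 0000:03:00.0,pcie=1
    ```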

    • Malice@lemmy.dbzer0.comOP

      The more replies like this I get, the more I’m inclined to set up a second computer with just TrueNAS and let it do nothing but handle that. I assume that, then, would be usable by the server running proxmox with all its containers and whatnots.

      Thank you for the input!

      • LufyCZ@lemmy.world

        If you want to learn ZFS a bit better though, you can just stick with Proxmox. It supports ZFS natively; you just don’t get the nice UI that TrueNAS provides, meaning you’ve got to configure everything manually, through config files and the terminal.
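
        For example, creating a mirrored pool and a dataset from the Proxmox shell looks roughly like this (disk IDs are placeholders):

        ```
        # Mirrored pool built from two whole disks, referenced by stable IDs
        zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

        # A compressed dataset for media
        zfs create -o compression=lz4 tank/media

        # Check pool health
        zpool status tank
        ```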

      • Lakuz@sopuli.xyz

        You can run Virtual Machines and containers in TrueNAS Scale directly. The “Apps” in TrueNAS run in K3s (a lightweight Kubernetes) and you can run plain Docker containers as well if you need to.

        TrueCharts provides additional apps and services on top of the official TrueNAS supported selection.

        I have used Proxmox a lot before TrueNAS. At work and in my homelab. It’s great, but the lack of Docker/containerd support made me switch eventually. It is possible to run Docker on the same host as Proxmox, but in the end everything I had was running in Docker. This made most of what Proxmox offers redundant.

        TrueNAS has been a better fit for me at least. The web interface is nice and container based services are easier to maintain through it. I only miss the ability to use BTRFS instead of ZFS. I’ve had some annoying issues with TrueCharts breaking applications on upgrades, but I can live with the occasional troubleshooting session.

  • ninjan@lemmy.mildgrim.com

    My best advice is to take advantage of the fact that your old setup hasn’t died yet. I.e. start now and set up Proxmox, because it’s vastly superior to TrueNAS for the more general-purpose hardware you have, and then run a more focused NAS project like OpenMediaVault in a Proxmox VM.

    My recommendation, from experience, would be to set up a VM for anything touching hardware directly, like a NAS or Jellyfin (if you want GPU-assisted transcoding), and I personally find it smoothest to run all my Docker containers from one Docker-dedicated VM. LXCs are popular with some people, but I strongly dislike how you set hardware allocations for them, and running all your Docker containers in one LXC is just worse than doing it in a VM. My future approach will be to move to a more container-focused setup as opposed to the VM-focused Proxmox, but that is another topic.

    I also strongly recommend using Portainer or similar to get a good overview of your containers and centralize configuration management.
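
    If it helps, Portainer’s own docs install it with something along these lines (check the current docs for the exact ports and tag):

    ```
    # Persistent volume for Portainer's own data, then the container itself
    docker volume create portainer_data
    docker run -d --name portainer --restart=always \
      -p 9443:9443 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest
    ```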

    As for external access, all I can say is: be careful. Direct internet exposure is likely a really bad idea unless you know what you’re doing and trust the project you expose. Hiding access behind a VPN is fairly easy if your router has a VPN server built in, and WireGuard (which tools like Netbird and Tailscale build on, with Cloudflare Tunnels as another option) is great if it doesn’t.

    As for authentication, it’s pretty tricky but well worth it, and IMO needed if you want to expose stuff to friends/family. I recommend Authentik over the other alternatives.

    • Malice@lemmy.dbzer0.comOP

      I like the advice to use a VM for anything specifically touching hardware. I think I’ll run with that. Thank you! External access is tricky, I know, and doing it securely and safely is really paramount for me. This is the one thing that’s keeping me from just “jumping in” with things. I don’t want to mess that part up.

      • ninjan@lemmy.mildgrim.com

        Well, the good part there is that you can build everything for internal use and then add external access and security later. While VLAN segmentation and an overall secure / zero-trust architecture are of course great, they’re very much overkill for a self-hosted environment unless there’s an additional purpose, like learning for work, or you just find it fun. The important thing really is the shell protection: that nothing gets in. All the other stuff is there to limit potential damage if someone does get in (and in the corporate world it’s not “if”, it’s “when”, because with hundreds of users you always have people being sloppy with their passwords, MFA, devices, etc.). That’s where secure architecture is important, not in the homelab.

        • Malice@lemmy.dbzer0.comOP

          It’s true that the most important part is just to keep the outside… out. I’d love to learn more intricate/advanced network setups and security too. I do work in IT, and knowing this stuff certainly wouldn’t be bad on my resume, and I’ve actually always been interested in learning it regardless. But perhaps you make a good point that I can secure it from the outside and get things functional, and then work on further optimization down the line. Makes things a little less daunting, haha.

    • atzanteol@sh.itjust.works

      Why would you virtualize a file server? You want direct access to the disks for RAID and RAID-like things.

      • ninjan@lemmy.mildgrim.com

        There are absolutely no issues whatsoever with passing hardware directly through to a VM. And virtualizing is good because we don’t want to “waste” a whole machine on just a file server. Sure, dedicated NAS hardware has some upsides in terms of ease of use, but you also pay an, IMO, ridiculous premium for that ease. I run my OMV NAS as a VM on 2 cores and 8 GB of RAM (with four hard drives), but you can make do perfectly fine on 1 core and 2 GB of RAM if you want, as long as you don’t have too many devices attached or run too many IOPS-intensive tasks.

        • atzanteol@sh.itjust.works

          > And virtualizing is good because we don’t want to “waste” a whole machine on just a file server.

          Hmm. I strongly disagree. You’ve now created a new dependency for the fileserver to come up - a system that many other services will also depend on and which will likely contain backups.

          A dedicated system is less likely to fail as it won’t be sensitive to a bad proxmox upgrade or some other VM exhausting system resources on the host.

          You can get cheap hardware if cost is an issue.

          • ninjan@lemmy.mildgrim.com

            Sure, I’m not saying it’s optimal - optimal will always be dedicated hardware and redundancy in every layer. But my point is that you gain very little for quite the investment by breaking out the fileserver to dedicated hardware. It’s not just CPU and RAM needed, it’s also SATA ports and an enclosure. Most people self-hosting have one or more SBCs, and if you have more than one SBC then yeah, the fileserver should be dedicated. The other common thing is having an old gaming/office PC converted to server use, and in that case putting Proxmox on the whole server and running the NAS as a VM makes the most sense instead of buying more hardware for very little gain.

            • atzanteol@sh.itjust.works

              > Sure, I’m not saying it’s optimal

              Question title: Starting over and doing it “right”

              > But my point is that you gain very little for quite the investment by breaking out the fileserver to dedicated hardware.

              You gain stability - which is the single best thing you can get from a file server. It’s not a glamorous job - but it’s an important one.

              > Most people self-hosting have one or more SBCs, and if you have more than one SBC then yeah, the fileserver should be dedicated.

              When somebody new to hosting services asks what they should do, we should provide them with best practices rather than “you can run this on the microcontroller in your toaster” advice. Possible != good.

              > The other common thing is having an old gaming/office PC converted to server use, and in that case putting Proxmox on the whole server and running the NAS as a VM makes the most sense instead of buying more hardware for very little gain.

              Running your NAS in a VM on Proxmox only makes good sense if you’re just being cheap. I’ve been there! I get it. But I wouldn’t tell anyone what I was doing was a good idea, and I certainly wouldn’t recommend it to others. It’s a hack. Own it.

              You can find old servers on eBay for ~$200. Here’s the one I use for <$200. It’s been running for more than a decade without trouble. Even when I mess up other systems it’s always available. When I changed to Proxmox from how I previously managed some other systems it was already available and running. When an upgrade on my laptop goes wrong the backups are available on my fileserver. When a raspberry pi SD card dies the backup images are available on the fileserver. It. Just. Works.

              • ninjan@lemmy.mildgrim.com

                Yes, but in the post they also stated what they were working with in terms of hardware. I really dislike giving the advice “buy more stuff” because not everyone can afford to when selfhosting often comes from a frugal place.

                Still you’re absolutely not wrong and I see value in both our opinions being featured here, this discussion we’re having is a good thing.

                Circling back to the VM thing though: even if I had dedicated hardware, if I were using an old server for a NAS I still would’ve virtualized it with Proxmox, if for no other reason than that it gives me mobility and an easier path to restoration if the hardware, like the motherboard, breaks.

                Still, your advice to buy a used server is good and absolutely what the OP should do if they want a proper setup and have the funds.

                • atzanteol@sh.itjust.works

                  > Circling back to the VM thing though: even if I had dedicated hardware, if I were using an old server for a NAS I still would’ve virtualized it with Proxmox, if for no other reason than that it gives me mobility and an easier path to restoration if the hardware, like the motherboard, breaks.

                  I can see the allure. I’ve just had a lot more experiences where “some idiot” (cough) made changes at 2AM to an unrelated service that caused the entire fileserver and anything else on that system to become unavailable… That happens more often than a hardware error, in my experience. :-)

                  Do you have two Proxmox servers, each with enough disk space to store everything on the fileserver? And I assume off-site backups to copy back from?

                  If my T110 exploded I’d just buy a new machine, restore from off-site, and re-provision with Ansible scripts. But I have ~8TB in storage on my server, so just copying that to a second system is not an option. I’m not going to have a system with a spare 10TB of disk just sitting around…

  • atzanteol@sh.itjust.works

    As a general rule: one system, one service. That system can be metal, a VM, or a container. Keeping things isolated makes maintenance much easier. Though sometimes it makes sense to break the rules. Just do so for the right reasons and not out of laziness.

    Your file server should be its own hardware. Don’t make that system do anything else. Keeping it simple means it will be reliable.

    Proxmox is great for managing VMs. You could start with one server and add more to a cluster as needed.

    It’s easy enough to set up WireGuard for roaming systems that you really should. Make a VM for your VPN endpoint and off you go.
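
    As a hedged sketch of what that VPN-endpoint VM ends up holding: a single config file, with one [Peer] block per roaming device (keys and addresses here are placeholders, not a copy-paste config):

    ```
    # /etc/wireguard/wg0.conf on the VPN VM
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # e.g. your phone
    PublicKey = <phone-public-key>
    AllowedIPs = 10.8.0.2/32
    ```

    Then wg-quick up wg0 brings the tunnel up, and the router forwards UDP 51820 to that VM.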

    I’m a big fan of automation. Look into Ansible and Terraform. At least consider Ansible for updating all your systems easily - that way you’re more likely to do it often.
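
    Even an ad-hoc Ansible run gets you most of the way there. A minimal sketch, assuming an inventory with a “homelab” group of Debian/Ubuntu hosts:

    ```
    # Refresh the package cache and dist-upgrade every host in the "homelab" group
    ansible homelab -m apt -a "update_cache=yes upgrade=dist" --become
    ```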

    • theRealBassist@lemmy.world

      Right now my TrueNAS is virtualized and I truly hate it. It’s been a constant issue for me.

      That said, I can’t afford separate hardware atm. I will be able to soon, but not quite yet lol

    • Possibly linux@lemmy.zip

      “One system, one service” is very bad practice. You should run a bunch of services with docker-compose. If you have enough resources to warrant 3 VMs, you could set up a Swarm.
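
      A rough sketch of that, assuming an existing docker-compose.yml and three VMs (the token and IP are placeholders):

      ```
      # On the first VM: make it a swarm manager
      docker swarm init

      # On the other VMs: join using the token printed by the command above
      docker swarm join --token <worker-token> 192.168.1.10:2377

      # Back on the manager: deploy an existing compose file as a stack
      docker stack deploy -c docker-compose.yml mystack
      ```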

  • teawrecks@sopuli.xyz

    > I need everything to be fully but securely accessible from outside the network

    I wouldn’t be able to sleep at night. Who is going to need to access it from outside the network? Would it be good enough for you to set up a VPN?

    The more stuff visible on the internet, the more you have to play IT to keep it safe. Personally, I don’t have time for that. The safest and easiest system to maintain is one where possible connections are minimized.

    • Malice@lemmy.dbzer0.comOP

      I sometimes travel for work, as an example, and need to be able to access things while I’m away and the girlfriend is home, or when she’s with me and someone else is watching the place (I have a dog that needs a pet-sitter). I definitely have the time to tinker with it. Patience may be another thing, though, lol.

      • Linuturk@lemmy.world

        Tailscale would allow you access to everything inside your network without having it publicly accessible. I highly recommend that since you are new to security.
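
        For what it’s worth, getting a machine onto a tailnet is roughly this (see Tailscale’s docs for the current install method):

        ```
        # Install the client, then authenticate this machine into your tailnet
        curl -fsSL https://tailscale.com/install.sh | sh
        sudo tailscale up

        # Show this machine's tailnet IP
        tailscale ip -4
        ```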

        • teawrecks@sopuli.xyz

          It’s not clear to me how Tailscale does this without being a VPN of some kind. Is it just masking your IP and otherwise forwarding packets to your open ports? Maybe also auto-blocking suspicious behavior if something is clearly scanning or probing for vulnerabilities?

          • lowdude@discuss.tchncs.de

            That’s exactly what it is. I haven’t looked into it too much, but as far as I know its main advantage is simplifying the setup process, which in turn reduces the chances of a misconfigured VPN.

  • Monkey With A Shell@lemmy.socdojo.com

    The right way is the way that works best for your own use case. I like a 3-box setup (firewall, hypervisor, NAS) with a switch in between. It lets you set up VLANs to your heart’s content, manage flows from an external point (virtual firewalls are fine, but if one is the authoritative DNS/DHCP for your network it gets a bit chicken-and-egg when it’s inside a VM host), and store the actual data like vids/pics/docs on a NAS that has just the one job of storing files - less chance of borking it up that way.

    • Malice@lemmy.dbzer0.comOP

      I might be able to scrounge together another physical server to use strictly as a NAS, that isn’t a bad idea. Thank you for the suggestion!

  • BearOfaTime@lemm.ee

    Not sure why you need a new router for Pi-hole. If your machines all point to the Pi-hole for DNS, it works. The router has almost nothing to do with what provides DNS, other than maybe having its DHCP config hand out the Pi-hole as the DNS server.

    Even then, you can set up the Pi-hole to be both DHCP and DNS (which helps with local name resolution anyway), and then just turn off DHCP on your router.
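
    If you go the container route, a hedged sketch of Pi-hole doing both jobs looks something like this (host networking so DHCP broadcasts work; environment variable names can differ between Pi-hole versions, so double-check the image docs):

    ```
    docker run -d --name pihole \
      --network host \
      --cap-add NET_ADMIN \
      -e TZ=America/Chicago \
      -v pihole_data:/etc/pihole \
      --restart unless-stopped \
      pihole/pihole:latest
    # Then enable the DHCP server in the Pi-hole admin UI and disable it on the router.
    ```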

    As I understand it, Tailscale and Nginx fulfill the same requirements. I lean toward TS myself; I like how administration works, and how it’s a virtual network instead of an in-bound VPN. This means devices just see each other on this network, regardless of the physical network they’re connected to. That makes it easy to use the same local-network tools you normally use. For example, you can use just one sync tool, rather than one inside the LAN and one that can span the internet. You can map shares right across the virtual network as if it were a LAN. TS also lets you reach devices that can’t run TS, such as printers, routers, access points, etc., by enabling its subnet router feature.
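
    The subnet router bit is roughly the following (substitute your own LAN range, and approve the route in the Tailscale admin console afterwards):

    ```
    # Let this machine forward traffic for the LAN
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
    sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

    # Advertise the LAN to the tailnet (example range)
    sudo tailscale up --advertise-routes=192.168.1.0/24
    ```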

    Tailscale also has a couple of features (Funnel and Share) which enable you to, respectively, provide internet access to specific resources for anyone, or let foreign Tailscale networks access specific resources.

    I see Proxmox and TrueNAS as essentially the same kind of thing - they’re both hypervisors (virtualization hosts), with TrueNAS adding NAS capability. So I can’t think of a use case for running one on the other. (TrueNAS has some docs around virtualizing it; I assume the use case is a test lab. I wouldn’t think running TrueNAS, or any NAS, virtualized is an optimal choice, but hey, what do I know?)

    While I haven’t explored both deeply, I lean toward TrueNAS, but that’s because I need a NAS solution and a hypervisor, and I’ve seen similar solutions spec’d many times for businesses - I’ve seen it work well. Plus the company behind TrueNAS seems to know what they’re doing; they have a strong commercial arm with an array of hardware options. This tells me they are very invested in making TrueNAS work well, and they do a lot of testing to ensure it works, at least on their hardware. Having multiple hardware products requires both an extensive test group and a support organization.

    Proxmox seems equivalent, except they do just the software part, as far as I’ve seen.

    Two similar products for different, but similar/overlapping use-cases.

    Best advice I have is to make a list of Functional Requirements, abstract/high-level needs, such as “need external access to network for management”. Don’t think about specific solutions, just make the list of requirements. Then map those Functional requirements to System requirements. This is often a one-to-many mapping, as it often takes multiple System requirements to address a single functional requirement.

    For example, that “external access” requirement could map out to a VPN system requirement, but also to an access control requirement like SSO, and then also to user management definitions.

    You don’t have to be that detailed, but it’s good to at least have the Functional-to-System mapping so you always know why you did something.

    • Malice@lemmy.dbzer0.comOP

      You make a very good argument for Tailscale, and I think I’ll definitely be looking deeper into that.

      I like your suggestion to map out functional requirements, and then go from there. I think I’ll go ahead and start working on a decent map for that.

      As far as the new router for pi-hole… my super-great, wonderful, most awesome ISP (I hope the sarcasm is evident, haha; the provider is AT&T) dictates that I use their specific modem/router (not optional), and they also do not allow me to change DHCP on that mandated hardware. So my best option, so far as I’ve seen, is to use the ISP’s box in pass-through with a better router behind it that I can actually set up to use pi-hole.

      Thank you for your thoughts and suggestions! I’m going to take a deeper look at Tailscale and get started properly mapping high-level needs/wants out, with options for each.

      • BearOfaTime@lemm.ee

        Lol, sarcasm received, loud n clear!

        Yea, they all suck that way. I still use my own router for wifi. It’s just routing, and your own router will know which way to the internet, unless there’s something I don’t understand about your internet connection. See my other comment below.

        Yea, requirements mapping like this is standard stuff in the business world, usually handled by people like Technical Business/Systems Analysts. Typically they start with Business/Functional Requirements, hammered out in conversations with the organization that needs those functions. Those are mapped into System Requirements. This is the stage where you can start looking at solutions, vendor systems, etc, for systems that meet those requirements.

        System Requirements get mapped into Technical Requirements - these are very specific: cpu, memory, networking, access control, monitor size, every nitpicky detail you can imagine, including every firewall rule, IP address, interface config. The System and Technical docs tend to be 100+/several hundred lines in excel respectively, as the Tech Requirements turn into your change management submissions. They’re the actual changes required to make a system functional.

      • terminhell@lemmy.dbzer0.com

        Ya don’t need AT&T’s modem. Some copypasta I’ve put together:

        If it’s fiber, you don’t need the modem. You’ll still need it once every few months.

        Things you’ll need:

        1. your own router
        2. cheap 4 port switch (1gig pref)

        Setup: Connect the GPON box (the little fiber converter box they installed on the wall near the modem) WAN to any port on the 4-port switch. Then run a cable from the switch to the GPON port of the modem (usually a red or green port). Make sure the modem fully syncs. Once this happens, you can move that cable from the modem to your own router’s WAN port. Done! Allow the router a few moments to sync as well.

        Now, every once in a while they’ll send a line-refresh signal that will break this, and a power outage will too. In that case, just plug their modem back in, move the cable back to the modem’s GPON port, wait for sync, then move the cable back to your router.

        Bonus: Hook up all this to a battery backup and you’ll have Internet even during power outages, at least for a while.

        • BearOfaTime@lemm.ee

          Since their modem is handing out DHCP addresses, is there any reason why you couldn’t just connect that cable to your router’s internet port, and configure it for DHCP on that interface? Then the provider would always see their modem, and you’d still have functional routing that you control.

          Since consumer routers have a dedicated interface for this, you don’t have to make routing tables to tell it which way to the internet, it already knows it’s all out that interface.

          Just make sure your router uses a different private address range for your network than the one handed out by the modem.

          So your router should get a DHCP and DNS settings from the modem, and will know it’s the first hop to the internet.

          I do this to create test networks at home (my cable modem has multiple ethernet ports), using cheap consumer wifi routers. By using the internet port to connect, I can do some minimal isolation just by using different address ranges, not configuring DNS on those boxes, and disabling DNS on my router.

          • Malice@lemmy.dbzer0.comOP

            Their modem is my router; it’s both. That’s why I need a new one, to do exactly as you’re describing (is my understanding, although another post here suggests otherwise).

            • BearOfaTime@lemm.ee

              You should still be able to run your own router with it treating their router as the next hop.

        • Malice@lemmy.dbzer0.comOP

          Huh, this is interesting, I’ll have to take another look into this. Thanks for the lead!
          And I do have a UPS, and it is, indeed, pretty glorious that my internet, security cameras, and server all stay online for a good bit of time after an outage, and don’t even flinch when the power is only out briefly. Convenience and peace of mind. Well worth a UPS.

  • VelociCatTurd@lemmy.world

    I will provide a word of advice since you mentioned messiness. My original server was just one physical host that I would install new stuff onto. And then I started realizing that I would forget about stuff, or that if I removed something later there might still be lingering related files or dependencies. Now I run all my apps in Docker containers and use docker-compose for every single one. No more messiness or extra dependencies. If I try out an app and don’t like it, boom, container deleted, end of story.

    An extra benefit is that I have less to back up. I only need to back up the docker-compose files themselves and whatever persistent volumes are mounted to each container.
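
    As a small illustration of that pattern (the app and paths are just an example - check the image’s docs for current names), everything the service needs sits next to its compose file:

    ```
    # ~/containers/trilium/docker-compose.yml (hypothetical layout)
    services:
      trilium:
        image: zadam/trilium:latest
        restart: unless-stopped
        ports:
          - "8080:8080"
        volumes:
          - ./data:/home/node/trilium-data
    ```

    Backing up that directory (the compose file plus ./data) captures the whole app, and docker-compose up -d brings it back anywhere.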

    • Malice@lemmy.dbzer0.comOP

      I forgot to mention, I do use docker-compose for (almost) all the stuff I’m currently using and, yes, it’s pretty great for keeping things, well… containerized, haha. Clean, organized, and easy to tinker with something and completely ditch it if it doesn’t work out.

      Thanks for the input!

  • paf@jlai.lu

    If z2m, zwave-js-ui, etc. are installed from the add-on store of HA, all you have to do is create a full backup of HA, and all your automations will be saved and restored automatically.

    • Malice@lemmy.dbzer0.comOP

      I am running HA in a container, so that’s not an option, unfortunately. If I’m being honest, though, it’s probably not a bad idea to start fresh with HA and re-import individual automations one by one, because HA has a lot of “slop” left over from when I was first learning it and playing around with it.

  • ratman150@sh.itjust.works

    I’ll freely admit to skimming a bit, but yes, Proxmox can run TrueNAS inside of it. Proxmox is powerful but might be a little frustrating to learn at first. For example, by default Proxmox expects to use the boot drive for itself, and it’s not immediately clear how to change that so the disk can be used for other things.

    The Noctua NH-D15 is overkill for that CPU, btw, unless you’re doing an overclock, which I wouldn’t recommend for server use. What are your plans for the 1060? If using Proxmox, you’ll want to get one of the “G” series AMD CPUs so that Proxmox binds to the APU; then you should be able to do GPU passthrough on the 1060.

    • Malice@lemmy.dbzer0.comOP

      I’d planned on using the GPU for things like video transcoding (which I know it’s probably way overkill for). Perhaps something like Stable Diffusion to play around with down the line? I’m not entirely sure. I do know that, since the CPU isn’t a G series, the GPU will need to be plugged in at least if/when I need to put a monitor on it. Laziness suggests I’ll likely just end up leaving it in there, lol. As far as the NH-D15 goes, yeah, that’s outrageously overkill, I know, and I may very well slap the stock cooler on and sell the NH-D15.

      Thank you!

      • ratman150@sh.itjust.works

        I have a Proxmox box with an R5 4600G; even under extreme loads the stock cooler is fine. Honestly, once Proxmox is set up you don’t need a GPU. The video output of Proxmox is just a terminal (Debian), so as long as things are running normally you can do everything through the web interface even without the GPU. I do highly recommend a second GPU (either a G-series CPU’s integrated graphics or a cheap GPU) if you want to try Proxmox GPU passthrough. I’ve done it and can say it is extremely difficult to get working reliably with just a single GPU.

        • Malice@lemmy.dbzer0.comOP

          Yeah, I’d definitely considered the fact that I can probably just take the GPU out as soon as proxmox is set up. The only thing I’d leave it for is for transcoding, which may or may not be something I even need to/want to bother with.

  • OminousOrange@lemmy.ca

    For ease of setup and use, I’ve found Twingate to be great for outside access to my network.

  • SteadyGoLucky@sh.itjust.works

    I have had a lot of fun setting up Unraid. It has everything you are looking for. It does cost some money to start, but was very much worth it to me.

  • cryo420@lemmy.world

    Another option: YunoHost. Its apps are installable modules, and custom modules can be packaged from source files - there’s even a YunoHost app of some form for doing that. Even if that’s only a template, any programming-related app (anything from a basic IDE to an LLM assistant, or a full system like TurboPilot in a VM) can be used to help build a new module if needed. I’m fairly sure VM modules - and therefore YunoHost and its modules - can be stacked, and existing programming modules can be repurposed for this if purpose-built ones don’t already exist. A module’s current configuration can also be exported and kept, so variants can be made; the same goes for the entire YunoHost system.

  • Decronym@lemmy.decronym.xyz

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    DNS: Domain Name Service/System
    HA: Home Assistant automation software (also: High Availability)
    IP: Internet Protocol
    LXC: Linux Containers
    NAS: Network-Attached Storage
    PSU: Power Supply Unit
    Pi-hole: Network-wide ad blocker (DNS sinkhole)
    SATA: Serial AT Attachment interface for mass storage
    SBC: Single-Board Computer
    SSO: Single Sign-On
    VPN: Virtual Private Network
    ZFS: Solaris/Linux filesystem focusing on data integrity
