I didn’t see a brief description for this community, so please excuse me if I’m off topic.
Small victories: I set up my first containerized WordPress application with the whole nine yards. Object cache, DB, PHP, web server in separate containers connected together by a simple and readable compose file. The task was easy. What was hard was changing the way I think about running a server as this monolithic thing. True, it’s all on one physical server in the end, but the changes in mindset are becoming more difficult for me as I get older. I had always hated Docker as this wasteful oxymoronic “serverless” thing, but then I saw how I could use it to control dev environments. From there I’ve started to understand when the tool makes sense. For the first time, I feel like I get it.
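For anyone curious what that split looks like, a compose file along those lines can be sketched like this (image tags, passwords, and the nginx vhost are placeholders, not the exact setup):

```yaml
# Four-container WordPress sketch: DB, object cache, PHP-FPM, web server.
services:
  db:
    image: mariadb:11
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wordpress
      MARIADB_PASSWORD: changeme
      MARIADB_RANDOM_ROOT_PASSWORD: "1"
    volumes:
      - db_data:/var/lib/mysql
  cache:
    image: redis:7
  wordpress:
    image: wordpress:php8.3-fpm
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html
  web:
    image: nginx:1.27
    ports:
      - "8080:80"
    volumes:
      - wp_data:/var/www/html:ro
      # plus an nginx vhost that proxies *.php to wordpress:9000

volumes:
  db_data:
  wp_data:
```

The nice part is that each service only sees the others by name on the compose network, which is exactly the mindset shift away from one monolithic box.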
Oh boy that’s a loaded question for me.
Started migrating my single-point-of-failure docker-compose server, which hosts my media and home automation stack, to a 4-node k3s cluster in order to get things mostly HA.
I have the k3s cluster set up with Cilium, which does L2 ARP announcements to make the control plane HA, alongside a few other apps like Traefik and Pi-hole. I also have Vault set up to store all my secrets and cert-manager to generate Let's Encrypt certificates for all my services.
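For reference, the L2 announcement part is driven by a Cilium policy object. A minimal sketch, assuming the interface name and selector match your nodes:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-services
spec:
  loadBalancerIPs: true
  interfaces:
    - enp1s0               # assumption: your nodes' LAN interface
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
```

With that in place, Cilium answers ARP for LoadBalancer IPs from whichever node currently holds the lease, so the VIP survives a node going down.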
The idea was to move all my media to NFS and use Longhorn as distributed storage for my configs and DBs. Unfortunately it turns out that Longhorn's performance is less than ideal, and my fallback of temporarily storing my DBs and configs on my old server acting as an NFS host also didn't work well, most likely because of a network bottleneck.
So for now I have the pods running with local storage, with the exception of a few things like Pi-hole and Vault that I definitely want to be HA. I also did a full DR simulation and know I can restore from backup and do a full data recovery from the cloud in about 3 hrs (data restore). I'll eventually tackle moving configs and DBs off local storage again, but I'm not sure when.
I now have my full set of media apps (Plex and the *arrs) running on k8s. I'll also be migrating the home automation stuff soon.
On a side note, I've grown to hate Duplicati: it's extremely slow, and 90% of the time it just plain fails to restore files. I've moved to Kopia, which seems to be working OK but isn't the most intuitive.
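For anyone put off by Kopia's UI, the CLI flow is actually only a few commands (repo path, source dir, and snapshot ID below are placeholders):

```shell
# Create a repository (filesystem backend; cloud backends work similarly)
kopia repository create filesystem --path /mnt/backup/kopia-repo

# Take and list snapshots of a directory
kopia snapshot create /srv/configs
kopia snapshot list /srv/configs

# Restore a snapshot somewhere safe to verify it actually works
kopia snapshot restore <snapshot-id> /tmp/restore-test
```

The restore-to-a-scratch-directory step is worth doing regularly; that's exactly the failure mode Duplicati kept hitting.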
P.S. Please forgive the unorganized brain dump, it’s late and it was a long day.
I recently replaced my old server (big case + i5-4590 8GB) with a new mini-PC (with a 5560U 16GB). The new one sips power and is so quiet in comparison. The 6TB HDD I had in the old one is currently just attached by a USB-SATA cable. I do intend to 3D print a mount for it to look a bit nicer, but you know what they say: there’s nothing more permanent than a temporary solution.
I’m also trying Alpine + Podman on it instead of Ubuntu Server + Docker. Most of it went without a hitch, though I do have an issue with Pi-hole not working because something called aardvark-dns is using port 53, and Podman needs it running in order to work.
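If it's the same conflict I think it is, newer Podman releases let you move aardvark-dns off port 53 so Pi-hole can have it. A sketch, assuming a system-wide config (use `~/.config/containers/containers.conf` for rootless):

```ini
# /etc/containers/containers.conf
[network]
# Move aardvark-dns's listener off 53 so Pi-hole can bind 53 on the host.
# Containers still resolve names normally; Podman handles the redirect.
dns_bind_port = 1153
```

After editing, recreate the affected networks (or reboot) so the new port takes effect.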
I also attached an old(ish) midrange phone to it with Termux for use as a low-power ARM server (just for playing around, really). I also intend to use it to test Rust code on ARM. I’m not sure what else to use it for though, so any suggestions are appreciated :)
Edit: Changed port 22 to 53, whoops.
@Pyroglyph @possiblylinux127
SFF is great for homelab! Enjoy, and remember this is a step towards another node.

Port 22 is for SSH and isn’t used by Pi-hole. Are you sure you have the right port?
Whoops, I meant 53. My bad, I’ll edit my comment.
You’re good. No problem.
I finally got around to hardwiring 3 WAPs in my house, routed through the attic into a rack in my office.
“New to me” Dell 310 with Proxmox, Pi-hole, Zabbix, and an MS Server 2019 jump box.
Now to get a domain controller up and running on the network.
What are you going to use a domain controller for?
Just to get more comfortable working in a domain environment, and to have one. Right now I’m emulating an environment through Hyper-V so I can test/play around with GPOs.
You can use Samba AD if you don’t want to deal with the pain that is windows server
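For reference, standing up a Samba AD DC is roughly this (Debian-ish package names assumed; details vary by distro):

```shell
# Install Samba and Kerberos bits
apt install samba winbind krb5-config

# Provision a new domain interactively (realm, admin password, DNS backend)
samba-tool domain provision --use-rfc2307 --interactive
```

It speaks enough of the AD protocols that Windows clients can join it and you can manage GPOs with RSAT from a Windows box.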
I’ll get there. I was hoping to replicate from MS AD to Samba AD to practice working in both environments.
I’m trying to find time 😄
Got some projects in the backlog:
- Change SSD from 256MB to 1 TB
- Reinstall synapse with another url
- Move some entries from wallabag to bookstack
- Upgrade proxmox to v8
- Spin up an IT-Tools Docker container
- Customize my nostr relay
- Start using Paperless 😅
- Fix Grocy, which broke when I updated it
But the main problem is finding time, as always, hehe.
Do you mean GB?
Yeah 😅 #damnautocorrect
I moved from Maryland to Colorado this summer and left my homelab in Maryland after begging my dad to host it at his house temporarily. I’m getting a replacement system up and running in Colorado using my old gaming PC. I’m moving away from enterprise gear to cut down on noise, heat, and power usage, but I’m going to miss the insane amount of RAM, the ECC support, and the nice hot-swap HDD cases.
This week I switched my home assistant system from a raspberry pi to a VM under proxmox, so I started turning that raspberry pi into a display for my kitchen. I’m playing around with MagicMirror, but it doesn’t do everything that I want, and I’m more comfortable writing Python than JavaScript, so I think I’m going to make a home assistant dashboard instead (I’ll definitely need to make some custom integrations).
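If you go the Python route, pulling dashboard data out of Home Assistant's REST API is pretty straightforward. A sketch — the URL and token are placeholders, and `pick_states` is just a hypothetical helper, not an HA API:

```python
import json
import urllib.request

# Placeholders: your HA instance and a long-lived access token
# (created under Profile -> Security in the HA UI).
HA_URL = "http://homeassistant.local:8123"
HA_TOKEN = "YOUR_LONG_LIVED_TOKEN"


def fetch_states(url=HA_URL, token=HA_TOKEN):
    """Fetch the full entity state list from HA's /api/states endpoint."""
    req = urllib.request.Request(
        f"{url}/api/states",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def pick_states(states, entity_ids):
    """Reduce the full state list to {entity_id: state} for a dashboard."""
    wanted = set(entity_ids)
    return {s["entity_id"]: s["state"] for s in states if s["entity_id"] in wanted}


# Canned sample of what /api/states returns (abbreviated), so the
# parsing can be shown without a live HA instance:
sample = [
    {"entity_id": "sensor.kitchen_temp", "state": "21.5"},
    {"entity_id": "light.kitchen", "state": "on"},
    {"entity_id": "sensor.outdoor_temp", "state": "12.0"},
]
print(pick_states(sample, ["sensor.kitchen_temp", "light.kitchen"]))
```

From there the display side is just rendering that dict however you like, which sidesteps MagicMirror's JavaScript module system entirely.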
I’m also going to make the display a remote rhasspy satellite for the rhasspy server I’m adding to Home Assistant now that it’s not running on a Pi. If I can get rhasspy working well, this will get me one step closer to degoogling my life. All that’s left after that is setting up my own Invidious instance or using yt-dlp to get YouTube videos into Jellyfin, and switching to GrapheneOS on my phone with Google Maps sandboxed. I unfortunately still need Google Maps when I occasionally drive. OSM is great for biking and walking, but it’s not there for driving yet.
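For the yt-dlp route, something like this produces a folder-per-channel layout Jellyfin can scan (the channel URL and output path are examples, not a recommendation):

```shell
# Download a channel with embedded metadata/thumbnails so Jellyfin
# picks up titles and artwork without a separate scraper.
yt-dlp \
  --embed-metadata --embed-thumbnail \
  -o "/media/youtube/%(uploader)s/%(title)s [%(id)s].%(ext)s" \
  "https://www.youtube.com/@SomeChannel"
```

Adding the video ID to the filename makes re-runs idempotent, since yt-dlp skips files it has already downloaded.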
What part of Colorado? I live in the Springs, and Organic Maps works fine for me. You can always update the map.
Denver. I drive maybe 5 times a month. Between construction and traffic, Google’s live traffic data is frankly unbeatable, and I often use a vehicle with Android Auto.
deleted by creator
My UPS just died :( so I’m trying to repair it. It started beeping like it’s overloaded even with no load attached. I suspect an issue around the current-transformer ADC.
Apart from that, I have a TuringPi 2 loaded with SOQuartz boards to start up, I was thinking of trying kubernetes (k0s) to have some resilience for the base infra (dns resolver, dns root zone for the home domain, metrics) but I need a couple of days to start…
Be very careful with faulty UPS units. They can blow up in your face. Wear gloves and proper safety glasses, just in case.
Thanks, I’m aware of the risks involved and (mostly) know what I’m doing. Right now I’m just probing for faulty caps.
At the moment just playing with home assistant.
Mostly in a state of stability at the moment.
I recently migrated off of a pair of ESXi servers and consolidated my VMs onto my TrueNAS Scale server, primarily to save power and heat generation, so that I’d only be running two servers instead of four. It’s not as fancy or flexible, but the VMs run and do what I need.
So now my lab consists of:
- 1 Dell R620 w/8x 1TB HDD in RAIDZ2 - backups (backuppc)
- 1 Dell R620 w/8x 2TB SSD in RAIDZ2 + JBOD w/16x 3TB HDD in 2xRAIDZ2 pool - NAS + VMs for Plex, k8s, ansible/terraform, etc. (TrueNAS Scale)
- Unifi UDM Pro + 3 APs
- Unifi 10Gb Aggregation Switch
- Unifi 24-port PoE switch (standard)
I also have another pair of R620s, plus 2 more JBOD trays and disks, as cold spares. I may run those servers some during the winter, but it’s too hot in the garage closet in the summer to run them all without additional cooling.
Not so much server-based, but the experimental part of “lab” is well covered: I replaced my late-2013 27″ iMac’s internal HDD with an SSD. It’s a really delicate procedure, as the display is glued to the chassis; it has to be cut loose and very carefully removed (it’s tempered glass), then re-glued with special adhesive strips. But the performance gain is worth it. It also now runs Ventura, even with the NVIDIA card, thanks to OpenCore Legacy Patcher. It feels like a new machine and is perfectly adequate even for small video editing tasks with its 32 GB of RAM.
I have been looking at a silly project, totally not needed and probably not worth it… taking an old Mac mini G4, installing an old Ubuntu release on it, and running old game servers for maximum retro loading times, keeping it on old hardware for the same reason. But I think choosing a PowerPC machine is the mistake…
Otherwise, I have set up Overseerr and am starting to migrate people to Jellyfin.