

How is the kubernetes (k3s/rke2) migration coming along?


Why not cloudinit instead of ssh?


I guess the project was abandoned, but it was supposedly spiritually succeeded by https://github.com/matomo-org/


Great, now I can Kompose convert my converted Helm charts!


RKE2 is the next step up from k3s. The same group maintains it (Rancher), but it's built for bigger production uses (e.g. it deploys etcd instead of SQLite by default).


Reverse proxies! They can route based on the DNS name used to reach them. That's based on layer 7 data though, so it works for HTTP(S) services but not, say, multiple SSH tunnels.
k3s/rke2 (k8s distros) do it automatically with Traefik when you use the Gateway or Ingress APIs.
Also for DNS, a fun option is sslip.io, which lets you use <some service>-192-168-1-10.sslip.io and it resolves to your IP while giving you a DNS name.
Though your router likely has an easy way to add local DNS entries and an upstream for everything else (e.g. 8.8.8.8).
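As a hedged sketch of that host-based routing (the hostname and service name here are made up), the standard Ingress API version looks something like this; k3s's bundled Traefik picks it up automatically:

```yaml
# Hypothetical example: route grafana-192-168-1-10.sslip.io to a
# cluster service named "grafana" on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
spec:
  rules:
  - host: grafana-192-168-1-10.sslip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 80
```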
That’s a good one. ARM or x86?
What k8s distro? (Vanilla, k3s, RKE2, minikube, Harvester, whatever Red Hat's open-source OpenShift is called, etc.?) What issues?


Yes, Nextcloud. No OnlyOffice, it seems, in the Netherlands-produced, production-focused fork https://github.com/MinBZK/mijn-bureau-infra
Tell me about it. If you find an off-the-shelf built kit that works, let me know so I don't have to build one too.
The hunting cams all use cell modems, which honestly sucks (a subscription to pics on land near you…). I've been looking at Wi-Fi HaLow (802.11ah) for it personally, which can support low-power, longer-range (km/miles) links at 4-40 Mbps.
I can find M.2 modules that support it too (e.g. the GW16167), so it's just a matter of SBC and camera choice from there for me. The RPi camera honestly seems pretty decent.


An IPFS backend and some automated pinning system for PeerTube would go a long way for me


I was using Odoo at home for a while


Definitely overkill lol. But I like it. Haven’t found a more complete solution that doesn’t feel like a comp sci dissertation yet.
The goal is pretty simple. Make as much as possible into Helm values, k8s manifests, Tofu, Ansible, or cloud-init, in that order of preference, because as you go up the stack you get more state management for “free”. Stick that in git, and test and deploy from that source as much as possible. Everything else is just about getting there as fast as possible, and keeping the 3-2-1 rule alive and well for it all (3 copies, 2 different media, 1 off-site).
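As a rough illustration of the "just Helm values in git" end of that preference order (app, repo, and values here are only an example), a Fleet bundle can be nothing more than a fleet.yaml pointing at a chart:

```yaml
# fleet.yaml — hypothetical example: the repo holds only the values,
# Fleet renders and applies the chart and keeps reconciling it, so
# the state management comes along for free.
defaultNamespace: monitoring
helm:
  repo: https://grafana.github.io/helm-charts
  chart: grafana
  values:
    persistence:
      enabled: true
      size: 10Gi
```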


Fleet from Rancher to deploy everything to k8s. Bare-metal management with Tinkerbell and Metal3 to manage my OS deployments to bare metal from k8s. Harvester is the OS/k8s platform, and all of its configs can be delivered at install time or as cloud-init k8s objects. Ansible for the switches (as Kube-OVN gets better in Harvester, the default separate hardware might be removed); I’m not brave enough for cross planning that yet. For backups I use Velero and shoot that into the cloud encrypted, plus some nodes that I leave offline most of the time except to do backups and update them. I use Hauler manifests and a kube CronJob to grab images, Helm charts, RPMs, and ISOs to a local store. I use SOPS to store the secrets I need to bootstrap in git. OpenTofu for application configs that are painful in Helm. Ansible for everything else.
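A Hauler content manifest for that kind of air-gap store is a short YAML doc; this is a hedged sketch (the image and version are placeholders, not my actual list):

```yaml
# Hypothetical Hauler manifest: declares content to sync into the
# local store, e.g. via `hauler store sync -f` from a CronJob.
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Images
metadata:
  name: lab-images
spec:
  images:
    - name: rancher/fleet:v0.10.0
---
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Charts
metadata:
  name: lab-charts
spec:
  charts:
    - name: longhorn
      repoURL: https://charts.longhorn.io
      version: 1.6.0
```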
For total rebuilds I take all of that config and load it into a cloud-init script that I stick on a Rocky or SLES ISO that, assuming the network is up enough to configure, rebuilds from scratch; then I have a manual step to restore lost data.
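The rebuild-from-git shape of that cloud-init script, as a minimal hedged sketch (the repo URL and playbook name are made up), is roughly:

```yaml
#cloud-config
# Hypothetical rebuild bootstrap: install just enough to pull the
# real config from git, then hand off to it.
packages:
  - git
  - ansible-core
runcmd:
  - git clone https://git.example.com/lab/infra.git /opt/infra
  - ansible-playbook /opt/infra/site.yml
```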
That covers everything infra but physical layout in a git repo. Just got a PiKVM V4 on order along with a PiKVM switch, so hopefully I can get more of the junk on Metal3 for proper power control too and fewer iPXE shenanigans.
Next steps for me are CI/CD pipelines for deploying a mock version of the lab into Harvester as VMs, running integration tests, and, if they pass, merging the staged branch into prod. I do that manually a little already but would really like to automate it. Once I do that, I’ll start running Renovate to grab the latest stable versions of stuff for me.


GitLab CI is my go-to for git-repo-based things (unit tests, integration tests, etc.). Fleet through Rancher for real deployments (it manages and maintains state, because Kubernetes). Tekton is my in-between catch-all.
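For the repo-based half of that, a minimal .gitlab-ci.yml looks like this (a hedged sketch; the image and test command are placeholders, and deploys stay with Fleet watching the same repo):

```yaml
# Hypothetical pipeline: CI only runs the tests; merging to the
# branch Fleet watches is what triggers the actual deploy.
stages:
  - test

unit-tests:
  stage: test
  image: python:3.12-slim
  script:
    - pip install -r requirements.txt
    - pytest
```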


I know a person who swore up and down about it, but personally I didn’t see the appeal. I am biased towards containers though, so Harvester is the Proxmox alternative I’m running.


I just learned about Internet2 at Supercomputing, in my decades of being in the networking space.


Yep! It uses OpenStack's Ironic under the hood, but tracks config and state via k8s.
For OS building I’ve been moving to Elemental, which builds OS images from container images and cloud-init scripts into SUSE Micro immutable OSes (which use btrfs snapshots under the hood for update management).
I have my shared data on Longhorn, so for services that’s just a Longhorn PVC on RKE2 (k8s), and for clients I expose NFS from a Longhorn PVC for them to mount.
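The service-side half of that is just an RWX claim (a hedged sketch; the claim name and size are made up): asking Longhorn for ReadWriteMany is what makes it back the volume with an NFS share under the hood.

```yaml
# Hypothetical shared-data claim on the Longhorn storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi
```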