

Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
https://d.sb/
Mastodon: @dan@d.sb
Which containers do automatic DB backups? Normally the database is a separate container, unless the app is using SQLite. Is there a MySQL or PostgreSQL container that does automated backups?
Where’s the MySQL option? Some of my servers are running MySQL instead of MariaDB because it allowed binding to multiple IP addresses (although I think Maria has implemented this now), and some query plan optimizations were implemented in MySQL but not MariaDB.
You still need to know what database system is being used in order to make backups of the database. You can’t just snapshot or back up the data directory while a database is running, because you might end up with an inconsistent state that won’t restore properly. You need to either stop the DB before doing the backup, or use the relevant DB-specific tools to perform a backup.
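For example, something like this dumps a consistent copy while the DB keeps running (container names, credentials, and DB names are placeholders):

```sh
# Dump a running MySQL DB; --single-transaction gives a consistent
# snapshot of InnoDB tables without locking everything
docker exec my-mysql mysqldump --single-transaction -uroot -p"$MYSQL_ROOT_PASSWORD" mydb > mydb.sql

# PostgreSQL equivalent
docker exec my-postgres pg_dump -U postgres mydb > mydb.sql
```

Stick that in a cron job or systemd timer and you have real backups instead of a maybe-corrupt copy of the data directory.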
I’m a C# developer and run .NET apps on Linux all the time. I usually work on CLI and server apps, but recently released my first Linux desktop app written in C#: https://flathub.org/apps/com.daniel15.wcc
Even before .NET Core, I was using Mono to run C# apps on Linux. There used to be quite a few GNOME apps written in C#.
> There’s .NET and then there’s .NET Core which is a mere subset of .NET.
Nope. The old .NET Framework has been deprecated for a long time. The latest version, 4.8.1, is not very different to 4.6, which was released 10 years ago.
The modern versions are just called .NET, which is what .NET Core was renamed to, but with much more of the framework implemented cross-platform: something like 95% of the old Windows-only .NET Framework has been reimplemented in a cross-platform way.
> The list of .NET stuff that will actually run on .NET Core (alone) is a barren wasteland.
All modern .NET code is built on the cross-platform framework. Only legacy apps used the old Windows-only .NET Framework.
If you get the free community version of Visual Studio and create a new C# project, it’ll be using the latest cross-platform framework. You can even cross-compile for Linux on a Windows system.
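The CLI equivalent looks roughly like this (project name is a placeholder):

```sh
# Create a new console project; it targets modern cross-platform .NET by default
dotnet new console -n MyApp

# Cross-compile a framework-dependent Linux build from a Windows machine
dotnet publish MyApp -c Release -r linux-x64 --self-contained false
```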
Thanks, this is a good insight.
That’s a very old way of thinking of things. C# has been cross platform for a long time.
> Almost everything ever written in C# uses Windows-specific APIs
Not really. Most C# apps use .NET (since the framework and standard library is quite feature-rich) rather than direct Win32 calls, and .NET is cross-platform. A lot of web services are written in C# and deployed to Linux servers.
> basically no one installs the C# runtime on Linux anymore
You can compile a C# app to a single executable that doesn’t require the framework to be installed.
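Roughly like this, if I remember the flags right:

```sh
# Self-contained single-file publish: bundles the runtime into one executable,
# so the target machine doesn't need .NET installed at all
dotnet publish -c Release -r linux-x64 --self-contained true /p:PublishSingleFile=true
```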
Are you running Jellyfin, the *arr suite, slskd, or Technitium DNS? They’re all written in C#.
Yeah it’s definitely not possible to reach 50MB with a Node.js Docker image, but <150MB should be doable with a distroless base image + compiling the app into one JS file (for example, using Parcel or esbuild).
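Something like this for the bundling step (entry point path is just an example):

```sh
# Bundle the app and its node_modules into a single JS file
npx esbuild src/index.js --bundle --platform=node --outfile=dist/app.js
```

Then the image only needs the distroless Node base plus that one file.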
It’s possible to get a ~50-60MB Docker image with a C# app. Rust and Go definitely produce more compact binaries though.
This makes sense! You get the same advantage if the app uses Go or C# though, and both of those can compile to a single statically-linked executable too.
> if it were written in NodeJS or Python or something I’d be less interested.
Does it matter if it’s running in Docker and the container is lightweight (say less than 50MB), though? I like apps being written in a language I know well so I can contribute if needed, but other than that, I mostly treat a Docker image as a black box.
Looks like a good project, but I genuinely don’t quite get why Rust projects feel the need to advertise “written in Rust” as a feature. Do you find that a lot of users care which programming language your app is written in? Does it help with finding contributors?
I don’t know which programming language most of my self-hosted apps use, and I don’t mind since they all work well and do their job.
This is a decent idea. You can configure the VPS to be an exit node on the Tailnet, and configure the clients to use it as their exit node. Then you’d just need to configure some nftables rules to masquerade (source NAT) to the VPN network interface.
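A rough sketch of what I mean (the wg0 interface name is an assumption for whatever the other VPN uses; 100.64.0.0/10 is Tailscale's CGNAT range):

```sh
# Allow the VPS to forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic arriving from the Tailnet out the VPN interface
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting ip saddr 100.64.0.0/10 oifname "wg0" masquerade
```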
Having said that… At that point, why do you need the other VPN? You can just use the VPS as your exit node.
Tailscale is “mostly” self-hosted, in that the VPN connection itself is peer-to-peer almost all the time. You can host your own Headscale and DERP/Relay servers to make it fully self-hosted, but tbh I’m fine not self-hosting the control plane.
The relay server is only used if both ends have very restrictive NAT and none of the NAT hole punching techniques work, which is rare outside of very locked-down corporate networks. If you have IPv6 enabled on both ends, you shouldn’t have issues making a direct connection, as IPv6 doesn’t use NAT. Even with regular NAT (like a home internet connection) on both ends, Tailscale can usually punch through with UDP hole punching and establish a direct connection.
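You can check whether a given peer is direct or relayed (peer name is an example):

```sh
# Lists each peer and whether the connection is direct or via a DERP relay
tailscale status

# Reports the path taken to reach a specific peer
tailscale ping other-machine
```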
The book was written to sell Windows Home Server. https://en.wikipedia.org/wiki/Windows_Home_Server
Install Nginx, add autoindex on; to the default site config, throw the files into /var/www/html (or whatever default folder it uses), and delete the default index.html file. If you need to do it via Docker then use the official Nginx image: https://hub.docker.com/_/nginx
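If you go the Docker route, a minimal sketch (paths are examples):

```sh
# Minimal server block with directory listing enabled
cat > default.conf <<'EOF'
server {
    listen 80;
    root /usr/share/nginx/html;
    autoindex on;
}
EOF

# The official image picks up configs from /etc/nginx/conf.d/
docker run -d -p 8080:80 \
  -v "$PWD/default.conf:/etc/nginx/conf.d/default.conf:ro" \
  -v /path/to/files:/usr/share/nginx/html:ro \
  nginx
```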
You could also just share the files via SMB. Easy to use on a PC - you could configure their computers to mount the share as a network drive on boot (e.g. R:, for recipes). Not sure about other phones, but the built-in files app on my Galaxy S25 Ultra supports SMB too.
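On Windows that's a one-liner in cmd (server and share names are examples):

```
:: Map the share as a persistent network drive
net use R: \\fileserver\recipes /persistent:yes
```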
I use Plex for music just because they currently have the best app (Plexamp). My Plex server is mostly just music, and TV shows I record off an antenna using HDHomeRun.
This seems like overkill compared to just running it on a VPS and having a second VPS as a hot spare.
Add a second SSD, if the motherboard has a SATA port (I assume your current one is an NVMe drive). A SATA SSD is still more than fast enough as a second drive.
Moving to a bigger SSD also isn’t too difficult, as long as you have a system where you can have both the old and new SSD connected at the same time. It can be a different system if needed. Download Clonezilla onto a USB stick with Ventoy on it, and boot into it. Just make sure you have backups and do the clone in the correct direction (don’t clone the blank new drive onto the old one!!)
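Before pulling the trigger in Clonezilla, it's worth double-checking which device is which:

```sh
# List drives with sizes and model names so you don't mix up source and target
lsblk -o NAME,SIZE,MODEL
```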
This is how I handle it for most software:
- Read (or at least skim) the changelogs for all minor and major versions between your current version and the latest version.
- If you’re using Docker, diff your current docker-compose against the latest one for the project (see the sketch below). Check whether any third-party dependencies (like PostgreSQL, Redis, etc.) have breaking changes.
- If there are any versions with major breaking changes, upgrade to each one separately (e.g. 1.0 to 2.0, then 2.0 to 3.0, etc.) rather than jumping straight to the latest one, as a lot of developers don’t sufficiently test upgrades across multiple versions.
- Take a snapshot before each upgrade (or, if your file system doesn’t support snapshots, manually take a backup before each upgrade).
…or just don’t read anything, YOLO it, and restore a snapshot if that fails. A lot of software is simple enough that all you need to do is change the version number in docker-compose (if you’re using Docker).
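For the docker-compose diff step, something like this works (the URL is a placeholder for wherever the project publishes its example compose file):

```sh
# Compare your compose file against the project's latest example
curl -fsSL https://example.com/project/docker-compose.yml | diff docker-compose.yml -
```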
In addition to backups, consider using snapshots if your file system supports them (ZFS, Btrfs, and LVM all do).
I use ZFS and have each of my Docker volumes in a separate ZFS dataset (similar to a Btrfs subvolume). This lets me snapshot each container independently. I take a snapshot before an upgrade; if the upgrade goes badly, I can instantly revert to the point just before I performed it.
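Roughly (the dataset name is an example from my layout):

```sh
# Snapshot the container's dataset before upgrading
zfs snapshot tank/docker/myapp@pre-upgrade

# If the upgrade goes badly: stop the container and roll back
docker compose down
zfs rollback tank/docker/myapp@pre-upgrade
```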
I’d say 9/10 aren’t doing proper backups, given most people don’t actually do DR runs and verify that they can fully recover from their backups. If you don’t test your backups, you don’t have backups!
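A basic DR run can be as simple as restoring last night's dump into a throwaway container (names and passwords are examples):

```sh
# Spin up a scratch MySQL instance
docker run -d --name restore-test -e MYSQL_ROOT_PASSWORD=test mysql:8
sleep 30   # crude wait for the server to finish initialising

# Restore the dump, then poke at the data to confirm it's all there
docker exec -i restore-test mysql -uroot -ptest -e 'CREATE DATABASE mydb'
docker exec -i restore-test mysql -uroot -ptest mydb < mydb.sql

# Clean up
docker rm -f restore-test
```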