

I don’t think there’s any irony in people trying to make money with their work. And I’d much much MUCH rather have them insert multiple sponsors in a video that I can just skip over than worry about ads and their constant tracking.


Heimdall or Dashy are the first things that come to mind. However, what I would do in your case is to use local URLs that you can resolve via a local DNS like pihole. That way, you don’t have to remember IPs and ports, just service names. If you need different ports, you might need a proxy in between, which is also set up fairly quickly with nginx.
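To illustrate, a minimal sketch of such a proxy config - the hostname, IP and port here are made up, you’d point them at whatever your pihole resolves and wherever the service actually runs:

```nginx
# example: open "jellyfin.home" instead of remembering 192.168.1.20:8096
server {
    listen 80;
    server_name jellyfin.home;                # resolved to this box by your local DNS (e.g. pihole)

    location / {
        proxy_pass http://192.168.1.20:8096;  # the actual service IP and port
    }
}
```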
wtf that is not even REMOTELY what he wants lmaoo. Grafana is complete overkill, that’s like telling someone who wants to host a hello-world docker container to use kubernetes.


You’re still querying search engines with your IP
Your IP in itself might not be that much of a problem, unless you have a static IP, which most consumers don’t. And even if you do, you are also hiding a lot of baggage like user agents and other fingerprintable settings. The IP alone is rarely used as the sole datapoint to link your traffic to other datapoints. On top of that, you can still just decide to exclude google, bing etc. from your search results and rely more on “open” engines like DDG or ecosia.
Another huge upside of searxng is the aggregation of results. The search results of google are all up to, well, google. Same with bing, which is controlled by microsoft. If these companies now decide to “suppress” certain information, people using only those engines directly would no longer see that information. However, if you get your results from multiple search engines, you are not - or let’s say less - affected by that kind of nonsense.
As always with news and information, the truth usually lies somewhere in the middle. And that’s where searxng helps out tremendously.


simply don’t have time
Sorry, but that’s not a valid excuse. It’s a bit akin to having a dog and saying: “Nah, I don’t have time to walk the dog right now”. Selfhosting something that is publicly available (not as in “everyone can use it” but “everyone can access it”) carries some level of responsibility. You either make the time to properly set up and maintain it, or you shouldn’t selfhost stuff.


I would add searxng - a bit finicky to set up, but very powerful and customizable.


Adding certificates is a five-step process: Settings -> Privacy and Security -> View Certificates -> Import -> Select file and confirm. That’s on firefox at least, idk about chrome, but it’s probably not significantly more complex there. With screenshots, a small guide would be fairly easy to follow.
Don’t get me wrong, I do get your point, but I don’t feel like making users add client certs to their browser storage is more work than helping them every 2 weeks because they forgot their password or shit like that lol. At least, that’s my experience. And the cool thing about client certs is that users can’t really break them, unlike passwords, which they can forget, or change because they forgot, only to then forget that they changed them. Once it runs, it runs.


The “average user” shouldn’t selfhost anything. Might sound mean or like gatekeeping, but it’s the truth. It can be dangerous. There’s a reason why I hire an electrician to do my house’s wiring even tho I theoretically know how to do it myself - because I’m not amazingly well versed in it and might burn down my house, or worse, burn down other people’s houses.
People who are serious about selfhosting need to learn how to do it. Halfassing it will only lead to it getting breached, integrated into a botnet and being a burden on the rest of humanity.


And I kinda don’t want to know if complex passwords and low retries before an account gets locked out are enough.
I’ve created a custom cert that I verify within my nginx proxy using ssl_client_certificate and ssl_verify_client on. I’ve got that cert in the browser storage of every device I use, plus on a USB stick on my keychain in case I’m on a foreign or new machine. That is so much easier than bothering with passwords and the like, and it’s infinitely more secure.
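A rough sketch of what that looks like in an nginx server block - the cert paths, hostname and upstream port are placeholders, yours will differ:

```nginx
server {
    listen 443 ssl;
    server_name service.example.tld;                 # placeholder

    # regular server-side TLS
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # verify connecting clients against the custom cert/CA
    ssl_client_certificate /etc/nginx/certs/client-ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;            # the actual service
    }
}
```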


People who don’t care about security are the cancer of the selfhosting-world. Billions of devices are part of a botnet because lazy/stupid owners don’t care about even the most basic shit, like changing the stock password. It’s insane.


As long as you don’t directly connect it to the internet, it’s not hard.
When you do, it does become hard.


True, but I’ve got two problems with that train of thought:


I’m kinda confused by all of the people here doing that tbh.
The entire point of dockerfiles is to have them produce the same image over and over again. Meaning, I can take the dockerfile, spin it up on any machine on god’s green earth and have it run there in the same state as anywhere else, apart from any configs or files that need to be mounted.
Now, if I’m worried about an image disappearing from a remote registry, I just download the dockerfile and store it locally somewhere. But backing up the entire image seems seriously weird to me and kinda goes against the spirit of docker.
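Rough sketch of what I mean - the repo URL and names are obviously made up:

```bash
# keep the dockerfile (the recipe), not a dump of the built image
git clone https://example.com/some-app.git   # or just copy the Dockerfile somewhere local
cd some-app

# rebuild the same image locally whenever you need it, then run it
docker build -t some-app:local .
docker run -d --name some-app some-app:local
```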


I think because there are ways to protect your entire system with cryptographic keys - there’s no need for individual applications to do that themselves. You can either make your network accessible only via an SSH tunnel (which would then use SSH keys), use a VPN, or use mTLS, which requires you to install a cert into your browser’s certificate storage.
There are many good solutions to this problem - no need for individual applications to do it themselves.
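The SSH tunnel variant, for example, is a one-liner - host and ports here are placeholders:

```bash
# forward local port 8443 to the service running on the home server, over SSH (key-based auth)
ssh -N -L 8443:127.0.0.1:8443 user@my-home-server.example
# then open https://localhost:8443 in the browser
```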


Congrats to all the execs, you’ve completely ruined the tech industry.
No - I think they made it (involuntarily) better by forcing people to look into selfhosting everything and taking control of their own infrastructure.


Terraform and Puppet. Not very simple to get into, but extremely powerful and reliable.


How do you notify yourself about the status of a container?
I usually notice if a container or application is down because that usually results in something in my house not working. Sounds stupid, but I’m not hosting a highly available cluster at home.
Is there a “quick” way to know if a container has healthcheck as a feature.
Check the documentation.
Does healthcheck feature simply depend on the developer of each app, or the person building the container?
If the developer adds a healthcheck feature, you should use that. If there is none, you can always build one yourself. If it’s a web app, a simple HTTP request does the trick, just validate the returned HTML - if the status code is 200 and the output contains a certain string, it seems to be up. If it’s not a web app, like a database, a simple SELECT 1 on the database could tell you if it’s reachable or not.
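As a rough sketch - the URL, the expected string and the database credentials are all placeholders:

```bash
# web app: consider it up if the status code is 200 and the page contains an expected string
CODE=$(curl -s -o /tmp/healthcheck_body -w '%{http_code}' "http://localhost:8080/")
if [[ "$CODE" == "200" ]] && grep -q "Welcome" /tmp/healthcheck_body; then
    echo "web app up"
else
    echo "web app down"
fi

# database (here: postgres): consider it up if a trivial SELECT 1 succeeds
if psql -h localhost -U myuser -d mydb -c "SELECT 1;" >/dev/null 2>&1; then
    echo "db up"
else
    echo "db down"
fi
```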
Is it better to simply monitor the http(s) request to each service? (I believe this in my case would make Caddy a single point of failure for this kind of monitor).
If you only run a bunch of web services that you use on demand, monitoring the HTTP requests to each service is more than enough. Caddy being a single point of failure is not a problem, because caddy being dead still results in the service being unusable. And you will immediately know whether caddy died or the service behind it, because the error looks different: if the upstream is dead, caddy returns a 502; if caddy itself is dead, you’ll get a “Connection timed out”.
There are a lot of options - countless paid services offer exactly that.
If you wanna build something yourself for free, you could probably set up a site accessible via HTTP on your server and create a script on your phone that pings it every 30 seconds or so. Afaik, termux has a termux-notification function that lets you send a notification.
Codewise, it would look something like this, I think:
```bash
#!/usr/bin/env bash

# Config
NOTIFY_TITLE="Server Alert"
NOTIFY_MESSAGE="Server returned a non-200 status."
HOST="funnysite.com"
PORT=8080
HC_PATH="/healthcheck"   # not named PATH, that would shadow the system PATH and break curl
URL="http://${HOST}:${PORT}${HC_PATH}"

# Check
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "$URL")

if [[ "$HTTP_CODE" != "200" ]]; then
    termux-notification -t "$NOTIFY_TITLE" -c "$NOTIFY_MESSAGE $HOST:$PORT"
fi

exit 0
```

Afaik, termux doesn’t ship the cron daemon, but you can install cronie or use an external task scheduler. There, just set the script to run every 60 seconds or so. Whatever you need.
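With cronie, the crontab entry would look something like this (cron only goes down to one-minute granularity; the script path is just an example):

```bash
# crontab -e -> run the healthcheck script every minute
* * * * * /data/data/com.termux/files/home/healthcheck.sh
```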
I haven’t tested any of this, but in my head, it sounds like it should work fine.