

You’re supposed to keep “improving” it until it breaks, and then you have something to do fixing it :)


I tried several and I found casdoor pretty painless.


An excellent read, thank you.


I just use my free Portainer Business license for 3 nodes to show in the containers view which ones are outdated, and I check it regularly. Really wish there were some kind of notification, but oh well. I also follow the releases for all the projects I self-host so I know when to check. Automating this makes me too nervous for comfort.


I’m a bit late to the party, but the stack I run is what Beeper uses. If you don’t mind handing them your IMs, it’s exactly what they host and it works great (I used it for a year before I decided to host my own things). They make you use their client, but AFAIK beeper.com is really just a very fancy Matrix instance, so you could probably use any Matrix client.
As for managing an instance, see my recent comment about DB maintenance. There’s nothing more to it than that as far as maintenance is concerned for an instance with just a few users. Then installing the bridges isn’t hard because the docs are really good.


Matrix bridges. I run my own Matrix instance with bridges to the services I use (Google Messages, WhatsApp, IRC, Discord; there are other bridges available) so I can use one client for all.


Seriously… kitty, rawtherapee, keepassxc, python, the freaking linux kernel!


In my understanding that’s the idea: the local ones are lost unless another federated instance synced them. As for the remote ones, maybe they’re backed up, but I really don’t mind an instant messaging platform having no history older than 2 weeks.


I don’t know, can’t speak for the devs. It is weird that if you don’t implement these API calls, buried a bit deep in the wiki, you end up storing every meme and screenshot anybody posted on any instance for the rest of time. I found them through issue reports, with many people asking for this to be built in by default, e.g. a simple “purge after X days” setting and a list of rooms to include in or exclude from the history clean-up.


I purge 2-week-old media using these. Then I purge the largest rooms’ history events using these. Then I compress the DB using this.
It looks like this:
# Credentials for Postgres and the Synapse admin API
export PGPASSWORD=$DB_PASS
export MYTOKEN="yourtokengoeshere"
# before_ts wants milliseconds: %s%N is seconds+nanoseconds, keep the first 13 digits
export TIMESTAMP=$(date --date='2 weeks ago' '+%s%N' | cut -b1-13)
echo "DB size:"
psql --host $DB_HOST -U $DB_USER -d $DB_NAME -c "SELECT pg_size_pretty(pg_database_size('$DB_NAME'));"
echo "Purging remote media"
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
"http://localhost:8008/_synapse/admin/v1/purge_media_cache?before_ts=${TIMESTAMP}"
echo ''
echo 'Purging local media'
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
"http://localhost:8008/_synapse/admin/v1/media/delete?before_ts=${TIMESTAMP}"
echo ''
echo 'Purging room Arch Linux'
export ROOM='!usBJpHiVDuopesfvJo:archlinux.org'
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
--data-raw '{"purge_up_to_ts":'${TIMESTAMP}'}' \
"http://localhost:8008/_synapse/admin/v1/purge_history/${ROOM}"
echo ''
echo 'Purging room Arch Offtopic'
export ROOM='!zGNeatjQRNTWLiTpMb:archlinux.org'
curl \
-X POST \
--header "Authorization: Bearer $MYTOKEN" \
--data-raw '{"purge_up_to_ts":'${TIMESTAMP}'}' \
"http://localhost:8008/_synapse/admin/v1/purge_history/${ROOM}"
echo ''
echo 'Compressing db'
/home/northernlights/scripts/synapse_auto_compressor -p postgresql://$DB_USER:$DB_PASS@$DB_HOST/$DB_NAME -c 500 -n 100
echo "DB size:"
psql --host $DB_HOST -U $DB_USER -d $DB_NAME -c "SELECT pg_size_pretty(pg_database_size('$DB_NAME'));"
unset PGPASSWORD
And periodically I run VACUUM;
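One note on the before_ts values: the Synapse admin API expects milliseconds, not seconds, which is what the date | cut trick in the script produces. It can be sanity-checked on its own:

```shell
# %s%N prints seconds+nanoseconds (19 digits); cutting to 13 digits
# yields a millisecond-resolution Unix timestamp.
TIMESTAMP=$(date --date='2 weeks ago' '+%s%N' | cut -b1-13)
echo "$TIMESTAMP"
echo "${#TIMESTAMP}"
```

If you accidentally pass a seconds-resolution timestamp instead, Synapse reads it as milliseconds, the cut-off lands in early 1970, and effectively nothing gets purged.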


And, importantly, run the DB on Postgres, not SQLite, and implement the regular DB maintenance steps explained in the wiki. I’ve been running mine like that in a small VM for about 6 months; I join large communities, run WhatsApp, Google Messages and Discord bridges, and my DB is 400MB.
Before, when I was still testing and hadn’t implemented the regular DB maintenance, it ballooned up to 10GB in 4 months.
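The “regular db maintenance” is really just the purge/compress routine on a schedule. A hypothetical crontab sketch (script name, times and log path are assumptions; the compressor path is the one from my script):

```shell
# Hypothetical crontab entries -- script name, schedule and log path are
# assumptions, not my actual setup. DB_HOST/DB_USER/DB_NAME must be
# defined in the crontab (or hardcoded), since cron doesn't inherit
# your interactive shell environment.
# Weekly: purge old media and large rooms' history, then compress state
0 4 * * 0  /home/northernlights/scripts/synapse_maintenance.sh >> /var/log/synapse_maintenance.log 2>&1
# Monthly: plain VACUUM so Postgres can reuse the freed space
0 5 1 * *  psql --host "$DB_HOST" -U "$DB_USER" -d "$DB_NAME" -c "VACUUM;"
```

Plain VACUUM doesn’t take the exclusive lock that VACUUM FULL does, so it’s safe to run while Synapse is up.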



I look forward to these posts/emails every week, keep up the good work please :)


As someone who tried to develop for Tizen, it’s a feat. The environment, the docs, everything was abysmal.


Just use the web version? I’ve been doing that on my Linux desktop for over a year; it works just fine.


Yep, which is why I use oauth2-proxy between these services and Casdoor.
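For reference, a minimal oauth2-proxy invocation sketch, assuming Casdoor exposes a standard OIDC issuer. Every hostname, ID, secret and port here is a placeholder, not my actual config:

```shell
# Sketch only -- issuer URL, client ID/secret, ports and upstream are
# placeholders for your own deployment. Casdoor is used as a generic
# OIDC provider here.
oauth2-proxy \
  --provider=oidc \
  --oidc-issuer-url=https://casdoor.example.com \
  --client-id=my-client-id \
  --client-secret=my-client-secret \
  --cookie-secret="$(openssl rand -base64 32 | head -c 32)" \
  --email-domain='*' \
  --http-address=0.0.0.0:4180 \
  --redirect-url=https://app.example.com/oauth2/callback \
  --upstream=http://127.0.0.1:8080
```

The proxy sits in front of the unauthenticated service (the --upstream) and only forwards requests once the OIDC login has succeeded.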


I honestly think it’s hilarious. I’m conflicted to announce that I’m kind of not proud, but also kind of proud, to be a node :)
Like it says in the README: “Why did you make this? The homelab must grow. ¯\_(ツ)_/¯”



The High-Availability Solution to a Problem That Doesn’t Exist.
"You need a service that:
Enter Hypermind."
They sure have a sense of humor though :)


Leantime has that. I’m a very happy user of it.


Seconded, been using them for years. It’s just… mail. No weird stuff.


draw.io in my Nextcloud
And Leantime to keep track of what I want to do, with notes and such.
And a mess of notes in Joplin.