• 1 Post
  • 26 Comments
Joined 8 months ago
Cake day: January 13th, 2025


  • Do you mean this config option?

    [server] 
    hosts = 0.0.0.0:5232, [::]:5232
    

    That is binding the service to a network interface and port. For example, your computer probably has a loopback interface, an Ethernet interface, and a WiFi interface, and you can bind to an IPv4 and/or IPv6 address on those interfaces. Which ones do you want Radicale to listen for traffic on, and on what port? The example above listens on all interfaces, both IPv4 and IPv6, and uses port 5232 on all of them. Of course, that port must not already be in use on any interface. Generally this catch-all notation is insecure, but it’s fine for testing. Put in the real IP addresses when you’re ready.
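
    For instance, binding only to a single LAN address when you’re ready would look something like this (the address here is purely illustrative):

    [server]
    hosts = 192.168.1.10:5232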


  • Yeah, the definitions are actually more about alignment with the US political parties than about left or right. And since both parties are demonstrably right of center, just to different degrees, the bias meter should only be used to determine which political party’s sponsors likely biased the article.

    For example, an article saying climate change is not human-caused and presenting debunked evidence will be ranked mostly center and secondarily right. But an article calling for incentives to reduce use of fossil fuels will be ranked mostly left. That’s mostly center, if anything. An article calling for the government to explicitly force companies to stop using fossil fuels would be mostly left and center. One further advocating for the government to take over energy companies that don’t comply and make energy production public would be mostly left. Just presenting scientific evidence and refusing to give a voice to debunked “alternative facts” is not a leftist position; it’s a centrist one at best and should be the baseline.



  • NFS is really good inside a LAN; just use 4.x (preferably 4.2), which is quite a bit better than 2.x/3.x. It makes file sharing super easy, does good caching, and syncs efficiently. I use it for almost all of my Docker and Kubernetes clusters to let files be hosted on a NAS and stay in sync across the cluster. NFS is great at keeping servers on a LAN or tight WAN in sync in near real time.
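
    For instance, mounting an export with NFSv4.2 explicitly looks something like this (the host and paths are made up):

    sudo mount -t nfs4 -o vers=4.2 nas.lan:/export/shared /mnt/shared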

    What it isn’t is a backup system or a periodic sync application, and it’s usually when people try to use it that way that they get frustrated. It isn’t going to be as efficient in the cloud if the servers are widely spaced across the internet. Sync things to a central location like a NAS with NFS; backups or syncs across wider WANs and the internet should then be done with other tools that are better at periodic, larger, slower transactions, for applications that can tolerate being out of sync for short periods.
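
    One example of that kind of periodic tool (my pick for illustration, not the only option) is a scheduled rsync from the NAS to an offsite box (host and paths are placeholders):

    # nightly cron job on the NAS
    rsync -az --delete /export/shared/ backup@offsite.example.com:/backups/shared/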

    The only real problem I often see in the real world is Windows and Samba (sometimes referred to as CIFS) shares trying to sync the same files as NFS shares, because Windows doesn’t support NFS out of the box and so file locking doesn’t work properly. Samba/CIFS has some advantages, like user authentication tied to Active Directory out of the box, as well as working out of the box on Windows (although older Windows versions don’t support the versions of Samba that are secure). So if I need to give a user access to log into a share from within a LAN (or over VPN) from any device to manually pull files, I use that instead. But for my own machines I just set up NFS clients to sync.

    One caveat is if you’re using this for workstations or other devices that frequently reboot and/or need to be used offline from the LAN. Either don’t mount the shares on boot, or take the time to set it up properly. With the way some guides tell you to set it up, I see a lot of people get frustrated that booting takes a long time, because the mount is treated as a prerequisite for completing the boot. It’s not an NFS issue; it’s more that GRUB and systemd (or their equivalents) are a pain to configure properly, and boot systems default to assuming that a mount configured at boot is necessary for the boot to complete.
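
    With systemd, the usual fix is an automount entry in /etc/fstab, so the share is mounted on first access instead of blocking boot (server and paths are placeholders):

    nas.lan:/export/shared  /mnt/shared  nfs4  _netdev,noauto,x-systemd.automount,x-systemd.idle-timeout=600  0  0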




  • This is why I never used their images for any of my projects; I do everything I can to use official charts made by the software vendor itself, or I create my own and put them in my personal Git repo for automated deployments.

    Any business that gives away middleware for free likely does so in the hope of monetizing it fairly directly, and it will eventually either be pressured by its investors to increase monetization or be forced to stop developing those products for lack of funding. Middleware really doesn’t have many other good ways to monetize.



  • If you want something similar, you could set up a cheap VPS with your own reverse proxy, making sure that all of the connections between your servers and the VPS are secure. But it really depends on your situation. If you have an ISP that assigns you a block of static IPv6 addresses, it’s fairly easy to get a domain and route to those addresses based on subdomains.

    I’m not lucky enough to have a halfway decent ISP available in my area, so I can’t get that, or even a reasonably priced single IPv4 address for residential service, so I have to make do with dynamic DNS, which makes things more complex. Fortunately, at least I don’t have an ISP that forces double NAT on me. So I have set up a VPS with a reverse proxy and a WireGuard VPN tunnel, and I use Cloudflare as my domain registrar and their DDNS, which I update from my OPNsense router, which is also the endpoint of the VPN. I’ve been considering moving to hosting headscale on the VPS instead, but haven’t gotten around to it.

    It really depends on how many servers, how many services, whether you have a domain, whether you have a VPS or other server outside of your home network, whether your ISP gives you static IPs, and whether you’re behind a double NAT. It also depends a lot on your bandwidth: low upload speed is a common problem, especially if you have cable internet service. I’m lucky enough to have symmetrical fiber direct to my modem, even if the ISP is way behind and doesn’t offer IPv6 other than 6rd, which was meant to be a transitional system like two decades ago and is barely functional.
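
    As a rough sketch of the tunnel side of that kind of setup, a wg-quick config on the VPS might look like this (the keys, addresses, and port are placeholders, not my actual config):

    # /etc/wireguard/wg0.conf on the VPS
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <vps-private-key>

    [Peer]
    # the home router (e.g. OPNsense) as the other tunnel endpoint
    PublicKey = <router-public-key>
    AllowedIPs = 10.8.0.2/32

    The reverse proxy on the VPS then just forwards requests down the tunnel to the services behind the router.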


  • It’s just a hosted reverse proxy with a proprietary server backend, as far as I can tell. I don’t usually trust “free” things like that. It’s not that expensive to do it yourself; the real expense comes from high bandwidth flowing through the proxy, which most self-hosted applications for personal use don’t really generate.

    Anyway, with a reverse proxy, on the security end there’s a chance of man-in-the-middle attacks depending on the configuration. And on the privacy end, they will have the ability to log all connections. That may be where they’re planning to make money: by selling that info and/or allowing MitM attacks to inject ads, like many ISPs have talked about. But “free” stuff usually isn’t actually free in the long term, even if it is now while it’s being tested. Usually it just takes a sale to a large corporation for it to become less free, even if the original intent wasn’t to do that.


  • Really, the first issue is your IP address. How does your ISP hand out IP addresses, IPv4 and/or IPv6?

    If you have an ISP that gives you a static block of IPv6 addresses, that simplifies things immensely. But also consider that many legacy, monopoly ISPs have not implemented IPv6 for their customers, especially in the US, so domains without an IPv4 address aren’t accessible from the homes of people who use those ISPs. Still, it means you could assign a static IPv6 address to each service if you wanted to and add a subdomain for each. Then you just need to deal with security on that system.
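
    In DNS terms, that’s just one AAAA record per service (names below are made up, and the addresses use the IPv6 documentation prefix):

    ; zone file entries for example.com
    svc1.example.com.  IN  AAAA  2001:db8::10
    svc2.example.com.  IN  AAAA  2001:db8::20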

    Otherwise you’ll likely need to deal with dynamic DNS. If your router and your domain registrar’s DNS can work together for DDNS, that’s ideal. For example, my OPNsense router updates my Cloudflare-registered domain directly when my ISP changes my IPv4 address (I have one of those ISPs that still doesn’t assign IPv6, but I don’t have any choice if I want more than 5-10 Mbps upload speeds).
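
    Under the hood, that kind of DDNS update is just an API call to the registrar’s DNS. A minimal sketch against Cloudflare’s API (the zone ID, record ID, hostname, and token are placeholders; a router’s DDNS plugin does this part for you):

    curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
      -H "Authorization: Bearer $CF_API_TOKEN" \
      -H "Content-Type: application/json" \
      --data '{"type":"A","name":"home.example.com","content":"'"$NEW_IP"'","ttl":120,"proxied":false}'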

    Then you need to deal with routing. The best way is with a reverse proxy like Caddy; I actually like Traefik a lot because it works well with my complex setup with Docker and Kubernetes, among other things. Basically, your router needs to route all of the inbound traffic on the appropriate ports to the reverse proxy, which then routes to the appropriate service based on the subdomain and/or port of the request.
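
    With Caddy, that per-subdomain routing is about as simple as it gets (hostnames and backend addresses are made up):

    service1.example.com {
        reverse_proxy 192.168.1.20:8080
    }

    service2.example.com {
        reverse_proxy 192.168.1.21:3000
    }

    Caddy will also obtain TLS certificates for those hostnames automatically, which is one less thing to manage.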

    Once you route the subdomain to the appropriate service, you need to deal with security. Once a service is exposed, it’s eventually going to start getting hit by bots trying to access it. It’s best to implement something like fail2ban to stop them from wasting your processing power with failed logins, 404 errors, and such.
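
    A minimal fail2ban sketch in /etc/fail2ban/jail.local (the jail and thresholds are just examples; you’d add jails matched to whatever services you expose):

    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h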


  • I set up separate VLANs for devices that do or don’t get filtering, with different DNS servers assigned. And I have two different WiFi SSIDs on my access point for the two VLANs, as well as ports on my primary switch assigned to one VLAN or the other. I did end up having one other switch with devices from both VLANs in a different area, and had to set up its port on the primary switch with a couple of MAC-based filters to assign the VLAN for just the devices on that remote switch, but those are static devices, so that wasn’t an issue. I don’t attach any other devices to that.


  • My servers that have been around for a while get thousands of scans per day. In fact, I’m going to move away from CrowdSec because I exceed the free limit on log entries usually within the first day of the month, sometimes within just an hour or so. I mean, it still works and blocks stuff, but the web portal is basically useless for any research into what I need to give attention to. That, and the fact that you can no longer delete decisions in the web portal with the free account.



  • I’ve used Java Scanner objects to do this extremely efficiently with minimal memory required, even with multiple parallel searches. Indexing is only necessary if you want to search the information many times and don’t know exactly what the searches will be. For one-time searches it’s not going to be useful; honestly, grep is going to be faster and more efficient for most one-time searches.

    The initial indexing or searching of the files will be bottlenecked by the speed of the disk the files are on, no matter what you do. It only helps to index because you can move future searches to faster memory.

    So it greatly depends on what you need to search and how often. The tradeoff is memory usage, and it only pays off for repeated searches of the data you chose to index from the files in the first pass.
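
    A minimal sketch of the streaming approach (file name and search term are placeholders): Scanner only buffers the current line, so memory stays flat no matter how large the file is.

    import java.io.File;
    import java.io.FileNotFoundException;
    import java.util.Scanner;

    public class StreamSearch {
        public static void main(String[] args) throws FileNotFoundException {
            File file = new File("huge.log");   // placeholder file name
            String term = "ERROR";              // placeholder search term
            long lineNo = 0;
            // Read one line at a time; memory use stays small even for
            // files far larger than available RAM.
            try (Scanner in = new Scanner(file)) {
                while (in.hasNextLine()) {
                    lineNo++;
                    String line = in.nextLine();
                    if (line.contains(term)) {
                        System.out.println(lineNo + ": " + line);
                    }
                }
            }
        }
    }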