


  • A few suggestions:

    • Some of those components may end up costing a lot to operate. Since you’re doing this as a portfolio piece, you may want to create a spreadsheet with all the services, then run a cost simulation (a toy model is sketched after this list). You can use the AWS Pricing Calculator, but it won’t be as flexible for ‘what if’ scenarios. Any prospective employer will appreciate that you’ve given some thought to runtime pricing.

    • You may want to split static media out into S3 buckets and put a CloudFront CDN in front for regional scaling (and cost). Serving static media from the local server uses up processing power, bandwidth, storage, and memory; S3/CloudFront is designed for exactly this and is a lot cheaper. All fonts, JS scripts, images, CSS stylesheets, videos, etc. can be moved out (see the Pulumi sketch after this list).

    • Definitely expire your CloudWatch log records (maybe no more than a week), otherwise they’ll pile up and end up costing a lot. It’s a one-liner (below).

    • Consider where backups and logs will go. Backups should also account for Disaster Recovery (DR). Is the purpose of multiple AZs scaling or DR? If DR, think about different recovery strategies and how much downtime is acceptable.

    • Using Pulumi is good if the goal is to go multi-cloud. But if you’ve hardcoded Aurora or ALBs into the stack, you’re stuck with AWS. If that’s the case, maybe consider going with AWS CDK in a language you like. It would get you farther and let you do more native DevOps.

    • Consider how updates and revisions might work, especially once rolled out. What scripts will you need to run to upgrade the NextCloud stack? What are the implications if only one AZ is updated but not the other? Etc.

    • If this is meant for a business or multiple users, consider where user accounts would go. What about OAuth or 2FA? If it’s a business, they may already have an Identity Provider (IdP), and now you need to tie into it.

    • If tire-kicking, you may also want to script switching to plain old RDS/Postgres so you can stay under the free tier (a config toggle for this is sketched after this list).

    • To make this all reusable, take whatever is generated (i.e. Aurora endpoints) and save everything to a JSON or .env file (helper below). This way, the whole thing can be zapped and re-created, and it should work without having to do much manually in the console or CLI.

    • Any step that uses the console or CLI adds friction and risk. Either automate them, or document the crap out of them as a favor to your future self.

    • All secrets could go in .env files (which should be in .gitignore). Aurora/RDS database passwords could also be auto-generated, kept in Secrets Manager, and periodically rotated (sketched below). Hardcoded DB passwords are a risk.

    • Think about putting WAF in front of everything with web access to fend off DDoS attacks (a bare-bones rule is sketched below).
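
    For the cost simulation, a few lines of throwaway code can stand in for a spreadsheet. A minimal sketch; every price below is a made-up placeholder, so plug in real numbers from the Pricing Calculator:

    ```typescript
    // Toy 'what if' cost model. All prices are placeholders, NOT real AWS rates.
    interface Scenario { gbStored: number; gbEgress: number }

    const PRICE = {
        auroraPerHour: 0.06, // hypothetical instance-hour rate
        s3PerGbMonth: 0.023, // hypothetical storage rate
        egressPerGb: 0.09,   // hypothetical data-transfer rate
    };

    function monthlyCost({ gbStored, gbEgress }: Scenario): number {
        const hoursPerMonth = 730;
        return PRICE.auroraPerHour * hoursPerMonth
            + PRICE.s3PerGbMonth * gbStored
            + PRICE.egressPerGb * gbEgress;
    }

    const scenarios: Record<string, Scenario> = {
        "just me":    { gbStored: 50,  gbEgress: 20 },
        "small team": { gbStored: 500, gbEgress: 400 },
    };

    for (const [name, s] of Object.entries(scenarios)) {
        console.log(`${name}: ~$${monthlyCost(s).toFixed(2)}/month`);
    }
    ```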
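
    For the static-media split, here’s roughly what it looks like in Pulumi TypeScript (since you’re already on Pulumi). Resource names are illustrative, and a real setup would add cache policies and an ACM certificate for a custom domain:

    ```typescript
    import * as pulumi from "@pulumi/pulumi";
    import * as aws from "@pulumi/aws";

    // Private bucket for fonts, JS, CSS, images, videos, etc.
    const assets = new aws.s3.Bucket("static-assets");

    // Origin Access Identity so only CloudFront can read the bucket.
    const oai = new aws.cloudfront.OriginAccessIdentity("assets-oai");

    new aws.s3.BucketPolicy("assets-read", {
        bucket: assets.id,
        policy: pulumi.all([assets.arn, oai.iamArn]).apply(([bucketArn, oaiArn]) =>
            JSON.stringify({
                Version: "2012-10-17",
                Statement: [{
                    Effect: "Allow",
                    Principal: { AWS: oaiArn },
                    Action: "s3:GetObject",
                    Resource: `${bucketArn}/*`,
                }],
            })),
    });

    const cdn = new aws.cloudfront.Distribution("assets-cdn", {
        enabled: true,
        origins: [{
            originId: "s3-assets",
            domainName: assets.bucketRegionalDomainName,
            s3OriginConfig: { originAccessIdentity: oai.cloudfrontAccessIdentityPath },
        }],
        defaultCacheBehavior: {
            targetOriginId: "s3-assets",
            viewerProtocolPolicy: "redirect-to-https",
            allowedMethods: ["GET", "HEAD"],
            cachedMethods: ["GET", "HEAD"],
            forwardedValues: { queryString: false, cookies: { forward: "none" } },
        },
        restrictions: { geoRestriction: { restrictionType: "none" } },
        // Default *.cloudfront.net cert; swap for an ACM cert with a custom domain.
        viewerCertificate: { cloudfrontDefaultCertificate: true },
    });

    export const cdnDomain = cdn.domainName;
    ```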
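
    Log expiry really is a one-liner when the log group is created in the same Pulumi program:

    ```typescript
    import * as aws from "@pulumi/aws";

    // Records in this group are deleted after 7 days instead of piling up forever.
    const appLogs = new aws.cloudwatch.LogGroup("nextcloud-app", {
        retentionInDays: 7,
    });
    ```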
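
    For the RDS/Postgres fallback, one way is a per-stack config flag. The `useAurora` flag and the sizing here are made up for illustration, and a real Aurora setup also needs `aws.rds.ClusterInstance` resources:

    ```typescript
    import * as pulumi from "@pulumi/pulumi";
    import * as aws from "@pulumi/aws";

    const config = new pulumi.Config();
    // Hypothetical flag: `pulumi config set useAurora true` per stack.
    const useAurora = config.getBoolean("useAurora") ?? false;

    const db = useAurora
        ? new aws.rds.Cluster("db", {
              engine: "aurora-postgresql",
              masterUsername: "nextcloud",
              masterPassword: config.requireSecret("dbPassword"),
              skipFinalSnapshot: true, // demo convenience; drop for real data
          }).endpoint
        : new aws.rds.Instance("db", {
              engine: "postgres",
              instanceClass: "db.t3.micro", // free-tier-eligible class
              allocatedStorage: 20,         // free tier covers 20 GB
              username: "nextcloud",
              password: config.requireSecret("dbPassword"),
              skipFinalSnapshot: true,
          }).endpoint;

    export const dbEndpoint = db;
    ```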
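
    For saving generated outputs, `export` everything from the Pulumi program (as above), then a small helper run right after `pulumi up` can dump `pulumi stack output --json` into a .env file. Assumes Node and string-ish output values:

    ```typescript
    import { execSync } from "node:child_process";
    import { writeFileSync } from "node:fs";

    // `pulumi stack output --json` prints every exported output for the stack.
    const outputs: Record<string, unknown> = JSON.parse(
        execSync("pulumi stack output --json", { encoding: "utf8" }),
    );

    // KEY=value lines; uppercasing keys is just a .env convention.
    const lines = Object.entries(outputs)
        .map(([key, value]) => `${key.toUpperCase()}=${String(value)}`)
        .join("\n");

    writeFileSync(".env", lines + "\n");
    console.log(`Wrote ${Object.keys(outputs).length} outputs to .env`);
    ```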
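
    For the auto-generated password, something along these lines keeps it out of the repo entirely (automatic rotation is more involved: it needs a rotation Lambda wired up via `aws.secretsmanager.SecretRotation`):

    ```typescript
    import * as aws from "@pulumi/aws";
    import * as random from "@pulumi/random";

    // Generate the DB password at deploy time instead of hardcoding it.
    const dbPassword = new random.RandomPassword("db-password", {
        length: 32,
        special: false, // RDS rejects a few special characters
    });

    // Keep it in Secrets Manager so apps fetch it at runtime.
    const secret = new aws.secretsmanager.Secret("nextcloud-db-password");
    new aws.secretsmanager.SecretVersion("nextcloud-db-password-value", {
        secretId: secret.id,
        secretString: dbPassword.result,
    });
    ```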
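
    And a bare-bones WAF sketch with a rate-based rule; the 2000-requests-per-5-minutes limit is arbitrary:

    ```typescript
    import * as aws from "@pulumi/aws";

    const acl = new aws.wafv2.WebAcl("edge-acl", {
        scope: "REGIONAL", // use "CLOUDFRONT" (in us-east-1) to front a distribution
        defaultAction: { allow: {} },
        visibilityConfig: {
            cloudwatchMetricsEnabled: true,
            metricName: "edge-acl",
            sampledRequestsEnabled: true,
        },
        rules: [{
            name: "rate-limit",
            priority: 1,
            action: { block: {} },
            statement: {
                // Blocks IPs exceeding ~2000 requests in a rolling 5-minute window.
                rateBasedStatement: { limit: 2000, aggregateKeyType: "IP" },
            },
            visibilityConfig: {
                cloudwatchMetricsEnabled: true,
                metricName: "rate-limit",
                sampledRequestsEnabled: true,
            },
        }],
    });

    export const webAclArn = acl.arn;
    ```

    Attach it to an ALB with `aws.wafv2.WebAclAssociation`, or hand the ARN to a CloudFront distribution’s `webAclId` property.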

    This is a great learning exercise. Hope you don’t find these suggestions overwhelming; they only apply if you want to show it off to future employers. If it’s just for personal use, ignore all the rest and just think about operating costs. See if you can find an AWS sales or support person and get some freebie credits.

    Best of luck!




  • I’ve had good luck with Jekyll: source saved on GitHub, set up so pushing to main auto-deploys to a Cloudflare Pages site. Content is Markdown, and larger media gets uploaded to S3.

    Much easier to set up and maintain than GitHub Pages. Since the output is static, it’s pretty snappy. Also includes RSS feeds and permanent URLs.

    Have also set up several WordPress sites. Slower, but if you want WYSIWYG editing or user comments, or there are several contributors, it would work better.

    Have also heard good things about Ghost, but haven’t actually deployed one yet.




  • Saw a post this past week on SSD drive failures. They’re blaming a lot of it on ‘over-logging’: writing too much trivial, unnecessary data to logs. I imagine it gets worse when realtime data like OpenTelemetry gets involved.

    Until I saw that, I never thought there was such a thing as ‘too much logging.’ Wonder if there are any ways around it, other than putting logs on spinny disks.


  • Have used Jekyll, Hugo, and Docusaurus to generate static sites, and WordPress and Ghost for blogs.

    A few things to think about:

    • Where do you plan to host and how much is the monthly budget?
    • How much traffic do you expect to get?
    • Will the content be static or updated often (i.e. a landing-page site vs. a blog)?
    • Will more than one person be updating the site?
    • How technical is the person or people updating the site? Are they OK with a terminal and command line, or do they need a GUI with point-and-click?
    • Will there be ‘member-only’ features, i.e. things that require users creating an account and logging in?
    • Will you need to offer a way for people to get in touch? Like, contact pages, email, etc.
    • Will there be a need for the public to post and answer questions (i.e. a forum)?
    • Will you need future support for things like newsletters, shopping carts, etc.

    If it’s one person, technical, and static, I’d go with Jekyll and GitHub Pages, or Jekyll/Hugo/Docusaurus on Cloudflare Pages. They all have templates, but you need to know how to set up GitHub repos and tools. Cost is $0 to operate, other than the annual fee for a custom DNS domain name.

    If it’s more than one person, non-technical, or dynamic, go with hosted WordPress or Ghost. Budget for a DNS name plus ~20-50 dollars or euros/month (plus or minus, depending on features and traffic). There are free versions of these, but they slap ads all over them.

    You can self-host all these, but it’s much easier to have someone else deal with traffic spikes.

    If you need community forums or a way for users to communicate with each other, then none of the above.