

Ya exactly! Or just sanity checking if you understand how something works, I use it a lot for that, or trying to fill in knowledge gaps.
Also qwen3 is out, check that out, it might fit on a 1060.
I see a mix, don’t get me wrong, Lemmy is definitely opinionated lol, but I don’t think it’s quite black and white.
Also, generally, I’m not going to hold back my thoughts or opinions because I’m afraid of people who don’t understand nuance. Sometimes I don’t feel like dealing with it, but most of the time I’m going to share my opinion.
OP asked what you self-host that isn’t media; self-hosted LLMs are something I find very useful and didn’t see mentioned. Home Assistant, Pi-hole, etc., are all great answers… but those were already mentioned.
I still have positive upvotes on that comment, and no one has flamed me yet, but we will see.
I think most people on here are reasonable, and I think local LLMs are reasonable.
The race to AGI and companies trying to shove “AI” into everything is kind of insane, but it’s hard to deny LLMs are useful, and running them locally avoids the privacy concerns.
Local LLMs, I’m surprised no one brought that up yet. I’ve got an old GPU in my server, and I’m running some local models with Open WebUI for use in the browser and Maid as an Android app to connect to it.
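For anyone curious, a stack like that can be just two containers. Here’s a minimal Docker Compose sketch, assuming Ollama as the model server behind Open WebUI (image names and ports are the upstream defaults as I understand them; GPU passthrough config is omitted, adjust to your setup):

```yaml
# Hypothetical minimal local-LLM stack:
# Ollama serves models; Open WebUI sits in front for browser access.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # where downloaded models live
    ports:
      - "11434:11434"               # Ollama's default API port

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"                 # browser UI on host port 3000
    depends_on:
      - ollama

volumes:
  ollama:
```

An Android client like Maid (or anything else that speaks the Ollama-style API) can then point at the server’s port 11434 directly.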
I think looking through the comments on this post about AI stuff is a pretty good representation of my experience on lemmy. Definitely some opinions, but most people are pretty reasonable 🙂