
Can you tell me more about this? I’ve considered trying to build and self-host something for home automation that would essentially be a FOSS and locally run Alexa/Google Assistant. Is this what you’re doing? How exactly does Ollama fit in?
Mostly to help quickly pull together, summarize, and organize information from web searches done via Open WebUI.
Also to edit copy or brainstorm ideas for messaging, scripts, etc.
Sometimes to have discussions around complex topics to ensure I understand them.
My favorite model to run locally right now is easily Qwen3-30B-A3B. It can handle reasoning or quicker responses, and it runs very well on the 24 GB of VRAM on my RTX 3090. Plus, because it has an MoE architecture with only 3B parameters active during inference, it's lightning fast.
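If anyone wants to try something similar, here's a minimal sketch of querying a local Ollama server over its REST API. The model tag (`qwen3:30b-a3b`) is my placeholder; it may differ from whatever name you actually pulled the model under.

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes Ollama is running on the default port (11434); the model tag
# "qwen3:30b-a3b" is a placeholder -- adjust it to match your own pull.
import requests

def ask(prompt: str, model: str = "qwen3:30b-a3b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Summarize the tradeoffs of MoE architectures in two sentences."))
```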
Interesting project. Is it actually possible to track workouts using your phone or smartwatch without needing proprietary third-party apps like Strava or Garmin Connect, though?
It sounds like we're on similar levels of technical proficiency. I've learned a lot by reading and going down rabbit holes on how LLMs work and how to troubleshoot and even optimize them to an extent, but I'm definitely not a computer engineer or programmer.
I started with LM Studio before Ollama/Open WebUI, and it does have some good features and an overall decent UI. I switched because OWUI seems to have more extensibility with tools, functions, etc., and I wanted something I could run as a server and use from my phone and laptop when I'm away. OWUI has been great for that, although setting up remote access to the server over the web took a lot of trial and error. The OWUI team also develops and updates the software very quickly, which is great.
I'm not familiar with text-generation-webui, but at this point I'm not really wanting for much more out of a setup than my Docker stack with Ollama and OWUI. (See the sketch below for how I use it from other devices.)
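For anyone curious how the stack gets used remotely: once the containers are up and remote access is sorted out, anything that speaks the OpenAI API can talk to it, since Ollama exposes an OpenAI-compatible endpoint at /v1. A minimal sketch, where the hostname and model tag are placeholders for your own setup:

```python
# Minimal sketch: reach the self-hosted stack from another machine via
# Ollama's OpenAI-compatible endpoint at /v1.
# "my-home-server" and the model tag are placeholders -- substitute your own.
from openai import OpenAI

client = OpenAI(
    base_url="http://my-home-server:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",  # the client requires a key, but Ollama ignores it
)

chat = client.chat.completions.create(
    model="qwen3:30b-a3b",  # same placeholder tag as earlier
    messages=[
        {"role": "user", "content": "Give me three subject lines for a fundraiser email."}
    ],
)
print(chat.choices[0].message.content)
```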