I’ve been experimenting with it for different use cases:
- A standard chat-style interface with open-webui. I use it to ask the kinds of things people would normally ask ChatGPT: researching topics, vacation plans, etc. I take it all with a grain of salt and still use search engines too.
- Parts of different software projects of mine, via ollama-python. For example, I tried using it to auto-summarize transaction data.
- Home Assistant voice assistants for my own voice-activated smart home.
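For the transaction-summarizing idea, a minimal sketch with the ollama-python library might look like the following. The model name, the transaction fields, and the prompt wording are all my own assumptions, not anything prescribed by the library:

```python
def build_prompt(transactions):
    """Format transaction dicts into a plain-text prompt.
    Assumes each transaction has 'date', 'amount', and 'description' keys
    (a hypothetical schema for illustration)."""
    lines = "\n".join(
        f"{t['date']}  {t['amount']:>9.2f}  {t['description']}"
        for t in transactions
    )
    return (
        "Summarize the following transactions in two sentences, "
        "noting any unusually large amounts:\n" + lines
    )

def summarize(transactions, model="llama3"):
    """Send the prompt to a locally running ollama server.
    Requires `pip install ollama` and an ollama daemon with the
    chosen model pulled; model name here is just an example."""
    import ollama
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": build_prompt(transactions)}],
    )
    return response["message"]["content"]
```

On a 1080 Ti a call like this can take a while for longer transaction lists, so batching or a smaller model may be worth trying.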
I only have a GeForce 1080 Ti in it, so some projects are a bit slow and I can't run the biggest models, but what really matters is the self-satisfaction I get from not using somebody else's model. Or that's what I try to tell myself while I'm waiting for responses.