Hey there Krzychu,
I'm doing something I've never done before, and I want you there with me.
Tonight at 20:30 UTC, I'm going live to build an Ollama Home Assistant add-on from scratch.
What's the goal? To let you run the Ollama client and local (or cloud) LLMs directly on the machine running your Home Assistant (yes, even on a Raspberry Pi or HA Yellow/Green).
What's the catch? This is my first time attempting anything like this. Success is absolutely not guaranteed. But I'm going to give it everything I've got because this is my way of contributing to our amazing community—my gift for the upcoming holidays.
Here's what you can expect:
- Live config & coding (with all the bugs and "why isn't this working?" moments)
- Real-time problem solving
- A chance to see if we can actually pull this off together
Who should join?
- Curious minds: Come watch the adventure unfold
- Community members: Come hang out and see where this wild idea takes us
- Coders: Come help (or poke fun at me)! Your expertise could make all the difference
Whether we succeed or spectacularly fail, it's going to be a fun ride, and I'd love to have you along for it.