I mostly understand the dilemma, but I want to see if someone has had better success with their AI assistant. I use the Ollama integration and set up a conversation model. The default Home Assistant AI knows to use the home forecast entity whenever I ask about the weather, but the Ollama models don't, no matter what I try: setting up an AI task model as well, toggling "control Home Assistant" on or off, or toggling "perform local commands" on or off. I thought maybe keeping default commands on would enable this ability while answering all other queries with the Ollama LLM. I just want a smarter AI. Any suggestions?


What I do is use extended_openai_conversation from HACS to hook into my LLM’s OpenAI-compatible API endpoint. That one makes it available via the regular Voice Assistant stuff within Home Assistant.
Not sure what’s happening here. The Ollama integration’s docs say it doesn’t have all the functionality of the built-in assistant - for example, it doesn’t support sentence triggers. And the weather forecast is a bit of a weird one in Home Assistant: it’s not stored on an entity (unless you configure one manually) but fetched via a service call. Your AI may only have the current condition and maybe the current temperature available; everything else must be specifically requested with a deliberate `weather.get_forecasts` call (formerly `weather.get_forecast`). Maybe that service call and the specific processing exist in the official Assist pipeline, but not in the Ollama integration?
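To see what I mean, you can try the call yourself in Developer Tools → Actions. A minimal sketch, assuming your weather entity is `weather.home` (substitute your own entity ID):

```yaml
# Fetch the daily forecast from the weather integration.
# This returns a response payload; the forecast is not an attribute
# on the entity itself, which is why an LLM that only reads entity
# states never sees it.
action: weather.get_forecasts
target:
  entity_id: weather.home
data:
  type: daily  # also accepts "hourly" or "twice_daily", depending on the integration
```

If that returns forecast data but your Ollama assistant still can’t answer weather questions, that would support the theory that the Ollama integration simply isn’t wired up to make this call.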