From OPEn to NuCoreAI
It’s hard to believe it’s only been seven months since we introduced OPEn. It started as a bold idea: a plugin framework that uses AI to understand smart home and IoT devices, interpret their properties and commands at runtime, and generate intelligent execution plans for optimization. In essence, it eliminates the need for rigid standardization across device and grid protocols.
By February 2025, we had a fully functional demo running on cutting-edge tech: Claude 3.5 Sonnet, RAG with Pinecone, and a healthy dose of optimism. The results were stunning, the interest was immense, and we were flying high … until we realized we were spending over $100/month on AI services for just one demo! At roughly 10 queries a day over two months, reality set in: the business model simply wasn’t sustainable.
And that’s when the real adventure began: the birth of NuCoreAI and our deep dive into the fascinating world of local AI. Suddenly, our days were filled with fine-tuning, parameter counts, LoRA/PEFT, Alpaca-style datasets, vector databases, context window gymnastics, Hugging Face, Unsloth, Colab sessions, temperature tweaking, GPUs vs. NPUs, and yes, prompt engineering (on loop).
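For anyone curious what an “Alpaca-style dataset” actually looks like: each training example is just a small JSON object with instruction, input, and output fields, usually stored one per line. A minimal sketch (the IoX-flavored content below is illustrative, not from our actual dataset):

```python
import json

# One Alpaca-style training record: an instruction, optional input context,
# and the expected output. The device property here is a made-up example.
record = {
    "instruction": "Explain what the given IoX device property controls.",
    "input": "Property: OL (On Level)",
    "output": "OL sets the brightness a dimmer turns on to when activated.",
}

# Fine-tuning datasets are typically stored as JSON Lines: one record per line.
line = json.dumps(record)
parsed = json.loads(line)
print(parsed["instruction"])
```

Thousands of records in this shape are what tools like Unsloth and PEFT consume when fine-tuning a model with LoRA adapters.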
Somewhere in that whirlwind, we had more than a few LOL moments, especially when testing prompts that explicitly told LLMs not to make things up. Naturally, they responded with absolute confidence and zero regard for accuracy:
Me: What is nodeshit?
LLM: Nodeshit is a command that allows you to interact with IoX nodes directly from the IoX shell.
Me: What is crap?
LLM: In the context of IoX, ‘crap’ refers to a device that is not working properly or at all. It can also describe a device causing issues or conflicts with the IoX system.
So where are we now? At a pretty exciting milestone: we’re seeing up to 80% accuracy using LLaMA 3 8B, with reranking as a first step. Next up: exploring model and technique optimizations, such as caching, and defining the minimum hardware requirements needed to maximize accuracy and speed.
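Reranking here means retrieving a broad candidate set first, then reordering it by relevance and keeping only the best few for the model’s context window. A toy sketch of the idea, using simple keyword overlap as the scorer (a real pipeline would use a trained reranker model; all names and documents below are hypothetical):

```python
# Toy reranking sketch: retrieve many candidates, reorder by a relevance
# score, and keep only the top few for the LLM's context window.
# Keyword overlap stands in for a real cross-encoder reranker here.

def score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def rerank(query: str, candidates: list[str], top_k: int = 2) -> list[str]:
    """Reorder retrieved candidates by score, keeping the top_k best."""
    return sorted(candidates, key=lambda d: score(query, d), reverse=True)[:top_k]

# Hypothetical retrieved chunks, e.g. from a vector-database lookup.
candidates = [
    "The thermostat node reports temperature and humidity.",
    "OL sets the on level for a dimmer device.",
    "Query the device status before sending a command.",
]
top = rerank("what does the OL on level property do", candidates)
print(top[0])  # the dimmer/OL chunk scores highest
```

The payoff is that the LLM only ever sees a handful of highly relevant chunks instead of everything retrieval returned, which matters a lot for small local models with tight context budgets.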
Want to dive in? Check out our GitHub for the code, models, and all the AI goodies we’ve cooked up so far.