Use Large Language Models (LLMs) in Unity Locally!
Did you know that you can run large language models (LLMs) locally on your PC? If you have a decent GPU or CPU, you can harness the power of AI models without relying on cloud-based services. This means faster response times, better privacy, and more flexibility in your AI-driven applications. As much as we want to simplify the process of creating Smart NPCs and Virtual Assistants for developers with Neocortex, we also want everyone to have access to this technology without restrictions.
Ollama is a powerful tool that makes it easy to run various LLMs on your local machine. To further simplify integration for game developers, we have released a Unity SDK that allows you to incorporate local AI models directly into your Unity projects. Whether you're building AI-driven NPCs or interactive chat experiences, this SDK streamlines the process.
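To give a feel for what "running an LLM locally" looks like under the hood, here is a minimal sketch of querying a locally running Ollama server over its HTTP API. It assumes Ollama is installed and serving at its default address, `http://localhost:11434`; the model tag `llama3.2` is just an example from the Ollama model library, not something this post prescribes. (The Unity SDK wraps calls like this in C#; Python is used here purely for illustration.)

```python
import json
from urllib import request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama to return a single complete JSON
    # response instead of a stream of partial tokens.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str, timeout: float = 60.0) -> str:
    """Send a prompt to the local Ollama server and return its reply text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=timeout) as resp:
        # The generated text lives under the "response" key.
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server and a pulled model):
# print(ask("llama3.2", "Greet the player in one short sentence."))
```

Everything runs on your own machine: no API keys, no network round trip beyond localhost.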
With Neocortex Ollama Support, you can download and use popular LLMs such as DeepSeek, Llama, Gemma, Mistral, Qwen, and Phi. These models cover a range of use cases, making them valuable for game development and beyond.
Our Neocortex Unity SDK also includes UI elements, making it easy to create a seamless chat interface. You can interact with your chosen LLM in real time, enhancing gameplay and player experiences with dynamic AI conversations.
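A real-time chat interface boils down to keeping the conversation history and resending it with each turn. The sketch below shows that pattern against Ollama's chat endpoint, whose message format (a list of `{"role", "content"}` entries) is part of Ollama's documented API; the model tag is again an example, and the helper names are our own, not SDK API.

```python
import json
from urllib import request

# Ollama's default local endpoint for multi-turn chat.
CHAT_URL = "http://localhost:11434/api/chat"

def add_turn(history: list, role: str, content: str) -> list:
    # Ollama's chat API is stateless: the client resends the full
    # message history on every call, which is how context is kept.
    history.append({"role": role, "content": content})
    return history

def chat(history: list, model: str = "llama3.2") -> str:
    """Send the accumulated history to the local model and record its reply."""
    body = json.dumps({"model": model, "messages": history, "stream": False})
    req = request.Request(
        CHAT_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        reply = json.loads(resp.read())["message"]["content"]
    add_turn(history, "assistant", reply)
    return reply

# Example (requires a running Ollama server):
# history = add_turn([], "user", "You are a blacksmith NPC. Say hi.")
# print(chat(history))
```

A chat UI like the one in the SDK would call a loop of this shape: append the player's message, send, append and display the model's reply.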
Video Tutorial
To see it in action, check out our full video tutorial and try it out for yourself. We'd love to hear your feedback, so don't forget to sign up for Neocortex and start building with local AI today!
Written by
Sercan Altundas
Date
Mon Mar 17 2025