Announcing Zotto: A context-aware AI pair programmer built for Xojo

Hi everyone,

Like many of you, I’ve been looking for ways to integrate LLMs into my daily Xojo workflow. While copy-pasting code snippets into ChatGPT or Claude works for creating small methods, it falls apart when you need the AI to understand your specific project structure, your custom classes, or your API nuances.

I was hoping that Xojo’s AI assistant Jade would solve this, but I have found it woefully inadequate, so I built a tool to solve the problem myself.

Meet Zotto.

Zotto is a cross-platform desktop app (built entirely in Xojo) that connects directly to the Xojo IDE. It’s designed to be a true “pair programmer” that actually sees and understands the project you are working on.

How is this different from Xojo’s Jade?

It’s all about context.

Zotto doesn’t just guess about your project; it uses a custom-built AST parser (XojoKit) to read your open project structure in real-time. It knows your class hierarchy, your method signatures, and your properties.
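For anyone curious what “reading the project structure” can look like in practice, here is a purely illustrative Python sketch (not Zotto’s actual XojoKit parser; the sample source and the regex are my own simplification) that pulls method signatures out of Xojo’s text project format:

```python
import re

# Hypothetical sample in Xojo's text (.xojo_code) project format.
SAMPLE = """#tag Method, Flags = &h0
Sub Connect(host As String, port As Integer)
End Sub
#tag EndMethod
#tag Method, Flags = &h0
Function IsReady() As Boolean
End Function
#tag EndMethod"""

# Matches "Sub Name(params)" or "Function Name(params)" at line start.
SIGNATURE = re.compile(r"^(Sub|Function)\s+(\w+)\s*\(([^)]*)\)", re.M)

def extract_signatures(source: str):
    """Return (kind, name, params) tuples for each method found."""
    return [(k, n, p.strip()) for k, n, p in SIGNATURE.findall(source)]

for sig in extract_signatures(SAMPLE):
    print(sig)
```

A real parser builds a full AST rather than regex matches, but even this toy version shows why text-format projects are so much friendlier to tooling than the binary format.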

Zotto detects the running IDE and its open projects, and can connect to one project at a time.

It can search through your project to find code snippets and suggest refactoring.

Zotto comes with several built-in tools for interacting with an open Xojo project and even includes a custom tool for offline searching of Xojo documentation to reduce hallucinations.

Because Zotto is context-aware, you can ask, “How do I implement the interface defined in MyCore.Utils?” and it knows exactly what that interface looks like without you pasting it.

I know many developers here are protective of their source code. Zotto is intentionally a read-only assistant, partly because of limitations in Xojo’s IDE scripting (principally speed, and the inability to refresh a project from disk without closing it). Rather than writing to disk, Zotto suggests copy-and-pasteable changes that you implement in the IDE yourself. While a little slower than direct writes, this gives you complete control over exactly what gets implemented.

Models

Zotto is model agnostic: you plug in your own provider, so you’re not limited to Anthropic as you are with Jade. Zotto currently supports the following providers:

  • LM Studio & Ollama (running either locally on the same computer or reachable over your network)
  • OpenAI
  • Anthropic
  • Google Gemini
  • Any provider compatible with the OpenAI Responses API (e.g. OpenRouter)
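As an aside for anyone new to local providers: LM Studio and Ollama both expose OpenAI-compatible HTTP endpoints, so one request shape covers several of the providers above. A minimal sketch of assembling such a request (the base URLs are the tools’ documented defaults; this is generic OpenAI-style chat-completions usage, not Zotto’s code):

```python
import json

# Documented default local endpoints (adjust to taste):
#   Ollama:    http://localhost:11434/v1
#   LM Studio: http://localhost:1234/v1
def build_chat_request(model, messages,
                       base_url="http://localhost:11434/v1",
                       api_key=None):
    """Assemble an OpenAI-compatible chat completions request.

    The same shape works against OpenAI, OpenRouter, LM Studio and
    Ollama - only base_url and api_key change per provider.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers typically need no key
        headers["Authorization"] = f"Bearer {api_key}"
    return {
        "url": f"{base_url}/chat/completions",
        "headers": headers,
        "body": json.dumps({"model": model, "messages": messages}),
    }

req = build_chat_request("llama3.1", [{"role": "user", "content": "Hello"}])
print(req["url"])
```

Swapping providers is then just a matter of changing `base_url` and supplying a key for the hosted ones.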

Major features

  • Cross-platform: Runs natively on macOS, Windows, and Linux.
  • Built 100% in Xojo.
  • Can be used 100% offline if desired.
  • Supports all major LLM providers, both local/on-device and remote proprietary.
  • Read-only access to your projects - no destructive activity.
  • Supports custom MCP servers - supply your own tools if you like.
  • Comprehensive built-in tools to read and understand a connected project.
  • Full theme engine. Supports light and dark mode and allows customising many aspects of the UI.
  • Syntax highlighting of Xojo code (again, colours are customisable).
  • Renders models’ Markdown output in real time.
  • Automatically compacts conversations to keep the flow going.
  • Lightweight: Uses about 60 MB of RAM on Mac - leaves plenty of space for on-device models.
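On the “automatically compacts conversations” point: compaction generally means folding older messages into a summary once a size budget is exceeded, so the chat can continue indefinitely. A rough Python sketch of the general technique (an assumption about the usual approach, not Zotto’s implementation; `summarize` would normally be an LLM call):

```python
def compact(messages, max_chars, summarize):
    """Fold older messages into one summary once a size budget is hit.

    `summarize` is any function taking a list of messages and
    returning a string; in practice it would be a model call.
    """
    total = sum(len(m["content"]) for m in messages)
    if total <= max_chars:
        return messages  # still within budget, nothing to do
    kept, used = [], 0
    for m in reversed(messages):  # keep the most recent messages whole
        if used + len(m["content"]) > max_chars:
            break
        kept.append(m)
        used += len(m["content"])
    kept.reverse()
    dropped = messages[: len(messages) - len(kept)]
    summary = {"role": "system", "content": summarize(dropped)}
    return [summary] + kept
```

Real implementations count tokens rather than characters, but the shape is the same: recent turns stay verbatim, older ones collapse into a single summary message.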

More screenshots

Looking for Beta Testers

I’ve been using this daily for my own work, and it has already saved me a lot of time. I’m now looking for a small group of beta testers to put it through its paces before a wider release. Despite publishing countless open source libraries for the Xojo community over the last 25 years, this is my first published app written in Xojo, so I need to make sure I’ve ironed out any issues with distribution.

If you are interested in trying it out, please drop a comment below or message me. I’d love to hear if this workflow fits how you code.

I’d also value the thoughts of @Geoff_Perlman or one of the engineers on this (@Paul_Lefebvre?).


Count me in. Copy-and-paste has gotten so annoying over the last few months. I always find bugs when testing.


Hi @GarryPettet ,

I’m interested in being a beta tester.

If you need my email address, please reach out via a private forum message.

Thank you,
Anthony


I’m in too. This looks very promising. Thanks Garry, for all you do for us!

Cliff

Count me in too.

I’d love to try it out, but I don’t have access to premium tiers on anything. Is that a requirement?

That seems very interesting, I would love to try it out! Count me in.

Very interesting. Count me in

O.M.F.G !! Garry strikes again !! :scream:

It’s designed to be a true “pair programmer” that actually sees and understands the project

A little bit like Cursor?
Does it work on 2025r2.1?

Looks great! Yes, I’d like to try it, thx.

I will take it for a spin :smiley:

Thanks for the interest so far guys - keep it coming :slight_smile:

To use Gemini, Claude or one of the other proprietary providers, you’ll need an API key. If you can install Ollama or LM Studio, either on the same computer or on one reachable over your network, you can run whatever model will fit on your GPU. If you have a Mac, you can run some pretty big models if you’ve got enough RAM.


It’s built with the newest version of Xojo.

It works with Xojo Project (text) format projects - that’s important - and it won’t work with XML or binary format projects.

It currently won’t write code directly to the IDE. I have been able to get this working, but because IDE scripting is frankly shoddy, it’s a bit too ropey to ship in V1.

M1 Mac

If you want someone to hold your hand through a lot of things, I can volunteer to be your idiot tester.


@GarryPettet

I’ve got an M4 Pro Mini with 64 GB of RAM if you want someone who can test running large models locally (which is how I intend to use Zotto: with a large model running locally).


Anthony

I’m available for testing too :wink:

Count me in, I can test on an M4 Mac (24 GB RAM) and Windows 11 with a GeForce 4070 (12 GB) :grinning_face:

For the record, Claude Code can perfectly well read and write Xojo projects and Xojo code directly in text form.
You only need to feed the compile errors back to Claude.


Thanks for the interest. I have been sorting out notarising the Mac app, etc., but have hit a snag communicating with the IDE on Windows. One of those “works perfectly fine on Mac but not Windows” issues.

Once I sort this I’ll rebuild and start distributing the beta.

Urgh. For reasons unknown I can’t even load the IDE Communicator V2 example project (on both Mac and Windows) as I get this error with 2025r3:

Does anyone have a copy of the example project they can share with me so I can fire it up and figure out what port I need to bind to in order to speak with the IDE on Windows?

Pinging @Greg_O since he’s the last of the institutional knowledge on build automation and IDE communicator.