Building an AI Programming Assistant in Ruby

9 July 2025

I recently read this fantastic blog post, which inspired me to build my very own AI Programming Assistant. What struck me first was how simple the agent that underpins all of this actually is to build.

As someone who uses Zed, I rely heavily on its Agent feature in my day-to-day work, and it often feels like magic. Being able to offload much of the ‘grunt’ work to the agent is a huge time-saver and makes building software more enjoyable. I was pleasantly surprised to realise this isn’t simply magic!

The blog post provides an example written in Python, so I thought it’d be interesting to write something similar in Ruby. For my agent I decided to use RubyLLM, though it could easily have been built without it.

One of the benefits of this library is that it makes switching between models easy. Rather than figuring out the intricacies of each provider’s API, everything is abstracted behind a simple RubyLLM.chat call, which makes it ideal for experimenting with different models.
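As a sketch of what that looks like in practice: swapping providers is just a matter of configuring the relevant key and passing a different model name to RubyLLM.chat. The model identifiers below are illustrative; check RubyLLM’s documentation for the names your providers actually support.

```ruby
require 'ruby_llm'

# Configuration sketch: the exact model names are assumptions,
# not a definitive list of what RubyLLM supports.
RubyLLM.configure do |config|
  config.openai_api_key    = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
end

openai_chat    = RubyLLM.chat(model: "gpt-4.1")
anthropic_chat = RubyLLM.chat(model: "claude-sonnet-4")
```

The rest of your code stays identical whichever chat object you use.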

Loop + Tools = Agent

require 'ruby_llm'
require 'dotenv/load'

# Overwrites a file with new content. The description nudges the model
# to read the file first so it edits with full context.
class EditFile < RubyLLM::Tool
  description "Edit the contents of a file. Before editing, use the read file tool."
  param :path, desc: "The path to the file"
  param :new_content, desc: "The new content to write to the file"

  def execute(path:, new_content:)
    File.write(path, new_content)
    { success: true, content: File.read(path) }
  rescue => e
    { error: e.message }
  end
end

# Finds files matching a glob pattern, returning absolute paths.
class FindPath < RubyLLM::Tool
  description "Glob search for paths by filename (supports `**/*.rb`, `src/**`, etc.). " \
              "Best when you know part of a path but not its exact location."
  param :glob, desc: "The glob pattern to search for"

  def execute(glob:)
    matches = Dir.glob(glob).map { |path| File.expand_path(path) }
    { success: true, matches: matches }
  rescue => e
    { error: e.message }
  end
end

# Returns the full contents of a file.
class ReadFile < RubyLLM::Tool
  description "Read the contents of a file."
  param :path, desc: "The path to the file"

  def execute(path:)
    { success: true, content: File.read(path) }
  rescue => e
    { error: e.message }
  end
end

RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
end

TOOLSET = [
  FindPath,
  ReadFile,
  EditFile
].freeze

chat = RubyLLM.chat(model: "gpt-4.1")
chat.with_tools(*TOOLSET)

# The agent loop: read a prompt, let the model (and its tools) respond.
loop do
  puts "You:"
  command = gets&.chomp
  break if command.nil? # exit cleanly on Ctrl-D

  response = chat.ask(command)
  puts "Agent: #{response.content}"
end

A Working Agent

In my example I wrote three tools: FindPath, ReadFile and EditFile. The main ‘agent’ is simply a loop: inside it I collect user input, pass it to the LLM, and print the response.
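Stripped of the LLM entirely, the shape of that loop is easy to see. Here is a toy sketch in plain Ruby where a stubbed ‘model’ decides whether to call a tool; everything here (the StubModel, the tool registry) is invented purely for illustration, not how RubyLLM works internally.

```ruby
# Toy illustration of the loop + tools pattern, with the LLM replaced
# by a stub. All names here are invented for the sketch.
class StubModel
  # A real model returns either plain text or a tool-call request;
  # this stub "requests" the read_file tool when asked to read a file.
  def ask(prompt)
    if prompt =~ /read (\S+)/
      { tool: :read_file, args: { path: $1 } }
    else
      { text: "(no tool needed for: #{prompt})" }
    end
  end
end

TOOLS = {
  read_file: ->(path:) { File.read(path) }
}.freeze

def run_turn(model, prompt)
  response = model.ask(prompt)
  if (tool = response[:tool])
    # Execute the requested tool and hand its result back as the answer.
    TOOLS.fetch(tool).call(**response[:args])
  else
    response[:text]
  end
end
```

The real loop differs only in that the ‘decide which tool to call’ step is the LLM, and the tool results are fed back into the conversation rather than returned directly.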

The model has context of the available tools, so it can decide to use them when it thinks it’s necessary. Executing a tool call is again handled nicely by RubyLLM.
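Under the hood, each tool’s description and param declarations get serialized into a schema that is sent to the model alongside your prompt. Here is a rough, hypothetical sketch of what that flattening might look like; RubyLLM’s actual internals will differ in detail, and the tool_schema helper below is my own invention.

```ruby
require 'json'

# Hypothetical sketch: flatten a tool's metadata into the kind of
# JSON-schema shape that chat-completion APIs expect for tools.
# This is NOT RubyLLM's real serialization code.
def tool_schema(name:, description:, params:)
  {
    name: name,
    description: description,
    parameters: {
      type: "object",
      properties: params.transform_values { |desc| { type: "string", description: desc } },
      required: params.keys
    }
  }
end

schema = tool_schema(
  name: "read_file",
  description: "Read the contents of a file.",
  params: { path: "The path to the file" }
)
puts JSON.pretty_generate(schema)
```

This is why good descriptions matter: they are the only information the model has when deciding whether, and how, to call your tool.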

So there you have it: by combining three simple tools, a loop and an LLM, I can instruct an agent from my terminal to make changes directly to my code. Pretty cool if you ask me!

You can find the gist for this example here.