
Personal AI — pi-mono on Raspberry Pi 5

a personal ai agent running on the pi 5 that i can talk to via telegram, and that can:

  • read/write/publish posts on my ghost blog
  • read/create/update notes in my obsidian vault

built with pi-mono — specifically pi-ai (llm api), pi-agent-core (agent runtime + tool calling), and pi-web-ui (optional dashboard later).


telegram <──> bot server (pi 5) <──> pi-agent-core <──> llm
                                          │
                                    ┌─────┴─────┐
                               ghost tools   obsidian tools
                                    │              │
                            ghost admin api   vault (markdown files)
                                              synced via syncthing

  • bot server: a small typescript process on the pi that listens for telegram messages via the bot api (long polling or webhook)
  • pi-agent-core: handles the conversation loop, decides when to call tools, manages state/memory
  • pi-ai: talks to the llm provider (cloud api like anthropic/openai, or local via ollama)
  • ghost tools: custom AgentTool implementations that call the ghost admin api (create post, list posts, update post, publish, etc.)
  • obsidian tools: custom AgentTool implementations that read/write markdown files directly in the synced vault folder on the pi
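the tool pattern is the same in every phase: a name and description the llm sees, a json schema for the arguments, and an execute function. the real AgentTool interface lives in pi-agent-core — the shape below is an assumption for illustration, not the actual signature:

```typescript
// hypothetical shape of a tool definition — check pi-agent-core for the
// real AgentTool interface; the field names here are assumptions.
interface AgentTool {
  name: string;                        // what the llm calls
  description: string;                 // shown to the llm so it knows when to use it
  parameters: Record<string, unknown>; // json schema for the arguments
  execute: (args: Record<string, unknown>) => Promise<string>;
}

// a trivial tool following that shape, for verifying tool calling end-to-end
const pingTool: AgentTool = {
  name: "ping",
  description: "returns pong — used to verify tool calling works",
  parameters: { type: "object", properties: {} },
  execute: async () => "pong",
};
```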

phase 1 — foundation (get pi-mono running on the pi)

  • install node.js 22+ on the pi 5
  • clone pi-mono, npm install, npm run build
  • get an api key (anthropic or openai) — or install ollama + a small model (llama3.2:3b) for fully local/free usage
  • run the coding agent cli to verify everything works
  • decide on llm: cloud api (faster, smarter, ~$5-20/mo) vs local ollama (free, private, slower on pi hardware)
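the foundation steps, sketched as shell commands (the pi-mono repo url and the nodesource install route are assumptions — adjust to your setup):

```shell
# install node.js 22 via nodesource (assumes raspberry pi os / debian)
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs

# clone and build pi-mono (repo path assumed — use the actual url)
git clone https://github.com/badlogic/pi-mono.git
cd pi-mono
npm install
npm run build

# cloud api: export a key; or go local with ollama instead
export ANTHROPIC_API_KEY=...   # or OPENAI_API_KEY
# curl -fsSL https://ollama.com/install.sh | sh && ollama pull llama3.2:3b
```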

phase 2 — obsidian vault (sync + tools)

the vault is just markdown files — no api needed, just filesystem access. the main work is syncing the vault to the pi and building the tools.
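on raspberry pi os / debian the syncthing install is a couple of commands (assuming the login user is `pi`; the package ships a systemd template unit):

```shell
# install syncthing from the debian repos
sudo apt-get install -y syncthing

# run it as the pi user and start on boot
sudo systemctl enable --now syncthing@pi.service

# web ui is at http://127.0.0.1:8384 — pair the desktop as a device there
# and share the vault folder to e.g. /home/pi/vault/
```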

  • install syncthing on both the desktop and the pi 5
  • configure syncthing to sync the obsidian vault folder between desktop (onedrive path) ↔ pi 5 (e.g. /home/pi/vault/)
  • syncthing syncs over the LAN, so changes propagate near-instantly; it also works over the internet (via syncthing's relay servers) if the machines aren't on the same network
  • keep onedrive sync on the desktop too — both can coexist fine (desktop syncs to onedrive AND to pi via syncthing)
  • build obsidian tools as AgentTool implementations:
    • vault_search — full-text search across notes (rg/grep the vault)
    • vault_list_notes — list notes, optionally filtered by folder/tag
    • vault_read_note — read a note by path or name
    • vault_create_note — create a new note (with frontmatter, tags)
    • vault_update_note — append to or edit an existing note
    • vault_delete_note — delete a note
  • respect obsidian conventions: yaml frontmatter, [[wikilinks]], folder structure, tags
  • test tools standalone before wiring into the agent
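a minimal sketch of what vault_create_note could look like — plain node filesystem calls, no obsidian api involved. the vault path, frontmatter fields, and function names are my assumptions, not pi-mono's:

```typescript
import * as fs from "node:fs/promises";
import * as path from "node:path";

const VAULT = process.env.VAULT_PATH ?? "/home/pi/vault"; // assumed location

// build a note body with yaml frontmatter, the way obsidian expects it
function renderNote(title: string, tags: string[], body: string): string {
  const fm = [
    "---",
    `title: ${title}`,
    `tags: [${tags.join(", ")}]`,
    `created: ${new Date().toISOString()}`,
    "---",
  ].join("\n");
  return fm + "\n\n" + body + "\n";
}

// vault_create_note: write a new markdown file, refusing to overwrite
async function createNote(relPath: string, title: string, tags: string[], body: string): Promise<string> {
  const target = path.resolve(VAULT, relPath);
  if (!target.startsWith(path.resolve(VAULT) + path.sep)) {
    throw new Error("path escapes the vault"); // basic traversal guard
  }
  await fs.mkdir(path.dirname(target), { recursive: true });
  await fs.writeFile(target, renderNote(title, tags, body), { flag: "wx" }); // "wx" fails if the file exists
  return `created ${relPath}`;
}
```

vault_read_note, vault_update_note, etc. are the same idea with `fs.readFile`/`fs.appendFile`; vault_search can just shell out to `rg`.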
phase 3 — ghost blog integration

  • generate a ghost admin api key (ghost admin → integrations → add custom integration)
  • build ghost tools as AgentTool implementations:
    • ghost_list_posts — list recent/draft/published posts
    • ghost_read_post — read a specific post by slug or id
    • ghost_create_post — create a new draft post
    • ghost_update_post — edit an existing post
    • ghost_publish_post — publish a draft
    • ghost_delete_post — delete a post
  • ghost admin api auth uses jwt (short-lived tokens signed with the api key) — implement the token generation
  • test tools standalone before wiring into the agent
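the jwt generation needs no extra dependency — node's crypto module is enough. ghost admin api keys have the form `id:secret`; the token is an hs256 jwt signed with the hex-decoded secret, `kid` set to the key id, `aud` set to `/admin/`, and a max 5-minute lifetime:

```typescript
import { createHmac } from "node:crypto";

// base64url without padding, as jwt requires
const b64url = (b: Buffer): string =>
  b.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// ghost admin api keys look like "<id>:<hex secret>"
function ghostToken(apiKey: string): string {
  const [id, secret] = apiKey.split(":");
  const now = Math.floor(Date.now() / 1000);
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT", kid: id })));
  const payload = b64url(Buffer.from(JSON.stringify({ iat: now, exp: now + 300, aud: "/admin/" })));
  const sig = b64url(
    createHmac("sha256", Buffer.from(secret, "hex")).update(`${header}.${payload}`).digest()
  );
  return `${header}.${payload}.${sig}`;
}
```

the token then goes on every admin api request as `Authorization: Ghost <token>`.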
phase 4 — telegram bot

  • create a telegram bot via @BotFather, get the bot token
  • build a small telegram bot server using the telegram bot api (long polling is simpler, no need for a public url/webhook)
  • wire incoming messages → pi-agent-core conversation loop → send response back to telegram
  • restrict the bot to only respond to your telegram user id (security — don’t let randoms talk to your agent)
  • handle long responses (telegram has a 4096 char message limit, split or summarize)
phase 5 — polish

  • persistent conversation memory (pi-agent-core supports sessions)
  • system prompt: give the agent personality, context about the blog, writing style instructions, vault structure knowledge
  • systemd service so it starts on boot and auto-restarts
  • optional: pi-web-ui dashboard on a local port for a web chat interface alongside telegram
  • optional: ollama for local inference if not already set up
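two of the fiddly bits above sketched in typescript: splitting long replies at the 4096-char limit (preferring newline boundaries) and the user-id allow-list. function names are mine, not pi-agent-core's:

```typescript
const TELEGRAM_LIMIT = 4096; // telegram's max message length in characters

// split a reply into telegram-sized chunks, breaking at newlines where possible
function splitMessage(text: string, limit = TELEGRAM_LIMIT): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    let cut = rest.lastIndexOf("\n", limit); // last newline before the limit
    if (cut <= 0) cut = limit;               // no newline — hard cut
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}

// allow-list check: only your own numeric telegram user id gets a reply
const ALLOWED_USER_ID = Number(process.env.TELEGRAM_USER_ID);
const isAllowed = (fromId: number): boolean => fromId === ALLOWED_USER_ID;
```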


hardware notes

  • use the 8gb model if running local models
  • official 5v 5a usb-c power supply (prevents crashes under ai load)
  • nvme ssd >> microsd for performance
  • active cooling case recommended for 24/7 operation
  • if using ollama locally: expect ~5-15 tokens/sec with 3b-8b models
  • if using cloud apis: pi hardware doesn’t matter much, it’s just running the bot server + tool calls
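to measure actual throughput rather than guess, ollama prints timing stats (including the eval rate in tokens/sec) with the --verbose flag:

```shell
# --verbose prints timing stats after the reply — "eval rate" is the
# generation speed, useful for checking the 5-15 tokens/sec estimate
ollama run llama3.2:3b --verbose "say hi in one word"
```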

item                      cost
pi 5 8gb + case + psu     ~$105 one-time
nvme ssd (optional)       ~$25 one-time
cloud llm api             ~$5-20/month
ollama (local)            $0 (free)
electricity               ~$1/month
ghost hosting             already have it
telegram bot              free