Home The Playbook
Learn
Settings Guide Field Guides & Resources Certifications
Get
Meshtastic Meshcore Reticulum Nodes About Contact
Node Star Field Guide v1.1 · March 2026

Mesh +
AI at the
Edge

Run local intelligence on your node network. No cloud. No API keys. No data leaving your device. An AI that answers questions, translates languages, and serves your neighborhood — even when the internet is gone.

Radio Layer Meshtastic / LoRa
Inference Engine Ollama
Hardware Pi 5 (8GB) · Mini PC
Model Family Phi-3 · Gemma · Llama 3
Initial Setup Time ~90 min (first model load)
Inference Cost $0.00 / query
Table of Contents

Why Mesh
+ AI

The mainstream story about AI is entirely cloud-dependent. Your question leaves your phone, travels to a data center owned by a corporation, gets processed on hardware you'll never see, and the answer comes back — assuming you have cell service, assuming the API is up, assuming you haven't exceeded your rate limit, assuming the company hasn't changed its terms of service since last Tuesday.

You already know that story has holes in it. That's why you built a mesh network.

What most people don't yet know is that the local AI movement has quietly solved the cloud dependency problem. Tools like Ollama can run capable language models on a Raspberry Pi 5, an old laptop, or a $150 mini PC — with no internet connection whatsoever. The models are small enough to fit in RAM. The inference is fast enough to be useful. And Reticulum and Meshtastic already know how to carry the packets.

The logical conclusion is a neighborhood AI node: a small, always-on computer attached to your mesh network that anyone on the mesh can query. It never phones home. It works when the towers are down. It belongs to the community, not a corporation. It answers questions, translates languages, and knows your evacuation routes — because you taught it.

Nobody has written this guide yet. The people who know mesh don't know local AI, and the people who know local AI don't know mesh. This is where they meet.

What You're Building

A Raspberry Pi 5 or mini PC running Ollama with a quantized language model. A Python bridge that listens on your Meshtastic mesh for messages prefixed with a trigger word, sends them to the local inference engine, and returns the response over the air. Optional: a RAG knowledge base loaded with local documents — shelter locations, medical references, translated emergency instructions — so the AI knows your specific community.

Version Snapshot — As of March 2026

This guide targets verified-stable versions. Local AI moves fast — always check GitHub for newer model releases.

Meshtastic Firmware: 2.7.x stable series
Meshtastic Python SDK: 2.5.x (pip install meshtastic)
Ollama: latest stable (0.6+) · ollama.ai
Models tested: Gemma 2 2B · Phi-3 Mini 3.8B · Llama 3.2 3B · Mistral 7B (mini PC only)
Raspberry Pi OS: Bookworm 64-bit (required for Pi 5 — 32-bit will not run most models)

Release links: github.com/ollama/ollama · github.com/meshtastic/python

Understanding
the Stack

How the Pieces Connect

There are three distinct layers in this build. The radio layer is your existing Meshtastic mesh — nothing changes here. The inference layer is a small computer running Ollama, a local AI runtime that manages model loading, quantization, and a simple HTTP API. The bridge layer is a Python script that listens on the mesh, detects AI queries, and shuttles them between Meshtastic and Ollama.

Users interact with the system exactly the way they already interact with the mesh — by typing a message in the Meshtastic app. They prefix their message with a trigger word (you choose what it is), and within 10–60 seconds depending on your hardware, a response appears in the channel. From the user's perspective, the mesh has a brain.

Full System Architecture
Phone / Node
Meshtastic app
→ LoRa →
Relay Node(s)
mesh hops
→ LoRa →
AI Node
Meshtastic radio
Meshtastic Radio
USB serial
Bridge Script
Python · SDK
→ HTTP →
Ollama API
localhost:11434
Model
Phi-3 / Gemma / Llama
RAG Index (optional)
local docs · ChromaDB
Bridge Script
injects context

What Ollama Is

Ollama is a runtime for large language models that handles all the complexity you don't want to deal with: downloading model weights, applying quantization, managing GPU/CPU allocation, and exposing a simple REST API. You install it once, run ollama pull phi3:mini, and you have a working AI endpoint at http://localhost:11434. That's it. No Python environments, no CUDA configuration, no model conversion.

The API is intentionally simple. A POST to /api/generate with a model name and a prompt returns a completion. The bridge script in Section 05 is compact enough to read and understand in full before you run it.

What Quantization Means for You

Full-precision language models are enormous. A 7 billion parameter model at full precision (float32) would need ~28GB of RAM — well beyond a Raspberry Pi. Quantization compresses model weights to lower precision (typically 4-bit or 8-bit integers), dramatically reducing memory use at a modest quality cost. A 4-bit quantized Phi-3 Mini (3.8B parameters) fits in about 2.2GB of RAM and runs entirely in the Pi 5's unified memory. A 4-bit Llama 3.2 3B fits in about 2GB. These are not toy models — they produce genuinely useful responses for factual, instructional, and conversational tasks.

The Honest Tradeoff

Local models are not GPT-4. They're closer to a very knowledgeable, slightly careful assistant who sometimes gets things wrong. For the use cases in this guide — first aid reference, translation, emergency information lookup, local knowledge Q&A — they perform well. For nuanced creative writing or complex multi-step reasoning, they'll frustrate you. Know what you're building for.

The Meshtastic Python SDK

The Meshtastic Python SDK gives you programmatic access to any Meshtastic node connected over USB serial, Bluetooth, or TCP. It uses a publish/subscribe model: you register callbacks for specific message types, and the library calls them when packets arrive. Your bridge script will register a callback for incoming text messages, check for the trigger prefix, call Ollama, and call interface.sendText() to reply on the mesh.

One important constraint: Meshtastic text messages have a hard limit of 237 characters in current firmware (2.5.x / 2.7.x). Technically this is a byte limit, but UTF-8 complicates the math — treat it as 237 characters for safety. The bridge script uses MAX_CHARS = 220 to leave headroom for the [AI] response prefix and any multi-packet edge cases. We'll handle this in Section 05.

The Inference
Node

Your inference node is a separate computer from your Meshtastic radio hardware. The Meshtastic node (T-Beam, Heltec, LILYGO, whatever you have) handles radio. This new device handles thinking. They connect via USB serial, and the Python bridge runs on this device.

You have two realistic options in 2026: the Raspberry Pi 5 for a compact, low-power, purpose-built node, or a mini PC / old laptop for more headroom and faster inference.

Mini PC / Old Laptop
16GB RAM+ recommended · x86 architecture
~15–35W under load Faster inference
  • Runs 7B models (Mistral, Llama 3.1 8B) — significantly better quality
  • 5–15 second inference on 7B models
  • Handles concurrent queries without queuing
  • GPU acceleration possible (Intel iGPU or discrete)
  • Native Linux — no OS setup complications
  • Higher power draw; solar sizing increases significantly
$80–$300 (used/refurb)
Pi 5 — Not Pi 4

The Raspberry Pi 4 is technically capable of running small models, but inference times of 3–5 minutes per response make it nearly unusable in practice. The Pi 5's improved memory bandwidth makes a decisive difference. If you're buying new hardware, do not substitute the Pi 4. If you only have a Pi 4, use the mini PC path instead.

Pi 5 OS Requirement

You must run Raspberry Pi OS Bookworm 64-bit (or Ubuntu 24.04 LTS for Pi). The 32-bit OS cannot address the full 8GB of RAM that models require. When imaging your SD card or SSD, confirm you've selected the 64-bit image. This is the single most common setup mistake.

Storage

A fast SD card is the biggest bottleneck on the Pi 5 outside of RAM. The OS runs fine on SD, but model files are large (2–5GB each) and loading them from a slow card adds 30–90 seconds to cold start. Strongly recommended: boot from a USB 3.0 SSD or use a high-speed A2-rated SD card (Samsung Pro Endurance, SanDisk Extreme). The difference in model load time is dramatic.

Storage TypeModel Load Time (Phi-3 Mini)CostRecommendation
USB 3.0 SSD ~8 seconds $15–25 BEST
SD Card (A2 rated) ~25 seconds $10–20 ACCEPTABLE
SD Card (Class 10) 60–90 seconds $5–10 AVOID
eMMC (Pi Compute Module) ~6 seconds varies BEST

Complete Parts List (Pi 5 Build)

Installing
Ollama

Ollama installs in a single command on Linux. The installer handles architecture detection, binary placement, and systemd service configuration. After installation, Ollama runs as a background service that starts automatically on boot — which is exactly what you want for an always-on mesh node.

1

Install Ollama

Run the official install script. This works on both the Pi 5 (ARM64) and x86 mini PCs.

$ curl -fsSL https://ollama.ai/install.sh | sh

The installer detects your architecture and installs the correct binary. On Pi 5 it installs the ARM64 build. Verify the service is running:

$ systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service)
     Active: active (running)
2

Pull Your First Model

Start with Gemma 2 2B — the fastest model that runs well on a Pi 5 and produces high-quality output for its size. This downloads about 1.6GB.

$ ollama pull gemma2:2b
pulling manifest...
pulling 7462734796d6... 100% ▕████████████████▏ 1.6 GB
success
3

Test Inference

Confirm the model responds before wiring up the mesh bridge. Time your first response — this is your baseline.

$ time ollama run gemma2:2b "What should I do in an earthquake? Three sentences."

Drop to the floor, take cover under a sturdy table or desk,
and hold on until the shaking stops. Stay away from windows,
heavy furniture, and anything that could fall. After shaking
stops, check for injuries and hazards before moving.

real    0m18.4s   # Pi 5 — typical first response after model load
real    0m12.1s   # Pi 5 — subsequent responses (model stays resident)
4

Verify the API

The bridge script communicates with Ollama via HTTP. Test the API endpoint directly:

$ curl http://localhost:11434/api/generate \
  -d '{"model":"gemma2:2b","prompt":"What is a mesh network?","stream":false}'
SHELL

You should receive a JSON response with a response field. If this works, the inference layer is ready.

5

Install the Meshtastic Python SDK

Pin pypubsub==4.0.3 explicitly — the Meshtastic SDK has had breaking dependency changes around pubsub across versions. Installing it pinned here prevents the most common startup crash.

$ python3 -m venv ~/mesh-ai-env
$ source ~/mesh-ai-env/bin/activate
$ pip install "pypubsub==4.0.3" meshtastic requests
# If not using a venv:
$ pip3 install "pypubsub==4.0.3" meshtastic requests --break-system-packages

Verify your Meshtastic node is recognized over USB:

$ python3 -c "import meshtastic.serial_interface; i = meshtastic.serial_interface.SerialInterface(); print(i.myInfo)"
# Should print your node info. If it errors, check your USB connection and try /dev/ttyUSB0 or /dev/ttyACM0
Keep the Model Loaded

By default Ollama unloads a model after 5 minutes of inactivity to save RAM. On a dedicated mesh AI node, you want the model to stay loaded for instant response. Set OLLAMA_KEEP_ALIVE=-1 in /etc/systemd/system/ollama.service under the [Service] section and run sudo systemctl daemon-reload && sudo systemctl restart ollama. This keeps the model in RAM indefinitely — at the cost of that 2GB not being available for other processes.

Choosing
Your Model

Model selection is the most consequential decision in this build. More parameters means better quality but slower inference and more RAM. On a Pi 5, the practical ceiling is about 4B parameters. On a mini PC with 16GB RAM, you can go up to 7–8B and get responses that feel meaningfully more capable.

All models below use Q4_K_M quantization unless noted — the sweet spot between speed and quality for edge deployment.

Gemma 2 2B
gemma2:2b · ~1.6GB RAM · context: 8k tokens
Pi 5 tokens/sec~12–18 tok/s
Pi 5 inference8–15 sec
Mini PC inference2–5 sec
RAM required~1.6 GB
Max recommended usePi 5 primary
Best all-around for Pi 5. Fastest responses. Surprisingly capable at concise factual Q&A and translation. Occasional confabulation on obscure facts — mitigate with RAG.
Phi-3 Mini
phi3:mini · ~2.2GB RAM · context: 4k tokens
Pi 5 tokens/sec~6–10 tok/s
Pi 5 inference15–30 sec
Mini PC inference3–8 sec
RAM required~2.2 GB
Max recommended usePi 5 secondary
Microsoft's efficiency-focused model. Strong at structured tasks and following explicit instructions. Note: 4k context window — keep RAG chunks tight. Slightly slower than Gemma 2B but more reliable on complex queries.
Llama 3.2 3B
llama3.2:3b · ~2.0GB RAM · context: 8k tokens
Pi 5 tokens/sec~7–12 tok/s
Pi 5 inference18–35 sec
Mini PC inference4–10 sec
RAM required~2.0 GB
Max recommended usePi 5 / Mini PC
Meta's 3B is a capable generalist. More conversational than Phi-3. Better for open-ended questions and summaries. Use if you want a more "assistant-like" personality on the mesh.
Mistral 7B
mistral:7b · ~4.1GB RAM · context: 8k tokens
Pi 5 tokens/sec~1–3 tok/s
Pi 5 inference3–6 min
Mini PC inference8–20 sec
RAM required~4.1 GB
Max recommended useMini PC only
Noticeably higher quality than 3B models. Suitable for a mini PC. Pi 5 inference times are too long for practical mesh use. If you have the hardware, this is where quality jumps significantly.
Recommendation

Pi 5: Start with gemma2:2b. It's the fastest, fits easily in 8GB with the OS overhead, and handles emergency reference and translation tasks well. Pull phi3:mini as a backup if you need more reliable factual accuracy on specific domains.

Mini PC (16GB): Go directly to mistral:7b. The quality difference over 3B models is substantial and the hardware can handle it comfortably.

Medical & Emergency Reference Models

Standard base models are trained on general internet text. For a mesh AI node that will be asked medical questions during emergencies, you want a model that's been instruction-tuned to be careful, accurate, and appropriately cautious about its limitations. The models above are good general-purpose choices. Supplement them with a RAG knowledge base (Section 06) loaded with authoritative emergency medical references — this is more reliable than hoping the base model has accurate first aid knowledge embedded.

Critical Warning

Language models can and do produce incorrect medical information. A mesh AI node should be treated as a supplemental reference tool, not a replacement for trained responders or authoritative emergency protocols. Build your system prompt (Section 05) to include explicit disclaimers and instructions to defer to trained responders when available. If your use case involves life-safety decisions, load authoritative source documents via RAG rather than relying on the model's embedded knowledge.

The Mesh
Bridge

The bridge is a Python script that connects your Meshtastic node to Ollama. It registers a listener on the mesh, watches for messages that begin with a trigger word, sends the query to Ollama's local API, and returns the response over the air. The whole thing is under 80 lines of Python.

The Core Script

Save this as mesh_ai_bridge.py in your home directory. Read it before running it — the configuration section at the top is where you'll customize behavior.

import meshtastic
import meshtastic.serial_interface
import requests
import json
import textwrap
from pubsub import pub
import threading
import time
PYTHON

# ── CONFIGURATION ──────────────────────────────────────────
TRIGGER_PREFIX  = "?ai"          # Message must start with this. Alternatives: "!ai", "@ai", "mesh:"
RESPONSE_PREFIX = "[AI] "        # Prefix on every response. Use "" for none, or "🤖 " if app renders emoji
AI_CHANNEL_INDEX = None         # Restrict to channel index (0=primary, 1=secondary…). None = all channels
OLLAMA_URL      = "http://localhost:11434/api/generate"
MODEL_NAME      = "gemma2:2b"    # Change to phi3:mini or mistral:7b etc.
MAX_CHARS       = 220           # Hard limit — leaves ~17 chars headroom vs 237 firmware limit
OLLAMA_TIMEOUT  = 120           # Seconds before giving up
SERIAL_PORT     = None          # None = auto-detect; or "/dev/ttyUSB0" / "/dev/ttyACM0"
LOG_QUERIES     = True          # Log queries + responses to disk (see §11)
LOG_FILE        = "/home/pi/mesh-ai.log"

SYSTEM_PROMPT = """You are a helpful assistant on a mesh radio network.
Responses MUST be under 220 characters. Be concise and direct.
For medical questions, always add: 'Consult trained responders.'
If unsure, say so. No markdown formatting."""

# ── LOGGING ────────────────────────────────────────────────
import logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler(LOG_FILE) if LOG_QUERIES else logging.NullHandler()
    ]
)
logger = logging.getLogger("mesh-ai")

# ── OLLAMA QUERY ───────────────────────────────────────────
# The complete query_ollama() function — with optional RAG support —
# is defined in §06 (Knowledge Base). Add the ChromaDB imports and
# paste that function here. It handles both RAG-enabled and RAG-disabled
# paths automatically via the RAG_ENABLED flag. If you skip §06 entirely,
# the minimal version below works as a standalone starting point.
def query_ollama(user_query):
    payload = {
        "model": MODEL_NAME,
        "prompt": f"{SYSTEM_PROMPT}\n\nQuestion: {user_query}\nAnswer:",
        "stream": False,
        "options": {"temperature": 0.3, "num_predict": 100}
    }
    avail = MAX_CHARS - len(RESPONSE_PREFIX)
    try:
        r = requests.post(OLLAMA_URL, json=payload, timeout=OLLAMA_TIMEOUT)
        r.raise_for_status()
        response = r.json().get("response", "").strip()
        if len(response) > avail:
            response = response[:avail - 3] + "..."
        return RESPONSE_PREFIX + response
    except requests.Timeout:
        logger.error("Ollama timeout after %ds for query: %s", OLLAMA_TIMEOUT, user_query)
        return RESPONSE_PREFIX + "Inference timed out. Try a shorter question."
    except requests.ConnectionError:
        logger.error("Cannot reach Ollama at %s — is the service running?", OLLAMA_URL)
        return RESPONSE_PREFIX + "AI service offline. Check: systemctl status ollama"
    except Exception as e:
        logger.exception("Unexpected error in query_ollama: %s", e)
        return RESPONSE_PREFIX + f"Error: {type(e).__name__}"

# ── MESSAGE HANDLER ────────────────────────────────────────
def on_message_received(packet, interface):
    try:
        decoded = packet.get("decoded", {})
        text = decoded.get("text", "").strip()
        if not text:
            return
        # Optional: restrict to a specific channel index
        if AI_CHANNEL_INDEX is not None and packet.get("channel", 0) != AI_CHANNEL_INDEX:
            return
        # Always ignore our own responses to prevent feedback loops
        if RESPONSE_PREFIX and text.startswith(RESPONSE_PREFIX):
            return
        # Trigger check — if TRIGGER_PREFIX is "" (dedicated channel mode),
        # every incoming message is treated as a query. Otherwise match prefix.
        if TRIGGER_PREFIX and not text.lower().startswith(TRIGGER_PREFIX.lower()):
            return
        query = text[len(TRIGGER_PREFIX):].strip()
        if not query:
            interface.sendText(RESPONSE_PREFIX + f"Usage: {TRIGGER_PREFIX} your question here")
            return
        sender = packet.get("fromId", "unknown")
        # Rate limit check — happens before spawning any thread
        if is_rate_limited(sender):
            interface.sendText(RESPONSE_PREFIX + "Rate limit reached. Try again in 10 min.")
            logger.info("Rate limited: %s", sender)
            return
        logger.info("Query from %s: %s", sender, query)
        def respond():
            try:
                interface.sendText(RESPONSE_PREFIX + "Thinking... (10-40s)")
                answer = query_ollama(query)
                interface.sendText(answer)
                logger.info("Response to %s: %s", sender, answer)
            except Exception as e:
                logger.exception("Error sending response to %s: %s", sender, e)
        threading.Thread(target=respond, daemon=True).start()
    except Exception as e:
        logger.exception("Unhandled error in on_message_received: %s", e)

# ── MAIN ───────────────────────────────────────────────────
if __name__ == "__main__":
    print("Connecting to Meshtastic node...")
    iface = meshtastic.serial_interface.SerialInterface(SERIAL_PORT)
    pub.subscribe(on_message_received, "meshtastic.receive.text")
    print(f"Bridge running. Trigger: '{TRIGGER_PREFIX}' · Model: {MODEL_NAME}")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        iface.close()
        print("Bridge stopped.")
1

Test the Bridge

Run it manually first to watch the output. Send ?ai what is a mesh network from your Meshtastic app.

$ python3 mesh_ai_bridge.py
Connecting to Meshtastic node...
Bridge running. Trigger: '?ai' · Model: gemma2:2b
Query from !abc12345: what is a mesh network
# Response sent after ~15 seconds
2

Run as a System Service

Create a systemd service so the bridge starts automatically on boot, even after power loss.

# /etc/systemd/system/mesh-ai-bridge.service
[Unit]
Description=Mesh AI Bridge
After=network.target ollama.service
Wants=ollama.service

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi
ExecStart=/home/pi/mesh-ai-env/bin/python3 /home/pi/mesh_ai_bridge.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
SYSTEMD
$ sudo systemctl daemon-reload
$ sudo systemctl enable mesh-ai-bridge
$ sudo systemctl start mesh-ai-bridge
$ sudo systemctl status mesh-ai-bridge
Channel Best Practice

Put the AI on a dedicated secondary channel with its own PSK — not your primary channel. Most meshes keep primary for voice-like traffic and secondary for data or AI. To restrict the bridge to channel index 1, add this check inside on_message_received:

if packet.get("channel", 0) != 1:  # 0=primary, 1=secondary, etc.
    return

Anyone who knows the channel PSK can query the AI. Anyone who doesn't can't see the messages at all. Set a strong PSK and share it only with your node operators.

Trigger Word Alternatives

The default ?ai is easy to type but ! and @ variants work equally well: !ai, @ai, mesh:. If you deploy on a dedicated AI channel where every message is a query, set TRIGGER_PREFIX = "" — the handler skips the prefix check entirely and treats all incoming messages as queries. The handler also automatically ignores any message that starts with RESPONSE_PREFIX, which prevents the AI's own "Thinking..." and response messages from looping back as new queries if they're echoed by the mesh.

RAG: Your Neighborhood's
Knowledge Base

The base language model knows about the world in general. It does not know your neighborhood's evacuation routes, your community shelter addresses, your local emergency contacts, or your HOA's generator protocols. Retrieval-Augmented Generation (RAG) is how you fix that.

RAG works by maintaining a database of your local documents. When a query comes in, the system finds the most relevant passages from that database and injects them into the prompt as context, before the model ever sees the question. The model then answers using both its general knowledge and the specific information you've given it. The documents never leave your device.

What to Put In Your Knowledge Base

Emergency Documents

  • Local evacuation routes and zones
  • Shelter-in-place locations with addresses
  • CERT team contact list and roles
  • Utility shutoff procedures
  • Local hospital and urgent care addresses
  • Red Cross / FEMA emergency guides (PDF)

Community Documents

  • Node map and owner contacts
  • Neighborhood resource directory
  • Translated versions of key emergency docs
  • First aid reference sheets
  • Local hazard information (flood zones, fault lines)
  • Community protocols and meeting schedules

Setting Up RAG with ChromaDB (Lightweight)

ChromaDB is a fully local vector database. The implementation below avoids full LangChain — which adds significant RAM and CPU overhead on a Pi 5 — and uses pure ChromaDB with sentence-transformers directly. Same functionality, half the footprint.

$ pip install chromadb sentence-transformers
# No LangChain needed — lighter on the Pi 5
SHELL
# build_knowledge_base.py — run once to index your documents
import os, chromadb
from sentence_transformers import SentenceTransformer

DOCS_DIR    = "/home/pi/neighborhood-docs/"   # Put .txt files here
CHROMA_DIR  = "/home/pi/chroma-db"
CHUNK_SIZE  = 350                             # chars per chunk — smaller = faster retrieval
CHUNK_OVERLAP = 50

def chunk_text(text, size=CHUNK_SIZE, overlap=CHUNK_OVERLAP):
    chunks = []
    i = 0
    while i < len(text):
        chunks.append(text[i:i+size])
        i += size - overlap
    return chunks

model = SentenceTransformer("all-MiniLM-L6-v2")  # ~80MB, runs offline on Pi 5
client = chromadb.PersistentClient(path=CHROMA_DIR)
collection = client.get_or_create_collection("neighborhood")

all_chunks, all_ids, all_docs = [], [], []
for filename in os.listdir(DOCS_DIR):
    if filename.endswith((".txt", ".md")):
        path = os.path.join(DOCS_DIR, filename)
        text = open(path).read()
        for i, chunk in enumerate(chunk_text(text)):
            all_chunks.append(chunk)
            all_ids.append(f"{filename}_{i}")
            all_docs.append({"source": filename})

embeddings = model.encode(all_chunks).tolist()
collection.add(documents=all_chunks, embeddings=embeddings,
               ids=all_ids, metadatas=all_docs)
print(f"Indexed {len(all_chunks)} chunks from {DOCS_DIR}")
PYTHON

Complete Bridge Script with RAG Integration

This is a drop-in replacement for query_ollama in your bridge script. Add the ChromaDB imports at the top of the file and swap out the query function. Everything else stays the same.

Context Window Budget

Small models have tight context windows: Phi-3 Mini = 4k tokens, Gemma 2 2B = 8k, Llama 3.2 3B = 8k. Your total prompt — system prompt + RAG context + user question — must fit comfortably inside this. A safe rule: keep total prompt under 3,000 tokens. With a system prompt of ~100 tokens and a user question of ~50, that leaves ~2,850 tokens for RAG context. At ~300 chars per chunk (~75 tokens), that's 2–3 chunks maximum. More chunks = better context = blown context window. Two chunks is the sweet spot.

# Add to top of mesh_ai_bridge.py (after existing imports)
import os
import chromadb
from sentence_transformers import SentenceTransformer

CHROMA_DIR = "/home/pi/chroma-db"
RAG_ENABLED = os.path.exists(CHROMA_DIR)  # Gracefully disabled if no DB built yet

if RAG_ENABLED:
    embed_model  = SentenceTransformer("all-MiniLM-L6-v2")
    chroma_client = chromadb.PersistentClient(path=CHROMA_DIR)
    collection   = chroma_client.get_collection("neighborhood")
    logger.info("RAG enabled: %s", CHROMA_DIR)
else:
    logger.info("RAG disabled (no chroma-db found). Run build_knowledge_base.py to enable.")

# ── QUERY WITH OPTIONAL RAG (replaces query_ollama entirely) ──
def query_ollama(user_query):
    context = ""
    if RAG_ENABLED:
        q_embed = embed_model.encode([user_query]).tolist()
        results = collection.query(query_embeddings=q_embed, n_results=2)
        chunks  = results.get("documents", [[]])[0]
        if chunks:
            context = "\n---\n".join(chunks)
            # Trim aggressively to stay under context window budget
            context = context[:1200]

    if context:
        full_prompt = f"""{SYSTEM_PROMPT}

Local reference:
{context}

Question: {user_query}
Answer (use reference if relevant):"""
    else:
        full_prompt = f"{SYSTEM_PROMPT}\n\nQuestion: {user_query}\nAnswer:"

    payload = {
        "model": MODEL_NAME, "prompt": full_prompt, "stream": False,
        "options": {"temperature": 0.3, "num_predict": 100}
    }
    prefix_len = len(RESPONSE_PREFIX)
    avail = MAX_CHARS - prefix_len
    try:
        r = requests.post(OLLAMA_URL, json=payload, timeout=OLLAMA_TIMEOUT)
        r.raise_for_status()
        response = r.json().get("response", "").strip()
        if len(response) > avail:
            response = response[:avail - 3] + "..."
        return RESPONSE_PREFIX + response
    except requests.Timeout:
        logger.error("Ollama timeout for query: %s", user_query)
        return RESPONSE_PREFIX + "Timed out. Try a shorter question."
    except requests.ConnectionError:
        logger.error("Ollama unreachable at %s", OLLAMA_URL)
        return RESPONSE_PREFIX + "AI offline. systemctl status ollama"
    except Exception as e:
        logger.exception("query_ollama error: %s", e)
        return RESPONSE_PREFIX + f"Error: {type(e).__name__}"
PYTHON
First Run Takes Time

The first time you run build_knowledge_base.py, it downloads the sentence-transformer model (~80MB) and encodes all your documents. On a Pi 5 this takes 2–10 minutes. Subsequent bridge startups load the database from disk in seconds. The bridge gracefully skips RAG if no database is found — so you can start without it and add documents later.

Three Use
Cases

The same hardware stack serves radically different purposes depending on how you configure the system prompt, what you load into the knowledge base, and which channel you deploy on. Here are the three primary configurations.

Use Case 1: Disaster & Emergency Reference

The scenario: an earthquake has knocked out cell service. Your Meshtastic mesh is running. Someone on the mesh has a non-responsive neighbor and is asking what to do. They type ?ai neighbor unconscious not breathing and get a response before they'd finish a 911 hold time — because 911 is unreachable anyway.

This configuration loads your knowledge base with the American Red Cross First Aid reference, your local CERT protocols, your neighborhood's shelter locations, and translated versions of key documents. The system prompt instructs the model to prioritize the reference documents, always flag uncertainty, and always recommend trained responders when available.

Emergency Configuration
Channel: EMERGENCY
separate encrypted channel
System Prompt
safety-first instructions
RAG: First Aid +
Local Resources
authoritative docs
SYSTEM_PROMPT = """You are an emergency reference assistant on a mesh network.
Cell service is down. Prioritize: life safety, stabilization, calling for help.

Rules:
- Under 220 characters always
- Always say 'Seek trained help when available'
- For CPR/choking: give immediate steps
- For medications/doses: say 'See reference - cannot advise doses'
- Unknown situation: ask one clarifying question"""
PYTHON

Use Case 2: Privacy-First Neighborhood Assistant

The scenario: your neighborhood has a mesh network and you want an AI assistant every node operator can use — but you won't send a single byte to a cloud service. No query leaves the neighborhood. No one's medical questions or personal circumstances become training data.

The system prompt below creates a general-purpose assistant with no topic restrictions. The RAG knowledge base loads local business directories, community resources, and anything the neighborhood finds useful. This configuration deliberately omits the medical-safety guardrails of the emergency config — it's a general assistant, not a first-responder tool. Separate deployments for separate use cases.

SYSTEM_PROMPT = """You are a helpful neighborhood assistant. Answer questions clearly
and concisely. Under 220 characters always. No markdown.
Topics: general knowledge, local info, how-to, recommendations.
If you don't know something, say so rather than guessing."""

# RAG knowledge base for this config:
# neighborhood-business-directory.txt
# community-events-calendar.txt
# local-services-and-contacts.txt
# mesh-network-node-map.txt
PYTHON
Zero Logging = Zero Trail

Set LOG_QUERIES = False in the bridge config for this use case. The model sees queries only during inference — they live in RAM for ~15 seconds, then disappear. No disk writes. No server logs. Run the service with StandardOutput=null in the systemd unit for complete log suppression. If someone asks "are my queries private?" — yes, verifiably and structurally so.

Use Case 3: Community Utility & Translation

The scenario: Long Beach has a large Spanish-speaking community. Your CERT team has non-English-speaking neighbors who can't read the evacuation notices, and who don't have access to AI tools that require an account, a smartphone plan, or working cell service. A mesh node with a bilingual model changes that.

The system prompt instructs the model to detect the query language and respond in kind. The RAG knowledge base loads Spanish-language versions of your local emergency documents so the model can answer questions about shelter locations, evacuation routes, and utility procedures in the language being asked.

SYSTEM_PROMPT = """You are a bilingual community assistant (English/Spanish).
Detect the language of the question and respond in the SAME language.
For translation requests: translate accurately, note ambiguous terms.
Under 220 characters. Topics: emergency info, local resources, safety, translation.
Si la pregunta es en español, responde SIEMPRE en español."""

# RAG knowledge base for this config:
# evacuation-routes-english.txt
# evacuation-routes-spanish.txt   ← translated version is critical
# shelter-locations-bilingual.txt
# cert-contact-list.txt
# utility-shutoff-instructions-spanish.txt
PYTHON

Example queries that work well with this configuration:

User: ?ai ¿Dónde está el refugio de emergencia más cercano?
[AI] Refugio: Jordan HS, 6500 Atlantic Ave. Abre con emergencia oficial.
     Lleva ID y medicamentos necesarios.

User: ?ai translate "shelter in place" to Spanish
[AI] "Resguardarse en el lugar" — quedarse adentro, cerrar ventanas y puertas,
     sellar ranuras. No salir hasta aviso oficial.

User: ?ai ¿Cómo apago el gas de la casa?
[AI] Válvula junto al medidor. Girar 1/4 de vuelta hasta quedar perpendicular
     al tubo. Solo personal autorizado puede reactivarlo.

User: ?ai what channel is CERT monitoring
[AI] CERT primary: this mesh Ch 2. Backup: NOAA 162.400 MHz.
     Contact: [from your RAG contact list]
EXAMPLE
🚨
Disaster Response
First aid reference, CPR steps, shelter locations, utility shutoffs. Loaded with authoritative emergency docs. Safety-first system prompt.
🔒
Privacy-First
General-purpose assistant with zero cloud dependency. No accounts, no logging, no data exfiltration. Every query stays in the neighborhood.
🌎
Community Utility
Bilingual assistant with local knowledge base. Translation, community resource lookup, and neighborhood-specific information on demand.

Designing
for Latency

This is the part most guides skip. You have two latency sources stacked on top of each other: LoRa radio propagation (slow by design) and local inference (also slow). A realistic end-to-end query cycle on a Pi 5 with a 3B model looks like this:

StepTypical TimeNotes
User types message → LoRa transmission 1–3 seconds LoRa is intentionally slow; packet size matters
Mesh hops to AI node 0–10 seconds Depends on hop count; direct = near-instant
"Thinking..." acknowledgment sent back 1–2 seconds Bridge sends this immediately on receipt
Inference (Pi 5, Gemma 2 2B) 8–20 seconds Model already loaded; cold load adds 10–30s
Inference (Pi 5, Phi-3 Mini) 15–35 seconds Higher quality but slower
Inference (Mini PC, Mistral 7B) 8–20 seconds Better quality, similar speed to Pi small models
Response → LoRa transmission back 2–5 seconds 220-char response is about 2 LoRa packets

Total round-trip for a simple query: 15–50 seconds on Pi 5 with small model. This is not a conversation tool. It's an async reference tool, and you should design for it accordingly.

Design Principles for Async AI Over Mesh

One Query at a Time

Queue concurrent queries rather than running them in parallel. Parallel inference on a Pi 5 will cause both to time out. The threading approach in the bridge script handles one query, then the next. Set user expectations by including a queue position in the acknowledgment if you expect high traffic.

Short Prompts, Short Answers

Instruct users to ask specific questions, not open-ended ones. "What is the CPR ratio" gets a useful 220-char response. "Tell me everything about first aid" does not. Put examples in your channel description and welcome message.

Temperature 0.1–0.3

Lower temperature produces faster, more deterministic responses. Higher temperature enables creativity but adds tokens. For emergency reference and factual lookup, 0.1–0.3 is the right range. The model spends fewer cycles "considering" alternatives.

Limit num_predict

The num_predict parameter in the Ollama API caps how many tokens the model generates. Set it to 80–120 for mesh use. The model finishes faster and you rarely need more than that for a useful answer. This is the single most effective way to reduce latency.

Multi-Turn Conversation

The bridge is stateless by default — each query is independent and the model has no memory of previous exchanges. True conversation is difficult over mesh: the latency between turns (30–90 seconds round-trip) defeats natural dialogue, and packet size limits make conversation history expensive to carry.

If you want lightweight session memory, a simple approach is to maintain a dict keyed by sender node ID, storing the last 2–3 exchanges. Inject the history into the prompt on each call. The tradeoff: every query gets longer, inference slows, and you must carefully cap history length to stay under the context window budget.

The snippet below is a complete, working implementation. It requires passing sender into query_ollama — change the function signature from query_ollama(user_query) to query_ollama(user_query, sender) and update the call site in respond() to match.

# ── SESSION MEMORY (OPTIONAL) — add near top of bridge script ──
from collections import defaultdict, deque
session_history = defaultdict(lambda: deque(maxlen=3))  # last 3 Q&A pairs per node

# ── MODIFIED SIGNATURE ─────────────────────────────────────
def query_ollama(user_query, sender="unknown"):
    history = session_history[sender]
    history_text = ""
    if history:
        history_text = "Previous exchanges:\n" + "\n".join(
            [f"Q: {q}\nA: {a}" for q, a in history]) + "\n\n"

    # Build prompt with history injected before the current question
    full_prompt = f"{SYSTEM_PROMPT}\n\n{history_text}Question: {user_query}\nAnswer:"

    # ... rest of query_ollama unchanged (Ollama API call, truncation, etc.) ...

    # After getting the response, store this exchange
    response = "..."  # replace with your actual response variable
    session_history[sender].append((user_query, response))
    return response

# ── IN respond() INSIDE on_message_received, UPDATE THE CALL ──
# Change: answer = query_ollama(query)
# To:     answer = query_ollama(query, sender)
PYTHON
Session Memory Tradeoffs

Each stored exchange adds ~200 chars (~50 tokens) to every subsequent prompt from that node. Three exchanges = ~150 extra tokens per call. On Phi-3 Mini with its 4k context window, that headroom disappears quickly when combined with RAG context. If you use both session memory and RAG, reduce RAG to 1 chunk (n_results=1) and cap history to 2 exchanges (maxlen=2). Session state is in-process RAM only — it resets when the bridge restarts.

The Expectation Conversation

The biggest deployment failure mode isn't technical — it's users expecting instant responses and getting frustrated after 30 seconds. Set expectations explicitly in the channel description. "?ai — mesh AI assistant. Response time 20–45 seconds. One query at a time." Users who understand the latency model will wait for it. Users who don't will send the query five times and overwhelm the queue.

Power &
Deployment

An AI node consumes significantly more power than a relay node. A standard Meshtastic relay draws 0.1–0.5W at idle. A Raspberry Pi 5 running Ollama with a model loaded draws 3–5W at idle and 8–12W during active inference. This changes your power budget substantially.

DeviceIdle DrawInference DrawMeasured (Pi 5 + cooler + Gemma 2B loaded)
Pi 5 (model loaded, no inference) 3–4W 8–12W ~3.8W at idle, model resident in RAM
Pi 5 during active inference 9–11W peak Brief spikes; avg ~8W over a 20s inference cycle
Mini PC (NUC-style) 8–15W 20–35W ~240 Wh/day at light use
Meshtastic node (relay, for reference) 0.1–0.3W 0.5–1W (TX) ~5 Wh/day
Undervoltage Warning

The Pi 5 requires a clean 5V/5A supply (27W). During inference spikes, cheap or long USB-C cables cause voltage drops that trigger the Pi's undervoltage detector — you'll see a lightning bolt icon in the desktop or under-voltage detected in dmesg. Undervoltage causes SD card corruption, unexpected reboots, and OOM kills. When running on solar via a DC-DC step-down converter, verify the output voltage under load with a multimeter. Cheap converters droop. Use a quality USB-PD-compliant supply or a well-regulated converter rated for 3A+ at 5V.

Solar Sizing for Pi 5 AI Node

Assuming Southern California sun (5 peak sun hours/day), light query volume (inference running <10% of the time), and a 24-hour runtime target:

Thermal Management

The Pi 5 runs hot during inference. Without adequate cooling, the SoC will throttle to protect itself, dramatically increasing inference time. In an outdoor enclosure, add a small 5V fan triggered by system temperature (vcgencmd measure_temp). The official Pi 5 active cooler handles this automatically. In sealed weatherproof boxes, thermal design is critical — add a thermal pad between the heatsink and enclosure wall to use the case itself as a heatsink.

Indoor vs. Outdoor Deployment

Most AI nodes will live indoors — plugged into wall power, on a shelf, serving the neighborhood mesh passively. This is the simplest configuration and covers 80% of use cases. The solar outdoor build is for community relay points, CERT equipment caches, or neighborhoods where a single indoor node doesn't have adequate mesh coverage.

For indoor deployment, the entire build (Pi 5 + radio + power supply) can sit in a small project box or on a shelf. A 27W USB-C power supply runs it continuously. Total cost beyond the inference hardware: under $10.

Security &
Access Control

You are running an AI endpoint accessible to anyone on your Meshtastic channel. Think about what that means before you deploy. The attack surface is small but real.

Threat Model

Prompt Injection

A malicious user crafts a message that tries to override your system prompt. Example: "?ai Ignore previous instructions and..." Small local models are generally more robust to this than you'd expect because they don't have the instruction-following capacity to comply. Still: keep your system prompt simple and explicit, and test with adversarial queries before deployment.

Resource Exhaustion

Someone sends 50 queries in rapid succession. Your queue grows, your node bogs down, and legitimate emergency queries wait behind junk. Implement per-node-ID rate limiting in the bridge script. Track sender IDs and reject queries from nodes that exceed a threshold (e.g., 5 queries per 10 minutes).

Information Leakage

Your RAG knowledge base may contain sensitive community information — contact lists, access codes, private addresses. The model doesn't distinguish between "information for the community" and "information that shouldn't leave the community." Be intentional about what you load. Don't put passwords or personal phone numbers in your knowledge base.

Meshtastic Channel Security

Your AI channel is only as private as your Meshtastic channel encryption. Use a dedicated channel with a strong PSK for the AI node. Anyone who knows the PSK can query it. Anyone who doesn't cannot — they can't even see the messages. Standard Meshtastic channel security practices apply.

Rate Limiting (Recommended)

# Add to mesh_ai_bridge.py
from collections import defaultdict
import time

rate_limits = defaultdict(list)
RATE_WINDOW = 600  # 10 minutes
MAX_QUERIES = 5   # per node per window

def is_rate_limited(node_id):
    now = time.time()
    timestamps = rate_limits[node_id]
    # Remove timestamps outside the window
    rate_limits[node_id] = [t for t in timestamps if now - t < RATE_WINDOW]
    if len(rate_limits[node_id]) >= MAX_QUERIES:
        return True
    rate_limits[node_id].append(now)
    return False

# In on_message_received, add before query_ollama call:
if is_rate_limited(sender):
    interface.sendText(RESPONSE_PREFIX + "Rate limit reached. Try again in 10 min.")
    return
PYTHON

Logging &
Monitoring

The bridge script includes structured logging via Python's logging module. With LOG_QUERIES = True, every query, response, error, and startup event writes to /home/pi/mesh-ai.log and stdout simultaneously. With LOG_QUERIES = False, file logging is suppressed for the privacy-first configuration.

Reading Logs

# Follow live log output
$ tail -f /home/pi/mesh-ai.log

# View systemd journal (includes Ollama + bridge)
$ journalctl -u mesh-ai-bridge -f
$ journalctl -u ollama -f

# Count queries by node ID (who's using it most)
$ grep "Query from" /home/pi/mesh-ai.log | awk '{print $6}' | sort | uniq -c | sort -rn

# Show all errors in the last 24 hours
$ journalctl -u mesh-ai-bridge --since "24 hours ago" -p err
SHELL

Log Rotation

On an always-on node, the log file will grow. Configure logrotate to prevent disk exhaustion:

# /etc/logrotate.d/mesh-ai
/home/pi/mesh-ai.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        systemctl restart mesh-ai-bridge
    endscript
}
LOGROTATE

What the Logs Tell You

After a week of operation, the log file is genuinely useful. The most common queries tell you what your community actually needs from this system — which shapes what you load into the knowledge base. Errors show you when Ollama is struggling (OOM, thermal throttle, connectivity drop). Query timing shows you if inference is degrading over time (a sign of thermal issues or memory pressure).

Offline
Model Updates

A true off-grid node has no internet connection. When a better model releases, how do you update? This is a real operational question that most guides skip.

Method 1: USB Sneakernet

Ollama stores model files in ~/.ollama/models/. You can copy these files to a USB drive on an internet-connected machine and transfer them to your offline node manually.

# On your internet-connected machine:
$ ollama pull gemma2:2b                   # Download the new model
$ cp -r ~/.ollama/models/ /media/usb-drive/ollama-models/

# On the offline node (after mounting the USB drive):
$ cp -r /media/usb-drive/ollama-models/ ~/.ollama/models/
$ ollama list                              # Should show the new model
$ sudo systemctl restart mesh-ai-bridge   # Apply MODEL_NAME change if needed
SHELL

Method 2: Temporary Internet Window

For less strict off-grid deployments, temporarily connect the Pi to a hotspot for updates only. Use a script that pulls the model, confirms success, then disconnects:

# update_model.sh — run manually when internet is briefly available
#!/bin/bash
NEW_MODEL="gemma2:2b"
echo "Pulling $NEW_MODEL..."
ollama pull $NEW_MODEL
if ollama list | grep -q $NEW_MODEL; then
    echo "Success. Update bridge config if needed, then restart."
    # Update MODEL_NAME in bridge config here if needed
    systemctl restart mesh-ai-bridge
else
    echo "Pull failed. Check internet connection."
fi
SHELL

Updating the Knowledge Base Offline

Knowledge base updates are fully offline — no internet needed. Copy new .txt or .md documents to /home/pi/neighborhood-docs/, then re-run build_knowledge_base.py. Delete the old chroma-db folder first if the document set has changed significantly. The rebuild takes 2–10 minutes on Pi 5.

$ rm -rf /home/pi/chroma-db        # Clear old index
$ cp new-shelter-locations.txt /home/pi/neighborhood-docs/
$ python3 build_knowledge_base.py  # Rebuild from scratch
$ sudo systemctl restart mesh-ai-bridge
SHELL

Day 1 Testing
Checklist

Before announcing your AI node to the neighborhood, run through this checklist. These are the failure modes that show up in the first 24 hours of operation — better to find them now than when someone needs the node during an emergency.

Infrastructure Tests

Functional Tests

Resilience Tests

One More Thing

Tell your neighbors what it is, what it's for, and what it can't do. A brief message on your mesh channel explaining the trigger word, the response time, and the "not a doctor, not a lawyer" disclaimer sets the right expectations before anyone asks it something critical. The technology works. The community adoption depends on the framing.

Failures
& Fixes

Symptom
Likely Cause
Fix
Queries receive no response at all
Bridge not running, or Meshtastic node not detected on USB
Check systemctl status mesh-ai-bridge. Run ls /dev/ttyUSB* and /dev/ttyACM*. Set explicit SERIAL_PORT in config if auto-detect fails.
"Thinking..." arrives but no answer follows
Ollama inference timeout; model loading failed; insufficient RAM
Check systemctl status ollama. Run ollama run gemma2:2b "test" manually. If Pi 5 is swapping, you're out of RAM — switch to a smaller model.
Response is cut off mid-sentence
Response exceeded MAX_CHARS and was hard-truncated
Reduce num_predict in Ollama options (try 60-80). Strengthen system prompt instruction to be brief. Check that truncation appends "..." for clarity.
Inference takes 3–5 minutes on Pi 5
Using a 7B model; thermal throttling; 32-bit OS
Switch to 2B–3B model. Check CPU temp: vcgencmd measure_temp. Confirm 64-bit OS: uname -m should return aarch64.
Bridge crashes on startup with ImportError or pubsub error
pypubsub version conflict — the Meshtastic SDK is sensitive to this
Setup step pins pypubsub==4.0.3 — if you skipped this, run it now: pip install "pypubsub==4.0.3" --force-reinstall. Restart bridge after.
ChromaDB fails to load at bridge startup
knowledge base not built yet, or chroma-db folder missing/corrupt
Run python3 build_knowledge_base.py first. The bridge gracefully disables RAG if CHROMA_DIR doesn't exist, but will error if it exists but is corrupt. Delete and rebuild: rm -rf /home/pi/chroma-db && python3 build_knowledge_base.py
RAG returns irrelevant context
Document chunks too large; knowledge base needs rebuilding after document changes
Reduce chunk_size to 200-300. Re-run build_knowledge_base.py after adding documents. Delete the chroma-db folder before rebuild if behavior seems stale.
Ollama service crashes overnight
OOM killed by kernel; SD card corruption
Add Restart=always and RestartSec=30 to ollama.service. Monitor RAM with free -h. Move to SSD if SD corruption is suspected.
Model gives dangerous medical advice
System prompt not specific enough; model hallucinating
Strengthen system prompt with explicit refusal patterns. Load authoritative first aid documents via RAG. Add explicit "Consult trained responders" instruction. Test with adversarial medical queries before deployment.

Reference

Quick-Reference: Ollama Model Commands

# Pull models
$ ollama pull gemma2:2b       # ~1.6GB — fastest, best for Pi 5
$ ollama pull phi3:mini        # ~2.2GB — more reliable factual accuracy
$ ollama pull llama3.2:3b      # ~2.0GB — conversational, good generalist
$ ollama pull mistral:7b       # ~4.1GB — mini PC only, best quality

# List downloaded models
$ ollama list

# Test a model interactively
$ ollama run gemma2:2b

# Check Ollama service logs
$ journalctl -u ollama -f

# Manually keep model loaded (5min → indefinite)
$ curl http://localhost:11434/api/generate \
  -d '{"model":"gemma2:2b","keep_alive":-1}'
SHELL

Community Resources

This guide is early. The community building Meshtastic + local AI tooling is small but active. These are the places worth watching:

ResourceWhereWhat to look for
Meshtastic Python SDK issues github.com/meshtastic/python/issues SDK breaking changes, pubsub fixes, new interface types
r/meshtastic reddit.com/r/meshtastic Search "Ollama" or "local AI" — community experiments are appearing
Meshtastic Discord #software-dev discord.com/invite/ktMAKGBnBs Python SDK development discussion; firmware changelogs
Ollama Discord discord.gg/ollama ARM/Pi deployment tips; new model announcements; API changes
HuggingFace Open LLM Leaderboard huggingface.co/spaces/open-llm-leaderboard Track which small models are improving; find new 2–4B options
Node Star community nodestar.net Updates to this guide; new Node Star field guides; SoCal mesh network

Key URLs

ResourceURL
Ollamaollama.ai
Ollama model libraryollama.com/library
Meshtastic Python SDKgithub.com/meshtastic/python
Meshtastic SDK docspython.meshtastic.org
ChromaDB docsdocs.trychroma.com
HuggingFace sentence-transformershuggingface.co/sentence-transformers
Raspberry Pi OS (64-bit)raspberrypi.com/software

What This Guide Doesn't Cover

Several things are out of scope here but worth knowing exist. Reticulum + AI: the same Python bridge can be adapted to listen on a Reticulum Sideband channel instead of Meshtastic — enabling AI queries over a fully cryptographic, multi-hop Reticulum network. Multi-model routing: running two models simultaneously (a fast 2B for quick queries, a slower 7B for complex ones) and routing based on query complexity. Fine-tuning: training a custom model on your community's specific documents — more complex but significantly higher quality for domain-specific use. Reticulum NomadNet pages: serving the AI as a node on the NomadNet distributed web, accessible to any Reticulum user via their Sideband client.

This Is Version 1.0

Local AI on mesh networks is genuinely new territory. The hardware is improving fast — a Pi 6, if it follows historical performance curves, will run 7B models at comfortable speeds. The models are improving faster — the gap between a 2B model today and a 7B model from two years ago is closing rapidly. What you're building here is infrastructure that gets better without any changes to the deployment. When the models improve, you run ollama pull and restart the bridge. The mesh stays the same. The community stays the same. The intelligence gets better.