Abdelrahman Hanafy
Tags: tech stack · n8n · OpenAI · Supabase · Pinecone · Next.js · automation

The Full Tech Stack Behind My Personal AI Brain

2026-04-19 · 4 min read
The Kanban task board — tasks flowing from INBOX through NEXT, DOING, WAITING, and DONE.
Kanban task board in the dashboard.

In part two, I explained the architecture.

This post is the direct implementation view: the exact stack, what each part does, and how the core flow works end to end.

The Full Stack at a Glance

  • Capture: Telegram Bot (primary input), GitHub Webhook (technical notes sync)
  • Orchestration: n8n (workflow automation)
  • Transcription: OpenAI Whisper (voice to text)
  • Classification: GPT-4o-mini (item type + PARA assignment)
  • Structured Storage: PostgreSQL on Supabase (system of record)
  • Vector Storage: Pinecone (semantic search index)
  • Retrieval: GPT-4o (RAG-grounded answers)
  • Dashboard: Next.js on Vercel (browse and manage)

n8n — The Orchestration Layer

This is the execution engine. It runs the core workflows:

  1. Brain Ingestion — triggered by Telegram messages; handles Whisper transcription for voice, GPT-4o-mini classification for all messages, PostgreSQL insert, and Telegram confirmation
  2. Ask Brain — triggered by ? prefix in Telegram; handles Pinecone query, GPT-4o answer generation, Telegram response
  3. Pinecone Sync — scheduled job that finds PostgreSQL items without embeddings and batches them into Pinecone
  4. GitHub Sync — triggered by GitHub push webhook; reads changed markdown files and upserts them into PostgreSQL
Brain Ingestion workflow in n8n: Telegram capture, Whisper transcription, GPT-4o-mini classification, and PostgreSQL storage.
Brain Ingestion workflow in n8n.
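The split between workflows 1 and 2 comes down to one routing decision on the incoming message. In n8n this is an IF node on the message text; a sketch of the same logic (helper names are mine):

```python
def route_message(text: str) -> str:
    """Decide which workflow branch a Telegram message takes.

    Messages starting with '?' go to Ask Brain (retrieval);
    everything else goes to Brain Ingestion (capture).
    """
    if text.strip().startswith("?"):
        return "ask_brain"
    return "brain_ingestion"


def extract_query(text: str) -> str:
    """Strip the '?' prefix so only the question reaches retrieval."""
    return text.strip().lstrip("?").strip()
```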

Telegram Bot — The Capture Interface

Telegram is the primary input channel.

I use it for:

  • text capture
  • voice capture
  • quick retrieval queries using the ? prefix

The bot is intentionally simple: receive message, forward to n8n.
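In my setup, n8n's Telegram Trigger node receives the full update directly, so no custom bot code is needed. Purely for illustration, here is what "receive message, forward to n8n" reduces to — the payload field names are my own, not anything the Bot API or n8n mandates:

```python
def build_forward_payload(update: dict) -> dict:
    """Reduce a raw Telegram update to the fields the pipeline cares about.

    Telegram updates nest the message under "message"; voice notes
    carry a "voice" object with a file_id instead of "text".
    """
    msg = update["message"]
    return {
        "chat_id": msg["chat"]["id"],
        "text": msg.get("text"),
        "voice_file_id": (msg.get("voice") or {}).get("file_id"),
    }
```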

OpenAI Whisper — Voice to Text

Whisper transcribes Telegram voice messages (.ogg) into text.

That text then enters the same pipeline as normal typed messages.
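A minimal sketch of this step, assuming the openai Python SDK (v1+) and a voice file already downloaded from Telegram — the branch check mirrors how Telegram marks voice messages:

```python
def needs_transcription(message: dict) -> bool:
    """Voice messages carry a 'voice' key in the Telegram payload;
    plain text messages do not."""
    return "voice" in message


def transcribe_voice(ogg_path: str) -> str:
    """Send a downloaded .ogg voice file to Whisper and return the text."""
    from openai import OpenAI  # deferred so the sketch imports without the SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(ogg_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text
```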

GPT-4o-mini — The Classification Engine

GPT-4o-mini converts raw capture into structured JSON:

  • item_type: task / project / area / resource
  • para_bucket: Project / Area / Resource / Archive
  • title: a concise generated title (5–8 words)
  • context: a short generated summary if the input was long

This response is what gets persisted in the database.
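A sketch of the classification call, assuming the openai Python SDK; the prompt wording is mine, but the output fields match the list above. Validating the JSON before persisting it keeps a malformed model reply out of the database:

```python
import json

CLASSIFY_PROMPT = """Classify this captured note. Reply with JSON only:
{"item_type": "task|project|area|resource",
 "para_bucket": "Project|Area|Resource|Archive",
 "title": "<concise 5-8 word title>",
 "context": "<short summary, or null if the input is short>"}"""


def classify(raw_text: str) -> dict:
    """Ask GPT-4o-mini to structure a raw capture into the fields above."""
    from openai import OpenAI  # deferred so the sketch imports without the SDK

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # force valid JSON output
        messages=[
            {"role": "system", "content": CLASSIFY_PROMPT},
            {"role": "user", "content": raw_text},
        ],
    )
    return parse_classification(resp.choices[0].message.content)


def parse_classification(reply: str) -> dict:
    """Validate the model's JSON before it touches the database."""
    item = json.loads(reply)
    assert item["item_type"] in {"task", "project", "area", "resource"}
    assert item["para_bucket"] in {"Project", "Area", "Resource", "Archive"}
    return item
```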

PostgreSQL on Supabase — Structured Storage

PostgreSQL is the system of record.

Every captured item becomes one row with category, bucket, state, content, and metadata.

The core schema is straightforward:

items (
  id, type, para_bucket, title, content,
  source, status, priority, due_date,
  embedded, created_at, updated_at
)

The dashboard reads and writes directly to this database.
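A sketch of how the classifier's JSON maps onto that schema. The column names come from the schema above; the `INBOX` starting status and `telegram` source value are my assumptions about the defaults:

```python
# Parameterized insert against the items table sketched above.
INSERT_SQL = """
insert into items (type, para_bucket, title, content, source, status, embedded)
values (%(type)s, %(para_bucket)s, %(title)s, %(content)s, %(source)s, %(status)s, false)
returning id
"""


def build_row(item: dict, raw_text: str, source: str = "telegram") -> dict:
    """Map the classifier's JSON onto the items columns."""
    return {
        "type": item["item_type"],
        "para_bucket": item["para_bucket"],
        "title": item["title"],
        "content": raw_text,       # keep the original capture verbatim
        "source": source,
        "status": "INBOX",         # assumption: new tasks start in INBOX
    }
```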

Pinecone — The Vector Search Layer

Pinecone stores embeddings for semantic retrieval.

Flow:

  1. Item is saved in PostgreSQL
  2. Embedding is generated (text-embedding-3-small)
  3. Vector is stored in Pinecone
  4. ? queries search Pinecone for relevant context
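The scheduled sync (steps 1–3) can be sketched like this, assuming the openai and pinecone Python SDKs; the index name `brain` and the batch size are my assumptions:

```python
def chunks(seq: list, size: int) -> list[list]:
    """Split a list into batches of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]


def embed_and_upsert(items: list[dict], batch_size: int = 100) -> None:
    """Embed rows flagged embedded=false and upsert them into Pinecone."""
    from openai import OpenAI      # deferred so the sketch imports without SDKs
    from pinecone import Pinecone

    client = OpenAI()
    index = Pinecone().Index("brain")  # index name is an assumption
    for batch in chunks(items, batch_size):
        resp = client.embeddings.create(
            model="text-embedding-3-small",
            input=[it["content"] for it in batch],
        )
        index.upsert(vectors=[
            {
                "id": str(it["id"]),
                "values": e.embedding,
                "metadata": {"title": it["title"], "para_bucket": it["para_bucket"]},
            }
            for it, e in zip(batch, resp.data)
        ])
```

After a successful upsert, the sync job flips `embedded` to true in PostgreSQL so rows are not re-embedded on the next run.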

GPT-4o — Retrieval Answering

GPT-4o receives:

  • original question
  • top matches from Pinecone

It returns a grounded answer based on my own saved context.
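A sketch of how those two inputs get assembled into the final call — the system-prompt wording is mine, and the metadata fields mirror what the sync step stored:

```python
def build_rag_prompt(question: str, matches: list[dict]) -> list[dict]:
    """Assemble grounded chat messages for GPT-4o from Pinecone matches."""
    context = "\n".join(
        f"- {m['metadata']['title']}: {m['metadata'].get('text', '')}"
        for m in matches
    )
    system = (
        "Answer using only the saved notes below. "
        "If they don't cover the question, say so.\n\n" + context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

Constraining the model to the retrieved notes is what keeps the answer grounded in my own context rather than generic knowledge.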

Next.js on Vercel — The Dashboard

The dashboard is the control surface for the system.

Core views:

  1. Command Center — stats (active items, open tasks, PARA distribution), tasks in progress, recently added items, quick search, and Ask Brain chat
  2. Task Board — Kanban view across INBOX → NEXT → DOING → WAITING → DONE
  3. PARA Views — Projects, Areas, Resources tabs
  4. Item Detail — full content + metadata + edit
The projects view with active project cards and task counts.
Projects page in the dashboard.
The resources view with category filters and reference items.
Resources page in the dashboard.
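The Task Board view boils down to bucketing task rows by their status column. A sketch of that grouping, using the five states named above:

```python
KANBAN_COLUMNS = ["INBOX", "NEXT", "DOING", "WAITING", "DONE"]


def group_by_status(tasks: list[dict]) -> dict[str, list[dict]]:
    """Bucket task rows into Kanban columns for the Task Board view."""
    board: dict[str, list[dict]] = {col: [] for col in KANBAN_COLUMNS}
    for t in tasks:
        # setdefault tolerates an unexpected status instead of crashing
        board.setdefault(t["status"], []).append(t)
    return board
```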

The Core Flow (End to End)

  1. I capture text or voice in Telegram.
  2. n8n receives it.
  3. Whisper transcribes voice when needed.
  4. GPT-4o-mini classifies and structures the item.
  5. Item is stored in PostgreSQL.
  6. Embedding is synced to Pinecone.
  7. ? queries retrieve context from Pinecone.
  8. GPT-4o generates the final grounded answer.
  9. Answer comes back in Telegram (and is available in the dashboard).

That is the whole core system so far.
