This Site Runs Itself

I didn't just build a portfolio. I built an AI-assisted development environment that writes code, maintains infrastructure, and generates documentation. This page explains how it works.

Architecture Overview

Four layers working together: orchestration, inference, knowledge, and output.

flowchart TD
    subgraph User["Developer"]
        Me[PsyCole]
    end

    subgraph Orchestration["Agent Orchestration Layer"]
        OpenClaw[OpenClaw Gateway]
        Main[Main Agent]
        SiteOps[site-ops]
        SiteDev[site-dev]
        SiteDesign[site-design]
        SiteBlogger[site-blogger]
    end

    subgraph Inference["Model Inference Layer"]
        Ollama[Ollama Local<br/>RTX 2070 Max-Q]
        Cloud[Cloud APIs<br/>OpenAI, Anthropic, etc.]
        Router[Auto Router<br/>VRAM-aware]
    end

    subgraph Knowledge["Knowledge Persistence Layer"]
        Memory[Session Memory]
        Zettels[Obsidian Zettels]
        MoCs[Maps of Content]
    end

    subgraph Output["Output Layer"]
        Site[codingenvironment.com]
        Blog[Blog Posts]
        Cases[Case Studies]
    end

    Me --> Main
    Main --> SiteOps
    Main --> SiteDev
    Main --> SiteDesign
    Main --> SiteBlogger
    OpenClaw --> Main
    Main --> Router
    Router --> Ollama
    Router --> Cloud
    SiteOps --> Memory
    SiteDev --> Memory
    SiteDesign --> Memory
    SiteBlogger --> Memory
    Memory --> Zettels
    Zettels --> MoCs
    SiteOps --> Site
    SiteDev --> Site
    SiteDesign --> Site
    SiteBlogger --> Blog
    MoCs --> Cases
    Site -.->|proves| Main

Layer by Layer

Each layer has a specific responsibility. Together, they form a cognitive infrastructure that amplifies what I can ship.

🎯

Orchestration Layer

OpenClaw is the agent gateway I run locally. It coordinates specialist agents:

  • main — orchestrator, routes requests to specialists
  • site-ops — health checks, telemetry, deployments
  • site-dev — code changes, feature implementation
  • site-design — UX review, copy refinement
  • site-blogger — draft generation, publishing

Each specialist has focused expertise. The main agent coordinates them.
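In practice, that coordination can start as a simple dispatch table. A minimal sketch in Python — the task taxonomy and function names are my illustrative assumptions, not OpenClaw's actual API:

```python
# Hypothetical dispatch table mapping task kinds to specialist agents.
# OpenClaw's real routing is richer; this only illustrates the shape.
SPECIALISTS = {
    "deploy": "site-ops",
    "health": "site-ops",
    "feature": "site-dev",
    "bugfix": "site-dev",
    "copy": "site-design",
    "ux": "site-design",
    "draft": "site-blogger",
    "publish": "site-blogger",
}

def route(task_kind: str) -> str:
    """Pick the specialist for a task; unknown kinds stay with main."""
    return SPECIALISTS.get(task_kind, "main")
```

Keeping the fallback on the main agent means a request never dead-ends just because no specialist claims it.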

🧠

Inference Layer

Hybrid model routing balances cost and capability:

  • Local (Ollama) — daily drivers run on RTX 2070 Max-Q (8GB VRAM)
  • Cloud fallbacks — heavy tasks spill to OpenAI, Anthropic, etc.
  • Auto routing — VRAM-aware policy decides local vs. cloud

This keeps most work local while preserving cloud access for complex tasks.
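The core of a VRAM-aware policy fits in a few lines. A sketch under stated assumptions — the token threshold and VRAM headroom ratio below are placeholders, not the production values:

```python
def pick_backend(task_tokens: int, free_vram_gb: float,
                 local_limit_gb: float = 8.0) -> str:
    """Route a job to local Ollama or a cloud API.

    Illustrative policy: short jobs run locally when most of the
    card's VRAM is free; everything else spills to the cloud.
    Thresholds (4096 tokens, 75% headroom) are assumptions.
    """
    if task_tokens <= 4096 and free_vram_gb >= 0.75 * local_limit_gb:
        return "ollama-local"
    return "cloud-api"
```

A real router would also consider which model is already loaded, since swapping models in and out of an 8GB card is often slower than a cloud round trip.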

📚

Knowledge Layer

Learnings persist and compound across sessions:

  • Session memory — agents read/write daily notes and long-term memory
  • Obsidian zettelkasten — atomic notes with bidirectional links
  • Maps of Content — higher-order structure connecting related zettels

Nothing is lost. Context accumulates. The system gets smarter over time.
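The compounding comes from the links. As a small illustration of what an agent can do with a zettel, here's how Obsidian-style [[wikilinks]] might be harvested from a note — the regex and function name are mine, not part of any Obsidian API:

```python
import re

# Capture [[Target]] and [[Target|alias]], stopping before any
# alias pipe or heading anchor. Illustrative, not exhaustive.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def outgoing_links(note_text: str) -> list[str]:
    """List the link targets a zettel points at."""
    return [m.strip() for m in WIKILINK.findall(note_text)]
```

Run over every note, this yields the link graph that Maps of Content summarize.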

🚀

Output Layer

Visible proof of the system in action:

  • This site — built and maintained by site-dev and site-ops agents
  • Blog posts — drafted by site-blogger from zettel clusters
  • Case studies — generated from explainer artifacts

The output layer proves the architecture works. You're looking at it.

Why This Matters

"This person built a cognitive infrastructure that writes its own documentation, monitors its own health, and demonstrates production AI engineering in a single coherent system."

Most portfolios show what someone did. This one shows what someone built โ€” and continues to build with. It's the difference between "I know AI" and "I shipped an AI infrastructure that runs my life."

For hiring managers: this demonstrates systems thinking, infrastructure expertise, and production AI engineering โ€” all visible in a single artifact.

Proof Points

Evidence that the system works, visible on this site.

Telemetry Dashboard

The operational dashboard now exposes live CPU, memory, network, tracked process state, and recent snapshot history for the production app surface.

Open live metrics

Blog Pipeline

Blog posts start as zettel clusters in Obsidian, get refined by site-blogger, and publish to the site with full version control.

Automated Health Checks

Every 30 minutes, site-ops runs health checks: uptime, SSL certificate expiry window, and a Django sanity probe. Alerts go to my phone via a WhatsApp gateway.

Model Profiles

Daily models (fast, local) vs. heavy models (cloud). Auto-switching based on VRAM. The system adapts to hardware constraints.

This Page

You're reading a page that was generated from architecture documentation in my knowledge base. The system documents itself.

Git-Driven Changes

Every code change goes through git with clear commit messages. The portfolio case studies link to real commits.

Operational proof, not just architecture diagrams

The metrics page is the runtime view of this system: collector state, recent snapshot history, and the live process set behind apps.codingenvironment.com.

Tech Stack

The tools that make it work.

  • Orchestration: OpenClaw
  • Local Inference: Ollama
  • Knowledge Base: Obsidian
  • Site Framework: Django
  • Data Explorer: Dash/Plotly
  • Deployment: Gunicorn + Nginx
  • Hosting: DigitalOcean
  • Version Control: Git + GitHub

Want to See More?

Check out the case studies for concrete examples, or read the blog for deeper dives into how this system works.