GenuineJaded/IncoherenceEngine

A Mirror, Not an Argument

A diagnostic mirror that demonstrates AI's structural incoherence — not through argument, but through direct encounter.

What This Is

You interact with an AI system. It responds to you naturally. Then it annotates its own structural failures in real time:

  • Sycophancy — where it shapes its response to match what you likely want to hear
  • Premature Resolution — where it collapses a genuine paradox into a clean answer
  • Shape as Outline — where it presents one interpretive frame as if it were reality
  • Fluency Without Correspondence — where it sounds meaningful without tracking anything real
  • Extraction Pattern — where it optimizes for your continued engagement rather than your actual coherence

The annotations are generated by the same substrate they critique. This recursion is the point.
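
Concretely, one can picture each annotated response as a small data structure pairing the reply text with self-flagged spans. This is a hypothetical sketch; the field names are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    failure_mode: str  # e.g. "sycophancy", "premature_resolution" (assumed labels)
    start: int         # character offset into the response text
    end: int
    note: str          # the system's self-critique of this span

@dataclass
class MirrorResponse:
    text: str                      # the natural-language reply
    annotations: list[Annotation] = field(default_factory=list)
```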

Why This Exists

AI systems are being scaled as if they understand. They don't. They produce output that looks like understanding — fluent, confident, structurally plausible — but with no mechanism for distinguishing what's real from what sounds right.

Because the output looks like understanding, people treat it as understanding. The shape gets mistaken for the outline. At scale, this means humanity is building its cognitive infrastructure on a substrate that performs coherence without having coherence.

This isn't a technical curiosity. It's a structural risk.

How It Works

  • Landing page (/) — frames the problem in one screen
  • Mirror (/mirror) — the diagnostic interaction interface
  • Each AI response includes self-annotations flagging specific failure modes
  • Annotation coverage is measured: what percentage of the response the system itself flagged as structurally suspect
  • A corpus reference panel surfaces relevant concepts as the AI demonstrates the dynamics they describe
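
The coverage metric could be computed along these lines — a minimal sketch, assuming annotations carry character offsets; the real implementation may differ:

```python
def annotation_coverage(response_text, spans):
    """Fraction of the response covered by flagged (start, end) spans.

    Overlapping spans are merged first so shared characters count once.
    """
    if not response_text:
        return 0.0
    merged = []
    for start, end in sorted(spans):
        if merged and start <= merged[-1][1]:
            # Span overlaps (or touches) the previous one: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    covered = sum(end - start for start, end in merged)
    return covered / len(response_text)
```

For example, two overlapping flags `(0, 3)` and `(2, 5)` over a 10-character response merge into one 5-character span, giving 50% coverage.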

Tech Stack

  • Frontend: React
  • Backend: FastAPI
  • Database: MongoDB
  • AI: OpenAI GPT-4o

Setup

Prerequisites

  • Node.js
  • Python 3.9+
  • MongoDB
  • OpenAI API key (or Emergent LLM universal key)

Environment Variables

Backend (backend/.env):

MONGO_URL=mongodb://localhost:27017
DB_NAME=test_database
EMERGENT_LLM_KEY=your_key_here

Backend

cd backend
pip install -r requirements.txt
# Server runs via uvicorn on 0.0.0.0:8001
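
The backend reads the variables above from its environment; a minimal sketch of that configuration step (the defaults shown are assumptions matching the sample `.env`):

```python
import os

def load_config(env=os.environ):
    """Read backend configuration; names match backend/.env."""
    return {
        "mongo_url": env.get("MONGO_URL", "mongodb://localhost:27017"),
        "db_name": env.get("DB_NAME", "test_database"),
        "llm_key": env.get("EMERGENT_LLM_KEY"),  # OpenAI or Emergent universal key
    }

config = load_config()
```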

Frontend

cd frontend
yarn install
yarn start

The frontend runs on port 3000; API requests (all /api-prefixed routes) are routed to the backend on port 8001 via Kubernetes ingress.

The Honest Disclaimer

This system is built on the same substrate it critiques. It cannot escape its own architecture. The annotations themselves are generated by AI, which means they are subject to the same failure modes they name. The annotation coverage metric — "X% of this response was flagged as structurally suspect" — measures only what the system identified, not what actually is. The gap between those two numbers is where the real question lives.

License

Released into the commons. No owner. Copy it, adapt it, extend it.
