Building a Secure Content Hosting Service for AI-Generated Artefacts
One of the projects I've been building uses Claude to generate rich HTML artefacts — financial reports, projections, scenario analyses — triggered via MCP tool calls and automation workflows.
Generating the content was the easier part. The harder question was:
Where do these artefacts actually live once they're generated?
They need a URL, a security boundary, a controlled publish mechanism, and a way for Claude to trigger the whole thing without a human in the loop.
The answer was a dedicated hosting service, separate from the main application.
The First Decision: Subdomain Isolation
The most important architectural choice was hosting artefacts on a separate subdomain, rather than inline with the main app.
The reason comes down to browser security boundaries.
A separate subdomain is a separate origin in the browser's eyes: a page served from it cannot read the main application's DOM, local storage, or host-only session cookies, and carries none of its auth state. An artefact served from a dedicated subdomain cannot read or influence anything running on the main application.
This matters because the artefacts are user-facing HTML. Hosting them on a separate subdomain removes a whole class of potential issues at the infrastructure level, before any application-layer controls even come into play.
It's the kind of decision that feels obvious in hindsight, but it's easy to skip when you're moving fast.
Two Authentication Paths
Different clients need different auth models, so the service supports two.
OAuth 2.1 with PKCE
For browser-based contexts, the service implements a standard authorisation code flow with PKCE. It also supports dynamic client registration and server metadata discovery, which means clients can connect without any manual configuration step.
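The client side of PKCE is small enough to sketch with Node's built-in crypto module. The verifier is a high-entropy random string; only its S256 hash (the challenge) is sent in the authorization request, and the verifier itself is revealed later at the token endpoint. Function names here are illustrative, not the service's own:

```typescript
import { createHash, randomBytes } from "crypto";

// Generate a high-entropy code verifier (32 random bytes,
// base64url-encoded, no padding) per RFC 7636.
function makeCodeVerifier(): string {
  return randomBytes(32).toString("base64url");
}

// Derive the S256 code challenge that accompanies the
// authorization request; the server stores it and later checks
// that SHA-256(verifier) matches when the code is redeemed.
function makeCodeChallenge(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}

const verifier = makeCodeVerifier();
const challenge = makeCodeChallenge(verifier);
console.log(verifier, challenge);
```

Because an intercepted authorization code is useless without the matching verifier, this flow is safe even for public clients that cannot hold a secret.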
API Keys
For server-to-server integrations — automation platforms, desktop clients, scripted workflows — the service accepts bearer tokens. These are stored securely and never logged in plaintext.
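The "stored securely, never logged in plaintext" property falls out naturally if only a hash of each key is kept server-side. A minimal sketch of the check, assuming SHA-256 key hashing (the example key and function names are hypothetical):

```typescript
import { createHash } from "crypto";

// Only hashes of issued keys are stored — the plaintext key exists
// solely on the client's side. "sk_live_example123" is a made-up key
// for illustration.
const issuedKeyHashes = new Set<string>([
  createHash("sha256").update("sk_live_example123").digest("hex"),
]);

// Parse an Authorization header and check the bearer token
// against the hash store.
function checkBearer(header: string | undefined): boolean {
  if (!header) return false;
  const [scheme, key] = header.split(" ");
  if (scheme !== "Bearer" || !key) return false;
  const hash = createHash("sha256").update(key).digest("hex");
  return issuedKeyHashes.has(hash);
}
```

A log line can then safely include the key's hash (or a short prefix of it) as an identifier without ever exposing the credential itself.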
Why Both?
Browser contexts need a redirect-based flow. Server contexts need a simple token. Forcing one model on the other creates unnecessary friction.
The nice part is that once either path resolves, the rest of the application sees the same internal auth context. The business logic does not need to care how the client authenticated.
Each artefact also records which credential created it. Only the originating credential can delete it — a small but useful ownership boundary in a multi-client setup.
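Both ideas — a path-agnostic auth context and the ownership boundary — can be expressed as a small discriminated union. This is a sketch with made-up field names, not the service's actual types:

```typescript
// Unified auth context: whichever path authenticated the request,
// downstream code sees one of these two shapes.
type AuthContext =
  | { kind: "oauth"; clientId: string; subject: string }
  | { kind: "api_key"; keyId: string };

// Stable identifier for the credential, recorded on each artefact
// at publish time.
function credentialId(ctx: AuthContext): string {
  return ctx.kind === "oauth" ? `oauth:${ctx.clientId}` : `key:${ctx.keyId}`;
}

// Only the credential that published an artefact may delete it.
function canDelete(ctx: AuthContext, artefact: { createdBy: string }): boolean {
  return credentialId(ctx) === artefact.createdBy;
}
```

Business logic that takes an `AuthContext` never branches on how the client authenticated; the ownership check is the one place the credential identity matters.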
MCP Integration
The service exposes an MCP server over HTTP, with two tools:
- publish_artefact — accepts HTML content, stores it, returns a public URL
- delete_artefact — removes an artefact by ID
Both authentication paths work transparently through the MCP layer. From Claude's perspective, the tool call looks the same regardless of how it authenticated.
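Stripped of the MCP plumbing, the two tool handlers reduce to something like the following. This uses an in-memory map as a stand-in for object storage, and the domain and ID scheme are illustrative, not the service's own:

```typescript
// In-memory stand-in for object storage; the real handlers
// sanitise the HTML and persist it before returning a URL.
const artefacts = new Map<string, string>();
let nextId = 0;

// publish_artefact: store the content, hand back a public URL.
function publishArtefact(html: string): { id: string; url: string } {
  const id = `art_${++nextId}`; // hypothetical ID scheme
  artefacts.set(id, html);      // real service sanitises first
  return { id, url: `https://artefacts.example.com/${id}` };
}

// delete_artefact: remove by ID; returns false if nothing existed.
function deleteArtefact(id: string): boolean {
  return artefacts.delete(id);
}
```

The MCP server and the REST routes both dispatch into these same underlying operations, which is what keeps the two interfaces from drifting apart.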
From the user's perspective, the interaction collapses to:
User: Publish this report.
Claude: Done. Here's the link.
No manual steps. No copy-pasting. No upload workflow.
Security Approach
Letting an AI assistant publish HTML to production is only reasonable if the served HTML can't do anything unexpected.
The approach here is layered:
- Subdomain isolation keeps artefacts separate from application state
- HTML sanitisation runs on every artefact before it's stored, stripping script tags, inline event handlers, and other executable content
- A strict Content Security Policy means served pages cannot load external resources or make network requests — artefacts can render content, but they're completely isolated from the network
- Rate limiting prevents runaway automations from flooding the service
- Payload caps keep individual artefacts within a reasonable size bound
- Soft delete retains artefacts with a deletion timestamp before permanent removal, giving a recovery window if something is removed in error
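The network-isolation layer comes down to the Content-Security-Policy header attached to every served artefact page. The directive set below is an illustrative sketch of a deny-by-default policy, not the service's exact header:

```typescript
// Deny-by-default CSP for served artefact pages: no scripts, no
// external loads, no fetch/XHR/WebSocket. Inline styles and data:
// images are permitted so the artefact can still render.
const artefactCsp = [
  "default-src 'none'",        // deny everything not explicitly allowed
  "style-src 'unsafe-inline'", // inline styles for layout/formatting
  "img-src data:",             // embedded images only, no remote fetches
  "connect-src 'none'",        // explicitly block network requests
].join("; ");
```

With `default-src 'none'` as the baseline, each capability an artefact genuinely needs has to be opted into, which keeps the policy honest as the service evolves.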
The goal isn't to anticipate every possible attack. It's to make the default behaviour safe — publish content, serve it, nothing else.
Tech Stack Decisions
Express, not Next.js
This service has no React components. It's a pure API plus an artefact renderer. Express is leaner and the right tool here.
Supabase
PostgreSQL handles artefact metadata. Object storage handles the HTML blobs. Both are managed, which keeps operational overhead low.
Upstash Redis
Used for rate limiting. It fits a serverless-style deployment because the counter state lives in an external service rather than in process memory, so limits survive restarts and are shared across instances.
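The shape of the limiter is easiest to see without the Redis round-trip. Below is an in-memory sketch of a fixed-window counter — the real service would keep the same per-credential counters in Upstash Redis instead, and the window size and limit here are made-up numbers:

```typescript
// Fixed-window rate limiter: at most LIMIT calls per credential
// per WINDOW_MS. In production the window state would live in
// Redis (e.g. INCR plus an expiry per window) rather than a Map.
const WINDOW_MS = 60_000;
const LIMIT = 30; // hypothetical: 30 publishes/minute/credential

const windows = new Map<string, { start: number; count: number }>();

function allow(credential: string, now = Date.now()): boolean {
  const w = windows.get(credential);
  if (!w || now - w.start >= WINDOW_MS) {
    // Window expired (or first call): start a fresh one.
    windows.set(credential, { start: now, count: 1 });
    return true;
  }
  w.count += 1;
  return w.count <= LIMIT;
}
```

Keying the counter on the credential identity (rather than IP) means one runaway automation can't consume the budget of every other client.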
Pino
Structured JSON logging. Every log line carries enough context to trace a request end-to-end without needing to correlate multiple sources.
Railway
Auto-deploys on push to main. Simple, low-maintenance, and it just works for a Node.js service.
Project Structure
The codebase is split into five layers:
routes/ — artefacts, view, mcp, oauth
middleware/ — auth, rateLimit, requestId, securityHeaders
mcp/ — server, auth, tools
oauth/ — metadata, registration, authorize, token
services/ — storage, sanitiser
Each layer has a single responsibility. The MCP tools call into the same service functions that the REST routes use — no duplicated logic across the two interfaces.
What This Enables
An automation workflow or an AI assistant can trigger a publish and get back a URL in a single round-trip. No developer in the loop. No manual steps.
The thing that makes this safe to run at production scale is intentional narrowness.
The service can publish HTML. It can't execute it, it can't touch other infrastructure, and it can't do anything beyond its defined surface area.
That constraint is the design. Everything else follows from it.