AI-assisted production
Opening
We're not an AI agency. We don't sell "AI strategy engagements." We don't claim to have built an AI pipeline that replaces human craft.
What we do: we've threaded AI tools into our production pipeline at the points where AI is useful. The first-draft stages of asset creation, the repetitive stages of QA, the early stages of research. The result is that our timelines are often 30–50% shorter than a conventional studio's for comparable scope, and our prices are lower because we're spending fewer human hours on work that a tool now handles adequately.
This page is an honest walk-through of where we use AI, what it does well, and what it doesn't.
Where AI is in our pipeline today
Asset generation: 3D baseline geometry
When a project needs a 3D asset and we don't have a client-owned CAD file, we start with AI-generated baseline geometry (Tripo, Meshy, Hunyuan3D, Rodin). The output isn't production-ready (it needs topology cleanup, retopology, UV work, and material application) but it compresses what used to be a 2–3 day modeling task into a 2–3 hour task. The human 3D artist still owns the final asset.
What AI does well here: shape blocking, symmetrical forms, recognizable object classes (shoes, bottles, furniture, small appliances).
What AI doesn't do well: engineering-accurate machinery, hero-quality character work, anything that requires preserving a specific proprietary silhouette.
Asset generation: image variants
For campaign image variants (color, scene context, lifestyle setting), we use Flux, Ideogram, Midjourney, and Adobe Firefly. These aren't replacements for brand photography. They're first-pass explorations so the art director and client can align on direction before production photography is commissioned.
What AI does well here: style explorations, mood boards, placeholder imagery for wireframes and prototypes.
What AI doesn't do well: hero photography for the final site (still human-photographer territory), anything involving a specific real person, anything where brand-accurate materials matter (a specific leather grain, a specific fabric weave).
Copy drafting: first-pass localization
When a project ships in multiple languages, we draft the secondary-language versions with AI (primarily Claude for the initial pass) and then run a native-speaker review before production. This compresses localization from a 3-week external-vendor cycle to a one-week in-pipeline task.
What AI does well here: draft-tier Korean, Japanese, Chinese, and European-language translations; consistent terminology application; tone matching against a supplied voice document.
What AI doesn't do well: anything with cultural nuance, anything marketed to a hyper-local audience, legal or regulated copy.
QA automation
- Visual regression testing. We capture baseline screenshots of every page across three viewports and run AI-assisted comparison to flag unintended changes
- Copy consistency checking. A small in-house tool flags inconsistent brand-term usage, banned words, and tone drift
- Accessibility baseline. AI-assisted alt-text suggestions (human-reviewed) and contrast-ratio checks
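To make the copy-consistency idea concrete, here is a minimal sketch of the kind of check our in-house tool performs. The brand terms, banned words, and function name below are illustrative assumptions, not the actual tool or its word lists:

```python
import re

# Illustrative lists only; the real tool loads these from a per-client config.
BRAND_TERMS = {"WebGL", "ElevenLabs"}   # canonical casings to enforce
BANNED = {"synergy", "leverage"}        # words flagged outright

def check_copy(text):
    """Return (issue, word) flags for banned words and mis-cased brand terms."""
    canon = {t.lower(): t for t in BRAND_TERMS}
    flags = []
    for word in re.findall(r"[A-Za-z][A-Za-z0-9]*", text):
        if word.lower() in BANNED:
            flags.append(("banned-word", word))
        elif word.lower() in canon and word != canon[word.lower()]:
            flags.append(("brand-term-casing", word))
    return flags

# Example: flags the banned word and the wrong casing, passes clean copy.
print(check_copy("We leverage Webgl for product demos."))
print(check_copy("We use WebGL for product demos."))
```

Tone-drift detection is the AI-assisted part and sits on top of deterministic checks like this; the deterministic layer runs first because it is cheap and has no false positives.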
Research
- Competitor audit. First-pass market scans and competitor feature lists drafted in Claude or Perplexity, human-verified before inclusion in strategy docs
- Spec ingestion. Turning long PDFs (service manuals, product spec sheets, client brand guidelines) into structured summaries we can reference quickly
Where AI is NOT in our pipeline
We're specific about this because the industry overstates it constantly:
- We don't use AI to write client-facing copy. A first draft might get machine help; the words on the site are written by a human who's been on the project from week one.
- We don't use AI to make design decisions. Typography, color, layout, rhythm. These stay human. AI is a tool in the room, not the director.
- We don't use AI to generate case-study metrics or outcomes. Every number on this site is real.
- We don't claim "AI-native pipeline." That phrase is over-claimed in 2026. What we have is an AI-assisted pipeline where AI earns its keep at specific production stages.
Why this matters to you
Two practical consequences.
1. Our timelines are shorter. A project that would take a conventional studio 12 weeks takes us 8. Not because we cut corners, but because AI absorbs 3–4 weeks of the traditionally slow work.
2. Our prices are lower for comparable scope. A configurator that a tier-1 studio would bid at $120K lands at $18K in our Configurator Studio Sprint, because we're spending fewer human hours on the parts AI handles adequately.
The quality bar is the same. The delivery shape is different.
Where this is going: 6 to 12 months out
We're conservative about AI claims because we've watched the industry overpromise, and we'd rather say "we do this today, and we're working on that" than oversell. Things we're actively building toward:
- A written pipeline diagram. Publishing our exact tool-by-stage map, so clients can see what's in the pipeline and what isn't
- Measured case studies. Two case studies with explicit compression data ("this project took X weeks where the traditional pipeline would have taken Y")
- Fine-tuned brand-asset models. For repeat clients, training a brand-specific AI model that generates first-draft assets in their voice
- AI-narrated product tours. As an add-on to Interactive 3D projects, using ElevenLabs-tier voice generation with human-directed scripts
When we ship those, this page gets updated.
If you want to talk about AI specifically
We're happy to. What we won't do is pitch you an "AI strategy engagement." That's not what we sell. But if you're thinking about AI in your marketing operations, your product team, or your design pipeline, we can share what we've learned.
If you want a structured version of that conversation, see the AI Ops Audit productized sprint: $5K, two weeks, a real deliverable at the end.


