Chapter 3: Diegetic UI & AR Overlays; Readability at a Glance
Created by Sarah Choi (prompt writer using ChatGPT)
Diegetic UI is the art of making information feel like it belongs inside the world—seen through glass, projected onto armor, reflected in a visor, pulsing on a console—rather than floating as a game HUD that ignores the cockpit. For mecha concept artists, diegetic UI and AR overlays are among the fastest ways to sell scale, sophistication, and pilot experience. For production teams, they are also a contract: where the information lives affects animation staging, VFX, UI implementation, camera language, and accessibility.
“Readability at a glance” is the guiding principle. A cockpit can be visually complex and still readable if the information hierarchy is clear and the pilot’s attention costs are respected. Mecha scenes are often high-motion, high-noise, and high-stakes. Your UI should behave like an ally—reducing cognitive load—rather than another thing demanding attention.
Define the pilot’s attention budget
Every diegetic UI choice is really a choice about the pilot’s attention. In a combat moment, the pilot’s eyes want to be outside: horizon, targets, obstacles, allies. If your UI forces long “heads-down” reading, it implies either an extremely stabilized machine, a slow tactical pace, or a pilot who’s relying heavily on automation. None of those are wrong, but they must match the mech’s role.
A practical concepting step is to imagine the pilot’s attention budget as three states: scan, focus, and verify. Scan is the 0.2–0.5 second glance: “Am I okay? Is something wrong?” Focus is the 1–2 second read: “Which target? Which mode?” Verify is the deliberate check: “Diagnostics, maps, planning.” Readability at a glance primarily serves scan and focus. If critical info requires verify, the cockpit reads risky.
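The three attention states can be sketched as a simple classifier. This is a minimal illustration, not a spec; the thresholds follow the chapter's rough timings, and the element names are invented for the example.

```python
def attention_state(read_time_s: float) -> str:
    """Classify a UI element by how long the pilot's eyes need it."""
    if read_time_s <= 0.5:   # 0.2-0.5 s glance: "Am I okay? Is something wrong?"
        return "scan"
    if read_time_s <= 2.0:   # 1-2 s read: "Which target? Which mode?"
        return "focus"
    return "verify"          # deliberate check: diagnostics, maps, planning

# Anything that lands in "verify" probably does not belong on the canopy.
for element, t in [("overheat icon", 0.3), ("target stats", 1.5), ("route planner", 6.0)]:
    print(element, "->", attention_state(t))
```

A quick pass like this over your proposed canopy elements is a cheap way to catch "verify"-class information masquerading as glance UI.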
Diegetic UI layers: glass, visor, cabin, and body
A coherent cockpit UI usually sits in layers.
The first layer is glass-based AR: information projected onto the canopy or windshield, aligned to the outside world. This is where you place horizon lines, aim reticles, target boxes, lead indicators, and hazard outlines. Glass AR reads cinematic because it lives in the same plane as the pilot’s external view.
The second layer is visor or helmet AR: UI that belongs to the pilot, not the vehicle. This is a great storytelling layer because it can persist during egress, emergency escape, or on-foot sequences. Visor AR can show biometric warnings, comms, navigation cues, and personal preferences. It also helps justify why the cockpit can be darker or why the canopy might be armored—because the pilot’s primary UI is personal.
The third layer is cabin displays: physical screens, indicator stacks, and panels. These are the most reliable place for detailed information and planning. Cabin displays are also the easiest to animate and implement, but they are attention-expensive because the pilot must look away from the world.
The fourth layer is body or hull UI: external lights, armor projections, and visible status cues. This is often overlooked. Hull UI is not for the pilot; it’s for allies, enemies, and spectators. It communicates faction language, threat state, and coordination. In games, it supports readability at distance.
Once you assign information to these layers, your design becomes internally consistent. The cockpit stops being “a bunch of screens” and becomes an information ecology.
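The four layers above can be treated as a routing table: every piece of information gets exactly one home. The mapping below is a hedged sketch—the layer names follow the text, but the element names and assignments are illustrative.

```python
LAYERS = ("glass_ar", "visor_ar", "cabin_display", "hull_ui")

# Illustrative routing: where each kind of information lives.
ROUTING = {
    "aim_reticle":    "glass_ar",       # aligned with the pilot's external view
    "biometrics":     "visor_ar",       # belongs to the pilot, not the vehicle
    "diagnostics":    "cabin_display",  # detailed, attention-expensive reads
    "faction_lights": "hull_ui",        # read by allies and enemies, not the pilot
}

def layer_for(info: str) -> str:
    # Unrouted detail defaults heads-down rather than cluttering the canopy.
    return ROUTING.get(info, "cabin_display")
```

The useful discipline is the default: anything you have not deliberately promoted to glass or visor AR falls back to a cabin display.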
Anchors: what never moves
The fastest way to improve glance readability is to choose a few UI anchors that never move. Anchors are the pilot’s home base: stable reference points that the brain can find instantly even during violent motion.
A classic anchor is the horizon line and attitude ladder. Another is a central reticle that stays consistent across modes. Another is a status band (health/heat/power) that always sits in the same region of the canopy. You can still animate and stylize these, but the anchor position should remain stable.
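One way to make "never moves" concrete is to express anchors as fixed normalized canopy coordinates. This is a sketch under assumed conventions (0-1 on each axis, origin top-left); the positions are illustrative.

```python
# Anchors are constants: styling may animate, position may not.
ANCHORS = {
    "horizon_ladder": (0.50, 0.50),  # center: attitude reference
    "reticle":        (0.50, 0.45),  # consistent across all modes
    "status_band":    (0.50, 0.92),  # health/heat/power, always bottom-center
}

def anchor_pos(name: str) -> tuple[float, float]:
    """Look up an anchor's home position; anchors are read-only at runtime."""
    return ANCHORS[name]
```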
Anchors are also where you show the mech’s identity. A militarized cockpit might use stark geometry and strict alignment. A corporate or experimental cockpit might use softer curves, animated contours, and responsive typography. Style is allowed—anchors make style usable.
What belongs in the glance layer
Not all information deserves canopy AR. The glance layer should contain only what the pilot must know immediately while eyes stay outside. A strong shortlist includes: critical warnings, target identification, aim/lead, immediate navigation cues, and locomotion state.
Critical warnings should be scarce but unmistakable: overheat, reactor instability, ammo depletion, leg damage, sensor blackout, incoming missile. The mistake is to show everything all the time. Constant warnings become wallpaper. A cockpit that screams at the pilot is a cockpit that fails UX.
Target information should be minimal at glance: friend/foe, range, threat level, lock status. Detailed target stats can live on a secondary overlay or screen. Navigation cues should be simple: waypoint direction and distance, route corridor, collision alerts. Locomotion state is the mecha-specific part: stance, balance assist, terrain mode, jump readiness, and gait stability.
If you include too much, the canopy becomes a stained-glass spreadsheet. If you include too little, the pilot seems blind. The balance is your cockpit’s personality.
Occlusion and clutter: the “don’t hide the world” rule
AR overlays can accidentally sabotage visibility—the very thing they’re supposed to support. Concept art should communicate occlusion logic. Where does the UI avoid blocking the pilot’s view? How does it fade or compress when the pilot aims? Does it “breathe” away from points of interest? Does it collapse to icons when motion spikes?
A useful visual trick is to design UI exclusion zones: keep the central aiming corridor clean, keep the lower edge of the canopy clear for foot placement and ground obstacles, and keep the upper corners lighter for sky scanning. Let dense data live near the edges where peripheral vision can catch it without blocking the action.
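Exclusion zones are easy to formalize as rectangles that dense UI must avoid. A minimal sketch, again in normalized canopy coordinates; the specific zone bounds are assumptions for illustration.

```python
# (x0, y0, x1, y1) rectangles, normalized 0-1, origin top-left.
EXCLUSION_ZONES = {
    "aim_corridor": (0.35, 0.25, 0.65, 0.75),  # central aiming corridor stays clean
    "ground_strip": (0.00, 0.85, 1.00, 1.00),  # lower edge: foot placement, obstacles
}

def blocked(x: float, y: float) -> bool:
    """True if placing a dense UI element at (x, y) would violate a zone."""
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0, x1, y1) in EXCLUSION_ZONES.values())
```

Edge regions (where `blocked` returns False) are where peripheral-vision data is allowed to live.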
In production, this matters even more. VFX and UI teams need to know what can be on screen without harming gameplay readability. If you communicate exclusion zones in concept, you prevent late-stage UI surgery.
World-locked vs head-locked UI
A major AR decision is whether elements are world-locked (they stick to objects in the world) or head-locked (they stick to the pilot’s view). World-locked target boxes feel grounded and help aim. Head-locked status bars feel stable and easy to glance at. Most strong systems mix both.
In concept art, you can depict world-locked elements by aligning them to exterior objects: the reticle tracks a target, range tick marks align to terrain, hazard outlines wrap around debris. You can depict head-locked elements by keeping them perfectly stable on the canopy even as the outside scene implies motion.
This choice affects cinematography and gameplay. World-locked UI sells immersion; head-locked UI protects readability under chaos.
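The distinction can be shown in a toy per-frame position function. This is a deliberately simplified 1-D sketch—`project` stands in for a real engine's world-to-canopy projection, and the fixed head-locked position is an assumed value.

```python
def project(world_pos: float, camera_yaw: float) -> float:
    # Toy 1-D projection: screen position shifts opposite to camera rotation.
    return world_pos - camera_yaw

def element_screen_x(kind: str, world_pos: float, camera_yaw: float) -> float:
    if kind == "world_locked":
        # Tracks the target: moves across the canopy as the view turns.
        return project(world_pos, camera_yaw)
    # Head-locked: fixed canopy position regardless of camera motion.
    return 0.2
```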
Mode states: teach the cockpit to change its mind
Cockpit UI should not be static. It should change states like a competent assistant. Travel mode prioritizes navigation and collision. Combat mode prioritizes targeting and threat warnings. Precision mode (sniping, repair, docking) prioritizes alignment guides and fine telemetry. Emergency mode strips the UI down to survival actions.
Concept artists can show mode logic with small state callouts: same cockpit, different overlay density and emphasis. Production teams love this because it defines what UI “should do” without writing a full spec.
A key human factors principle is that mode changes must be legible. The pilot should know what mode they are in without reading tiny text. Use big cues: color/value shifts, icon changes, reticle shape changes, or a brief transition animation that confirms the swap.
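Mode logic is naturally a small state machine, which is one reason production teams like seeing it spelled out. The sketch below uses the four modes from the text; the trigger events and transitions are illustrative assumptions, not an engine spec.

```python
# What each mode emphasizes, per the text.
PRIORITIES = {
    "travel":    ["navigation", "collision"],
    "combat":    ["targeting", "threat_warnings"],
    "precision": ["alignment_guides", "fine_telemetry"],
    "emergency": ["survival_actions"],
}

def next_mode(current: str, event: str) -> str:
    # Emergency always wins; otherwise simple event-driven transitions.
    if event == "critical_failure":
        return "emergency"
    transitions = {
        ("travel", "contact"):      "combat",
        ("combat", "target_lost"):  "travel",
        ("combat", "zoom"):         "precision",
        ("precision", "zoom_out"):  "combat",
    }
    return transitions.get((current, event), current)
```

A diagram of exactly this table, drawn next to the cockpit states, is often the most production-useful page in a UI pitch.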
Warning design: priority, escalation, and silence
Warnings are where readability becomes life-or-death. The first rule is priority. Not all warnings are equal. The second is escalation. A warning can begin as a subtle icon, then become a louder banner only if it becomes critical. The third is silence. A cockpit should be calm when the pilot is safe.
In diegetic UI, escalation can be shown with layered cues: a small indicator appears near a relevant system, then a directional cue appears near the canopy edge, then a central interrupt only if necessary. For example, a leg slip might first appear as a foot stability icon, then as a ground hazard highlight, then as an emergency “brace” prompt if the mech is about to fall.
This escalation language also supports audio: subtle beeps, then alarms. It supports animation: haptic buzz, then seat brace, then lock-down.
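The three-stage ladder above can be sketched as a severity mapping. Stage names follow the text; the numeric thresholds are illustrative assumptions.

```python
def escalation_stage(severity: float) -> str:
    """Map a 0-1 severity to the escalation ladder: icon -> edge cue -> interrupt."""
    if severity < 0.4:
        return "subtle_icon"         # small indicator near the relevant system
    if severity < 0.8:
        return "edge_cue"            # directional cue near the canopy edge
    return "central_interrupt"       # e.g. the emergency "brace" prompt

# A leg slip escalating as a fall becomes likely:
for s in (0.2, 0.6, 0.95):
    print(escalation_stage(s))
```

The same thresholds can drive audio (beep, alarm) and haptics (buzz, seat brace, lock-down), keeping all channels in sync.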
Readability across cameras: cockpit, third-person, and marketing
Even if your game primarily uses third-person, cockpit UI still matters. It informs the external HUD style, the narrative of piloting, and how marketing shots feel “authentic.” Conversely, if your game uses cockpit view, third-person readability still matters because spectators, enemies, and allies need to read the mech.
A production-aware concept will consider UI in at least two camera contexts: internal cockpit view and external gameplay view. You can bridge them by designing a shared icon language: the same “overheat” symbol appears inside the canopy and on the mech’s external heat vents; the same “lock” symbol appears in visor AR and in external targeting cues.
For marketing, diegetic UI is often a stylistic signature. It can be used in trailers, key art overlays, and UI motion graphics. If you define the language early—line weight, shapes, icon motifs—marketing can build a cohesive brand.
Material and lighting: the UI must survive glare
Diegetic UI exists in a physical world. Canopy glass has reflections, scratches, raindrops, dust, and glare. If you ignore this, the UI can feel pasted on. If you overdo it, the UI becomes unreadable. The sweet spot is to show that the UI is aware of its environment: it boosts brightness in glare, simplifies in dust storms, shifts to high-contrast in darkness.
In concept art, you can imply this with subtle interaction: UI elements glow stronger near bright exterior scenes, or they switch to thicker shapes when visibility drops. You can also show UI “depth” by layering: some elements appear close to the glass, others appear deeper, giving the cockpit a volumetric feel.
For production, this translates into shader and VFX guidance. The concept doesn’t have to solve the shader, but it should communicate the intention: readable first, physical second.
Accessibility: color, shape, motion, and comfort
Readability at a glance must include accessibility. Color-only communication is fragile, especially for color vision deficiency or bright environments. Strong diegetic UI uses redundant cues: color plus shape, icon plus position, animation plus audio.
Motion can also be a comfort issue. Flickering, fast parallax, or jittery UI can cause discomfort. A production-friendly concept suggests stable anchors, limited motion, and configurable density. Some games offer “minimal HUD,” “reduced motion,” and “high contrast” modes; your diegetic UI can support these without breaking the fiction by framing them as pilot preferences or system profiles.
Even simple notes like “UI density scales down at high speed” or “reticle stays stable during shake” communicate that you’re thinking about players, not just visuals.
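A note like "UI density scales down at high speed" can be made concrete as a density multiplier that also honors a player's accessibility settings. Everything here is an illustrative sketch—the speed thresholds and the minimal-HUD factor are assumed values.

```python
def ui_density(speed_kph: float, minimal_hud: bool = False) -> float:
    """Return a 0-1 multiplier on how much overlay detail is shown."""
    base = 0.4 if minimal_hud else 1.0   # player preference, framed as a pilot profile
    if speed_kph > 80:
        return base * 0.5    # fast: collapse to anchors and critical warnings
    if speed_kph > 40:
        return base * 0.75   # moderate: trim secondary overlays
    return base              # slow: full detail allowed
```

Framing the setting as a "pilot profile" keeps the accessibility option inside the fiction, as the text suggests.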
Concepting-side deliverables: how to pitch diegetic UI clearly
For concepting, the most valuable deliverable is not a single pretty cockpit painting; it’s a small set of readability proofs. A canopy overlay sheet can show anchors, exclusion zones, and a few key states. A mode strip can show travel vs combat vs emergency overlays. A “glance test” frame can show what information is readable if you squint or thumbnail the image.
You can also include a minimal icon set—ten or fifteen symbols—that establishes the cockpit’s UI language. Keep it consistent with faction motifs and tech level. A disciplined icon set makes everything feel engineered.
If you’re collaborating with UI artists, provide a “diegetic rules” note: what is world-locked, what is head-locked, what is allowed to occlude, and what must never occlude. These rules are more valuable than decorative detail.
Production-side handoff: what teams will ask you later
In production, teams will ask: Where are the cameras? What is the FOV? What is the canopy geometry? What elements are physical screens vs AR? How does UI behave in dust, night, and damage states? What is the state logic and what triggers it?
You can pre-answer many of these with a single cockpit UI package: a cockpit view with overlay callouts, a canopy-only overlay diagram, and a damage/emergency variant. Provide notes on anchor positions and a few “do not occlude” zones. If your cockpit uses visor AR, note how it persists outside the cockpit.
Also consider diegetic failure states: UI dropouts, sensor blackout, cracked canopy distortion, partial overlay loss. These are dramatic moments and also practical design concerns. If you include them early, they become intentional setpieces rather than late bugs.
Common mistakes (and why they happen)
A common mistake is designing UI as decoration: too many lines, too much text, no hierarchy. This happens because linework looks cool. The fix is to enforce glance layers and anchors. Another mistake is placing UI over the most important exterior information—targets, ground obstacles, horizon. The fix is exclusion zones and world-locking logic.
Another mistake is inconsistent state language: the UI changes shape and position constantly, so the pilot has nothing stable to rely on. The fix is anchors and predictable transitions. A final mistake is ignoring accessibility and comfort—color-only meaning, excessive motion, or high-frequency flicker. The fix is redundant cues and adjustable density.
A repeatable workflow: design the glance layer first
If you want a reliable process, design the canopy AR in this order. First, place anchors: horizon, reticle, status band. Second, define exclusion zones and edge regions for dense data. Third, add minimal target info and navigation cues. Fourth, design warning escalation. Fifth, create two mode states and ensure the pilot can tell them apart instantly. Sixth, validate with a thumbnail/squint test.
Once the glance layer works, you can add beauty: icon motifs, animated sweeps, depth layers, subtle reflections. The strongest diegetic UI concepts feel like a disciplined system that happens to be stylish—not a stylish overlay that happens to show data.
When you get diegetic UI and AR overlays right, the cockpit becomes more than a setting. It becomes a character: the machine’s voice in the pilot’s ear, a visual language of competence under pressure, and a production-ready blueprint for how humans and mechs truly collaborate.