# LCIE – Live Character Interaction Engine (Unity Prototype)

> Unity 2022.3.62f2 prototype  
> Modular architecture for live 3D/holographic characters, synchronized performances, and AI-driven interactions.  
> This build establishes the core engine – **SPM, CEM, DSM, MCM, and UIM** – within one Unity project.  
> **Multitrack MIDI** is the canonical show format for animation timing and synchronization.

---

## ⚙️ Table of Contents

1. [Overview](#1-overview)
2. [High-Level Architecture](#2-high-level-architecture)
3. [Unity Setup & Conventions](#3-unity-setup--conventions)
4. [Folder Structure](#4-folder-structure)
5. [Show File Format (Multitrack MIDI)](#5-show-file-format--multitrack-midi)
6. [Show Performance Module (SPM)](#6-show-performance-module-spm)
7. [Character Engine Module (CEM)](#7-character-engine-module-cem)
8. [Main Control Module (MCM)](#8-main-control-module-mcm)
9. [Data Storage Module (DSM)](#9-data-storage-module-dsm)
10. [User Interface Module (UIM)](#10-user-interface-module-uim)
11. [Networking (Unity + Node.js microservices)](#11-networking-unity--nodejs)
12. [Node.js Skeletons (CEM & DSM servers)](#12-nodejs-skeletons)
13. [Character State Management & Relative Movements](#13-character-state-management--relative-movements)
14. [Development Workflow & CI](#14-development-workflow--ci)
15. [Copilot Prompts](#15-copilot-prompts)
16. [Roadmap](#16-roadmap)
17. [Appendices: Code & Data Samples](#17-appendices)

---

## 1. Overview

The **Live Character Interaction Engine (LCIE)** is a modular framework for real-time animated character systems that combine:

- Pre-scripted performances (via MIDI show files)
- Real-time interaction through AI modules
- Unified orchestration via networked modules

In this prototype, everything runs inside **one Unity project** for development simplicity.  
Later stages will deploy individual modules (like CEM and DSM) as standalone **Node.js microservices** communicating via HTTP/WebSocket.

---

## 2. High-Level Architecture

```
┌────────────────┐      ┌────────────────┐      ┌────────────────┐
│      UIM       │◄────▶│      MCM       │◄────▶│      DSM       │
│   (Unity UI)   │      │ (Coordinator)  │      │  (File/Asset)  │
└────────────────┘      └───────┬────────┘      └───────┬────────┘
                                │                       │
                        ┌───────▼────────┐      ┌───────▼────────┐
                        │      SPM       │◄────▶│      CEM       │
                        │ (Show/Render)  │      │ (AI/Dialogue)  │
                        └────────────────┘      └────────────────┘
```

### Modules Summary

| Module  | Description                                                                                                          |
| ------- | -------------------------------------------------------------------------------------------------------------------- |
| **SPM** | Show Performance Module – loads and executes multitrack MIDI files, driving poses and animations in sync with audio. |
| **CEM** | Character Engine Module – manages AI-driven personality and dialogue, sending animation hints back to SPM.           |
| **MCM** | Main Control Module – orchestrates all other modules, coordinates show playback and interactions.                    |
| **DSM** | Data Storage Module – provides access to show files, audio, and character assets.                                    |
| **UIM** | User Interface Module – operator console and monitoring dashboard.                                                   |

---

## 3. Unity Setup & Conventions

**Unity version:** `2022.3.62f2`  
**Api Compatibility Level:** .NET Framework

### Key Packages

- [`Melanchall.DryWetMidi`](https://github.com/melanchall/drywetmidi) – for MIDI parsing and multitrack show handling.
- `com.unity.animation.rigging` – for pose blending and constraint-based animation.
- `com.unity.inputsystem` – for operator controls.

### Assembly Definitions

Each module has its own `.asmdef`:

```
LCIE.SPM.asmdef
LCIE.CEM.asmdef
LCIE.MCM.asmdef
LCIE.DSM.asmdef
LCIE.UIM.asmdef
```

### Coding Guidelines

- **PascalCase** for C# scripts
- Keep each module's scripts within its own folder
- Shared DTOs and network contracts go in `Assets/LCIE/Contracts/`
- Use structured logging for debugging across modules
- Implement character state tracking within character controller components to support relative movements and contextual pose application

---

## 4. Folder Structure

```
/LCIE-Unity-Prototype/
├─ README.md
├─ ProjectSettings/
├─ Assets/
│  ├─ LCIE/
│  │  ├─ SPM/
│  │  │  ├─ Scripts/
│  │  │  │  ├─ MidiShowLoader.cs
│  │  │  │  ├─ ShowSequencer.cs
│  │  │  │  ├─ Interpolator.cs
│  │  │  │  └─ SPMController.cs
│  │  │  ├─ Prefabs/
│  │  │  ├─ Shows/
│  │  │  └─ LCIE.SPM.asmdef
│  │  ├─ CEM/
│  │  │  ├─ Scripts/
│  │  │  │  ├─ CEMController.cs
│  │  │  │  └─ CEMPlaceholderAI.cs
│  │  │  └─ LCIE.CEM.asmdef
│  │  ├─ MCM/
│  │  │  ├─ Scripts/
│  │  │  │  ├─ MCMCoordinator.cs
│  │  │  │  └─ NetworkLayer.cs
│  │  │  └─ LCIE.MCM.asmdef
│  │  ├─ DSM/
│  │  │  ├─ Scripts/
│  │  │  │  └─ DSMClient.cs
│  │  │  └─ LCIE.DSM.asmdef
│  │  ├─ UIM/
│  │  │  ├─ Scripts/
│  │  │  │  └─ OperatorConsole.cs
│  │  │  └─ LCIE.UIM.asmdef
│  │  ├─ Characters/
│  │  │  ├─ Scripts/
│  │  │  │  ├─ CharacterController.cs
│  │  │  │  └─ CharacterStateManager.cs
│  │  │  ├─ Models/
│  │  │  └─ Animations/
│  │  ├─ Contracts/
│  │  │  ├─ DTOs/
│  │  │  │  ├─ AnimationCommand.cs
│  │  │  │  ├─ CEMResponse.cs
│  │  │  │  ├─ SystemStatus.cs
│  │  │  │  └─ CharacterState.cs
│  │  │  ├─ PoseMap.json
│  │  │  └─ NetworkSchema.md
│  │  └─ Tests/
│  │     └─ SPM_Tests/
│  └─ Plugins/
│     └─ Melanchall.DryWetMidi.dll
├─ tools/
│  └─ node-skeleton/
│     ├─ cem-server/
│     ├─ dsm-server/
│     └─ package.json
└─ .gitignore
```

---

## 5. Show File Format (🎹 Multitrack MIDI)

The LCIE uses **MIDI files (`.mid`)** as its **primary show format**. The architecture separates musical performance concerns from character animation concerns through a dual-track system that recognizes the distinct kinematic requirements of each domain.

Each MIDI track represents a specific performance element. The system distinguishes between instrument performance tracks, which encode the fine-grained hand, finger, and foot positions necessary for authentic instrumental playing, and character performance tracks, which manage body language, posture, and performer positioning that overlays the instrumental mechanics.

### Track Organization

| Track Type | Purpose | Example |
| ---------- | ------- | ------- |
| Track 0 | Global tempo and metadata | Tempo map, time signature |
| Track 1 | Instrument pose performance (hands, fingers, feet) | Guitar fingering positions, drummer pedal positions |
| Track 2 | Character pose performance (body, posture, positioning) | Standing upright, leaning forward, torso rotation |
| Track 3–N | Additional characters, instruments, or effects | Secondary character or lighting triggers |

### Pose Classification System

Poses are categorized according to their domain and purpose within the performance architecture. This classification ensures that the SPM routes poses to appropriate subsystems and applies them with their intended semantics.

**Instrument Performance Poses** represent kinematic requirements for producing authentic instrumental sounds and techniques. These poses correspond to specific hand positions, finger placements, or playing techniques and are subject to smooth interpolation as the character transitions between musical gestures. Examples include fingering positions on a guitar fretboard, hand positions on a piano keyboard, or striking techniques on percussion instruments.

**Character Performance Poses** represent body language, postural shifts, and performer positioning that create the visual narrative and emotional context of the performance. These poses include standing postures, leaning motions, torso rotations, and other full-body positioning elements. Unlike instrument poses, character poses may require state awareness in certain choreographic contexts, though the primary musical performance paradigm emphasizes discrete, interpolated positions rather than state-dependent transitions.

### MIDI Mapping

The MIDI note representation remains consistent across both pose categories. Note pitch identifies the specific pose, velocity expresses intensity or emphasis, note length determines animation duration, and note start time specifies playback timing based on the tempo map. The classification system and track organization provide context for interpreting these values appropriately within each domain.

- **Note pitch** → Pose identifier (`C3` = "Idle", `D3` = "Strum", `E3` = "LeftHandPosition1")
- **Velocity (0–127)** → Expression intensity or emphasis level
- **Note length** → Animation duration in seconds (converted via tempo map)
- **Note start** → Playback time in seconds (converted via tempo map)

### Example PoseMap.json with Domain Classification

```json
{
  "C3": {
    "name": "RestPosition",
    "domain": "instrument",
    "description": "Both hands at rest on instrument"
  },
  "D3": {
    "name": "LeftHandPosition1",
    "domain": "instrument",
    "description": "Left hand fingering position for first chord"
  },
  "E3": {
    "name": "RightHandStrum",
    "domain": "instrument",
    "description": "Right hand positioned for downstroke"
  },
  "F3": {
    "name": "Standing",
    "domain": "character",
    "description": "Performer standing upright, neutral posture"
  },
  "G3": {
    "name": "LeanForward",
    "domain": "character",
    "description": "Performer leaning forward with intensity"
  },
  "A3": {
    "name": "TorsoRotateLeft",
    "domain": "character",
    "description": "Upper body rotation to stage left"
  }
}
```
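
A minimal sketch of how the SPM might resolve an incoming MIDI note against this map. `PoseMapEntry` and `PoseMapResolver` are illustrative names, not repo files, and note naming conventions vary between tools (here middle C = 60 = `C4`), so treat the octave offset as an assumption:

```csharp
using System.Collections.Generic;

namespace LCIE.SPM {
    // Hypothetical DTO matching the PoseMap.json entries above.
    [System.Serializable]
    public class PoseMapEntry {
        public string name;
        public string domain;      // "instrument" or "character"
        public string description;
    }

    public static class PoseMapResolver {
        private static readonly string[] NoteNames =
            { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B" };

        // Convert a MIDI note number to a name like "C3".
        // Assumes middle C (60) = "C4"; some DAWs label 60 as "C3" instead.
        public static string NoteNumberToName(int noteNumber) =>
            NoteNames[noteNumber % 12] + ((noteNumber / 12) - 1);

        // Look up the pose entry for a note, or null if unmapped.
        public static PoseMapEntry Resolve(
            IReadOnlyDictionary<string, PoseMapEntry> poseMap, int noteNumber) =>
            poseMap.TryGetValue(NoteNumberToName(noteNumber), out var entry)
                ? entry
                : null;
    }
}
```

The `domain` field then drives routing: instrument entries go to the interpolating instrument subsystem, character entries to the body-language subsystem.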

### JSON Fallback

If MIDI parsing is unavailable, show data can be represented in JSON format. See `Docs/ShowFileExamples/example_show_01.json`.

---

## 6. Show Performance Module (SPM)

### Responsibilities

The SPM orchestrates the playback and interpolation of both instrument performance poses and character performance poses in synchronized alignment with audio playback. The module maintains clear separation between these two domains while ensuring temporal coherence across both performance streams.

- Parse MIDI show files via DryWetMIDI, classifying poses according to their domain (instrument or character)
- Convert tracks to domain-specific event timelines
- Manage instrument performance pose playback with smooth interpolation to create authentic instrumental technique visualization
- Manage character performance pose playback with appropriate animation transitions
- Interpolate poses smoothly using Animation Rigging or Animator.CrossFade, with domain-specific interpolation strategies
- Sync visuals with audio (AudioSource.time or AudioSettings.dspTime)
- Output to projectors/holographic displays
- Route instrument and character poses to appropriate subsystems within the character controller

### Core Classes

| Script              | Purpose                                        |
| ------------------- | ---------------------------------------------- |
| `MidiShowLoader.cs` | Parses .mid file, classifies poses by domain, generates internal event lists |
| `ShowSequencer.cs`  | Manages playback timeline with domain awareness |
| `Interpolator.cs`   | Handles pose blending with domain-specific strategies |
| `SPMController.cs`  | Entry point for MCM commands, routes poses to appropriate subsystems |
| `InstrumentPoseHandler.cs` | Manages instrument performance pose playback and interpolation |
| `CharacterPoseHandler.cs` | Manages character performance pose playback |

### Example: MidiShowLoader.cs

```csharp
using System.Collections.Generic;
using System.Linq;
using Melanchall.DryWetMidi.Core;
using Melanchall.DryWetMidi.Interaction;

namespace LCIE.SPM {
    public static class MidiShowLoader {
        public static List<MidiEventData> Load(string path) {
            var midi = MidiFile.Read(path);
            var tempoMap = midi.GetTempoMap();
            var events = new List<MidiEventData>();

            foreach (var track in midi.GetTrackChunks()) {
                string trackName = track.Events.OfType<SequenceTrackNameEvent>().FirstOrDefault()?.Text ?? "Unnamed";
                foreach (var note in track.GetNotes()) {
                    var start = note.TimeAs<MetricTimeSpan>(tempoMap).TotalSeconds;
                    var dur = note.LengthAs<MetricTimeSpan>(tempoMap).TotalSeconds;
                    events.Add(new MidiEventData {
                        Track = trackName,
                        Pitch = note.NoteNumber,
                        Velocity = note.Velocity,
                        Start = (float)start,
                        Duration = (float)dur
                    });
                }
            }
            return events;
        }
    }

    public struct MidiEventData {
        public string Track;
        public int Pitch;
        public int Velocity;
        public float Start;
        public float Duration;
    }
}
```

### Audio Sync

- Use `AudioSource.PlayScheduled(AudioSettings.dspTime + offset)` for precise timing
- MCM broadcasts `playAt` timestamps to align modules
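
A minimal sketch of this scheduling pattern. `ShowSequencer` appears in the folder layout above, but its `StartAt` method is an assumption here, not a confirmed API:

```csharp
using UnityEngine;

namespace LCIE.SPM {
    // Sketch: start audio and the pose timeline against the same DSP clock.
    // ShowSequencer.StartAt(double dspTime) is assumed, not a confirmed API.
    public class ShowStarter : MonoBehaviour {
        [SerializeField] private AudioSource showAudio;
        [SerializeField] private ShowSequencer sequencer;

        public void PlayShow(double leadInSeconds = 0.5) {
            // Schedule slightly in the future so both systems can prepare.
            double startDsp = AudioSettings.dspTime + leadInSeconds;
            showAudio.PlayScheduled(startDsp);
            sequencer.StartAt(startDsp); // drive the pose timeline from the same clock
        }
    }
}
```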

---

## 7. Character Engine Module (CEM)

### Prototype

In early builds, `CEMPlaceholderAI` is a lightweight dialogue stub returning canned responses or rule-based replies.

### Future Integration

CEM will run as a Node.js microservice or local model backend.  
Unity will communicate over HTTP or WebSocket using the following schema:

**Request:**

```json
{
  "characterId": "fatz",
  "prompt": "Guest said: 'What's your favorite song?'"
}
```

**Response:**

```json
{
  "dialogue": "I love the blues!",
  "animationHint": { "pose": "smile", "intensity": 0.8, "duration": 1.0 }
}
```
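
The repo layout lists `CEMResponse.cs` under `Contracts/DTOs/`; a sketch of DTOs matching the schema above (the exact field shapes are assumptions), usable with Unity's `JsonUtility`:

```csharp
using System;

namespace LCIE.Contracts.DTOs {
    // Sketch DTOs mirroring the CEM request/response JSON above.
    [Serializable]
    public class CEMRequest {
        public string characterId;
        public string prompt;
    }

    [Serializable]
    public class AnimationHint {
        public string pose;
        public float intensity;
        public float duration;
    }

    [Serializable]
    public class CEMResponse {
        public string dialogue;
        public AnimationHint animationHint;
    }
}
```

Unity can then deserialize a reply with `JsonUtility.FromJson<CEMResponse>(json)` and forward the `animationHint` to SPM.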

---

## 8. Main Control Module (MCM)

### Role

The orchestrator that:

- Coordinates show playback
- Manages network events
- Switches between show and interactive modes
- Keeps all modules synchronized

### Public API

| Endpoint                         | Method | Purpose                    |
| -------------------------------- | ------ | -------------------------- |
| `/api/v1/shows/load`             | POST   | Load a new show            |
| `/api/v1/shows/play`             | POST   | Begin show playback        |
| `/api/v1/characters/:id/message` | POST   | Forward guest input to CEM |
| `/api/v1/status`                 | GET    | Return system health       |
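
A usage sketch driving these endpoints from Unity with the `NetworkLayer` helper from Appendix B; the base URL, `showId`, and payload shapes are illustrative, not a fixed contract:

```csharp
using System.Threading.Tasks;
using UnityEngine;

// Illustrative calls against the MCM API using the Appendix B helper.
public static class McmApiExample {
    private const string BaseUrl = "http://localhost:4000";

    public static async Task RunShow() {
        // Load a show, start playback, then poll system health.
        await NetworkLayer.PostAsync($"{BaseUrl}/api/v1/shows/load",
            "{\"showId\":\"example_show_01\"}");
        await NetworkLayer.PostAsync($"{BaseUrl}/api/v1/shows/play", "{}");

        string status = await NetworkLayer.GetAsync($"{BaseUrl}/api/v1/status");
        Debug.Log($"MCM status: {status}");
    }
}
```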

---

## 9. Data Storage Module (DSM)

Provides access to all show files and assets.

- **Prototype:** Local file reads or Node.js static server
- **Future:** Cloud-accessible network file system with redundancy

### Example Endpoints (Node.js)

```
GET /api/v1/shows/:id       → returns MIDI or JSON
GET /api/v1/assets/:path    → returns file stream
POST /api/v1/upload/:type   → upload new asset
```
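
On the Unity side, the `DSMClient.cs` listed in the folder structure could wrap these endpoints. A minimal sketch built on the Appendix B helper (the method names are assumptions, and binary `.mid` payloads would need a byte-based download handler rather than the text helper shown):

```csharp
using System.Threading.Tasks;

namespace LCIE.DSM {
    // Sketch of DSMClient wrapping the DSM REST endpoints above.
    public static class DSMClient {
        private const string BaseUrl = "http://localhost:3001";

        // Fetch a show file (JSON fallback) by id.
        public static Task<string> GetShowAsync(string showId) =>
            NetworkLayer.GetAsync($"{BaseUrl}/api/v1/shows/{showId}");

        // Query DSM health for the UIM status display.
        public static Task<string> GetStatusAsync() =>
            NetworkLayer.GetAsync($"{BaseUrl}/api/v1/status");
    }
}
```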

---

## 10. User Interface Module (UIM)

Unity-based operator console.

### Functions

- Load/select shows
- Start/stop control
- Monitor system health
- View logs and CEM replies
- Emergency override (stop behavior instantly)

### Key Features

- Real-time status display
- Show library browser
- Character personality editor (preview)
- Network diagnostics panel
- Show performance profiler

---

## 11. Networking (Unity + Node.js)

### Transport

- **HTTP** for configuration and asset loading
- **WebSocket** for live events and CEM responses

### Pattern

1. On startup, all modules register with MCM
2. MCM broadcasts play states and timestamps
3. CEM and SPM communicate via WebSocket to reduce latency
4. DSM provides RESTful asset access
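
A sketch of step 1; note the `/api/v1/register` endpoint and payload are hypothetical, since the MCM API table in section 8 does not define registration:

```csharp
using UnityEngine;

// Hypothetical startup registration with MCM; endpoint and payload are assumptions.
public class ModuleRegistrar : MonoBehaviour {
    [SerializeField] private string mcmBaseUrl = "http://localhost:4000";
    [SerializeField] private string moduleName = "DSM";

    private async void Start() {
        string payload = $"{{\"module\":\"{moduleName}\",\"status\":\"ready\"}}";
        try {
            await NetworkLayer.PostAsync($"{mcmBaseUrl}/api/v1/register", payload);
            Debug.Log($"{moduleName} registered with MCM.");
        } catch (System.Exception e) {
            Debug.LogError($"{moduleName} registration failed: {e.Message}");
        }
    }
}
```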

### Default Ports

| Module      | Port     | Protocol  |
| ----------- | -------- | --------- |
| DSM         | 3001     | HTTP      |
| CEM         | 3002     | HTTP / WS |
| MCM (Unity) | 4000     | HTTP / WS |
| SPM (Unity) | internal | –         |

---

## 12. Node.js Skeletons

### cem-server/index.js

```javascript
const express = require("express");
const expressWs = require("express-ws");
const app = express();
expressWs(app);

app.use(express.json());

app.post("/api/v1/ask", (req, res) => {
  const { prompt, characterId } = req.body;

  // Placeholder logic – integrate real LLM here
  res.json({
    dialogue: `You said: ${prompt}`,
    animationHint: {
      pose: "smile",
      intensity: 0.8,
      duration: 1.0,
    },
  });
});

app.ws("/api/v1/events", (ws, req) => {
  ws.on("message", (msg) => {
    console.log("CEM received:", msg);
    ws.send(
      JSON.stringify({
        status: "acknowledged",
      })
    );
  });
});

app.listen(3002, () => console.log("CEM running on port 3002"));
```

### dsm-server/index.js

```javascript
const express = require("express");
const path = require("path");
const app = express();

app.use("/api/v1/shows", express.static(path.join(__dirname, "shows")));
app.use("/api/v1/assets", express.static(path.join(__dirname, "assets")));

app.get("/api/v1/status", (req, res) => {
  res.json({
    status: "operational",
    timestamp: new Date().toISOString(),
  });
});

app.listen(3001, () => console.log("DSM running on port 3001"));
```

---

## 13. Character State Management & Relative Movements

### Overview and Applicability

The LCIE architecture was engineered primarily for musical performance contexts where characters produce instrumental sounds and music. In this domain, the stateless pose model is architecturally appropriate and optimal. Each pose represents a discrete kinematic configuration necessary for authentic instrumental technique, and the interpolation system creates smooth transitions between these positions. The instrument performance track operates entirely within this paradigm with no requirement for state awareness.

Character performance poses, which manage body language and performer positioning through the dedicated character performance track, also operate within a primarily stateless model that aligns with the core LCIE design philosophy. However, certain choreographic scenarios beyond the primary musical performance use case may benefit from explicit state tracking to support relative movements and contextual pose application. Military drill routines exemplify such scenarios, where movements like "About Face" are inherently relative to current orientation and must correctly accumulate across multiple executions.

This section documents state management for developers implementing such specialized choreography. For standard musical performance contexts, the stateless design is preferred and requires no state management overhead.

### When State Management is Necessary

State management becomes relevant when choreography involves relative transformations dependent on character configuration history. Examples include military drill sequences, dance choreography with compass-directional positioning, or specialized performance types where movements reference preceding states. For most musical performance systems, the dual-track architecture with instrument performance poses and character performance poses operates effectively without state tracking.

### Character State Architecture

When implementing choreography requiring state awareness, each character requires a state manager component that maintains awareness of the character's current configuration. This component tracks orientation, posture categories, and other contextual information relevant to state-dependent transformations.

### CharacterStateManager Implementation

Create a new component `CharacterStateManager.cs` in `Assets/LCIE/Characters/Scripts/` that maintains the character's state. This component tracks the current orientation (expressed in discrete 90-degree increments or continuous rotation values), current posture category (standing, sitting, performing, etc.), and other metadata necessary for relative transformations.

```csharp
using UnityEngine;

namespace LCIE.Characters {
    public class CharacterStateManager : MonoBehaviour {
        private Quaternion currentOrientation = Quaternion.identity;
        private string currentPosture = "Standing";
        private float currentIntensity = 1.0f;

        public Quaternion CurrentOrientation => currentOrientation;
        public string CurrentPosture => currentPosture;
        public float CurrentIntensity => currentIntensity;

        public void UpdateOrientation(Quaternion newOrientation) {
            currentOrientation = newOrientation;
        }

        public void UpdatePosture(string postureName) {
            currentPosture = postureName;
        }

        public void UpdateIntensity(float intensity) {
            currentIntensity = Mathf.Clamp01(intensity);
        }

        public void ApplyRelativeRotation(float degreesY) {
            Quaternion relativeRotation = Quaternion.Euler(0, degreesY, 0);
            currentOrientation = relativeRotation * currentOrientation;
            transform.rotation = currentOrientation;
        }

        public void ApplyAbsoluteRotation(float degreesY) {
            currentOrientation = Quaternion.Euler(0, degreesY, 0);
            transform.rotation = currentOrientation;
        }

        public void ResetToDefault() {
            currentOrientation = Quaternion.identity;
            currentPosture = "Standing";
            currentIntensity = 1.0f;
            transform.rotation = Quaternion.identity;
        }
    }
}
```

### Pose Classification

Classify poses within your PoseMap.json into categories based on their relationship to character state: absolute poses (which ignore current state and apply a fixed configuration), relative rotational poses (which rotate from the current orientation), and compound poses (which consist of multiple sequential movements that depend on intermediate states).

```json
{
  "PresentArms": {
    "type": "absolute",
    "animationState": "PresentArms",
    "description": "Military present arms posture"
  },
  "OrderArms": {
    "type": "absolute",
    "animationState": "OrderArms",
    "description": "Military order arms posture"
  },
  "AboutFace": {
    "type": "relativeRotation",
    "rotation": 180,
    "animationDuration": 1.0,
    "description": "Military about face - 180 degree rotation"
  },
  "RightFace_Count1": {
    "type": "relativeRotation",
    "rotation": 90,
    "isMultiCount": true,
    "countPosition": 1,
    "animationDuration": 0.5,
    "description": "Military right face - first count of two-count movement"
  },
  "RightFace_Count2": {
    "type": "relativeRotation",
    "rotation": 0,
    "isMultiCount": true,
    "countPosition": 2,
    "animationDuration": 0.5,
    "description": "Military right face - second count completing the movement"
  },
  "LeftFace_Count1": {
    "type": "relativeRotation",
    "rotation": -90,
    "isMultiCount": true,
    "countPosition": 1,
    "animationDuration": 0.5
  },
  "LeftFace_Count2": {
    "type": "relativeRotation",
    "rotation": 0,
    "isMultiCount": true,
    "countPosition": 2,
    "animationDuration": 0.5
  },
  "ParadeRest": {
    "type": "absolute",
    "animationState": "ParadeRest",
    "description": "Military parade rest posture"
  }
}
```

### SPM Integration with State Management

The SPM must be enhanced to interact with character state managers rather than applying poses in a purely stateless manner. Modify `SPMController.cs` to accept character state managers and route pose commands through contextual logic.

```csharp
using UnityEngine;
using LCIE.Characters;
using LCIE.Contracts.DTOs;

namespace LCIE.SPM {
    public class SPMController : MonoBehaviour {
        private CharacterStateManager characterState;

        public void SetCharacterStateManager(CharacterStateManager stateManager) {
            characterState = stateManager;
        }

        public void ApplyPose(string poseName, PoseDefinition poseDefinition) {
            if (characterState == null) {
                Debug.LogWarning("Character state manager not set. Applying pose statelessly.");
                ApplyPoseStateless(poseName, poseDefinition);
                return;
            }

            switch (poseDefinition.Type) {
                case "absolute":
                    ApplyAbsolutePose(poseName, poseDefinition);
                    break;
                case "relativeRotation":
                    ApplyRelativeRotationPose(poseName, poseDefinition);
                    break;
                default:
                    Debug.LogWarning($"Unknown pose type: {poseDefinition.Type}");
                    break;
            }
        }

        private void ApplyAbsolutePose(string poseName, PoseDefinition def) {
            Animator animator = characterState.GetComponent<Animator>();
            if (animator != null) {
                animator.SetTrigger(def.AnimationState);
                characterState.UpdatePosture(poseName);
            }
        }

        private void ApplyRelativeRotationPose(string poseName, PoseDefinition def) {
            characterState.ApplyRelativeRotation(def.Rotation);
            Animator animator = characterState.GetComponent<Animator>();
            if (animator != null && def.AnimationState != null) {
                animator.SetTrigger(def.AnimationState);
            }
        }

        private void ApplyPoseStateless(string poseName, PoseDefinition def) {
            // Fallback for cases without state manager
            Debug.Log($"Applying pose statelessly: {poseName}");
        }
    }
}
```
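
`PoseDefinition` is referenced above but not shown in the folder listing; a minimal sketch matching the fields the controller uses and the PoseMap keys from the classification example (the name and its placement in `Contracts/DTOs` are assumptions):

```csharp
using System;

namespace LCIE.Contracts.DTOs {
    // Sketch only: fields inferred from SPMController and the PoseMap.json example.
    [Serializable]
    public class PoseDefinition {
        public string Type;            // "absolute" or "relativeRotation"
        public string AnimationState;  // Animator trigger for absolute poses
        public float Rotation;         // Y-axis degrees for relative rotations
        public bool IsMultiCount;
        public int CountPosition;
        public float AnimationDuration;
    }
}
```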

### Multi-Count Movement Sequencing

Multi-count movements require careful sequencing in your MIDI show file. For a two-count "Right Face" movement, map the two component poses to consecutive MIDI notes with appropriate timing. Ensure the duration of each note corresponds exactly to the count interval in your drill cadence. The SPM will trigger each pose sequentially, and the CharacterStateManager will accumulate the rotations, producing the expected compound movement.

Document multi-count movements within the PoseMap.json using the `isMultiCount` and `countPosition` fields so choreographers understand which poses must be sequenced together. Consider adding validation logic to your MIDI loader to warn when multi-count poses are sequenced incorrectly or separated by excessively long intervals.
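
A sketch of that validation pass, assuming the event list is sorted by start time and that a pitch-to-pose lookup and the `_Count1`/`_Count2` naming convention are available:

```csharp
using System.Collections.Generic;
using UnityEngine;

namespace LCIE.SPM {
    // Sketch of the loader validation suggested above: warn when a "_Count1"
    // pose is not followed by its "_Count2" partner within a tolerance.
    // The pitch-to-pose lookup and naming convention are assumptions.
    public static class MultiCountValidator {
        public static void Validate(
            IReadOnlyList<MidiEventData> events,          // sorted by Start
            IReadOnlyDictionary<int, string> pitchToPose,
            float maxGapSeconds = 2.0f) {
            for (int i = 0; i < events.Count; i++) {
                if (!pitchToPose.TryGetValue(events[i].Pitch, out var pose) ||
                    !pose.EndsWith("_Count1"))
                    continue;

                string expected = pose.Replace("_Count1", "_Count2");
                bool paired = false;
                for (int j = i + 1; j < events.Count; j++) {
                    if (events[j].Start - events[i].Start > maxGapSeconds) break;
                    if (pitchToPose.TryGetValue(events[j].Pitch, out var next) &&
                        next == expected) { paired = true; break; }
                }
                if (!paired)
                    Debug.LogWarning(
                        $"Multi-count pose '{pose}' at {events[i].Start:F2}s has no " +
                        $"'{expected}' within {maxGapSeconds}s.");
            }
        }
    }
}
```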

### Show File Considerations

When constructing MIDI show files, be explicit about the sequencing of relative movements. A show file that executes "About Face" twice should contain two MIDI note events for the "AboutFace" pose spaced according to the desired timing. The combination of explicit sequencing in the show file and the stateful application logic in the character controller ensures that relative movements produce choreographically correct results.

---

## 14. Development Workflow & CI

### Quick Start

1. **Prepare show files:**

   - Place `.mid` show files under `Assets/LCIE/SPM/Shows/`
   - Organize show files with an instrument performance track and a character performance track
   - Ensure PoseMap.json is configured with proper pose definitions including domain classification (instrument or character)

2. **Prepare character models:**

   - Export your character from Blender as a GLB file with animation clips embedded for each pose
   - Create separate animation clips or sets for instrument performance poses and character performance poses to maintain clear domain separation
   - Place the GLB file in `Assets/LCIE/Characters/Models/`
   - Unity will automatically extract AnimationClips from the embedded animations

3. **Configure character animations:**

   - Create an Animator Controller for your character in `Assets/LCIE/Characters/Animations/`
   - Organize the Animator with separate layers or state groups for instrument performance poses and character performance poses
   - Add states corresponding to each animation clip
   - Configure transitions between states as appropriate for your choreography

4. **Determine state management requirements:**

   - For standard musical performance contexts, rely on the stateless pose architecture with interpolation between discrete positions
   - If implementing specialized choreography requiring state-dependent movements (such as military drill routines), attach a CharacterStateManager component to your character prefab
   - Document which poses are state-dependent and how they relate to prior character configurations

5. **Start Node.js servers:**

   ```bash
   cd tools/node-skeleton
   npm install
   npm start
   ```

   This launches DSM on port 3001 and CEM on port 3002.

6. **Open Unity:**

   - Load scene `LCIE_UIM_Main`
   - Ensure no compilation errors (check Assembly Definition references)
   - Add your character prefab to the scene

7. **Test the pipeline:**
   - In UIM console, click **Load Show** → select a `.mid` file
   - Click **Play**
   - Observe character animation with synchronized instrument performance on the instrument track and character positioning on the character track
   - Monitor logs for network traffic and proper domain routing
   - If using state-dependent choreography, verify that relative movements and state transitions occur correctly

### Testing Checklist

- [ ] MIDI loads without errors
- [ ] Instrument performance poses map correctly via PoseMap.json with "instrument" domain classification
- [ ] Character performance poses map correctly via PoseMap.json with "character" domain classification
- [ ] Audio plays in sync with animation
- [ ] MCM successfully coordinates show start
- [ ] Network calls succeed (check browser DevTools)
- [ ] CEM responds to dialogue prompts
- [ ] Emergency stop works from UIM
- [ ] Instrument performance track produces authentic playing technique visualization
- [ ] Character performance track manages body language and positioning independently
- [ ] Both tracks remain synchronized throughout playback
- [ ] (If using state management) Character state manager properly tracks configuration
- [ ] (If using state management) State-dependent movements accumulate correctly across multiple executions

---

## 15. Copilot Prompts

Include these in code comments for GitHub Copilot context:

```csharp
// Implement a Unity ShowSequencer that plays instrument performance poses and
// character performance poses from separate MIDI tracks, synchronized with an AudioSource.
// Route instrument poses to InstrumentPoseHandler and character poses to CharacterPoseHandler.

// Generate an InstrumentPoseHandler that manages hand, finger, and foot positions for
// authentic instrumental technique, applying smooth interpolation between discrete
// playing positions to create convincing performance mechanics.

// Generate a CharacterPoseHandler that manages body language, posture, and performer
// positioning through character performance poses, coordinating with the concurrent
// instrument performance to maintain visual coherence.

// Write an SPMController script to coordinate both instrument and character pose playback,
// classify poses according to domain, route poses to appropriate handlers, and ensure
// temporal synchronization between the two performance streams.

// Build an Interpolator class that smoothly blends between instrument poses using
// quaternion spherical interpolation (SLERP) to create authentic playing transitions,
// distinct from character pose interpolation strategies.

// Create a MidiShowLoader that parses multitrack MIDI files, classifies poses by domain
// (instrument vs character), and generates separate event timelines for each performance stream.

// Create a NetworkLayer helper that handles HTTP requests and WebSocket connections with
// proper error handling and timeout logic for communication with DSM, CEM, and MCM modules.
```

---

## 16. Roadmap

| Version | Goals                                              |
| ------- | -------------------------------------------------- |
| **v0**  | SPM with multitrack MIDI parsing, single character with state management |
| **v1**  | CEM placeholder with network layer and relative movement support         |
| **v2**  | Node-based DSM and AI-integrated CEM                |
| **v3**  | Multi-character setup (1–5 characters) with synchronized state           |
| **v4**  | Hardware/holographic output support with state persistence              |
| **v5**  | Animatronic mapping & actuator translation with contextual pose layers   |

---

## 17. Appendices

### A. Example MIDI → Event Translation with State Awareness

| Track   | Note | Pose             | Type        | Velocity | Duration | Expected State After                      |
| ------- | ---- | ---------------- | ----------- | -------- | -------- | ----------------------------------------- |
| Track 2 | C3   | Idle             | Absolute    | 64       | 1.0s     | Facing Forward (0°), Standing             |
| Track 2 | G#3  | RightFace_Count1 | RelativeRot | 127      | 0.5s     | Facing Right (90°), Intermediate          |
| Track 2 | A#3  | RightFace_Count2 | RelativeRot | 127      | 0.5s     | Facing Right (90°), Complete              |
| Track 2 | B3   | AboutFace        | RelativeRot | 100      | 1.0s     | Facing Left (270°), Rotated               |
| Track 2 | B3   | AboutFace        | RelativeRot | 100      | 1.0s     | Facing Right (90°), Back to Pre-AboutFace |

These body poses live on the character performance track (Track 2 in the dual-track scheme). Each distinct pose maps to its own note; the two `AboutFace` events reuse `B3` because they trigger the same pose, and their relative rotations accumulate: 90° → 270° → 90°.

### B. Example Unity HTTP Helper

```csharp
using UnityEngine.Networking;
using System.Threading.Tasks;
using System.Collections.Generic;

public static class NetworkLayer {
    private static readonly Dictionary<string, string> DefaultHeaders = new() {
        { "Content-Type", "application/json" }
    };

    public static async Task<string> GetAsync(string url) {
        using (var req = UnityWebRequest.Get(url)) {
            var op = req.SendWebRequest();
            while (!op.isDone) await Task.Yield();
            if (req.result != UnityWebRequest.Result.Success)
                throw new System.Exception($"GET {url} failed: {req.error}");
            return req.downloadHandler.text;
        }
    }

    public static async Task<string> PostAsync(string url, string jsonBody) {
        using (var req = new UnityWebRequest(url, "POST")) {
            byte[] bodyRaw = System.Text.Encoding.UTF8.GetBytes(jsonBody);
            req.uploadHandler = new UploadHandlerRaw(bodyRaw);
            req.downloadHandler = new DownloadHandlerBuffer();
            foreach (var kvp in DefaultHeaders) {
                req.SetRequestHeader(kvp.Key, kvp.Value);
            }

            var op = req.SendWebRequest();
            while (!op.isDone) await Task.Yield();
            if (req.result != UnityWebRequest.Result.Success)
                throw new System.Exception($"POST {url} failed: {req.error}");
            return req.downloadHandler.text;
        }
    }
}
```

### C. Example Animator State Machine Setup

For each character, set up a layered Animator with the following configuration:

The **Base Layer** manages locomotion states such as Idle and Moving. The **Animation Layer** contains performance poses including military drill poses such as Present Arms, Order Arms, Parade Rest, Left Face, Right Face, and About Face. A **Blend Tree** enables smooth blending between related poses, driven by parameters such as intensity.