Dance AI

Submit Video for Analysis

🎬
Tap to select video or drag & drop here
MP4 / MOV — up to 2.5 GB
Video Quality Report
💡
Recommended recording settings
Resolution: 1080p (1920×1080) — minimum 720p. Higher resolution improves keypoint accuracy, especially for foot and hand detail.
Frame rate: 50 or 60 fps. Ballroom movement is fast; lower frame rates (24/30 fps) cause motion blur and reduce step-detection precision.
Camera: Fixed tripod, full-body framing, dancers visible throughout. Front-facing or diagonal angle is required — the pose model and all metrics are optimised for this view. Side-view videos produce unreliable skeleton detection and inaccurate scores across all metrics; they are not suitable for analysis.
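
To verify a clip against these recommendations before uploading, read the stream metadata with ffprobe. A minimal sketch (not part of Dance AI; assumes ffprobe is installed and on PATH):

import json
import subprocess

def check_recording(path: str) -> None:
    """Report whether a clip meets the recommended resolution and frame rate."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height,avg_frame_rate",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    num, den = map(int, stream["avg_frame_rate"].split("/"))
    fps = num / den
    print(f"{stream['width']}x{stream['height']} @ {fps:.1f} fps")
    print("resolution OK" if stream["height"] >= 720 else "below the 720p minimum")
    print("frame rate OK" if fps >= 50 else "below the recommended 50 fps")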

Analysis Jobs

Couples & Session History

Registered Couples

Users

Setup

Motion tracking model

Selects the RTMPose Wholebody model used for pose estimation. Applies to all new analysis jobs. Currently running jobs are unaffected.

Additional tracking

The Wholebody model outputs 133 keypoints. These options enable analysis of the face and hand regions (indices 23–132), which are computed but unused by default. Each adds a new scored section to the analysis report.
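
For orientation, the 133 keypoints follow the standard COCO-WholeBody layout. A sketch of how the regions slice out of a per-person array (assuming keypoints arrive as a (133, 3) array of x, y, score; the index ranges below are the published COCO-WholeBody ones, not a Dance AI-specific detail):

import numpy as np

# COCO-WholeBody index ranges (133 keypoints per person)
BODY = slice(0, 17)           # 17 body joints
FEET = slice(17, 23)          # 6 foot points
FACE = slice(23, 91)          # 68 face landmarks
LEFT_HAND = slice(91, 112)    # 21 left-hand points
RIGHT_HAND = slice(112, 133)  # 21 right-hand points

def split_regions(kpts: np.ndarray) -> dict[str, np.ndarray]:
    """Split a (133, 3) keypoint array (x, y, score) into named regions."""
    return {"body": kpts[BODY], "feet": kpts[FEET], "face": kpts[FACE],
            "left_hand": kpts[LEFT_HAND], "right_hand": kpts[RIGHT_HAND]}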

Storage

New jobs land on the fast disk first, then are automatically moved to the archive disk after the configured delay. Both paths must be accessible.

hours after job finishes
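
Internally the move is a straightforward sweep. A sketch with hypothetical paths and job layout (the real scheduler and job metadata are not shown):

import shutil
import time
from pathlib import Path

FAST = Path("/mnt/fast/jobs")        # hypothetical fast-disk path
ARCHIVE = Path("/mnt/archive/jobs")  # hypothetical archive path
DELAY_HOURS = 24                     # the configured delay

def archive_finished_jobs() -> None:
    """Move job directories older than the delay from the fast disk to the archive."""
    if not ARCHIVE.is_dir():
        raise RuntimeError("archive path not accessible")  # both paths must be reachable
    cutoff = time.time() - DELAY_HOURS * 3600
    for job_dir in FAST.iterdir():
        # mtime is used here as a stand-in for the job's finish time
        if job_dir.is_dir() and job_dir.stat().st_mtime < cutoff:
            shutil.move(str(job_dir), str(ARCHIVE / job_dir.name))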

Debug overlay

When enabled, every output video shows all detected bodies as ghost skeletons, numbered centroid dots, and a bottom bar with frame index, detection counts, body_ref, and LOCKED/SCORED/FIRST tracking state. Useful for diagnosing mirror reflections and wrong-couple selection.
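
For illustration, the bottom bar could be rendered roughly like this OpenCV sketch (layout and field order follow the description above; the actual renderer may differ):

import cv2
import numpy as np

def draw_debug_bar(frame: np.ndarray, frame_idx: int, detections: int,
                   body_ref: int, state: str) -> np.ndarray:
    """Draw a black status bar with the diagnostic fields along the bottom edge."""
    h, w = frame.shape[:2]
    cv2.rectangle(frame, (0, h - 28), (w, h), (0, 0, 0), -1)  # solid bar background
    text = f"frame {frame_idx} | det {detections} | body_ref {body_ref} | {state}"
    cv2.putText(frame, text, (8, h - 9), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (255, 255, 255), 1, cv2.LINE_AA)
    return frame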

AI / LLM spend

Cumulative costs for all Claude API calls made by this server (coaching text generation and AI chat).

Social login (OAuth 2.0)

Allows users to sign in with Google, Microsoft, or Facebook in addition to their email and password. Credentials are set in server/.env — no code changes required, only a restart.

Set OAUTH_REDIRECT_BASE in server/.env to generate this URI.
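
For example (hypothetical domain; the generated callback URI is shown in the UI once the variable is set):

OAUTH_REDIRECT_BASE=https://dance.example.com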

How to configure each provider
Google
  1. Open Google Cloud Console → APIs & Services → Credentials
  2. Create OAuth 2.0 Client ID → Application type: Web application
  3. Add the Redirect URI above to Authorised redirect URIs
  4. Copy Client ID and Client Secret into server/.env:
OAUTH_GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
OAUTH_GOOGLE_CLIENT_SECRET=your-secret
Microsoft
  1. Open Azure Portal → Microsoft Entra ID → App registrations → New registration
  2. Supported account types: Accounts in any organizational directory and personal Microsoft accounts
  3. Add the Redirect URI above as a Web platform redirect
  4. Under Certificates & secrets → New client secret → copy value
  5. Add to server/.env:
OAUTH_MICROSOFT_CLIENT_ID=your-application-(client)-id
OAUTH_MICROSOFT_CLIENT_SECRET=your-secret-value
Facebook
  1. Open Meta for Developers → My Apps → Create App → Consumer
  2. Add Facebook Login product → Settings → Valid OAuth Redirect URIs: add the URI above
  3. From App Settings → Basic copy the App ID and App Secret
  4. Add to server/.env:
OAUTH_FACEBOOK_APP_ID=your-app-id
OAUTH_FACEBOOK_APP_SECRET=your-app-secret
After editing server/.env, restart the server — the buttons appear on the login page automatically. Users log in with the email address that matches their DanceAI account; no account is created automatically.
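
The matching rule in the last sentence amounts to a plain email lookup. A sketch with hypothetical names (case-insensitive matching is an assumption here; the real server-side handler is not shown):

def resolve_oauth_login(provider_email: str, accounts: dict[str, int]) -> int | None:
    """Map the verified email returned by the OAuth provider to an existing
    DanceAI account ID. A miss returns None; no account is auto-created."""
    return accounts.get(provider_email.strip().lower())  # normalisation is assumed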

Database integrity

Verifies that every completed job's output directory and key files (video, JSON, CSV) still exist on disk. Catches jobs where files were moved or deleted outside the app.
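
A sketch of the check (file names are illustrative; the real job layout may differ):

from pathlib import Path

REQUIRED = ("output.mp4", "analysis.json", "keypoints.csv")  # illustrative names

def missing_files(job_dir: Path) -> list[str]:
    """Return the required output files absent from a completed job's directory."""
    return [name for name in REQUIRED if not (job_dir / name).is_file()]

def scan_jobs(jobs_root: Path) -> dict[str, list[str]]:
    """Map each job directory to its missing files; an empty list means intact."""
    return {d.name: missing_files(d) for d in sorted(jobs_root.iterdir()) if d.is_dir()}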

Dancing Principles

Principles from top-level trainers — injected into AI evaluations only when the couple shows the relevant deficit.

Show principles for dance:

Pose Lab

Admin-only research area. Upload a video with custom pose estimation settings — only the tracking step runs (no scoring, no reports). Compare keypoint quality across backends and parameter combinations. For video crop/zoom preprocessing see Optimize.

New lab run

Previous runs

Optimize

Two-step preprocessing tool. Step 1 generates a zoomed output video that follows the couple across the floor. Step 2 uses the same seed boxes as a region of interest (ROI) for pose skeleton detection — narrowing the detector's view to where the couple actually is, so background people, judges, and audience are completely excluded.

Workflow
  1. Upload a video and scrub to moments where the couple's size or position changes significantly.
  2. Draw a tight bounding box around the couple's bodies — saved automatically at that timestamp.
  3. Repeat at as many moments as needed (more seeds = smoother, more accurate pan and zoom).
  4. Click Run auto-zoom — the pipeline interpolates all frames between seeds and renders a cropped output video.
  5. Once done, click → Pose to queue a pose-lab job on the original video. The union of your seed boxes (expanded by the margin) is used as a fixed ROI — skeleton detection sees only the couple's region of the frame.
  6. Download the zoomed video via right-click on the result, or submit it to the main upload workflow for a full JS 2.1 report.

Tip: Draw boxes tightly around both bodies only — the margin parameter adds the extra padding. Loose boxes produce less zoom. Boxes are in native video pixel coordinates and are stored with each job.
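
Steps 4 and 5 can be sketched as follows (the data model is hypothetical: a seed is (time_s, box) with box = (x, y, w, h) in native video pixels, and seed times are assumed distinct):

from bisect import bisect_right

Box = tuple[float, float, float, float]  # x, y, w, h in native pixels
Seed = tuple[float, Box]                 # (time in seconds, box)

def interpolate_box(seeds: list[Seed], t: float) -> Box:
    """Step 4: linearly interpolate the crop box between the seeds around t."""
    seeds = sorted(seeds)
    if t <= seeds[0][0]:
        return seeds[0][1]
    if t >= seeds[-1][0]:
        return seeds[-1][1]
    i = bisect_right([s[0] for s in seeds], t)
    (t0, b0), (t1, b1) = seeds[i - 1], seeds[i]
    a = (t - t0) / (t1 - t0)
    return tuple(v0 + a * (v1 - v0) for v0, v1 in zip(b0, b1))

def roi_union(seeds: list[Seed], margin: float, frame_w: int, frame_h: int) -> Box:
    """Step 5: fixed ROI = union of all seed boxes, expanded by the margin
    and clamped to the frame."""
    x0 = min(b[0] for _, b in seeds) - margin
    y0 = min(b[1] for _, b in seeds) - margin
    x1 = max(b[0] + b[2] for _, b in seeds) + margin
    y1 = max(b[1] + b[3] for _, b in seeds) + margin
    x0, y0 = max(0.0, x0), max(0.0, y0)
    x1, y1 = min(float(frame_w), x1), min(float(frame_h), y1)
    return (x0, y0, x1 - x0, y1 - y0)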

New auto-zoom job

Crop & zoom to keep the couple large in frame.

Scrub to any moment after loading — the full video plays in the canvas below.

Previous runs

Pose Model Registry
