📱 Mobile tip: When the file picker opens, choose your Gallery or Files — not Record Video. Live recordings have lower quality than pre-recorded videos.
Video Quality Report
Scanning…
💡
Recommended recording settings
Resolution: 1080p (1920×1080) — minimum 720p. Higher resolution improves keypoint accuracy, especially for foot and hand detail.
Frame rate: 50 or 60 fps. Ballroom movement is fast; lower frame rates (24/30 fps) cause motion blur and reduce step-detection precision.
Camera: Fixed tripod, full-body framing, dancers visible throughout. A front-facing or diagonal angle is required — the pose model and all metrics are optimised for this view. Side-view videos produce unreliable skeleton detection and inaccurate scores across all metrics; they are not suitable for analysis.
A
B
✂ Crop Video
Loading frame…
Drag on the image to draw the keep region, or enter pixel values below
Analysis Jobs
Loading…
Couples & Session History
Loading…
Searching DanceSport Info and WDSF…
Verify & Confirm
computed: — (after first run)
— auto-fills from tier, or set manually
Registered Couples
Loading…
Loading feedback…
Users
Loading…
Login Log
Loading…
Setup
Motion tracking model
Selects the RTMPose Wholebody model used for pose estimation. Applies to all
new analysis jobs. Currently running jobs are unaffected.
Additional tracking
The Wholebody model outputs 133 keypoints. These options activate analysis of the
face and hand regions (indices 23–132), which are computed but unused by default.
Each adds a new scored section to the analysis report.
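For orientation, the 133 keypoints follow the standard COCO-WholeBody split. A minimal sketch of slicing one person's output into these regions — the array shape (x, y, score per keypoint) is an assumption about the server's internal format:

    import numpy as np

    # One person's RTMPose Wholebody output: 133 keypoints as (x, y, score).
    keypoints = np.zeros((133, 3))  # placeholder data

    body  = keypoints[0:17]    # 17 COCO body joints — used by the default analysis
    feet  = keypoints[17:23]   # 6 foot keypoints
    face  = keypoints[23:91]   # 68 face keypoints — activated by the face option
    hands = keypoints[91:133]  # 21 + 21 hand keypoints — activated by the hand option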
Storage
New jobs land on the fast disk first, then are automatically moved to the archive
disk after the configured delay. Both paths must be accessible.
hours after job finishes
When enabled, every output video shows all detected bodies as ghost skeletons, numbered centroid dots,
and a bottom bar with frame index, detection counts, body_ref, and LOCKED/SCORED/FIRST tracking state.
Useful for diagnosing mirror reflections and wrong couple selection.
AI / LLM spend
Cumulative costs for all Claude API calls made by this server (coaching text generation and AI chat).
Loading…
Social login (OAuth 2.0)
Allows users to sign in with Google, Microsoft, or Facebook in addition to their
email and password. Credentials are set in server/.env — no code changes are required, only a restart.
Loading…
Set OAUTH_REDIRECT_BASE in server/.env to generate this URI.
How to configure each provider
Google
Open Google Cloud Console → APIs & Services → Credentials
Create OAuth 2.0 Client ID → Application type: Web application
Add the Redirect URI above to Authorised redirect URIs
Copy Client ID and Client Secret into server/.env:
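For example (the variable names below are illustrative — use the names your server/.env template defines):

    # server/.env — hypothetical variable names
    GOOGLE_CLIENT_ID=1234567890-abc.apps.googleusercontent.com
    GOOGLE_CLIENT_SECRET=your-client-secret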
After editing server/.env, restart the server — the buttons appear on the login page automatically. Users log in with the email address that matches their DanceAI account; no account is created automatically.
Database integrity
Verifies that every completed job's output directory and key files (video, JSON, CSV)
still exist on disk. Catches jobs where files were moved or deleted outside the app.
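The check is conceptually simple; a minimal sketch of the idea — the job fields and file names here are assumptions, not the app's actual schema:

    from pathlib import Path

    def check_job_files(job: dict) -> list[str]:
        """Return the expected output paths that no longer exist for a completed job."""
        out = Path(job["output_dir"])
        expected = [out / name for name in ("output.mp4", "analysis.json", "metrics.csv")]
        return [str(p) for p in expected if not p.exists()]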
Dancing Principles
Principles from top-level trainers — injected into AI evaluations only when the couple shows the relevant deficit.
New Principle
Translations (manual override)
🇩🇪 German
🇷🇺 Russian
🇨🇳 Chinese
Show principles for dance:
Loading…
Transcribe
Upload a teaching video and get a text transcript via Whisper. Transcribe keeps the original language; Translate always outputs English.
Running Whisper…
Transcript
Save to library
Dances:
Transcription Library
All saved transcripts — available for review and future LLM analysis.
Coaching Phrases Bank
Growing library of coaching phrases extracted from trainer transcripts — sampled and injected into AI evaluation reports. Phrases are extracted automatically when a transcript is saved.
Loading…
Shows phrase count per deficit × dance — red = no phrases
Loading…
Pose Lab
Admin-only research area. Upload a video with custom pose estimation settings —
only the tracking step runs (no scoring, no reports). Compare keypoint quality
across backends and parameter combinations. For video crop/zoom preprocessing see Optimize.
New lab run
Both cameras are processed with the same config. Stats are reported per camera.
Previous runs
Loading…
Results
Optimize
Two-step preprocessing tool. Step 1 generates a zoomed output video that follows
the couple across the floor. Step 2 uses the same seed boxes as a
region-of-interest (ROI) for pose skeleton detection — narrowing the
detector's view to where the couple actually is, so background people, judges, and audience
are completely excluded.
Workflow
Upload a video and scrub to moments where the couple's size or position changes significantly.
Draw a tight bounding box around the couple's bodies — saved automatically at that timestamp.
Repeat at as many moments as needed (more seeds = smoother, more accurate pan and zoom).
Click Run auto-zoom — the pipeline interpolates all frames between seeds and renders a cropped output video (the interpolation idea is sketched below).
Once done, click → Pose to queue a pose-lab job on the original video. The union of your seed boxes (expanded by the margin) is used as a fixed ROI — skeleton detection sees only the couple's region of the frame.
Download the zoomed video via right-click on the result, or submit it to the main upload workflow for a full JS 2.1 report.
Tip: Draw boxes tightly around both bodies only — the margin parameter adds the extra padding.
Loose boxes produce less zoom. Boxes are in native video pixel coordinates and are stored with each job.
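Between seeds, the crop window is interpolated frame by frame. A minimal sketch of the idea — plain linear interpolation between neighbouring seed boxes; the real pipeline may smooth differently, and the names here are illustrative:

    def interpolate_box(seeds, t):
        """seeds: [(time_s, (x, y, w, h)), ...] sorted by time; t: query time in seconds."""
        if t <= seeds[0][0]:
            return seeds[0][1]
        if t >= seeds[-1][0]:
            return seeds[-1][1]
        for (t0, b0), (t1, b1) in zip(seeds, seeds[1:]):
            if t0 <= t <= t1:
                a = (t - t0) / (t1 - t0)  # 0 at the earlier seed, 1 at the later
                return tuple((1 - a) * c0 + a * c1 for c0, c1 in zip(b0, b1))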
New auto-zoom job
crop & zoom to keep the couple large in frame
Editing seeds for: Adjust boxes, then click Re-run with new seeds. A new run will be created; the original is preserved.
Scrub to any moment after loading — the full video plays in the canvas below.
0.0s
Drag on the frame to draw a box — saved automatically
Previous runs
Loading…
Results
Pose Model Registry
Loading…
Training Datasets
Loading…
Annotation Review
Frame grid — thumbnail badges
✓Green border — frame confirmed correct or manually corrected
✗Red border — marked unusable, excluded from export
No badge — auto-annotated only, not yet reviewed
Per-frame decisions
Save corrections — writes your edits to the COCO JSON, marks frame reviewed (green ✓)
No corrections needed — skip — confirms frame is correct as-is (green ✓), advances to next
Mark unusable — excludes frame from training export (red ✗). Select a reason from the dropdown first (optional but recommended): Couple partially out of frame · Couple too small / too far · Image too blurry · Wrong persons detected · 2nd person concealed / missing · Other. The reason appears on the thumbnail badge. Can be undone.
Delete unusable frames — bulk-deletes all red ✗ frames from disk permanently
Editor — canvas interactions
Scroll — zoom in / out (up to 12×, centred on cursor)
Drag background — pan when zoomed in
Drag a dot — move that single joint to the correct position
Shift + drag a dot — slide the entire skeleton of that person as one unit
Ctrl + click a dot — remove the joint back to the "missing" list for re-placement
Missing / wrong keypoints
●Hollow ghost dot — model detected a position but with low confidence. Click to activate, then drag to the correct spot.
Coloured chips below canvas — joints with no position at all. Click a chip → cursor turns crosshair → click canvas to place. Escape cancels.
Person colour: blue = P1, orange = P2 — chips and ghost dots match the person they belong to
Managing persons
+ Add person (clone) — duplicates the best skeleton offset by ~4 % of image width. Shift+drag it to the second dancer, then fine-tune individual joints.
✕ P1 / ✕ P2 — removes an entire person annotation (appears only when 2+ persons present). Buttons are coloured blue / orange to match.
If the model detected zero persons, a blank P1 skeleton is created automatically — all joints appear as chips so you can place each one manually.
Trainer notes → couple feedback
Write an observation about the frame (e.g. "left arm over-extended, couple breaking contact").
Tick include screenshot to attach the frame image inline at the top of the feedback message.
Post to couple feedback — inserts the note directly into the couple's feedback stream, visible immediately in the Feedback section.
Common annotation problems to look for
· Phantom arms at the hand-join contact point between the two dancers
· Wrists attributed to the wrong dancer (cross-skeleton misassignment)
· Missing arm keypoints in extreme rise or outside-partner extension
· Occluded hips / knees hidden under the follow's skirt
· Head proximity in tango hold causing nose / ear confusion
Train — complete guide
v8.1.0
Models → Datasets → Review is the standard workflow. Each step must complete before the next begins.
Workflow overview
1 · Models — Register or select the pose model to use for annotation
2 · Datasets — Create dataset from job videos → Extract frames → Auto-Annotate
3 · Review — Open each frame, correct keypoints, confirm or mark unusable → Export COCO JSON
Important: if you re-extract frames (e.g. to get more frames), always run Auto-Annotate again before opening Review. Manual corrections you already made are preserved automatically — only unreviewed frames are overwritten.
Models tab
What the model registry does
Keeps track of every pose model version — built-in and fine-tuned. The active default model is used for all auto-annotation and lab jobs unless you explicitly pick another.
Two built-in models are seeded on startup: RTMO-L (bottom-up, GPU, default) and AlphaPose SE-ResNet50.
Actions
Register model — add a new fine-tuned ONNX file. Fill in name, backend, path, and optionally the base model it was fine-tuned from.
Set default — makes this model the active default for its backend. Only one model per backend can be default.
Edit / Deactivate — update name or path; deactivate hides the model from selectors without deleting it.
Performance stats — OKS mean, phantom rate, and frames tested are shown per model after evaluation.
Datasets tab
Creating a dataset
1. Click + New dataset and give it a name.
2. Select one or more completed analysis jobs as the video source.
3. Set the frame interval (seconds between sampled frames, default 2 s).
4. Set max frames per video if you want to limit the dataset size.
5. Dedup filter (0 = off, default): skips frames that look nearly identical to the previous kept frame. Keep at 0 for short clips or Standard dances with consistent backgrounds — raise to 0.5–0.7 only for longer recordings with many static shots (the idea is sketched after this list).
6. Tick Focus on close-hold moments to bias sampling toward frames with high body overlap — useful for targeting phantom-arm training examples.
7. Click Create & Extract frames.
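The dedup filter in step 5 works on a skip-if-too-similar principle. An illustrative sketch — the actual similarity metric and how the UI value maps to a cutoff are assumptions:

    import cv2
    import numpy as np

    def is_near_duplicate(frame, last_kept, dedup: float) -> bool:
        """dedup = 0 disables the filter; larger values skip more frames."""
        if dedup == 0 or last_kept is None:
            return False
        a = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        b = cv2.cvtColor(last_kept, cv2.COLOR_BGR2GRAY).astype(np.float32)
        diff = float(np.mean(np.abs(a - b))) / 255.0  # 0 = identical frames
        return diff < dedup * 0.1  # hypothetical mapping of the UI value to a cutoff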
After extraction
Auto-Annotate — runs the selected pose model over all extracted frames and writes a COCO keypoints JSON. Takes ~1–5 s per frame on GPU.
Extract frames (re-run) — re-extracts with dedup=0. Safe to run any time; re-run Auto-Annotate afterwards.
✎ Rename — click the pencil icon next to the dataset name to rename it inline. Enter to save, Escape to cancel.
Export COCO JSON — downloads the corrected annotation file (unusable frames excluded). Use after Review is complete.
Merge & export selected — tick two or more datasets and download a single combined COCO JSON with re-indexed IDs. Use to accumulate data across sessions before fine-tuning.
Delete — removes the dataset record; optionally deletes extracted frame files from disk.
Review tab — frame grid
Thumbnail badges
✓ green border — confirmed correct or manually corrected
✗ red border — marked unusable, excluded from export
No badge — auto-annotated only, not yet reviewed
Frame decisions
Save corrections — writes your edits, marks frame green ✓
No corrections needed — skip — confirms frame is correct as-is, advances to next frame
Mark unusable / undo — flags red ✗, auto-advances; undo restores. Pick a reason from the dropdown before clicking (optional): partially out of frame · too small · too blurry · wrong detection · 2nd person concealed · other. Reason shown on thumbnail badge.
Delete unusable frames — permanently deletes all red ✗ JPEGs from disk
What to look for when reviewing
· Phantom arms — extra arm keypoints appearing at the hand-join contact point between the two dancers
· Wrong-dancer wrist — wrist assigned to the wrong person near the hand-join region
· Missing arm — keypoints absent during extreme rise, sway, or outside-partner figures
· Skirt occlusion — follow's knees / ankles hidden under a full skirt
· Head proximity — nose / ear confusion in tango hold where heads are close
· One dancer missing — model detected only one person; use + Add person (clone) to scaffold the second
· Zero detections — no persons found at all; all joints appear as chips for manual placement
Occluded joints — delete or estimate?
Rule: estimate and place whenever possible — do not delete. The COCO format distinguishes three visibility states for a reason:
State — meaning — when to use
v = 2 (Visible) — joint clearly visible in frame — the default for all well-detected joints
v = 1 (Occluded) — hidden, but the position can be estimated from surrounding joints — most occluded joints in Standard ballroom: skirt, arm behind back, head in tango hold
v = 0 (Missing) — no position; the joint is left as a chip — joint completely out of frame, or no anatomical anchor to estimate from
Why estimated positions matter for training
The model learns bone-length relationships, body symmetry, and skeleton structure from all placed joints — including occluded ones. An estimated position (v = 1) teaches the model where a joint must be given the visible anchors around it. A chip (v = 0) teaches only that the joint is absent, which is a weaker signal.
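In the COCO keypoints JSON, the three states appear as the third value of each [x, y, v] triplet. A minimal illustrative annotation — ids and coordinates are made up:

    annotation = {
        "id": 1,
        "image_id": 42,
        "category_id": 1,
        "keypoints": [
            512.0, 300.0, 2,   # v = 2: visible, position exact
            498.0, 355.0, 1,   # v = 1: occluded, position estimated
            0.0,   0.0,   0,   # v = 0: missing — left as a chip
            # ... remaining joints
        ],
        "num_keypoints": 2,    # count of joints with v > 0
    }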
Common occluded joints in Standard and how to handle them
Follow's knees / ankles under skirt — estimate from the hip position and ankle (if visible). The leg direction is usually clear from the hip angle.
Lead's left wrist / elbow (behind follow's back) — estimate from the shoulder direction. The elbow is typically just behind the follow's waist in closed hold.
Hips in strong body sway or outside-partner — estimate from the opposite hip and the shoulder–hip line. Both hips should form a plausible pelvis.
Head / nose in tango hold — estimate from the neck and shoulder line. Heads are close, but the lead's is slightly higher and turned right.
Foot / heel during rise — the ankle is often visible even when the heel is raised; estimate the foot position below the ankle dot.
Practical rule: if you can draw a mental line from two visible anchor joints and place the missing one within ~5 % of image width with confidence → place it. Otherwise leave it as a chip.
Keyboard & mouse — frame editor
Canvas interactions
Scroll wheel — zoom in / out (1× – 12×, centred on cursor)
Drag background — pan the view (only when zoomed in)
Drag a dot — move that single joint to the correct position
Shift + drag dot — move the entire skeleton of that person as one unit
Ctrl + click dot — remove joint → returns to missing chips list for re-placement
Click ghost dot — activate a low-confidence joint (hollow circle) → drag to correct
Click chip → click canvas — place a completely missing joint at the clicked position
Keyboard shortcuts
Escape — cancel an active chip-place mode / close modals
1:1 button — reset zoom and pan back to fit-view
Missing keypoints — two modes
●Ghost dot (hollow circle on canvas) — model has a position estimate but low confidence (v < 0.1). Click to promote to visible, then drag to the correct spot.
Coloured chip below canvas — joint has no position at all (v = 0, x = y = 0). Click chip → cursor turns crosshair → click canvas to drop. Escape cancels. Chips are coloured blue for P1, orange for P2.
To remove a wrongly placed joint: Ctrl + click → joint resets to chip.
Person colours
●Person 1 — blue skeleton, blue chips, blue ✕ P1 button
●Person 2 — orange skeleton, orange chips, orange ✕ P2 button
Collect datasets across multiple videos, couples, dances, and camera angles before fine-tuning — a single dataset tends to overfit.
Use Merge & export selected to combine all reviewed datasets into one COCO JSON. The merge re-indexes all IDs and excludes unusable frames (see the sketch after these tips).
Run A/B comparison in the lab after fine-tuning: same video, original model vs fine-tuned model. Compare phantom rate and mean confidence before promoting to default.
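A sketch of the re-indexing idea behind Merge & export selected — field handling is simplified, and the app's actual implementation may differ:

    import json

    def merge_coco(paths):
        merged = {"images": [], "annotations": [], "categories": None}
        img_offset = ann_offset = 0
        for path in paths:
            with open(path) as f:
                data = json.load(f)
            merged["categories"] = merged["categories"] or data["categories"]
            for img in data["images"]:
                img["id"] += img_offset          # shift ids past the previous file
                merged["images"].append(img)
            for ann in data["annotations"]:
                ann["id"] += ann_offset
                ann["image_id"] += img_offset    # keep annotations pointing at their image
                merged["annotations"].append(ann)
            img_offset = max((i["id"] for i in merged["images"]), default=-1) + 1
            ann_offset = max((a["id"] for a in merged["annotations"]), default=-1) + 1
        return merged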
Register Pose Model
New Training Dataset
Competitions
WDSF
DSI
URLs saved in the couple profile are used automatically. Enter here to override.
Loading…
Results
Loading…
My Profile
My Trainers
Report language
Saved
Sets the default language for new uploads and PDF reports.
Email notifications
Two-factor authentication (TOTP)
Two-factor authentication is active
Two-factor authentication is off
Scan this QR code with Google Authenticator (or any TOTP app), then enter the 6-digit code to confirm.
Can't scan? Enter code manually
Dance AI — Help
Automated video analysis and judge reporting for Standard ballroom dance.
Dance Review
▶
0:00 / 0:00
Range: —
Scrub to where tracking fails, click Set start, advance to the end of the bad segment, click Set end, then Add exclusion.
Send to:
Send question goes to your trainer. Ask AI gets an immediate answer from the coaching AI based on this run's scores.
Flagged segments
Loading…
Questions from couple
Loading…
Questions sent
Loading…
Add Feedback
AI Trainer Chat
Edit User
New couple
Assigned trainers
Assigned couples
Report language
Email notifications
Upload quota (30 days) — either limit blocks uploads
Two-factor authentication (TOTP)
Active
Change Password
Reassess with A/B Comparison
Select algorithm for new run (B)
Custom configuration
Insights language level (B run)
A/B Comparison
Loading…
Run Detail
Loading…
Scroll to zoom · drag to pan · drag a dot to move it · Shift+drag to move whole skeleton · Ctrl+click a dot to remove it · blue = P1, orange = P2.