Cori AI

Voice-controlled companion for 3D learning

Hands-free access to immersive 3D education

Cori AI is an assistive, voice-controlled companion that helps users—especially students with limited motor abilities—explore and interact with 3D models without a mouse or keyboard.

2) What is Cori AI?

A conversational AI inside the Corinth 3D viewer

Cori combines speech recognition, natural-language understanding, and text-to-speech feedback into a conversational chatbot embedded directly in the Corinth 3D viewer.

  • Interact by voice or typed input—whichever is easier.
  • Control models with natural commands, not complex UI steps.
  • Designed for classroom clarity and turn-based reliability.

3) Why accessibility matters

Inclusion that builds independence

Cori AI lowers barriers to participation by enabling hands-free navigation and multimodal guidance—supporting learners who may struggle with traditional interfaces, dense text, or fine motor control.

[Cori AI concept – placeholder visual]

4) Who benefits?

Inclusive by design

Cori AI is designed to support diverse learners and inclusive classrooms—especially where hands-free interaction and structured guidance can make the difference.

  • Students with physical disabilities — full model control without fine motor input.
  • Students with ADHD — interactive, task-focused exploration that sustains attention.
  • Students with dyslexia — reduced dependence on dense text through voice support.
  • Students with intellectual disabilities — step-by-step, repeatable learning without stigma.
  • Students with anxiety or autism — predictable, structured interaction and safe practice.

Design principle

Clear guidance, no overstimulation

Cori AI is built for clarity and calm. The interaction is structured and turn-based to avoid confusion, reduce cognitive load, and keep the focus on learning outcomes—not on UI complexity.

  • Minimal friction to start, navigate, and reset.
  • Feedback that confirms what changed in the model.
  • Classroom-ready pacing and predictable behavior.

5) How Cori works

Turn-based multimodal interaction

The MVP delivers a reliable interaction loop that mirrors mouse and keyboard control—through voice or typed commands.

  1. Student speaks or types a command.
  2. Cori interprets the intent.
  3. The model responds (rotate, zoom, reset, navigate).
  4. Cori confirms the action and can provide short guidance.
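The loop above can be sketched in a few lines. This is an illustrative sketch only, not the Corinth viewer's real API: the `Intent`, `Viewer`, and `turn` names are assumptions introduced here to show the interpret → act → confirm cycle.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    action: str          # e.g. "rotate", "zoom", "reset"
    target: str = ""     # optional model part, e.g. "heart"
    direction: str = ""  # optional, e.g. "left"

class Viewer:
    """Stand-in for the 3D viewer's model controls (hypothetical)."""
    def __init__(self):
        self.state = "default view"

    def apply(self, intent: Intent) -> str:
        # Step 3: the model responds to the interpreted intent.
        if intent.action == "rotate":
            self.state = f"rotated {intent.direction}"
        elif intent.action == "zoom":
            self.state = f"zoomed on {intent.target or 'model'}"
        elif intent.action == "reset":
            self.state = "default view"
        return self.state

def turn(viewer: Viewer, intent: Intent) -> str:
    """One full turn: apply the intent, then confirm what changed (step 4)."""
    new_state = viewer.apply(intent)
    return f"Done - the model is now: {new_state}."

viewer = Viewer()
print(turn(viewer, Intent(action="rotate", direction="left")))
print(turn(viewer, Intent(action="reset")))
```

The explicit confirmation string is the point of the turn-based design: after every command, the student hears or reads exactly what changed before the next turn begins.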

Example commands

Natural language that feels intuitive

  • “Rotate the model to the left.”
  • “Zoom in on the heart.”
  • “Reset the view.”
  • “Show me the next layer.”
  • “Select the left ventricle.”
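One minimal way such phrasings could map to intents is simple keyword matching, sketched below. The rules and the `parse_command` helper are assumptions for illustration; a production system would use a proper natural-language-understanding model rather than regular expressions.

```python
import re

# Illustrative rule table covering the example phrasings above.
# Each rule pairs a pattern with a function that builds (action, argument).
RULES = [
    (r"\brotate\b.*\b(left|right)\b", lambda m: ("rotate", m.group(1))),
    (r"\bzoom in\b(?: on the (\w[\w ]*))?", lambda m: ("zoom", m.group(1) or "model")),
    (r"\breset\b", lambda m: ("reset", None)),
    (r"\bnext layer\b", lambda m: ("navigate", "next layer")),
    (r"\bselect the ([\w ]+)", lambda m: ("select", m.group(1))),
]

def parse_command(text: str):
    """Return (action, argument) for a spoken or typed command."""
    text = text.lower().strip(' ."')
    for pattern, build in RULES:
        match = re.search(pattern, text)
        if match:
            return build(match)
    return ("unknown", None)

print(parse_command("Rotate the model to the left."))  # ('rotate', 'left')
print(parse_command("Select the left ventricle."))     # ('select', 'left ventricle')
```

An "unknown" result is where the conversational side takes over: instead of failing silently, Cori can ask the student to rephrase, keeping the interaction predictable.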

The focus is on reliability and full parity with traditional controls—so every student can explore the same learning content.