eyEar

Wearable assistive AI · prototype in testing

What if your ears could see?

eyEar is a sub-$50 wearable AI that turns the world into actionable, real-time audio—obstacle awareness and eyEar Cortex scene intelligence, not generic label dumps.

  • Working prototype
  • Under $50 target
  • Hands-free
  • eyEar Cortex

Product

eyEar, in one breath

A discreet clip-on wearable: camera, proximity sensing, and a phone companion so you get continuous awareness without a four-figure price tag. It identifies obstacles, objects, pathways, signage, and specific items—and speaks in calm, discreet audio.

The differentiator is not the camera alone—it is eyEar Cortex, the layer that turns raw vision+language into guidance your day actually needs.

Product

See it run

Prototype demos

Real hardware, real flows—short clips on YouTube. Tap a tile to open in a new tab.

Why we exist

A global gap: tools that are priced out, generic, or descriptive—not directive

Over a billion people live with meaningful vision loss. The barrier is rarely “seeing the pixels”—it is affordable, trustworthy assistance that fits transit, work, meals, and social life. Today’s landscape still leaves millions dependent on luck, volunteers, or devices priced like luxury goods.

Cost wall

Specialist wearables often cost thousands. Guide dogs involve long waits. eyEar targets under $50 so independence is not a privilege.

Wrong shape

Phone apps tie up your hands. One-size eyewear ignores cane-first users, guide-dog partners, and people who simply will not wear glasses for assistive tech.

Labels, not next steps

“Chair, table, person” is not enough. People need steps, clock positions, social context—the kind of answer a thoughtful human would give, at the speed of software.

44% vs 79%

Employment rate in the U.S.: blind or visually impaired adults vs. adults without disabilities

AFB statistics

$2k–$4.5k+

Typical vision-aid price bands vs. eyEar’s sub-$50 goal

Cortex

Scenario-tuned responses—built to say what to do, not only what is there.

Context · transit, school, dining

Our journey

From listening to shipping—and what comes next

A straight line from insight to scale: each step builds on blind partners, data, and hardware in the field.

  1. Done

    Insight & listening

    Deep conversations on daily friction—navigation, meals, shopping, social cues—before writing a line of product code.

  2. Done

    Design partners

    Blind co-designers set priorities on an ongoing basis: what counts as actionable vs. noise, and how audio and haptics should balance.

  3. Done

    Prototype v1 + Cortex seed

    First wearable loop: sensors, vision API, and early eyEar Cortex prompts shaped by real scenarios.

  4. Now

    Field prototype & feedback loop

    We are here: devices in partner hands, iterating Cortex scenarios weekly, hardening ultrasonic safety paths, and capturing demos you can watch in the Product section.

  5. Next

    Pilot programs

    Structured pilots with schools, community orgs, or clinics—measuring independence gains and refining onboarding.

  6. Then

    Manufacturing & compliance path

    Design for production, supply chain, labeling, and the right regulatory posture for the markets we enter.

  7. Ahead

    Scale & Cortex library growth

    Expand the scenario library to hundreds of modes—every new feature ships through the same Cortex pattern you see below.

Prototype

What we have built

A tested wearable, not a slide deck

Today’s stack: a compact ESP32-S3, camera, and ultrasonic sensor in a discreet clip-on (belt, shirt, cane, or neck), paired with a phone for hands-free cloud vision when you want it.

  • Ultrasonic safety: distance thresholds—not probabilistic vision—for collision avoidance.
  • Vision + language: multimodal model (e.g. Gemini 2.0) for raw scene understanding.
  • eyEar Cortex: the scenario layer on top—what makes outputs usable in real life.

Roadmap sections describe direction; not every layer is in every build yet.
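
For the curious, here is a minimal sketch of what the ultrasonic safety layer above boils down to, assuming an HC-SR04-style sensor read from MicroPython on the ESP32-S3. Pin numbers, thresholds, and the alert() hook are illustrative placeholders, not our shipping firmware.

  # Illustrative MicroPython sketch: a deterministic distance check, no ML in the safety path.
  # Pin numbers, the HC-SR04-style sensor, thresholds, and alert() are placeholders.
  from machine import Pin, time_pulse_us
  import time

  TRIG = Pin(2, Pin.OUT)   # trigger pin (placeholder GPIO)
  ECHO = Pin(3, Pin.IN)    # echo pin (placeholder GPIO)
  WARN_CM = 120            # start caution cues
  STOP_CM = 45             # urgent alert

  def read_distance_cm():
      """Fire a 10-microsecond pulse and convert the echo time to centimeters."""
      TRIG.off(); time.sleep_us(2)
      TRIG.on();  time.sleep_us(10)
      TRIG.off()
      echo_us = time_pulse_us(ECHO, 1, 30_000)   # timeout: roughly a 5 m round trip
      if echo_us < 0:
          return None                            # no echo / out of range
      return (echo_us / 2) / 29.1                # sound takes about 29.1 microseconds per cm

  while True:
      d = read_distance_cm()
      if d is not None and d < STOP_CM:
          alert("stop")                          # hypothetical hook into the audio/haptic layer
      elif d is not None and d < WARN_CM:
          alert("caution")
      time.sleep_ms(100)

The point of the sketch is the design choice in the bullet above: collision alerts come from a physics measurement and a fixed threshold, so they keep working even when the cloud model is slow or offline.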

The USP

eyEar Cortex — scenario intelligence, not a chatbot bolt-on

Off-the-shelf vision models describe pictures. eyEar Cortex is the product: a growing library of situations, prompts, and response shapes co-designed with blind partners so the device speaks like a skilled orientation partner—steps ahead, clock positions, social awareness—not a firehose of object labels.

Every future feature (there will be hundreds) lands as a Cortex scenario: a defined context, safety rules, output format, and test cases. Tap any row below to see how we document what “good” sounds like. This pattern scales as we add modes.
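
To make that pattern concrete, here is a rough sketch of what one scenario record could look like in Python. The field names, the EMPTY_SEAT example, and its values are illustrative stand-ins, not the actual Cortex schema.

  # Rough sketch of a Cortex scenario record; names and values are placeholders,
  # not the real eyEar Cortex schema.
  from dataclasses import dataclass, field

  @dataclass
  class CortexScenario:
      name: str                   # short identifier for the scenario
      context: str                # when this scenario applies
      safety_rules: list[str]     # hard constraints on what may be spoken
      output_format: str          # the shape a "good" response takes
      test_cases: list[dict] = field(default_factory=list)  # frame + expected phrasing pairs

  EMPTY_SEAT = CortexScenario(
      name="empty_seat",
      context="Rider is boarding a bus or train and wants a place to sit.",
      safety_rules=[
          "Never direct the user toward a gap or a closing door.",
          "Say so when seat occupancy is uncertain instead of guessing.",
      ],
      output_format="<target> <steps> ahead, <clock position or left/right>; <path note>.",
      test_cases=[{
          "frame": "bus_interior_01.jpg",
          "expected": "Empty seat two steps ahead, slightly to your right; aisle clear.",
      }],
  )

The expected phrasing mirrors the “empty seat” example in the first row below; test cases are how we document what “good” sounds like for each scenario.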

Navigation & transit

Empty seat on bus or train

Goal: Find sittable space and approach path.

Example output: “Empty seat two steps ahead, slightly to your right; aisle clear.”

Crosswalk & curb alignment

Goal: Align to crossing box and confirm signal phase when visible.

Example output: “Curb straight ahead; crosswalk stripes underfoot; pedestrian signal on your left shows walk.”

Indoor corridor & doorway

Goal: Center in hall, announce doors and turns.

Example output: “Hall continues 8 meters; open doorway 2 o’clock—likely restroom sign.”

Elevator floor & button panel

Goal: Read floor indicator and describe control layout.

Example output: “Panel on right; ‘3’ illuminated; braille strip along bottom edge.”

Dining & kitchen

Plate layout by clock position

Goal: Map food for utensil approach.

Example output: “Avocado at 3, sandwich at 6, jam at 9.”

Cafeteria line & counters

Goal: Describe stations and shortest queue hint.

Example output: “Three stations: salad left, hot entree center, cashier right; middle line shortest.”

Stove & appliance state (when visible)

Goal: Surface obvious on/off cues—always paired with ultrasonic safety elsewhere.

Example output: “Front-left burner glow red; kettle on rear right.”

Social & people

Who is in the room & group drift

Goal: Orientation to faces, proximity, whether attention shifted.

Example output: “Three people in arc ahead; two turned toward the door—conversation may have moved.”

Pick clothing from a row

Goal: Distinguish shirts by color/pattern and position.

Example output: “Blue shirt, second from the left on the rack.”

Find a familiar face in a crowd

Goal: Directional cue and approximate distance when a reference has been provided by the partner.

Example output: “Possible match 11 o’clock, ~4 meters, navy jacket.”

Reading, docs & learning

Physical book paragraph flow

Goal: Page boundaries, reading order, and bookmarking (roadmap).

Example output: “New paragraph starting; header ‘Chapter 4’ at top of page.”

Handwritten note Q&A

Goal: Semantic search over captured text—“what date is mentioned?”

Example output: “Date line: March 15 near signature.”

Slide or whiteboard overview

Goal: Bullet structure for classroom or meeting access.

Example output: “Three bullets: Budget, Timeline, Risks—timeline bullet has a sub-list.”

Safety & hazards

Head-level obstacles (with vision + ultrasonic)

Goal: Complement cane sweep with branch, sign, or cabinet context.

Example output: “Low branch ahead at forehead height; step left to clear.”

Escalator & stair lip

Goal: Clear edge and direction of travel cues.

Example output: “Escalator mouth 2 steps ahead; handrail on your right.”

Construction & temporary barriers

Goal: Narrate tape, cones, and alternate path hints.

Example output: “Yellow tape crosses sidewalk; gap at 10 o’clock.”

New scenarios ship continuously—same Cortex pipeline: capture → scenario detect → guided prompt → spoken (or haptic) response. Partner testing decides what graduates to default modes vs. optional packs.
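
Read as code, that pipeline is four small stages. A minimal sketch under stated assumptions: capture_frame, detect_scenario, vision_model, and speak are hypothetical stand-ins for the camera, the scenario matcher, the multimodal API client, and the audio layer, and the prompt reuses the scenario fields sketched earlier.

  # Minimal sketch of the Cortex loop: capture -> scenario detect -> guided prompt -> spoken response.
  # capture_frame, detect_scenario, vision_model, and speak are hypothetical stand-ins.

  def run_cortex_once():
      frame = capture_frame()                    # 1. capture a still from the wearable camera
      scenario = detect_scenario(frame)          # 2. pick the matching Cortex scenario
      prompt = (                                 # 3. the scenario shapes what the model is asked
          f"{scenario.context}\n"
          f"Rules: {'; '.join(scenario.safety_rules)}\n"
          f"Answer in the form: {scenario.output_format}"
      )
      reply = vision_model(prompt, image=frame)  # multimodal call (placeholder signature)
      speak(reply)                               # 4. spoken (or haptic) response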

Architecture

How it works

Camera + ultrasonic in parallel. Description runs through Cortex and a vision model; obstacle alerts stay on deterministic sensors.

Seeed XIAO ESP32-S3 · Gemini 2.0 Flash · Python · working prototype
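
“In parallel” can be read literally: one way to structure it, sketched below, is two independent loops, where the deterministic ultrasonic path never waits on the network and can interrupt the slower descriptive path. The thread layout is illustrative, and read_distance_cm, STOP_CM, alert, and run_cortex_once refer to the earlier sketches.

  # Illustrative layout: the safety loop and the description loop run independently, so a
  # deterministic obstacle alert is never blocked behind a cloud vision call.
  import threading
  import time

  def safety_loop():
      while True:
          d = read_distance_cm()      # deterministic sensor read (see the earlier sketch)
          if d is not None and d < STOP_CM:
              alert("stop")           # hypothetical low-latency audio/haptic hook
          time.sleep(0.1)             # ~10 Hz safety cadence

  def description_loop():
      while True:
          run_cortex_once()           # camera -> Cortex -> vision model -> speech (earlier sketch)
          time.sleep(2.0)             # descriptive updates are slower and can tolerate latency

  threading.Thread(target=safety_loop, daemon=True).start()
  description_loop()

That separation is also what the offline-first safety story below relies on: descriptions degrade without connectivity, obstacle alerts do not.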

Platform roadmap

One core, many forms, blind-first modes

A modular core for eyewear, cane, neck, or belt—plus purpose-built experiences and a layered safety story when connectivity drops.

Multi–form factor

One heart, many wear styles—cane users, guide dog partners, eyewear-averse users included.

Seven experience lanes

Scene, plate, books, documents, expression, wayfinding, hazards—each a Cortex-heavy mode.

Offline-first safety

Physics-first sensors, optional on-device fallback, rich cloud when connected.

Institutions

Schools, NGOs, rehab—alongside consumers—for reach at scale.

Roadmap experience names

  • Scene Narrator
  • Danger Spotter
  • Plate Analyzer
  • Book Reader
  • Expression Reader
  • Document Scanner
  • Wayfinder

Platform visual

Who we serve

Blind and low-vision people first—especially those priced out today

Students, working-age adults, and older adults with progressive loss. Under $50 remains the north star.

  • Independent navigation with obstacle awareness
  • Situational awareness through Cortex-tuned descriptions
  • Social inclusion: who is present, who approaches, group dynamics

Ethics & safety

Responsible AI

  • Privacy. Images go to inference APIs and are not retained by eyEar. Bystander capture is disclosed.
  • Bias. We test across contexts and refine Cortex when outputs miss—ongoing, not one-time.
  • Complement. Works with canes, dogs, and orientation skills—not a replacement.

About

Team

Why we built eyEar

Sia and Atishay started eyEar because they could not stop thinking about a problem most people never notice—not only the physical obstacles blind individuals navigate, but the invisible ones: moments of isolation, dependence, and lost confidence when the world is not built with them in mind.

Sia brings the technical foundation that makes eyEar real. She is the reason the project is a device and not just an idea—hardware, software, and the integrated AI behind every description eyEar speaks. Atishay brings the human anchor—months of conversation with blind individuals so every decision reflects their actual lives, not our assumptions.

Together, we have spent nine months making sure eyEar earns its place in people’s lives.

Photo

Sia Gupta

Inventor & Lead Engineer

School: Tesla STEM High School

Location: Redmond, WA

Grade: 11th

Photo

Atishay

Research & Human-centered Design

School: North Creek High School

Grade: 10th

Collaborate

Partner with us

We are looking for organizations, experts, and community members who believe assistive technology should be accessible to everyone. eyEar is at a stage where the right partnerships will shape what it becomes.

We are especially interested in:

  • Organizations that can connect us with blind and low-vision individuals for structured testing and feedback.
  • Experts in accessible design who can help refine hardware, software, and audio experiences.
  • Communities and schools that want to help build something that measurably improves independence and confidence.

If that sounds like you or your organization, we would love to hear from you. Reach out and tell us how you would like to collaborate—we read every message.

Contact us about partnering