
Quinnipiac Professors and Students Develop AI-Powered Face-Reading Software for Hands-Free Communication and Mobility Assistance

A team of professors and students at Quinnipiac University has developed a groundbreaking face-reading software called AccessiMove, designed to help individuals with limited mobility communicate and interact with their environment using only facial gestures. The project was inspired by a personal encounter: at an occupational therapy conference in 2022, associate professor Chetan Jaiswal met a young man in a wheelchair who was struggling to communicate with his parents. Jaiswal, along with colleagues Karen Majeski, associate professor of occupational therapy, and Brian O'Neill, associate professor of computer science, led the effort alongside students Michael Ruocco and Jack Duggan. Together, they created the university’s first patented software that uses artificial intelligence to turn facial movements into computer commands, offering a hands-free input system for people with motor impairments.

AccessiMove works through a standard webcam and uses AI to detect head tilts, winks, blinks, and facial landmarks. These movements are translated into actions such as moving a cursor, clicking, opening applications, or restarting a device. For example, a head tilt to the left or right moves the cursor, while a blink triggers a mouse click. The system is calibrated to each user’s range of motion, making it adaptable for people with different physical conditions, including those who wear glasses or have limited neck movement.

The technology isn’t limited to computers. It can also control wheelchairs (forward motion when looking up, backward when looking down), offering greater independence to users in assisted living facilities, rehabilitation centers, and homes. It also holds promise in education, remote learning, and long-term care.

Majeski emphasized the software’s potential to improve quality of life, especially for children with mobility challenges. She highlighted efforts to adapt everyday objects, such as toys or computers, so they can be operated through facial gestures, promoting both play and learning. O'Neill stressed that the system runs on standard hardware, so any device with a built-in webcam can use it, making it accessible and affordable. No special equipment is required.

The team sees broader applications in healthcare, where patients who cannot speak could use facial cues to communicate with caregivers. It also opens doors for inclusive gaming, allowing players with disabilities to engage in slower-paced, narrative-driven, or strategy games without physical input.

Despite its success in testing, the team acknowledges the need for funding to scale the technology and potentially release it as an open-source tool for wider accessibility. They are actively seeking partnerships, collaborators, and investors, especially among health care institutions across the East Coast, including Yale and Hartford hospitals.

Jaiswal’s vision is clear: technology should serve those who need it most. “We’re not building this just to be rich,” he said. “We’re building it to help people live better, more independent lives.”
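To make the gesture-to-command idea concrete, the mapping described above (head tilts moving the cursor, a blink triggering a click, with a per-user calibrated range of motion) could be sketched as follows. This is a minimal illustrative sketch only; the class names, fields, and threshold values are assumptions for the example, not details of the actual AccessiMove implementation.

```python
from dataclasses import dataclass


@dataclass
class FaceSignal:
    """One frame of detected facial cues (hypothetical fields)."""
    head_tilt: float  # degrees; negative = tilt left, positive = tilt right
    blink: bool       # True if an eye-closure event was detected this frame


@dataclass
class Calibration:
    """Per-user settings reflecting their range of motion (assumed values)."""
    tilt_threshold: float = 10.0  # degrees of tilt required to move the cursor


def to_command(signal: FaceSignal, cal: Calibration) -> str:
    """Translate a frame of facial cues into a single input command."""
    if signal.blink:
        return "click"
    if signal.head_tilt <= -cal.tilt_threshold:
        return "cursor_left"
    if signal.head_tilt >= cal.tilt_threshold:
        return "cursor_right"
    return "idle"


# A user with limited neck movement could be calibrated with a smaller threshold,
# so gentler tilts still register as cursor commands.
limited = Calibration(tilt_threshold=4.0)
print(to_command(FaceSignal(head_tilt=-6.0, blink=False), limited))  # cursor_left
print(to_command(FaceSignal(head_tilt=0.0, blink=True), limited))    # click
```

In a real system, the `FaceSignal` values would come from a landmark-detection model running on the webcam feed, and the commands would be fed to the operating system's input layer or a wheelchair controller.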
