SignAI has been exploring sign language datasets, motion capture, and prototype sign generation since 2021. This page documents the real work behind the vision.
These videos show early-stage sign language AI workflows, documented from as early as March 2021.
Studio workflow capturing hand, face, and body movement using motion capture technology for structured sign language training data.
Text input producing early signed output through an avatar interface, demonstrating practical iteration on generation workflows.
Early prototype demonstrating sign language recognition and signed output experiments, part of the SignAI journey.
SignAI did not appear overnight. It grew from years of work in Deaf accessibility, BSL resources, and a personal understanding of the communication gap.
Joel Kellhofer MBE founded SignLive, a remote BSL interpreting platform that grew to thousands of registered users and was adopted by a growing number of organisations. He also created the Sign Dictionary, a free BSL learning resource used by millions of people and thousands of schools. Joel is no longer involved in SignLive.
Exploration started in 2021. Joel began experimenting with motion capture techniques and sign language datasets, capturing hand, face, and body movement in studio workflows and laying the groundwork for what would become SignAI.
Prototype videos documented sign language AI experimentation, including motion capture workflows, signed output generation, and early recognition concepts. The SignAI journey was already well underway.
Ongoing development of datasets, model architectures, and generation pipelines. Work continues on pose estimation, temporal sequence modelling, and BSL-aligned sign production, with a focus on structure and clarity rather than word-for-word substitution.
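One small but representative step in a pose estimation pipeline is normalising keypoints so that a signer's position in frame and distance from the camera do not dominate the model input. The sketch below is a minimal illustration under assumed conventions (hypothetical keypoint indices, 2-D coordinates); it is not SignAI's actual pipeline.

```python
def normalise_frames(frames, root=0, left_shoulder=1, right_shoulder=2):
    """Centre each frame on a root keypoint and scale by shoulder width.

    frames: list of frames; each frame is a list of (x, y) keypoints.
    Returns a new list of frames with translation- and scale-normalised
    coordinates, so different signers and camera setups look alike.
    """
    out = []
    for frame in frames:
        rx, ry = frame[root]
        lx, ly = frame[left_shoulder]
        sx, sy = frame[right_shoulder]
        # Shoulder width as a crude per-frame scale factor (guard against 0).
        scale = ((lx - sx) ** 2 + (ly - sy) ** 2) ** 0.5 or 1.0
        out.append([((x - rx) / scale, (y - ry) / scale) for x, y in frame])
    return out
```

For example, a frame centred at (5, 5) with shoulders two units apart normalises to coordinates around the origin at roughly half-unit scale.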
SignAI is now presented at signai.com as an independent AI initiative focused on sign language access. SignWow operates as the current Deaf-led interpreting, translation, and accessibility service, providing commercial insight and real-world context that informs the longer-term SignAI vision.
The workload is multimodal, real-time, and video-heavy. SignAI is exploring two core AI workflows, designed to complement, not replace, human interpreters.
Temporal modelling of hand shape, facial expression, gaze, and body movement to produce understandable text or structured language representations, using pose estimation pipelines and transformer-based sequence models.
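Temporal modelling of this kind typically feeds a sequence model not with raw keypoints but with motion-aware features. As a hedged illustration (hypothetical helper name, stand-alone Python, no real model attached), frame-to-frame displacements are one such signal a transformer-based sequence model might consume:

```python
def motion_deltas(frames):
    """Frame-to-frame displacement for each keypoint.

    frames: list of frames, each a list of (x, y) tuples.
    Returns len(frames) - 1 frames of (dx, dy) displacement tuples,
    a simple velocity-like feature for temporal sequence models.
    """
    deltas = []
    for prev, curr in zip(frames, frames[1:]):
        deltas.append(
            [(cx - px, cy - py) for (px, py), (cx, cy) in zip(prev, curr)]
        )
    return deltas
```

Displacement features make pauses, holds, and fast transitions explicit, which matters for segmenting continuous signing into recognisable units.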
Generation workflows that render signed output through avatar or video-based interfaces, aligned to BSL grammar and structure rather than word-for-word substitution.
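What "aligned to BSL grammar rather than word-for-word substitution" means can be shown with a deliberately toy example. Real BSL grammar is far richer than this; the rule below (topic placed first, a small set of English function words dropped) and all word lists are simplified illustrations, not a linguistic resource or SignAI's actual method:

```python
# English function words that a word-for-word substitution would wrongly
# keep; a structure-aware approach drops them (illustrative set only).
FUNCTION_WORDS = {"the", "a", "an", "is", "are", "to", "of"}

def english_to_gloss(topic_phrase, comment_phrase):
    """Return an uppercase gloss sequence in topic-comment order."""
    def glosses(phrase):
        return [w.upper() for w in phrase.lower().split()
                if w not in FUNCTION_WORDS]
    # Topic first, then comment: a (highly simplified) BSL-like ordering.
    return glosses(topic_phrase) + glosses(comment_phrase)
```

For instance, `english_to_gloss("the shop", "is closed tomorrow")` yields `["SHOP", "CLOSED", "TOMORROW"]`: structure and meaning preserved, English word order and function words discarded.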
SignAI is intended to complement live interpreters and VRS/VRI services, helping cover everyday communication moments that cannot always wait for human availability.
Deaf entrepreneur with over a decade of experience building accessibility products and services, Joel was awarded an MBE for services to the Deaf community. He has been exploring AI-powered sign language technology since 2021.
SignAI is not a speculative story. It comes from someone who understands Deaf users, sign language content, service delivery, and the operational reality of accessibility products.
Whether you are interested in the technology, exploring a partnership, or want to learn more about AI-powered sign language access, we would love to hear from you.