TY - GEN
T1 - AuslanWeb
T2 - 34th ACM Web Conference, WWW 2025
AU - Shen, Xin
AU - Du, Heming
AU - Sheng, Hongwei
AU - Li, Lincheng
AU - Zhang, Kaihao
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/4/28
Y1 - 2025/4/28
N2 - Effective communication between the deaf community and hearing individuals facilitates social inclusion, equal opportunities, and the dignity of vulnerable populations. However, existing region-specific sign language systems are constrained by limited training datasets and narrow topic domains, rendering them ineffective for bridging the linguistic gaps between sign languages and spoken languages. Auslan, as the sign language specific to Australia, still lacks a reliable bidirectional translation tool for effective communication. To address these challenges, we propose AuslanWeb, a web-based system for bidirectional translation of both isolated and successive sign language. For the former, AuslanWeb achieves high-precision mapping between isolated signs (glosses) and spoken language words or phrases through a multimodal recognition system and a versatile Auslan dictionary. For the latter, it leverages the advanced contextual understanding and text generation capabilities of Large Language Models (LLMs) to support bidirectional translation between successive sign language videos and long-form spoken language. By integrating linguistic structure with advanced AI capabilities, AuslanWeb overcomes the limitations of dataset dependency and enhances the scalability of sign language translation systems. The effectiveness of the system is further validated through user feedback, receiving consistent praise from Auslan experts, Australian deaf individuals, and volunteers. A demo video of AuslanWeb is provided.
AB - Effective communication between the deaf community and hearing individuals facilitates social inclusion, equal opportunities, and the dignity of vulnerable populations. However, existing region-specific sign language systems are constrained by limited training datasets and narrow topic domains, rendering them ineffective for bridging the linguistic gaps between sign languages and spoken languages. Auslan, as the sign language specific to Australia, still lacks a reliable bidirectional translation tool for effective communication. To address these challenges, we propose AuslanWeb, a web-based system for bidirectional translation of both isolated and successive sign language. For the former, AuslanWeb achieves high-precision mapping between isolated signs (glosses) and spoken language words or phrases through a multimodal recognition system and a versatile Auslan dictionary. For the latter, it leverages the advanced contextual understanding and text generation capabilities of Large Language Models (LLMs) to support bidirectional translation between successive sign language videos and long-form spoken language. By integrating linguistic structure with advanced AI capabilities, AuslanWeb overcomes the limitations of dataset dependency and enhances the scalability of sign language translation systems. The effectiveness of the system is further validated through user feedback, receiving consistent praise from Auslan experts, Australian deaf individuals, and volunteers. A demo video of AuslanWeb is provided.
KW - Australian Sign Language (Auslan) Communication System
KW - Large Language Model
KW - Prompt Engineering
UR - https://www.scopus.com/pages/publications/105005137543
U2 - 10.1145/3696410.3714525
DO - 10.1145/3696410.3714525
M3 - Conference contribution
AN - SCOPUS:105005137543
T3 - WWW 2025 - Proceedings of the ACM Web Conference
SP - 5212
EP - 5223
BT - WWW 2025 - Proceedings of the ACM Web Conference
PB - Association for Computing Machinery, Inc
Y2 - 28 April 2025 through 2 May 2025
ER -