SEUIR Repository

Parallel CNN+BiLSTM with feature fusion for robust air-writing recognition


dc.contributor.author Harshana, B. A. D. D.
dc.contributor.author Lankasena, B. N. S.
dc.contributor.author Jayarathne, I.
dc.date.accessioned 2026-04-22T07:13:11Z
dc.date.available 2026-04-22T07:13:11Z
dc.date.issued 2025-10-30
dc.identifier.citation Conference Proceedings of the 14th Annual Science Research Session – 2025, “NEXT-GEN SOLUTIONS: Bridging Science and Sustainability”, October 30th 2025. Faculty of Applied Sciences, South Eastern University of Sri Lanka, Sammanthurai. pp. 21. en_US
dc.identifier.isbn 978-955-627-146-1
dc.identifier.uri http://ir.lib.seu.ac.lk/handle/123456789/7885
dc.description.abstract Air-writing recognition offers a contactless and intuitive input method, allowing users to write characters in mid-air using hand movements captured by motion sensors or cameras. However, existing sequential CNN+BiLSTM models used in air-writing recognition often miss fine temporal details and allow only limited spatial–temporal feature interaction, causing confusion between similar characters. Recent works report high accuracy with wearable wristbands and fusion networks, yet challenges remain in achieving robust, deployable systems. This study presents a novel parallel CNN+BiLSTM architecture designed to enhance inertial sensor–based air-writing recognition. It uses a lower-case subset of the 6DMG dataset recorded through a hybrid optical–inertial sensing system. The original dataset contained 14 features per time step, including position data from the WorldViz PPT-X4 optical tracking system and inertial data collected using the Wii Remote Plus; this study, however, focuses only on the 11 inertial measurements. Two models were implemented: a baseline model and the proposed novel model. The proposed model uses a quantum-inspired fusion layer that mixes spatial and temporal features more effectively. Supporting modules capture motion structure, retain relationships between feature groups, and focus on characters that are often misclassified. Comparative experiments against a strong CNN+BiLSTM baseline demonstrate substantial performance gains, with test accuracy improving from 91.16% to 99.32% and the weighted F1-score rising from 0.91 to 0.99, while eliminating low-performing classes. Analysis of confusion matrices confirms the model’s effectiveness in resolving ambiguities such as ‘h’ vs. ‘n’ and ‘e’ vs. ‘t’, highlighting its robustness across diverse handwriting styles.
The findings underscore the potential of advanced parallel architectures to achieve high-accuracy, efficient, and user-independent air-writing recognition, with promising applications in assistive technologies, augmented reality, contactless input systems, and wearable computing. en_US
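The abstract describes two parallel branches (a CNN for spatial patterns and a BiLSTM for temporal dynamics) over the same 11-channel inertial sequence, with their outputs fused into one feature vector. The paper's actual layers, dimensions, and quantum-inspired fusion mechanism are not given in this record, so the sketch below only illustrates the general parallel-branch-plus-concatenation idea with invented sizes, a plain 1-D convolution, and a simplified bidirectional tanh-RNN standing in for the BiLSTM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: one air-writing sample, T time steps x 11 inertial channels
# (the 6DMG-style inertial subset mentioned in the abstract).
T, C = 64, 11
x = rng.standard_normal((T, C))

def conv1d_branch(x, n_filters=8, k=5):
    """CNN branch: 1-D temporal convolution + ReLU + global max pooling."""
    T, C = x.shape
    w = rng.standard_normal((n_filters, k, C)) * 0.1
    out = np.zeros((T - k + 1, n_filters))
    for t in range(T - k + 1):
        window = x[t:t + k]                                      # (k, C)
        out[t] = np.maximum((w * window).sum(axis=(1, 2)), 0.0)  # ReLU
    return out.max(axis=0)              # global max pool -> (n_filters,)

def birnn_branch(x, h_dim=8):
    """Recurrent branch: forward + backward tanh-RNN passes (a simplified
    stand-in for a BiLSTM); returns the last hidden state of each pass."""
    T, C = x.shape
    Wx = rng.standard_normal((C, h_dim)) * 0.1
    Wh = rng.standard_normal((h_dim, h_dim)) * 0.1
    def run(seq):
        h = np.zeros(h_dim)
        for step in seq:
            h = np.tanh(step @ Wx + h @ Wh)
        return h
    return np.concatenate([run(x), run(x[::-1])])  # (2 * h_dim,)

# Parallel branches over the SAME input, then feature fusion by
# concatenation (the paper's fusion layer is richer than this).
spatial = conv1d_branch(x)        # (8,)
temporal = birnn_branch(x)        # (16,)
fused = np.concatenate([spatial, temporal])
print(fused.shape)                # (24,)
```

In a trained model the fused vector would feed a softmax classifier over the lower-case character classes; the contrast with the sequential baseline is that here neither branch consumes the other's output, so spatial and temporal features are extracted independently before fusion.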
dc.language.iso en_US en_US
dc.publisher Faculty of Applied Sciences, South Eastern University of Sri Lanka, Sammanthurai. en_US
dc.subject Air-Writing Recognition en_US
dc.subject Sensor-Based Gesture Recognition en_US
dc.subject Feature Fusion en_US
dc.subject Parallel CNN+BiLSTM Architecture en_US
dc.title Parallel CNN+BiLSTM with feature fusion for robust air-writing recognition en_US
dc.type Article en_US


