Please use this identifier to cite or link to this item: http://ir.lib.seu.ac.lk/handle/123456789/7885
Full metadata record
DC FieldValueLanguage
dc.contributor.authorHarshana, B. A. D. D.-
dc.contributor.authorLankasena, B. N. S.-
dc.contributor.authorIsuru Jayarathne, I.-
dc.date.accessioned2026-04-22T07:13:11Z-
dc.date.available2026-04-22T07:13:11Z-
dc.date.issued2025-10-30-
dc.identifier.citationConference Proceedings of 14th Annual Science Research Session – 2025 on “NEXT-GEN SOLUTIONS: Bridging Science and Sustainability” on October 30th 2025. Faculty of Applied Sciences, South Eastern University of Sri Lanka, Sammanthurai. pp. 21.en_US
dc.identifier.isbn978-955-627-146-1-
dc.identifier.urihttp://ir.lib.seu.ac.lk/handle/123456789/7885-
dc.description.abstractAir-writing recognition offers a contactless and intuitive input method, allowing users to write characters in mid-air using hand movements captured by motion sensors or cameras. However, existing sequential CNN+BiLSTM models used in air-writing recognition often miss fine temporal details and have limited spatial–temporal feature interaction, causing confusion between similar characters. Recent works report high accuracy with wearable wristbands and fusion networks, yet challenges remain in achieving robust, deployable systems. This study presents a novel parallel CNN+BiLSTM architecture designed to enhance inertial sensor–based air-writing recognition. It uses a lower-case subset of the 6DMG dataset recorded with a hybrid optical–inertial sensing system. The original dataset contains 14 features per time step, including position data from the WorldViz PPT-X4 optical tracking system and inertial data collected using the Wii Remote Plus; this study focuses on the 11 inertial measurements only. Two models were implemented: a baseline model and the proposed novel model. The proposed model uses a quantum-inspired fusion layer that mixes spatial and temporal features more effectively, with supporting modules that capture motion structure, retain relationships between feature groups, and focus on characters that are often misclassified. Comparative experiments against a strong CNN+BiLSTM baseline demonstrate substantial performance gains, with test accuracy improving from 91.16% to 99.32% and the weighted F1-score rising from 0.91 to 0.99, while eliminating low-performing classes. Analysis of confusion matrices confirms the model’s effectiveness in resolving ambiguities such as ‘h’ vs. ‘n’ and ‘e’ vs. ‘t’, highlighting its robustness across diverse handwriting styles. The findings underscore the potential of advanced parallel architectures to achieve high-accuracy, efficient, and user-independent air-writing recognition, with promising applications in assistive technologies, augmented reality, contactless input systems, and wearable computing.en_US
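To make the abstract's architecture concrete, the following is a minimal, hypothetical sketch of a parallel CNN+BiLSTM classifier over 11 inertial features per time step with 26 lower-case letter classes. The layer sizes, kernel size, and the simple concatenation-based fusion are illustrative assumptions only; the paper's actual quantum-inspired fusion layer and supporting modules are not specified here.

```python
import torch
import torch.nn as nn

class ParallelCNNBiLSTM(nn.Module):
    """Illustrative parallel CNN + BiLSTM with concatenation fusion.

    All hyperparameters are assumptions, not the paper's reported design.
    """
    def __init__(self, n_features=11, n_classes=26, hidden=64):
        super().__init__()
        # CNN branch: 1-D convolution over the time axis for local spatial patterns
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> (batch, hidden, 1)
        )
        # BiLSTM branch: long-range temporal dependencies, run in parallel on the same input
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        # Fusion head: concatenate both branches' features, then classify
        self.head = nn.Linear(hidden + 2 * hidden, n_classes)

    def forward(self, x):                              # x: (batch, time, n_features)
        c = self.cnn(x.transpose(1, 2)).squeeze(-1)    # (batch, hidden)
        _, (h, _) = self.lstm(x)                       # h: (2, batch, hidden)
        t = torch.cat([h[0], h[1]], dim=1)             # (batch, 2*hidden)
        return self.head(torch.cat([c, t], dim=1))     # (batch, n_classes)

model = ParallelCNNBiLSTM()
logits = model(torch.randn(2, 100, 11))  # 2 samples, 100 time steps each
```

Running both branches on the same input and fusing afterwards (rather than stacking CNN then BiLSTM sequentially) is the key structural difference from the baseline the abstract describes.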
dc.language.isoen_USen_US
dc.publisherFaculty of Applied Sciences, South Eastern University of Sri Lanka, Sammanthurai.en_US
dc.subjectAir-Writing Recognitionen_US
dc.subjectSensor Based Gesture Recognitionen_US
dc.subjectFeature Fusionen_US
dc.subjectParallel CNN+BiLSTM Architectureen_US
dc.titleParallel CNN+BiLSTM with feature fusion for robust air-writing recognitionen_US
dc.typeArticleen_US
Appears in Collections:14th Annual Science Research Session

Files in This Item:
File Description SizeFormat 
ASRS2025-Original-45.pdf146.21 kBAdobe PDFView/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.