Please use this identifier to cite or link to this item: http://ir.lib.seu.ac.lk/handle/123456789/7885
Title: Parallel CNN+BiLSTM with feature fusion for robust air-writing recognition
Authors: Harshana, B. A. D. D.
Lankasena, B. N. S.
Isuru Jayarathne, I.
Keywords: Air-Writing Recognition
Sensor Based Gesture Recognition
Feature Fusion
Parallel CNN+BiLSTM Architecture
Issue Date: 30-Oct-2025
Publisher: Faculty of Applied Sciences, South Eastern University of Sri Lanka, Sammanthurai.
Citation: Conference Proceedings of 14th Annual Science Research Session – 2025 on “NEXT-GEN SOLUTIONS: Bridging Science and Sustainability” on October 30th 2025. Faculty of Applied Sciences, South Eastern University of Sri Lanka, Sammanthurai. pp. 21.
Abstract: Air-writing recognition offers a contactless and intuitive input method, allowing users to write characters in mid-air using hand movements captured by motion sensors or cameras. However, the sequential CNN+BiLSTM models commonly used in air-writing recognition often lose fine temporal detail and offer limited spatial–temporal feature interaction, causing confusion between similar characters. Recent works show high accuracy with wearable wristbands and fusion networks, yet challenges remain in achieving robust, deployable systems. This study presents a novel parallel CNN+BiLSTM architecture designed to enhance inertial sensor-based air-writing recognition. The study uses a lower-case subset of the 6DMG dataset recorded through a hybrid optical–inertial sensing system. The original dataset contained 14 features per time step, including position data from the WorldViz PPT-X4 optical tracking system and inertial data collected using the Wii Remote Plus; however, this study focuses only on the 11 inertial measurements. We implemented two models: a baseline sequential model and the proposed novel parallel model. The proposed model uses a quantum-inspired fusion layer that mixes spatial and temporal features more effectively. Supporting modules capture motion structure, retain relationships between feature groups, and focus on characters that are often misclassified. Comparative experiments against a strong CNN+BiLSTM baseline demonstrate substantial performance gains, with test accuracy improving from 91.16% to 99.32% and the weighted F1-score rising from 0.91 to 0.99, while eliminating low-performing classes. Analysis of confusion matrices confirms the model’s effectiveness in resolving ambiguities such as ‘h’ vs. ‘n’ and ‘e’ vs. ‘t’, highlighting its robustness across diverse handwriting styles.
The findings underscore the potential of advanced parallel architectures to achieve high-accuracy, efficient, and user-independent air-writing recognition, with promising applications in assistive technologies, augmented reality, contactless input systems, and wearable computing.
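To make the architecture described above concrete, the following is a minimal PyTorch sketch of a parallel CNN+BiLSTM with feature fusion for 11-channel inertial sequences and 26 lower-case classes. All layer sizes, the concatenation-based fusion, and the class `ParallelCNNBiLSTM` are illustrative assumptions; the paper's quantum-inspired fusion layer and supporting modules are not reproduced here.

```python
import torch
import torch.nn as nn

class ParallelCNNBiLSTM(nn.Module):
    """Hypothetical parallel CNN+BiLSTM: both branches see the raw
    sequence, and their summaries are fused before classification."""

    def __init__(self, n_features=11, n_classes=26):
        super().__init__()
        # Spatial branch: 1-D convolutions over the time axis.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool time away -> (batch, 128, 1)
        )
        # Temporal branch: bidirectional LSTM over the same sequence.
        self.bilstm = nn.LSTM(n_features, 64, batch_first=True,
                              bidirectional=True)
        # Fusion: concatenate the two feature summaries, then classify.
        self.classifier = nn.Linear(128 + 2 * 64, n_classes)

    def forward(self, x):
        # x: (batch, time, features)
        spatial = self.cnn(x.transpose(1, 2)).squeeze(-1)  # (batch, 128)
        out, _ = self.bilstm(x)
        temporal = out[:, -1, :]                           # (batch, 128)
        fused = torch.cat([spatial, temporal], dim=1)      # (batch, 256)
        return self.classifier(fused)

model = ParallelCNNBiLSTM()
logits = model(torch.randn(4, 100, 11))  # 4 samples, 100 time steps
print(logits.shape)
```

Running the two branches in parallel on the raw sequence, rather than feeding CNN outputs into the BiLSTM sequentially, is what lets the temporal branch retain fine-grained detail that pooling in the CNN would otherwise discard.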
URI: http://ir.lib.seu.ac.lk/handle/123456789/7885
ISBN: 978-955-627-146-1
Appears in Collections:14th Annual Science Research Session

Files in This Item:
File: ASRS2025-Original-45.pdf | Size: 146.21 kB | Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.