| dc.description.abstract |
Air-writing recognition offers a contactless and intuitive input method, allowing users to
write characters in mid-air using hand movements captured by motion sensors or cameras.
However, existing sequential CNN+BiLSTM models used in air-writing recognition often lack fine
temporal detail and offer limited spatial-temporal feature interaction, causing confusion
between similar characters. Recent works show high accuracy with wearable wristbands and
fusion networks, yet challenges remain in achieving robust, deployable systems. This study
presents a novel parallel CNN+BiLSTM architecture designed to enhance inertial sensor–
based air-writing recognition. The experiments use a lowercase-letter subset of the 6DMG dataset
recorded through a hybrid optical–inertial sensing system. The original dataset contained 14
features per time step, including position data from the WorldViz PPT-X4 optical tracking
system, and inertial data collected using the Wii Remote Plus. However, this study focuses
only on the 11 inertial measurements. We implement two models: a baseline
model and the proposed novel model. The proposed model uses a quantum-inspired fusion layer that
mixes spatial and temporal features more effectively. Supporting modules capture motion
structure, retain relationships between feature groups, and focus on characters that are often
misclassified. Comparative experiments against a strong CNN+BiLSTM baseline
demonstrate substantial performance gains, with test accuracy improving from 91.16% to
99.32% and the weighted F1-score rising from 0.91 to 0.99, while eliminating previously low-
performing classes. Analysis of the confusion matrices confirms the model’s effectiveness in
resolving ambiguities such as ‘h’ vs. ‘n’ and ‘e’ vs. ‘t’, highlighting its robustness across
diverse handwriting styles. The findings underscore the potential of advanced parallel
architectures to achieve high-accuracy, efficient, and user-independent air-writing
recognition, with promising applications in assistive technologies, augmented reality,
contactless input systems, and wearable computing. |
en_US |