Enhancing FT-Transformer With a Matérn-Driven Kolmogorov-Arnold Feature Tokenizer for Tabular Data-Based In-Bed Posture Classification

Bibliographic Details
Main Authors: Bing Zhou, Weiwei Chen
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11075767/
Description
Summary: In-bed posture classification plays a crucial role in health monitoring. In this paper, we explore in-bed posture classification using FT-Transformer, a model that operates on 1D tabular inputs instead of the commonly used 2D pressure heatmaps. However, the Feature Tokenizer in FT-Transformer suffers from limited representational capacity, since it relies on simplistic numerical feature processing, and from slow convergence, since learning a separate embedding for each feature increases training complexity and time. To address this, we propose a Matérn-driven Kolmogorov-Arnold Feature Tokenizer (MKAFT) that enhances the expressiveness of feature tokens in FT-Transformer, leading to faster training. This paper offers three major advancements: (1) validation of reduced spatial dependency: we demonstrate that in-bed posture classification does not heavily rely on the spatial information in 2D pressure heatmaps; by flattening the pressure data into 1D tabular inputs, we simplify the model structure while still achieving excellent classification performance; (2) a faster KAN via the Matérn kernel: by incorporating the Matérn kernel into the Kolmogorov-Arnold Network (KAN), we accelerate both training and inference; and (3) a Matérn-driven KAN for optimizing the Feature Tokenizer in FT-Transformer: leveraging the Matérn-driven KAN in the Feature Tokenizer stage enhances feature representation capacity, accelerating training with minimal impact on classification accuracy. Empirical results demonstrate that our method strikes a favorable balance between efficiency and performance.
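To make the idea concrete, below is a minimal PyTorch sketch of a Matérn-driven KAN-style feature tokenizer, under the assumption that MKAFT follows the RBF-KAN pattern: each scalar feature is expanded over a grid of kernel centers via a Matérn-3/2 kernel, and a learned per-feature linear map produces that feature's token, replacing FT-Transformer's per-feature affine embedding. The class name, kernel order (ν = 3/2), grid size, center range, and length scale are all illustrative assumptions; the abstract does not specify the paper's exact design.

```python
import torch
import torch.nn as nn


class MaternKANTokenizer(nn.Module):
    """Hypothetical sketch of a Matérn-kernel KAN feature tokenizer.

    Each scalar feature x_i is expanded over a fixed grid of centers with
    a Matérn-3/2 kernel; a learned linear map turns the kernel responses
    into that feature's d_token-dimensional token. Hyperparameters here
    are illustrative assumptions, not values from the paper.
    """

    def __init__(self, n_features: int, d_token: int,
                 n_centers: int = 8, length_scale: float = 1.0):
        super().__init__()
        # Fixed kernel centers over an assumed standardized range [-2, 2].
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, n_centers))
        self.length_scale = length_scale
        # One linear map per feature: kernel responses -> token.
        self.weight = nn.Parameter(
            torch.randn(n_features, n_centers, d_token) * 0.02)
        self.bias = nn.Parameter(torch.zeros(n_features, d_token))

    def matern32(self, r: torch.Tensor) -> torch.Tensor:
        # Matérn kernel, nu = 3/2: k(r) = (1 + sqrt(3) r / l) * exp(-sqrt(3) r / l).
        s = (3.0 ** 0.5) * r / self.length_scale
        return (1.0 + s) * torch.exp(-s)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) of standardized numerical features.
        r = (x.unsqueeze(-1) - self.centers).abs()    # (B, F, C) distances
        phi = self.matern32(r)                        # kernel responses
        # tokens: (B, F, d_token), one token per feature as in FT-Transformer.
        return torch.einsum("bfc,fcd->bfd", phi, self.weight) + self.bias


if __name__ == "__main__":
    # Example: a flattened 8x8 pressure map becomes 64 tabular features.
    tokenizer = MaternKANTokenizer(n_features=64, d_token=32)
    pressure = torch.randn(4, 64)       # batch of 4 flattened heatmaps
    print(tokenizer(pressure).shape)    # torch.Size([4, 64, 32])
```

The resulting (batch, n_features, d_token) token sequence would feed the FT-Transformer backbone unchanged; the Matérn-3/2 kernel is a plausible choice here because it is cheap to evaluate (no spline recursion) while remaining once-differentiable, which is consistent with the paper's stated goal of faster training and inference.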
ISSN: 2169-3536