Smart fusion of multi-sensor ubiquitous signals of mobile device for localization in GNSS-denied scenarios
Abstract
To support seamless indoor and outdoor location-based services (LBS), this paper proposes a smart fusion architecture, based on deep learning, for combining the ubiquitous signals of a mobile device with integrated multi-modal sensors; the architecture fuses vision, wireless, and inertial information. Its core is an improved four-layer deep neural network that integrates a convolutional neural network (CNN) and an improved particle filter. First, inspired by the construction of the RGB-D image, we modulate the image gray level with the normalized magnetic field strength and scale the image intensity with the normalized WiFi signal strength, which yields a new image named the RGB-WM image. Then, homogeneous features are extracted from the RGB-WM image by the improved CNN to achieve context awareness. Combining this context information, we introduce a new particle filter for fusing the different information from the multi-modal sensors. To evaluate the proposed positioning architecture, we conducted extensive experiments in four different scenarios, including our laboratory and the campus of our university. Experimental results show that the precision and recall of the RGB-WM image feature are 95.6% and 4.1%, respectively. Furthermore, the proposed infrastructure-free fusion architecture reduces the root mean square error (RMSE) of the estimated locations by 13.3–55.2% in walking experiments with two smartphones under two motion conditions, indicating the superior performance of the proposed image/WiFi/magnetic/inertial fusion architecture over the state of the art in these four localization scenarios. The ubiquitous positioning error of the proposed algorithm is less than 1.23 m, which meets the requirements of complex GNSS-denied regions.
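To make the RGB-WM construction concrete, the Python sketch below follows one plausible reading of the abstract: each frame's gray level is shifted by the normalized magnetic field magnitude, and the overall intensity is then scaled by the normalized WiFi received signal strength. The function name make_rgb_wm, the normalization ranges, and the exact channel arithmetic are illustrative assumptions; the paper's actual encoding may differ.

import numpy as np

def make_rgb_wm(rgb, mag_strength, wifi_rss,
                mag_min=20.0, mag_max=80.0,      # assumed indoor field range (uT)
                rss_min=-90.0, rss_max=-30.0):   # assumed WiFi RSS range (dBm)
    """Build a hypothetical RGB-WM image from a camera frame plus
    the magnetometer magnitude and WiFi RSS sampled at capture time."""
    # Normalize both scalar signals to [0, 1].
    m = np.clip((mag_strength - mag_min) / (mag_max - mag_min), 0.0, 1.0)
    w = np.clip((wifi_rss - rss_min) / (rss_max - rss_min), 0.0, 1.0)

    img = rgb.astype(np.float32) / 255.0
    gray = img.mean(axis=2, keepdims=True)  # per-pixel gray level, HxWx1

    # Shift the gray component toward the normalized magnetic strength,
    # then scale the whole image intensity by the normalized WiFi RSS.
    fused = np.clip((img - gray) + m, 0.0, 1.0) * w
    return (fused * 255.0).astype(np.uint8)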
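The abstract also names an improved particle filter as the fusion back end. The sketch below is a minimal generic bootstrap particle filter for this kind of inertial/vision/WiFi/magnetic fusion, not the paper's improved variant: the state model (2-D position propagated by a pedestrian-dead-reckoning step length and heading), the Gaussian observation model around a position cue from the CNN/WiFi/magnetic side, and all noise parameters are assumptions for illustration.

import numpy as np

def pf_step(particles, weights, step_len, heading, obs_pos,
            obs_sigma=1.5, step_sigma=0.2, rng=None):
    """One predict/update/resample cycle of a generic particle filter.

    particles : Nx2 array of (x, y) position hypotheses
    weights   : length-N normalized importance weights
    step_len, heading : PDR step length (m) and heading (rad) from inertial sensors
    obs_pos   : (x, y) position cue from the multi-sensor observation model
    """
    rng = rng or np.random.default_rng()
    n = len(particles)

    # Predict: propagate every particle with the inertial step plus motion noise.
    step = step_len * np.array([np.cos(heading), np.sin(heading)])
    particles = particles + step + rng.normal(0.0, step_sigma, size=(n, 2))

    # Update: reweight by the Gaussian likelihood of the observation cue.
    d2 = np.sum((particles - obs_pos) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_sigma ** 2)
    weights = weights / weights.sum()

    # Resample when the effective sample size collapses below n/2.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

    # Estimate: the fused location is the weighted mean of the particle cloud.
    return particles, weights, weights @ particles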