Fusing Detected Humans in Multiple Perception Sensors Network

Abstract

A fusion method is proposed to maintain the correct number of humans among all humans detected by a Robot Operating System (ROS) based perception sensor network (PSN) composed of multiple Kinects with partially overlapping fields of view (FOV). To this end, the fusion rules are derived from the parallel and orthogonal configurations of the Kinects in the PSN system. For the parallel configuration, the system decides whether a detected human lies in the FOV of a single Kinect or in the overlapping FOV of multiple Kinects by evaluating the angle formed between the human's location and the Kinect origin on the top view (x, z plane) of the 3D coordinate frame. Based on these angles, the PSN system either keeps the person seen in only one FOV or, when the person lies in the overlapping FOV of several Kinects, keeps the detection with the largest region of interest (ROI). For Kinects in the orthogonal configuration, 3D Euclidean distances between detected humans are used to group detections that are supposed to be the same person but observed by different Kinects; the system then keeps the detection with the larger ROI within each group. The experimental results demonstrate the superior performance of the proposed method in various scenarios.
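As a minimal illustration of the two fusion rules summarised above, the following Python sketch shows how the parallel-configuration angle test and the orthogonal-configuration Euclidean-distance grouping could be organised. The Detection class, the kinect_id convention, the overlap angle, the distance threshold, and the to_world transform are assumptions introduced for this example only and are not taken from the paper.

```python
# Illustrative sketch of the two fusion rules described in the abstract.
# All names, thresholds, and frame conventions below are assumptions made
# for the example; this is not the authors' implementation.
import math
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    kinect_id: int            # which Kinect produced the detection (assumed: 0 = left, 1 = right)
    x: float                  # position in that Kinect's camera frame (m), x pointing right
    y: float
    z: float                  # depth along the optical axis (m)
    roi_area: float           # area of the detected region of interest (pixel^2)

def top_view_angle_deg(d: Detection) -> float:
    """Angle between the detection and the Kinect's optical axis on the
    top view (x, z) plane, measured at the Kinect origin."""
    return math.degrees(math.atan2(d.x, d.z))

def fuse_parallel(dets_left: List[Detection],
                  dets_right: List[Detection],
                  overlap_angle_deg: float = 20.0) -> List[Detection]:
    """Parallel configuration: detections whose top-view angle points into
    the assumed overlapped region are treated as duplicates of the same
    person seen by the neighbouring Kinect; only the largest ROI is kept.
    For simplicity, at most one person is assumed to stand in the overlap."""
    singles, overlap = [], []
    for d in dets_left + dets_right:
        # Assumption: positive angles of the left Kinect and negative angles
        # of the right Kinect fall inside the shared (overlapped) FOV.
        toward_neighbour = (
            (d.kinect_id == 0 and top_view_angle_deg(d) > overlap_angle_deg) or
            (d.kinect_id == 1 and top_view_angle_deg(d) < -overlap_angle_deg)
        )
        (overlap if toward_neighbour else singles).append(d)
    if overlap:
        # Keep a single detection for the person in the overlap: the biggest ROI.
        singles.append(max(overlap, key=lambda d: d.roi_area))
    return singles

def fuse_orthogonal(dets_a: List[Detection],
                    dets_b: List[Detection],
                    to_world: Callable[[Detection], Tuple[float, float, float]],
                    same_person_dist_m: float = 0.5) -> List[Detection]:
    """Orthogonal configuration: detections from both Kinects are mapped into
    a common world frame; pairs closer than a 3D Euclidean threshold are
    assumed to be the same person, and only the larger ROI survives."""
    kept = list(dets_a)
    for db in dets_b:
        xb, yb, zb = to_world(db)
        duplicate = None
        for da in kept:
            xa, ya, za = to_world(da)
            if math.dist((xa, ya, za), (xb, yb, zb)) < same_person_dist_m:
                duplicate = da
                break
        if duplicate is None:
            kept.append(db)               # a person seen only by Kinect B
        elif db.roi_area > duplicate.roi_area:
            kept[kept.index(duplicate)] = db  # same person, keep the bigger ROI
    return kept
```

In both branches the larger ROI serves as the tie-breaking rule for duplicated detections, mirroring the rule stated in the abstract; the specific angle and distance thresholds would in practice be chosen from the Kinects' FOV geometry and placement.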

Keywords

Human detection, robot operating system, sensor fusion
