Imitation system for humanoid robotics heads
Abstract
Keywords
References
A. Paiva, J. Dias, D. Sobral, R. Aylett, P. Sobreperez, S. Woods, C. Zoll and L. Hall, "Caring for Agents and Agents that Care: Building Empathic Relations with Synthetic Agents", In Third International Joint Conference on Autonomous Agents and Multiagent Systems, Vol. 1, pp. 194-201, New York, USA, 2004.
M. Siegel, C. Breazeal and M. I. Norton, "Persuasive Robotics: The Influence of Robot Gender on Human Behavior", In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2563-2568, October 2009.
A. Tapus and M. J. Mataric, "Emulating Empathy in Socially Assistive Robotics", In AAAI Spring Symposium on Multidisciplinary Collaboration for Socially Assistive Robotics, Stanford, USA, March 2007.
M. J. Mataric, J. Eriksson, D. J. Feil-Seifer and C. J. Winstein, "Socially Assistive Robotics for Post-Stroke Rehabilitation", In Journal of NeuroEngineering and Rehabilitation, 4:5, 2007.
C. Jayawardena, I. H. Kuo, U. Unger, A. Igic, R. Wong, C. I. Watson, R. Q. Stafford, E. Broadbent, P. Tiwari, J. Warren, J. Sohn and B. A. MacDonald, "Deployment of a Service Robot to Help Older People", In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, Taiwan, pp. 5990-5995, October 2010.
S. S. Ge, C. Wang and C. C. Hang, "Facial Expression Imitation in Human Robot Interaction", In Proc. of the 17th IEEE International Symposium on Robot and Human Interactive Communication, Germany, pp. 213-218, 2008.
S. DiPaola, A. Arya and J. Chan, "Simulating Face to Face Collaboration for Interactive Learning Systems", In Proc. E-Learn 2005, Vancouver, 2005.
T. Chen, "Audio-Visual Integration in Multimodal Communication", In Proceedings of the IEEE, May 1998.
Kismet, Available at: http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html
T. Hashimoto, S. Hiramatsu, T. Tsuji and H. Kobayashi, "Development of the Face Robot SAYA for Rich Facial Expressions", In Proc. of 2006 SICE-ICASE International Joint Conference, Korea, pp. 5423-5428, October 2006.
L. Guoyuan, Z. Hongbin and L. Hong, "Affine Correspondence Based Head Pose Estimation for a Sequence of Images by Using a 3D Model", In Proc. Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'04), pp. 632-637, 2004.
D. DeCarlo and D. Metaxas, "The Integration of Optical Flow and Deformable Models with Applications to Human Face Shape and Motion Estimation", In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 231-238, 1996.
P. Ekman, W. V. Friesen and J. C. Hager, "Facial Action Coding System (FACS): The Manual", 2002.
P. Ekman and E. Rosenberg, "What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS)", 2nd edn., Oxford University Press, London.
J. A. Prado, C. Simplício, N. F. Lori and J. Dias, "Visuo-auditory Multimodal Emotional Structure to Improve Human-Robot-Interaction", In International Journal of Social Robotics, Vol. 4, No. 1, pp. 29-51, December 2011.
W. Zhiliang, L. Yaofeng and J. Xiao, "The research of the humanoid robot with facial expressions for emotional interaction", In Proc. First International Conference on Intelligent Networks and Intelligent Systems, pp. 416-420, 2008.
J. P. Bandera, "Vision-Based Gesture Recognition in a Robot Learning by Imitation Framework", Ph.D. Thesis, University of Malaga, 2009.
P. Viola and M. Jones, "Robust Real-time Object Detection", In Second International Workshop on Statistical and Computational Theories of Vision - Modeling, Learning, Computing, and Sampling, Canada, 2001.
A. Aly and A. Tapus, "Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction", In Proc. of the 5th European Conference on Mobile Robots (ECMR 2011), Sweden, pp. 101-108, September 2011.
C. Breazeal and L. Aryananda, "Recognition of Affective Communicative Intent in Robot-Directed Speech", In Autonomous Robots, Vol. 12, pp. 83-104, 2002.
M. Zecca, T. Chaminade, M. A. Umilta, K. Itoh, M. Saito and N. Endo, "Emotional Expression Humanoid Robot WE-4RII - Evaluation of the Perception of Facial Emotional Expressions by Using fMRI", In Robotics and Mechatronics Conference (ROBOMEC 2007), Akita, Japan, pp. 2A1-O10, 2007.
C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann and S. Narayanan, "Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information", In Proc. of the ACM 6th International Conference on Multimodal Interfaces (ICMI 2004), 2004.
Z. Zeng, M. Pantic, G. I. Roisman and T. Huang, "A Survey of Affect Recognition Methods: Audio, Visual and Spontaneous Expressions", In IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, pp. 39-58, 2008.
K.-E. Ko and K.-B. Sim, "Development of a Facial Emotion Recognition Method Based on Combining AAM with DBN", In International Conference on Cyberworlds 2010, pp. 87-91, 2010.
H.-B. Deng, L.-W. Jin, L.-X. Zhen and J.-C. Huang, "A New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA", In International Journal of Information Technology, Vol. 11, No. 11, pp. 86-96, 2005.
M. Gruendig and O. Hellwich, "3D Head Pose Estimation with Symmetry Based Illumination Model in Low Resolution Video", In Proc. 26th Symposium of the German Association for Pattern Recognition, Germany, Vol. 3175, pp. 45-53, 2004.
P. Fitzpatrick, "Head Pose Estimation without Manual Initialization", AI Lab, MIT, Cambridge, USA, 2000.
L. J. Manso, P. Bachiller, P. Bustos, P. Nunez, R. Cintas and L. Calderita, "RoboComp: A Tool-based Robotics Framework", In Simulation, Modeling and Programming for Autonomous Robots (SIMPAR), pp. 251-262, 2010.
Open Source Computer Vision Library, Available at: http://sourceforge.net/projects/opencvlibrary/
Available at: http://iadex.es
DOI: https://doi.org/10.14198/JoPha.2013.7.1.04