Perceptual Computing Laboratory, Waseda University
Green Computing System R&D Center
27 Waseda, Room 40-701
Shinjuku-ku, Tokyo, 162-0042, Japan

Curriculum Vitae

Yoichi Matsuyama (松山 洋一) is an Associate Research Professor at the Perceptual Computing Laboratory, Waseda University, in Tokyo. Prior to his current position, he was a Postdoctoral Fellow (Special Faculty) at the ArticuLab in the School of Computer Science, Carnegie Mellon University, until November 2018. He has been designing and developing conversational AI media systems for more than a decade. His research interest lies in computational models of human conversation, combining artificial intelligence, social science, and human-computer/robot interaction. At CMU, he led the SARA (Socially Aware Robot Assistant) project, which was exhibited at a number of high-profile conferences, including the World Economic Forum Annual Meeting 2017 in Davos, Switzerland. SARA was featured in numerous major media outlets, including MIT Technology Review, the Washington Post, CNBC, BBC, CNET, Popular Science, and Science Friday. His Ph.D. dissertation project was SCHEMA, a multiparty conversation facilitation robot; the dissertation covered its computational models of facilitation strategies and language generation, as well as the development of its robotic platform. He was also a visiting scholar at the iCub Facility (an embodied cognitive robotics research group) at the Italian Institute of Technology, and a committee member of ACM SIGGRAPH Asia. He received a B.A. in cognitive psychology and media studies, an M.E. in computer science, and a Ph.D. in computer science from Waseda University in 2005, 2008, and 2015, respectively. A backstory of his Ph.D. journey is posted here.

Mission Statement

Designing Socially Expressive Conversational AI Media to Assist and Entertain Human Lives.

Humans have conversed with one another since the beginning of time. Conversation is an essential and innate mode of human cognition when interacting with other people. As conversational user interfaces (UIs) to AI systems and new ways of processing information in society (here I call them conversational AI media) have recently emerged in the market, a novel user experience (UX) design area has arisen. However, existing commercial products (e.g., virtual assistants) play a minimal role beyond answering queries via voice input and output. These products fulfill only a fraction of the functions a human assistant can accomplish because they lack a deep understanding of the human conversational process. In the spirit of Marshall McLuhan's media theory, conversational AI can be regarded as a novel medium whose unique characteristics give it remarkable potential to change how our society produces, distributes, and consumes information toward 2020, 2025, and beyond. As an independent researcher, I scientifically investigate the nature of human conversation by designing conversational AI media that have an actual impact on the society of this century. For a fuller mission statement, see the post “Conversation as Media”.


Professional Experience

  • Postdoctoral Research Fellow, ArticuLab, Language Technologies Institute and Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, United States, 2014 – 2018 (Advisor: Justine Cassell)
  • Visiting Research Fellow, iCub Facility, Italian Institute of Technology, Genova, Italy, 2013 (Advisor: Giorgio Metta)
  • Director, WIZDOM (Waseda University Integrated Space of Wizards, Digital Oriented Manufacturers), 2012 – 2014
  • Research Associate, Waseda University, 2010 – 2013
  • Research Assistant, International Research and Education Center for Ambient SoC, Waseda University Global COE Program, 2008 – 2010
  • Program Committee (Student Volunteer Program Chair), ACM SIGGRAPH Asia, 2008 – 2009
  • Virtual Reality Content Designer, CAD CENTER Inc., Tokyo, 2004 – 2005


Education

  • Ph.D., Computer Science, 2015 (Advisor: Tetsunori Kobayashi, Perceptual Computing Group)
  • M.E., Computer Science, 2008 (Advisor: Tetsunori Kobayashi, Perceptual Computing Group)
  • B.A., Human and Social Sciences, 2005 (Advisor: Machiko Kusahara)

* All degrees from Waseda University, Tokyo, Japan


Honors and Awards

  • Best Paper Award, Human Agent Interaction 2019, October 2019
    (Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, and Justine Cassell, A Model of Social Explanations for a Conversational Movie Recommendation System, In Proceedings of the 7th International Conference on Human-Agent Interaction, pp. 135-143. ACM, 2019.)
  • Outstanding Research Award, Human Agent Interaction 2012, December 2012
    (Yoichi Matsuyama, Akihiro Saito, Iwao Akiba, Moemi Watanabe and Tetsunori Kobayashi, Facilitation Robot Promoting the Greatest Participation of the Greatest Number in Multiparty Conversation, Human-Agent Interaction Symposium 2012, 2B-3, December 2012.)
  • Best presentation, The Japanese Society for Artificial Intelligence SIG-SLUD (Special Interest Group of Speech, Language Understanding and Discourse Processing), February 2012
    (Yoichi Matsuyama, Akihiro Saito, Atsushi Ito, Iwao Akiba, Moemi Watanabe and Tetsunori Kobayashi, Active Timing Detection and Strategies for Multiparty Conversation Facilitation Systems, The Japanese Society for Artificial Intelligence (JSAI), SIG-SLUD-B203-05, pp.17-24, February 2013.)
  • Best presentation, The Japanese Society for Artificial Intelligence SIG-SLUD (Special Interest Group of Speech, Language Understanding and Discourse Processing), July 2008
    (Yoichi Matsuyama, Shinya Fujie, Hikaru Taniyama and Tetsunori Kobayashi, Communication Activation System in Group Communication, The Japanese Society for Artificial Intelligence (JSAI), SIG-SLUD-A801, pp.15-22, July 2008.)
  • Microsoft Scholarship, April 2009


Grants and Funding

  • Japan Science and Technology Agency (JST) Program for Creating STart-ups from Advanced Research and Technology (START) … approx. $1,000,000
  • Yahoo!-CMU InMind Project, 2015-2017 … $300,000
  • IT R&D program of MSIP/IITP 2017-0-00255, Autonomous Digital Companion Development, Korean Government, 2017 – 2018 … approx. $600,000
  • Google Faculty Award Grant, Grounding Task Behavior in the Social World: Deep Reinforcement Learning for Social Dialogue to Improve Task Performance, 2017 … $76,109
  • Google Cloud Research Credits, Socially Aware Robot Assistant, 2015 – 2017 … $50,000
  • AWS Cloud Credits for Research (2018Q1): PI … $30,000
  • Microsoft Grant, Socially Aware Robot Assistant, 2017 … $75,000 + Surface Hub
  • CMU President Donation for SARA (2017): co-PI … $100,000
  • CMU ProSEED Crosswalk Seed Grant (2018), “Holographic Archive of Research Projects (HARP)”: PI … $2,500
  • CMU The Frank-Ratchye Fund for Art @ the Frontier, Ghost Box – Holographic Display Prototype (2018) … $500
  • JSPS Grant-in-Aid for Scientific Research WAKATE-B (23700239), “Development and Evaluations of Multiparty Conversation Activation Systems”, 2010 – 2012 … 3,900,000 JPY (approx. $40,000)
  • JSPS Grant-in-Aid for Scientific Research WAKATE-B (25870824), “Facilitation Strategy for Multiparty Conversation Robots”, 2013 – 2015 … 3,770,000 JPY (approx. $39,000)
  • JSPS Takuetsu Graduate School Program Grant, 2014 … 9,128,641 JPY (approx. $100,000)
  • Yoichi Muraoka Grant, 2012 … 1,000,000 JPY (approx. $10,000)

Professional Services

Program Committee

  • Group Interaction Frontiers in Technology Workshop (GIFT), International Conference on Multimodal Interaction 2018 (ICMI 2018), Program Committee
  • NAACL-HLT 2018 (North American Chapter of the Association for Computational Linguistics: Human Language Technologies), Program Committee
  • ACL 2018 (Annual Meeting of the Association for Computational Linguistics), Program Committee
  • IWSDS 2018 (International Workshop on Spoken Dialogue Systems Technologies), Program Committee
  • Journal of Human Interface Society Japan “Human Collaboration” 2018, Associate Editor
  • RO-MAN 2016 (IEEE International Symposium on Robot and Human Interactive Communication), Associate Editor
  • ACM SIGGRAPH Asia Committee Member (2008 – 2009)

Conference/Session Organizer


Reviewer

  • Journals
    • IEEE Pervasive Computing, Special Issue – Conversational User Interfaces and Interactions, 2018
    • Journal of Behavioral Research Methods, 2015
    • International Journal of Affective Engineering, 2015
    • IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), 2014
  • Conferences
    • International Workshop on Spoken Dialogue System Technologies (IWSDS) 2017, 2018
    • Advanced Robotics, 2015
    • IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2014
    • International Conference on Intelligent Virtual Agents (IVA), 2014
    • Journal of Japanese Society for Artificial Intelligence (JSAI), 2012, 2013
    • International Conference on Social Robotics (ICSR), 2011

Patents / Inventions

  • Conversational Robot (Japan 2010-221556)
  • Conversational Facilitation System and Robot (Japan 2008-304140)
  • Deep Neural Network Based Conversational Strategy Classifier (CMU Disclosure of Invention, August 2017)
  • Rapport-Building Animated Virtual Agent for Dyadic Conversation (CMU Disclosure of Invention, April 2016)
  • Social Reasoner (CMU Disclosure of Invention, April 2017)

Demos and Exhibitions

SARA Project (Carnegie Mellon University): Project Lead

SCHEMA Project (Waseda University): Project Lead

Media Coverage

SARA Project (Carnegie Mellon University): Project Lead

“Speaking with SARA certainly felt less jarring than talking to a regular chatbot. The system studies the words a person says during a conversation as well as the tone of his or her voice, also using several cameras to study the speaker’s facial expressions and head movements.”

MIT Technology Review

SCHEMA Project (Waseda University)

    • “Cooking! Laundry! Even Laughter! Robots of the Future Living with People”, サイエンスZERO (Science ZERO), broadcast April 22, 2012, NHK