Chimpanzee (Pan troglodytes) cognitive mechanisms for joint action and virtual environment navigation

Student thesis: Doctoral Thesis (PhD)


Chimpanzees have demonstrated, across several experimental studies and field observations, that they can successfully work together. The cognitive mechanisms that chimpanzees employ for joint action, however, remain unclear. A key component of human co-ordination is the ability to represent not only one's own role, but also the role of a partner. In the first two studies presented, I report evidence that chimpanzees may also represent a partner's actions during joint action. First, I present evidence that chimpanzees accommodate an experimenter's actions when passing an object, possibly incorporating another's actions into their own action plans. Second, I present evidence that chimpanzees learn about a partner's actions, which may facilitate their ability to produce those actions themselves in a partial role-reversal task. Another open question about chimpanzee joint action concerns the motivation behind choosing to work together or alone. To investigate whether physical effort influences chimpanzees' apparatus choices, I present evidence from a task in which chimpanzees chose between a high-effort and a low-effort puzzle-box apparatus. Chimpanzees showed no preference for either apparatus. There is also a spatial component to joint action, and how the action space is represented may affect perspective taking and how others' actions are represented. In the final experiment, I examined chimpanzees' spatial frames of reference in a virtual environment task. The results showed that some subjects used a simple landmark as an allocentric cue, but not more distal landmarks. Learning how chimpanzees represent virtual spaces, and whether they can conceive of alternative perspectives, is an important first step towards virtual cooperative games with captive primates.
The results of this thesis suggest that chimpanzees understand the role of their partner during joint action, may not reduce their own effort, are sometimes able to use simple virtual landmarks, and can find out-of-sight food in a virtual environment.
Date of Award: 29 Nov 2023
Original language: English
Awarding Institution
  • University of St Andrews
Supervisor: Josep Call

Access Status

  • Full text open
