Telepresence


Operator controls for the telepresence robot:

  • Operator can push a button to make the robot more likely to attend to the primary face, and for longer
  • Operator can choose which face to look at (faces on screen are numbered) by pressing the corresponding keyboard key (see the gaze-control sketch after this list)
  • Operator's facial gestures are tracked by Faceshift and played on the robot
  • Operator's head gestures are tracked by Faceshift, filtered to establish a neutral position, and played relative to the current tracking behaviour (see the neutral-pose sketch after this list)
  • Operator can take control of the robot's gaze by pulling the trigger on a joystick; the joystick then controls the robot's gaze. When the trigger is pulled again, the robot tracks the face nearest to its current gaze (see the gaze-control sketch after this list).
  • Operator can choose what expression/animation to play by pushing a button on the joystick base
  • Operator's speech is processed by Annosoft, which produces a sequence of visemes that is then used to drive animated visemes in Blender.
    • This output should be blended with the Faceshift lip data; however, the closed-lip visemes subsume the Faceshift mouth positions
    • The mouth-closing viseme actions should also subsume any expression animations, to ensure correct speech motions (see the blending sketch after this list)
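
A minimal sketch of the neutral-position filtering for head gestures, assuming the Faceshift head pose arrives as (yaw, pitch, roll) tuples; the class name and the adaptation rate are assumptions, not the actual implementation:

  class NeutralPoseFilter:
      """Establish a neutral head pose and play gestures relative to it."""

      def __init__(self, adapt_rate=0.01):
          self.adapt_rate = adapt_rate  # how quickly the neutral estimate drifts
          self.neutral = None           # (yaw, pitch, roll), e.g. in radians

      def update(self, pose):
          """pose: (yaw, pitch, roll) from Faceshift; returns the pose
          relative to the slowly-adapting neutral position."""
          if self.neutral is None:
              self.neutral = pose
          # Pull the neutral estimate slowly toward the current pose, so
          # long-term drift is absorbed while deliberate gestures are not.
          self.neutral = tuple(n + self.adapt_rate * (p - n)
                               for n, p in zip(self.neutral, pose))
          return tuple(p - n for p, n in zip(pose, self.neutral))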
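
A sketch of the joystick gaze toggle and the numbered-face selection, assuming faces are reported as (pan, tilt) positions; the class and its methods are hypothetical:

  import math

  class GazeController:
      """Trigger toggles between joystick-driven gaze and face tracking."""

      def __init__(self):
          self.manual = False       # True while the operator steers the gaze
          self.target = (0.0, 0.0)  # current gaze point (pan, tilt)

      def on_trigger(self, faces):
          """Called on each trigger pull; `faces` lists the (pan, tilt)
          positions of the currently detected faces."""
          if self.manual and faces:
              # Leaving manual mode: track the face nearest the current gaze.
              self.target = min(faces, key=lambda f: math.hypot(
                  f[0] - self.target[0], f[1] - self.target[1]))
          self.manual = not self.manual

      def on_joystick(self, pan, tilt):
          """Joystick axes steer the gaze only while in manual mode."""
          if self.manual:
              self.target = (pan, tilt)

      def select_face(self, index, faces):
          """Keyboard control: look at face number `index` (faces on
          screen are numbered)."""
          if 0 <= index < len(faces):
              self.target = faces[index]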
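
A sketch of the mouth-layering rule above: a closed-lip viseme overrides both the Faceshift lip data and any expression animation, otherwise the three sources are mixed. The viseme labels and the blend weights are assumptions:

  # Visemes that close the lips; these must win over both the Faceshift
  # mouth data and any running expression animation.
  CLOSED_LIP_VISEMES = {"M", "B", "P"}  # assumed labels for the Annosoft output

  def blend_mouth(viseme, viseme_pose, faceshift_pose, expression_pose):
      """Return the mouth pose to play, as a dict of shape-key weights."""
      if viseme in CLOSED_LIP_VISEMES:
          return dict(viseme_pose)  # closed lips subsume everything else
      # Otherwise mix the three sources; the weights are illustrative only.
      keys = set(viseme_pose) | set(faceshift_pose) | set(expression_pose)
      return {k: 0.5 * viseme_pose.get(k, 0.0)
                 + 0.3 * faceshift_pose.get(k, 0.0)
                 + 0.2 * expression_pose.get(k, 0.0)
              for k in keys}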

Faceshift smoothing

The Faceshift input needs to be averaged/dampened, but with the results biased towards the expressive extremes. Let some finer-grain data get through sometimes (randomly or under some conditions? Probably the forehead): it shouldn't stop moving, but its motion should stay damped.
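
A sketch of one way to do this, assuming Faceshift blendshape coefficients in [0, 1] with 0 as neutral: an exponential moving average whose smoothing factor grows with the coefficient's extremity, plus an occasional raw pass-through for the forehead channels. Channel names and constants are assumptions:

  import random

  class ExpressiveSmoother:
      """Dampen Faceshift blendshapes while biasing toward the extremes."""

      # Forehead channels whose fine-grained motion sometimes passes through.
      PASS_THROUGH = {"browInnerUp", "browOuterUpLeft", "browOuterUpRight"}

      def __init__(self, base_alpha=0.15, extreme_boost=0.5):
          self.base_alpha = base_alpha        # heavy smoothing near neutral
          self.extreme_boost = extreme_boost  # extra responsiveness near 1.0
          self.state = {}

      def update(self, frame):
          """frame: dict of blendshape name -> coefficient in [0, 1]."""
          out = {}
          for name, value in frame.items():
              prev = self.state.get(name, value)
              if name in self.PASS_THROUGH and random.random() < 0.3:
                  # Occasionally let the raw forehead motion straight through.
                  out[name] = value
              else:
                  # Smooth heavily near neutral (0) and lightly near the
                  # extremes (1), so the output is damped yet still reaches
                  # full expressions and lingers in them.
                  alpha = self.base_alpha + self.extreme_boost * value
                  out[name] = prev + alpha * (value - prev)
              self.state[name] = out[name]
          return out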

  • The main problem with the Faceshift control is the halting. Solution:
    • Detect the halting conditions: (1) the face-lost condition, (2) the frozen-face condition
    • Keep some motion going even when the face motion has halted
    • Blend autonomous behaviours into the Faceshift motions (see the sketch below)
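
A sketch of a halting watchdog and the autonomous blend, with placeholder thresholds; where the autonomous pose comes from is outside this sketch:

  import time

  class HaltWatchdog:
      """Detect the two halting conditions: face lost and frozen face."""

      def __init__(self, lost_timeout=0.3, frozen_timeout=0.5, epsilon=1e-3):
          self.lost_timeout = lost_timeout      # seconds with no frame at all
          self.frozen_timeout = frozen_timeout  # seconds with no visible change
          self.epsilon = epsilon                # change below this counts as frozen
          self.last_frame = None
          self.last_frame_time = 0.0
          self.last_motion_time = 0.0

      def observe(self, frame, now=None):
          """frame: dict of blendshape values, or None when the face is lost."""
          now = time.time() if now is None else now
          if frame is not None:
              moved = self.last_frame is None or any(
                  abs(v - self.last_frame.get(k, 0.0)) > self.epsilon
                  for k, v in frame.items())
              if moved:
                  self.last_motion_time = now
              self.last_frame = frame
              self.last_frame_time = now

      def halted(self, now=None):
          now = time.time() if now is None else now
          return (now - self.last_frame_time > self.lost_timeout        # face lost
                  or now - self.last_motion_time > self.frozen_timeout)  # frozen

  def blend(faceshift_pose, autonomous_pose, weight):
      """Crossfade between Faceshift and autonomous poses; `weight` is
      ramped toward 1 while halted and back toward 0 once tracking
      resumes, so some motion always keeps going."""
      keys = set(faceshift_pose) | set(autonomous_pose)
      return {k: (1.0 - weight) * faceshift_pose.get(k, 0.0)
                 + weight * autonomous_pose.get(k, 0.0)
              for k in keys}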