Behavior Layers

  • Puppeteering
    • Motion capture (Faceshift, Dynamixyz, or similar, plus full-body mocap when relevant)
    • Lipsync to actor-spoken dialog
    • Real-time add / delete / modify events from the timeline (Timeline UI)
    • Animation layer composition: override (including keepalive) or additive/blend (must be specified per-track, default is override without passthrough); see the sketch after this list
    • Event responses: reject or passthrough, default is reject
  • Performances
    • Operate on a linear timeline, which may be extended indefinitely until terminated
    • Timeline event scripts, whether coded via text or drag-and-drop script blocks onto the timeline (Timeline UI)
    • Can handle inputs (or pass them through) in addition to establishing timing for straight performances, and can switch various input handling on and off to determine when to be in “on stage” pure-performance mode versus a mix of performance and interaction
    • Can call scripted behaviors
    • Can modulate parameters of lower level events
    • Animation layer composition: override w/ passthrough or additive/blend (must be specified per-track, default is override w/ passthrough)
    • Event responses: reject or handle (the handler may decline the event, or handle it and also send it to the next layer)
  • Scripted Behaviors
    • Operate in an event loop until terminated
    • Behavior trees, whether coded via text or graphically (Ghost tool)
    • Handle inputs (user input such as speech or gesture, or internal motivation state changes)
    • Generate dialog
    • Lipsync to generated dialog
    • Can call into the autonomous network to get inputs or outputs
    • Can modulate parameters of lower level events
    • Animation layer composition: override w/ passthrough or additive/blend (must be specified per-network, default is override w/ passthrough)
    • Event responses: handle (the handler may decline the event, or handle it and also send it to the next layer)
  • Autonomous Motivated Behaviors
    • Operate in an event loop until terminated
    • OpenCog behavior networks
    • Handle inputs
    • Generate dialog
    • May subsume more duties of scripted behaviors as the behavior network becomes more advanced, using the hand-scripted behaviors as an initial training set
    • Animation layer composition: override w/ passthrough or additive/blend (a learned choice that gets encoded in the network)
    • Event responses: handle
  • Keepalive Behavior
    • Generate events in an endless loop until shutdown
    • Must be explicitly muted by higher layers; otherwise they always pass through to the top
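
To make the per-track composition modes and event responses above concrete, here is a minimal Python sketch of how a layer stack could route a single event top-down. Every name in it (Comp, Verdict, Layer, dispatch) is a hypothetical illustration, not an identifier from any Hanson Robotics codebase.

    from enum import Enum, auto

    class Comp(Enum):
        """Animation layer composition modes from the list above."""
        OVERRIDE = auto()               # replace lower layers; nothing passes through
        OVERRIDE_PASSTHROUGH = auto()   # replace while active, otherwise let lower layers through
        ADDITIVE = auto()               # blend with lower-layer output

    class Verdict(Enum):
        """Event responses from the list above."""
        REJECT = auto()          # swallow the event
        HANDLE = auto()          # act on it and stop
        HANDLE_FORWARD = auto()  # act on it and also send it to the next layer
        PASSTHROUGH = auto()     # forward it untouched

    class Layer:
        def __init__(self, name, default_comp, decide):
            self.name = name
            self.default_comp = default_comp  # layer-wide default from the spec
            self.track_comp = {}              # per-track overrides
            self.decide = decide              # event -> Verdict

        def comp_for(self, track):
            return self.track_comp.get(track, self.default_comp)

    def dispatch(stack, event):
        """Walk the stack top-down, honoring each layer's verdict."""
        trail = []
        for layer in stack:
            verdict = layer.decide(event)
            trail.append(f"{layer.name}:{verdict.name}")
            if verdict in (Verdict.REJECT, Verdict.HANDLE):
                break  # the event stops here
        return trail

    # Hypothetical stack wiring the defaults listed above; puppeteering is
    # idle in this example, so it is set to passthrough instead of its
    # reject default.
    stack = [
        Layer("puppeteering", Comp.OVERRIDE, lambda e: Verdict.PASSTHROUGH),
        Layer("performance", Comp.OVERRIDE_PASSTHROUGH, lambda e: Verdict.HANDLE_FORWARD),
        Layer("scripted", Comp.OVERRIDE_PASSTHROUGH, lambda e: Verdict.HANDLE),
        Layer("autonomous", Comp.OVERRIDE_PASSTHROUGH, lambda e: Verdict.HANDLE),
    ]

    print(dispatch(stack, {"topic": "speech", "text": "hello"}))
    # ['puppeteering:PASSTHROUGH', 'performance:HANDLE_FORWARD', 'scripted:HANDLE']

The per-track comp_for lookup is where a performance could, say, leave the eye tracks additive while overriding the mouth track.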

All layers may utilize stored and parameterized or generative animations for gestures and lipsync; there is no exclusivity once the library (stored) or API (generative) is defined. The one exception is motion-capture puppeteering, which, when set to drive the hardware directly, overrides all else.
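
As a hedged sketch of that rule: every layer shares the same stored/generative animation sources, and a single direct-drive mocap flag overrides everyone else. The names (GestureSource, MotorBus, send_to_motors) are invented for illustration, not part of any real interface.

    def send_to_motors(frames):
        """Stand-in for the real hardware interface (assumption)."""
        print("driving motors with", len(frames), "frames")

    class GestureSource:
        """Stored/parameterized clips with a generative API as fallback."""
        def __init__(self, library, generator=None):
            self.library = library      # name -> parameterized clip factory
            self.generator = generator  # optional generative API

        def get(self, name, **params):
            if name in self.library:
                return self.library[name](**params)
            if self.generator is not None:
                return self.generator(name, **params)
            raise KeyError(f"no stored or generative animation for {name!r}")

    class MotorBus:
        """Any layer may play animations unless mocap drives directly."""
        def __init__(self):
            self.mocap_direct = False  # True lets puppeteering seize the hardware

        def play(self, layer_name, frames):
            if self.mocap_direct and layer_name != "puppeteering":
                return  # direct-drive mocap overrides all other layers
            send_to_motors(frames)

    source = GestureSource({"nod": lambda amplitude=1.0: [amplitude] * 10})
    bus = MotorBus()
    bus.play("scripted", source.get("nod", amplitude=0.5))  # plays
    bus.mocap_direct = True
    bus.play("scripted", source.get("nod"))                 # silently overridden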

Behavioral layers are NOT mutually exclusive unless explicitly set to be so (via animation layering and event handling), which means performances and interactions can coexist, with both able to place events on the timeline layers and remove them. So we should not think in terms of an interactive “mode” versus a performance “mode”; rather, performances will sometimes cause events to be ignored and autonomous behaviors to be overridden, just as a human actress doesn’t mechanically “turn off” the part of her brain that handles spurious inputs but simply ignores them. In fact, many performances require both timing and input event handling (e.g., listening for another performer’s dialog or watching their moves). A blend of some autonomous “poke-through” via selective per-track animation composition, plus specialized event handling that mutes irrelevant inputs during the performance, will thus be far more common than strict modalities in achieving realistic performances.
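
For instance, a performance’s event filter might mute face tracking while still listening for a co-performer’s line. A minimal sketch, with topic names invented for illustration:

    # Mute irrelevant inputs during the act, but keep listening for cues.
    PERFORMANCE_MUTES = {"face_tracking", "small_talk"}  # noise during the act
    PERFORMANCE_CUES = {"co_performer_line"}             # inputs the act depends on

    def performance_filter(event):
        """Reject noise, handle cues, and pass everything else down the stack."""
        topic = event["topic"]
        if topic in PERFORMANCE_MUTES:
            return "reject"       # mute: the performance ignores this input
        if topic in PERFORMANCE_CUES:
            return "handle"       # e.g. advance the timeline on the partner's cue
        return "passthrough"      # let scripted/autonomous layers respond

    print(performance_filter({"topic": "co_performer_line"}))  # handle
    print(performance_filter({"topic": "face_tracking"}))      # reject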



