Multi User Full Body Avatars
This example shows how to use full body avatars in Multi-User sessions. The avatars play speaking animations and use mouth morphs to move each user's mouth while they talk, and the session can save the audio, automatically transcribe it, and play it back with the session replay.
Running the Script
Configure options (see below)
Modify or add new environment, avatars, interactions or additional objects
Set up speech options
Discord (additionally, set the "Push to Talk" option in Voice and Video settings so that speaking animations play while you talk)
Review data
Video Recording
Audio transcriptions
Raw data
If needed, add additional functionality from the example templates
Configuration
Open the Conversation_Config.py file and configure the following options.
ENVIRONMENT: Not necessary to set if using the GUI. Use your own model or find one in utils/resources/environment
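For example, in Conversation_Config.py (the file name below is a placeholder, not a model that necessarily ships with the example):

    # Example only -- point this at any environment model, or leave it to the GUI
    ENVIRONMENT = 'utils/resources/environment/my_room.osgb'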
Avatar Configuration Options
AVATAR_MODEL: The avatar model to use. Use your own or find one in utils/resources/avatar/full_body
AVATAR_MODEL2: The avatar model for the second user. If you need more than two users, copy and paste the corresponding code to add more.
AVATAR1_POS: Choose position of first avatar
AVATAR1_ORI: Choose orientation of first avatar
AVATAR2_POS: Choose position of second avatar
AVATAR2_ORI: Choose orientation of second avatar
TALK_ANIMATION: The animation index for the avatar's talking animation.
IDLE_ANIMATION: The animation index for the avatar's idle or default pose.
NECK_BONE, HEAD_BONE, SPINE_BONE: String names of the bones used for the follow viewpoint (can find these by opening the avatar model in Inspector)
TURN_NECK: Boolean flag indicating whether the neck needs to be turned initially
NECK_TWIST_VALUES: List of values defining the neck's twisting motion (format: [yaw, pitch, roll]).
MOUTH_OPEN_ID: The ID number of the morph target for opening the mouth (find in Inspector).
BLINKING: Boolean flag to enable or disable blinking animations.
BLINK_ID: The ID number of the morph target for blinking.
DISABLE_LIGHTING_AVATAR: Boolean flag to disable scene lighting on the avatar if the environment lighting is too blown out
ATTACH_FACE_LIGHT: Boolean flag to attach a light source to the avatar's face.
FACE_LIGHT_BONE: The name of the bone to which the face light is attached if ATTACH_FACE_LIGHT is true.
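A sketch of how these avatar options might look in Conversation_Config.py. All of the values below (file names, positions, animation indices, bone names, and morph IDs) are placeholder examples that depend on your specific avatar model; check the model in Inspector for the real indices and names:

    # Avatar setup -- example values only, adjust for your own models
    AVATAR_MODEL = 'utils/resources/avatar/full_body/avatar_1.fbx'   # placeholder file name
    AVATAR_MODEL2 = 'utils/resources/avatar/full_body/avatar_2.fbx'  # placeholder file name
    AVATAR1_POS = [0, 0, 1.5]          # [x, y, z] position of the first avatar
    AVATAR1_ORI = [180, 0, 0]          # [yaw, pitch, roll] so the avatars face each other
    AVATAR2_POS = [0, 0, -1.5]
    AVATAR2_ORI = [0, 0, 0]
    TALK_ANIMATION = 2                 # animation index for talking (model dependent)
    IDLE_ANIMATION = 1                 # animation index for the idle/default pose
    NECK_BONE = 'mixamorig:Neck'       # bone names vary by avatar library (check in Inspector)
    HEAD_BONE = 'mixamorig:Head'
    SPINE_BONE = 'mixamorig:Spine2'
    TURN_NECK = False                  # set True if the neck needs an initial turn
    NECK_TWIST_VALUES = [0, 0, 0]      # [yaw, pitch, roll] applied when TURN_NECK is True
    MOUTH_OPEN_ID = 0                  # morph target ID for opening the mouth
    BLINKING = True                    # enable blinking animations
    BLINK_ID = 1                       # morph target ID for blinking
    DISABLE_LIGHTING_AVATAR = False    # set True if environment lighting blows out the avatar
    ATTACH_FACE_LIGHT = False          # attach a light to the avatar's face
    FACE_LIGHT_BONE = 'Head'           # bone the face light attaches to when ATTACH_FACE_LIGHT is True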
Additional Configuration Options
STARTING_POSITION, STARTING_POSITION2: Adjust where each user starts. These should generally match the avatar positions, but may need adjusting between headsets
USE_KEY_FOR_SPEECH: Boolean flag to require holding down a key to speak ('c' by default); this can be synchronized with "Push to Talk" in Discord
SIMULATE_KEY_PRESS_FOR_DISCORD: Boolean flag to simulate a key press that is sent to Discord
KEY_TO_SIMULATE: Currently set to the 'c' key; should match the key set in Discord if using "Push to Talk"
RECORD_AUDIO: Saves an mp3 file to utils/recordings
USE_PASSTHROUGH: Choose whether to use Mixed Reality passthrough (select 'empty.osgb' as the environment)
BIOPAC_ON: Choose whether to connect with Biopac AcqKnowledge to measure physiological responses
SET_CONTINUE_KEY: Set the key used to start and stop the experiment
STARTING_TEXT: Set the starting text
USE_KEYS_TO_NAVIGATE_SERVER_VIEW: Set to use keyboard commands to navigate the server view (see Controls page)
HIDE_HUD : Hide the HUD overlays
HIDE_GAZE_POINT : Hide the Gaze Point
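A similar sketch for these options; again, the values are illustrative examples rather than the defaults that ship with the script:

    # Additional options -- example values only
    STARTING_POSITION = [0, 0, 1.5]     # usually matches AVATAR1_POS; may vary per headset
    STARTING_POSITION2 = [0, 0, -1.5]   # usually matches AVATAR2_POS
    USE_KEY_FOR_SPEECH = True           # hold a key to speak ('c' by default)
    SIMULATE_KEY_PRESS_FOR_DISCORD = True
    KEY_TO_SIMULATE = 'c'               # must match Discord's "Push to Talk" key
    RECORD_AUDIO = True                 # saves an mp3 file to utils/recordings
    USE_PASSTHROUGH = False             # if True, use 'empty.osgb' as the environment
    BIOPAC_ON = False                   # connect to Biopac AcqKnowledge for physiological data
    SET_CONTINUE_KEY = ' '              # example: spacebar starts and stops the experiment
    STARTING_TEXT = 'Press spacebar to start'
    USE_KEYS_TO_NAVIGATE_SERVER_VIEW = True
    HIDE_HUD = False                    # hide the HUD overlays
    HIDE_GAZE_POINT = False             # hide the gaze point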
Interaction
If USE_KEY_FOR_SPEECH is True, hold either the 'c' key or the right-hand (RH) grip button to start speaking, and release to stop
See this page for additional controls, including server navigation controls
Press 't' to get the position of the server view; this can then be copied and pasted into the code to change the starting viewpoint
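As a rough illustration of how the hold-to-talk key could be hooked up in Vizard (the actual wiring, including the grip button and the talking animation switch, is handled by the example script; the handler functions below are hypothetical placeholders):

    import vizact

    # Hypothetical handlers -- in the real script these switch the avatar between
    # TALK_ANIMATION and IDLE_ANIMATION and start/stop the audio recording.
    def start_speaking():
        print('key held: play talking animation')

    def stop_speaking():
        print('key released: return to idle animation')

    # Hold-to-talk on the 'c' key (the RH grip button is handled by the example itself)
    vizact.onkeydown('c', start_speaking)
    vizact.onkeyup('c', stop_speaking)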
Modifying Environment and Avatar(s)
See this page for places to get new assets (works with Avaturn, ReadyPlayerMe, Reallusion, Mixamo, RocketBox, and other FBX avatar libraries)
Place the environment model in utils/resources/environment (the default location), or reference its new path
For adding new avatars, see https://sightlab.worldviz.com/examplestemplates/adding-avatar-agents
Modify the config file to update the environment and avatar paths, as well as the avatar options (see the sketch below)
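A minimal sketch of those updates, assuming the new assets were placed in the default resource folders (the file names are placeholders):

    # Conversation_Config.py -- point the options at the new assets
    ENVIRONMENT = 'utils/resources/environment/my_new_room.osgb'
    AVATAR_MODEL = 'utils/resources/avatar/full_body/readyplayerme_avatar_1.fbx'
    AVATAR_MODEL2 = 'utils/resources/avatar/full_body/mixamo_avatar_2.fbx'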