Running a Multi-User Session
Screen Record - Records a video of the session and saves it in the “recordings” folder (note: to compress the videos, it is recommended to install the moviepy Python library using the package manager and to install the K-Lite Codec Pack)
Number of Trials - Allows you to set how many trials you want to run. If left blank, the default is unlimited
Fixation Time - Adjusts the time in milliseconds required to register a fixation on an object (default: 500 milliseconds). This can also be adjusted in the code, in the experiment function.
Environment - Choose the environment model you wish to run your session with. Place any additional environment models in the utils/resources/environment folder.
Configure - See below.
Revert to Default Settings - Reverts to the default settings.
Continue - Saves the current configuration and runs the session (the last saved configuration is auto-filled on each run).
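As a rough illustration of how the Fixation Time threshold behaves, the sketch below accumulates gaze dwell time on an object and reports a fixation once the threshold is crossed. The function and variable names are illustrative only, not SightLab's actual API:

```python
# Default fixation threshold in milliseconds, matching the GUI default.
FIXATION_THRESHOLD_MS = 500

def update_fixation(dwell_ms, frame_ms, threshold_ms=FIXATION_THRESHOLD_MS):
    """Add this frame's dwell time on the currently gazed object and
    return (new_dwell_ms, fixated), where fixated becomes True once
    the accumulated dwell time reaches the threshold."""
    new_dwell = dwell_ms + frame_ms
    return new_dwell, new_dwell >= threshold_ms
```

Raising the threshold makes fixations harder to register (fewer, longer fixations); lowering it counts briefer glances as fixations.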
After choosing an environment from the dropdown, press “Configure” to choose fixation objects. Check or uncheck “Fixations” for the objects you wish to collect data on, use “Visible” to set visibility, and check “Grabbable” for items you wish to grab. To manually add an object that is in your model, enter its name in the “Child Name” field (this is for objects that were not added as a group node; see below).
NOTE: Wait to press "Continue" until after all the clients have joined
Next, make sure all the assets you are using are copied over to all the additional clients. This includes any avatars in the resources/avatar/heads folder and environment files in resources/environment.
After you’ve copied over the assets, run the SightLabVR_Client.py script for the first client (either on the same machine or a separate machine). Note: it does not matter which client starts first.
When the script starts you will first choose your hardware from the dropdown:
Next, choose which client number you are using (for the first client choose 1, for the second choose 2, etc.)
Next, input the computer name of the host machine running the server (note: this can also be hard-coded into the script so you won’t have to input it every time). In the Windows search bar you can type “About Your PC” to find and copy the device name of the computer running the server script.
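If you prefer to hard-code the host, a minimal sketch of the idea is below. The variable and function names are hypothetical; check the top of SightLabVR_Client.py for the actual variable the script uses:

```python
import socket

# Hypothetical constant: set this to the host's device name (the one
# shown under "About Your PC" on the server machine) to skip the prompt.
SERVER_NAME = ""

def resolve_server():
    """Return the hard-coded server name if set; otherwise fall back to
    this machine's own hostname (useful when the client and server run
    on the same PC)."""
    return SERVER_NAME or socket.gethostname()
```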
Choose the avatar that will represent you from the list of available avatars.
Connect any additional clients in the same way, running SightLabVR_Client.py on each additional machine and choosing “client 2” for the second user, and so on.
When you are ready to begin, press "Continue" on the server. To ensure that all clients are ready to connect, wait until the print statement "After Server Continue" shows up in the Vizard IDE interactive window, then press the spacebar on the server. This starts the timer, begins collecting eye tracking data, and starts the AcqKnowledge server to collect physio data for each user if you are using Biopac (it also triggers the video recording from the server script).
Each client can look around the environment, and fixation data will print out in real time in each client’s mirrored window. Fixation data will also be saved to the experiment_data.txt file.
Use the ‘P’ key to toggle the gazepoint on and off for the participant. It is always on for the mirrored view.
Navigation:
For SteamVR based headsets use RH Trackpad to teleport and LH Trackpad for smooth locomotion. LH grip to rotate left and RH grip to rotate right.
For Oculus, use the RH thumbstick to teleport, and the LH stick while holding the RH 'B' button for smooth locomotion.
These can be changed if needed using vizconnect
To navigate the server window, you can either use Vizard's standard mouse-based navigation or, as of version 1.10.0, keyboard controls; see the controls page for the keys, which can be modified.
Note: If using desktop mode, the mouse will lock once the experiment starts (if you've set that option to "True" in the settings.py file). Use Alt+Tab to switch back to the server window (or any other window).
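In settings.py, the option might look something like the fragment below. The flag name here is hypothetical; check the file itself for the exact name:

```python
# settings.py (flag name is hypothetical; see the actual file)
LOCK_MOUSE = True  # lock the mouse to the experiment window in desktop mode
```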
When your experiment is finished, press the spacebar on the server again. You will then see a gaze path in each user’s respective mirrored window, the data files will be saved in each user’s respective “data” folder, and a video recording will be saved in the server folder under “recordings” if you have checked that option.
After you quit, you'll see three data files saved in the data folder:
tracking_data.txt - A timestamp along with the x, y, z coordinates of the gaze intersect, head position, pupil diameter (if you are using a headset that tracks pupil diameter), and custom flags. See below for how to add more items to this file.
experiment_data.txt - A summary of fixations: the number of fixations per object, total fixation time, average fixation time, and a timeline of fixations.
tracking_data_replay.txt - Used by the session_replay script; you do not need to use it directly.
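The data files are comma-separated text, so they can also be loaded programmatically. A minimal sketch assuming the file has a header row (the column names used in the test are illustrative; the actual columns vary by headset):

```python
import csv

def load_tracking(path):
    """Read a SightLab data file (comma-separated text with a header
    row) into a list of per-row dicts, one dict per sample."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```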
You can change the extension from .txt to .csv if you wish to view a file in a spreadsheet editor. If you enabled recording, a video recording is also saved to the “recordings” folder (note that videos are uncompressed and take up a significant amount of hard drive space).
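Because recordings are uncompressed, you may want to re-encode them afterwards. A sketch using moviepy (the optional install mentioned under the Screen Record option; the moviepy.editor import path applies to moviepy 1.x, and the recording filename here is just an example):

```python
from pathlib import Path

def output_path(recording):
    """MP4 path next to the original uncompressed recording."""
    return Path(recording).with_suffix(".mp4")

def compress_recording(recording):
    """Re-encode a recording to H.264 MP4 using moviepy."""
    # Lazy import so the rest of the script runs without moviepy installed.
    from moviepy.editor import VideoFileClip
    clip = VideoFileClip(str(recording))
    clip.write_videofile(str(output_path(recording)), codec="libx264")
    clip.close()
```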
Session Replay
After a session is run, you can run the SessionReplay_Server.py script to see an interactive replay with visualizations.