Running a Session (Single User)
Run SightLabVR.py (for scenes with 3D models) or SightLabVR_360.py (for spherical videos and photos). This runs a session using the GUI interface. If you would rather specify your setup in code instead of using the GUI, see this page.
Choose your hardware from the available eye-tracked headsets, desktop mode, or non-eye-tracked headsets (for non-eye-tracked headsets, head position is used for analytics).
Configure your options:
Screen Record - Records a video of the session and saves it in the “recordings” folder (note: to compress the videos, it is recommended to install the moviepy Python library using the package manager and to install the K-Lite codec pack). Also note that resizing the mirrored window while recording zooms the perspective in on the video so it is no longer accurate. The mirrored window is set to 800x720, as that closely approximates what the user actually sees in the HMD.
Number of Trials - Set how many trials you want to run. If left blank, the default is unlimited.
Grabbable Objects - Toggle whether objects in your scene can be grabbed.
Fixation Time - Adjust the time in milliseconds required to register a fixation on an object (default is 500 milliseconds). This can also be adjusted in code in the experiment function.
Avatar - Choose the head or hands model to use for session replay. Place additional hand or head models in the avatar hands and head folders (in resources) to make them available.
Environment - Choose the environment model you wish to run your session with. Place any additional environment models in the resources/environment folder (this location can be changed; see changing the resources directory on this page).
Configure - See below.
Gaze Point - Choose the gaze intersect point object.
Revert to Default Settings - Reverts to the default settings.
Continue - Saves the current configuration and runs the session (the last saved configuration will be auto-filled on each run).
After choosing an environment from the dropdown, press “Configure” to choose fixation objects. Check or uncheck “Fixations” for the objects you wish to collect data on, use “Visible” to control visibility, and check “Grabbable” to make particular objects grabbable. To manually add an object that is in your model, enter its name in the “Child Name” field (this is for objects that were not added as a group node; see below). When finished, click “Done”.
Note: if you run out of space, see the example "gaze_time_subnodes" for how to get around this by setting the list of objects in code (a sketch for finding child-node names follows below).
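If you need to reference objects by name (either in the “Child Name” field or when building the object list in code), it helps to know the named sub-nodes of your environment model. The following is a minimal plain-Vizard sketch (not SightLab's own code, and the model path is only an example) that prints those names:

    import viz

    viz.go()

    # Load the environment model (example path; substitute your own model file).
    env = viz.addChild('resources/environment/my_environment.osgb')

    # Print every named sub-node in the model; these are the names you can use
    # in the "Child Name" field or in a list of fixation objects in code.
    for name in env.getNodeNames():
        print(name)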
SightLabVR_360.py options:
Add any of your own 360 videos or photos to the “media” folder, or use the included media.
Screen Record - Records a video of the session and saves it in the “recordings” folder (note: the videos are uncompressed and may take up a lot of hard drive space).
Media Type - Choose whether you are using a spherical video or photo.
Format - Choose Mono or Stereo. (The default layout for 3D video is top/bottom; to change to left/right you will need to edit the “StereoSphere” function in the “panorama_utils.py” module. Contact WorldViz for help with this.)
Media - Add your 360 videos or images to the “resources/media” folder and they will show up in this list to choose from (this location can be changed; see changing the media directory on this page). Note that for certain video types (including .mp4) you may need to install the K-Lite codec pack for the video to play; you can download it at https://codecguide.com/download_kl.htm. (A minimal sketch of how spherical-video playback works in plain Vizard follows this list.)
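For context, spherical-video playback amounts to applying the video as a texture to an inward-facing sphere that surrounds the viewer. The snippet below is only an illustrative plain-Vizard sketch, not SightLab's panorama_utils implementation, and the file name is an example:

    import viz
    import vizshape

    viz.go()

    # Load the 360 video (example file name) and start it looping.
    video = viz.addVideo('resources/media/my_360_video.mp4')
    video.loop()
    video.play()

    # Create an inward-facing sphere around the viewer and texture it with the video.
    sphere = vizshape.addSphere(radius=10, flipFaces=True)
    sphere.disable(viz.LIGHTING)
    sphere.texture(video)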
Running the Session
Click “Continue” to save and start the session
Enter a name and subject ID. The subject ID is used for naming the data files (do not use an underscore in the ID, as this will cause an error in replay mode or the file will not appear for selection in Session Replay).
Note that if you do not enter a participant ID, the data files and video recording will overwrite those from the last session run without an ID.
Press the spacebar to start recording eye tracking data and view real-time fixations (unless the trigger has been changed to another key or event; see the sketch after these steps).
Use the ‘P’ key to toggle the gaze point on and off for the participant. It is always on in the mirrored view.
Navigation:
For SteamVR-based headsets, use the RH trackpad to teleport and the LH trackpad for smooth locomotion. Use the LH grip to rotate left and the RH grip to rotate right.
For Oculus, use the RH thumbstick to teleport, and hold the RH 'B' button while using the LH stick for smooth locomotion.
These can be changed if needed using vizconnect.
Press the spacebar to stop recording and see the gaze points.
At this point data files will be saved to the data folder.
Press the spacebar again to start a new trial, or Escape to exit.
Once the “Saved” text appears, SightLab has reached the end of the last trial.
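If you want keys other than the spacebar to start and stop data collection, the bindings can be changed in code using Vizard's key callbacks. This is a hypothetical sketch: startRecording and stopRecording below are placeholders for whatever your SightLab script actually calls when the spacebar is pressed, so check your script (or the code-based setup page) for the real function names.

    import vizact

    # Placeholder handlers -- substitute the actual start/stop calls made by
    # your SightLab script or experiment function.
    def startRecording():
        print('start data collection')

    def stopRecording():
        print('stop data collection')

    # Bind example keys instead of the spacebar (runs inside a Vizard script
    # where viz.go() has already been called).
    vizact.onkeydown('r', startRecording)
    vizact.onkeydown('t', stopRecording)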
After you quit, you'll see the following data files saved in the data folder:
experiment_data.csv shows a timestamp along with the x, y, z coordinates of the gaze intersect, head position (6DOF), fixation/saccade status, pupil diameter (if you are using a headset that tracks pupil diameter), and custom flags. See here for how to add more items to this file.
Client_Tracking_data.txt is a text-formatted version of the experiment data. It shows a timestamp along with the x, y, z coordinates of the gaze intersect, head position, pupil diameter (if you are using a headset that tracks pupil diameter), and custom flags. (Note: for multi-user sessions there is the tag "client".)
Experiment_data_trial.txt shows a summary of fixations, with the number of fixations per object, total fixation time, average fixation time, and a timeline of fixations.
Tracking_data_replay.txt is used by the session_replay script; you do not need to work with this file directly.
For earlier versions that only save .txt files, you can change the extension from .txt to .csv if you wish to view the file in a spreadsheet editor. Additionally, you can run the script convert_txt_data.py (in ExampleScripts - Adjusting_Gaze_Data_Post_Session) to make sure the columns are properly formatted when converting to CSV; you can also download that file here. If you enabled recording, a video recording is also saved to the “recordings” folder (note that videos are uncompressed and take up a significant amount of hard drive space).
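For reference, the basic conversion that convert_txt_data.py performs can be approximated with a few lines of standard Python. This sketch is not the bundled script; it assumes the .txt data is comma-separated (as the rename-to-.csv tip above implies) and simply strips stray whitespace so the columns line up, and the file paths are only examples:

    import csv

    # Example paths; point these at your own data file.
    input_path = 'data/Client_Tracking_data.txt'
    output_path = 'data/Client_Tracking_data.csv'

    with open(input_path, newline='') as src, open(output_path, 'w', newline='') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            # Strip stray whitespace from each field before writing it back out.
            writer.writerow(cell.strip() for cell in row)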