SightLab VR Pro FAQ

SightLab VR Pro works with the following headsets:

For eye tracking:

For non-eye-tracked headsets, you can use head position with the following:

For additional supported hardware, see the Vizard Supported Devices page.

SightLab VR Pro requires WorldViz Vizard for its full functionality, as SightLab VR Pro is an add-on to our Vizard software engine. You can, however, still use software such as Shadowplay to stream via the cloud to a device running in standalone mode.

You can import 3D models in the usual formats: .glTF, .obj, .fbx, and other common formats will work. Results may vary depending on how the textures are packed (the most consistent format is .glTF). For more information see this page.
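As a quick pre-flight check before importing, you can verify that a file uses one of the common formats mentioned above. A minimal sketch in plain Python; the extension set below is based on the formats named here plus the binary glTF variant (.glb), not an exhaustive list of what Vizard accepts:

```python
from pathlib import Path

# Formats mentioned above; .gltf tends to give the most consistent texture results.
# .glb (binary glTF) is an assumed addition.
SUPPORTED_EXTENSIONS = {".gltf", ".glb", ".obj", ".fbx"}

def check_model_format(filename):
    """Return True if the file extension is one of the common supported formats."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS

print(check_model_format("scene.gltf"))   # True
print(check_model_format("scene.blend"))  # False
```

A check like this is useful in batch-import scripts, where one unsupported file can otherwise stop the whole run.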


Currently, SightLab only runs on the Vizard platform, but you can import assets from Unity using Unity's FBX Exporter. Here's an article on doing that.


Yes, the data files are saved in .csv and .txt formats.
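Because the output is plain .csv, you can load it with standard tools. A minimal sketch using Python's csv module; the column names "Timestamp" and "Pupil Diameter" are hypothetical examples, so check the header row of your own file for the actual names:

```python
import csv
import io

# Hypothetical sample mimicking a SightLab .csv export; real columns may differ
sample = io.StringIO(
    "Timestamp,Pupil Diameter\n"
    "0.016,3.1\n"
    "0.033,3.3\n"
    "0.050,3.2\n"
)

reader = csv.DictReader(sample)
diameters = [float(row["Pupil Diameter"]) for row in reader]
average = sum(diameters) / len(diameters)
print(round(average, 2))  # 3.2
```

To read a real session file, replace the StringIO object with `open("your_data.csv", newline="")`.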


Number of fixations, time to first fixation, average fixation time, total fixation time, fixation timeline, gaze intersect position, fixation spheres showing the amount of fixation time, time stamps, head position, and pupil diameter, as well as the ability to add custom flags such as interactions. Additional data is available with custom code and varies per eye-tracked headset (such as "eye openness" for the Vive Pro Eye). To add more data points, see the data logger example.
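To illustrate what metrics like "number of fixations" and "total fixation time" mean, here is a toy dwell-threshold computation in plain Python. This is an illustrative sketch, not SightLab's actual fixation algorithm; the 100 ms minimum dwell time and the sample data are assumptions:

```python
# Toy gaze samples: (timestamp in ms, gazed object name); None = no intersect
samples = [
    (0, "vase"), (50, "vase"), (100, "vase"),
    (150, None),
    (200, "lamp"), (250, "lamp"),
    (300, "vase"), (350, "vase"), (400, "vase"), (450, "vase"),
]

MIN_DWELL = 100  # assumed minimum dwell time (ms) for a run to count as a fixation

fixations = []            # list of (object_name, duration_ms)
current, start = None, 0
for t, obj in samples + [(500, None)]:   # sentinel closes the final run
    if obj != current:
        if current is not None and (t - start) >= MIN_DWELL:
            fixations.append((current, t - start))
        current, start = obj, t

print(len(fixations))                    # 3  (number of fixations)
print(sum(d for _, d in fixations))      # 450  (total fixation time, ms)
```

Real SightLab data already contains these metrics; a sketch like this is only useful if you want to recompute them with different thresholds from the raw gaze samples.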

Currently, the Multi-User version of SightLab is set up to support up to 5 users. If you wish to have more users in your application, contact sales@worldviz.com and we may be able to customize it to your needs.

The multi-user version has all the same functionality as the single-user version, but adds the ability to have multiple participants in a session interacting together.

Restarting the machine should fix this issue; the operating system may have locked some resources, or stale network connections from a previous script execution may not have been cleaned up properly. Also, make sure the print statement "After Server Continue" appears and the timer HUD is showing before starting the session.

See this document on how to update your SRAnipal driver.

See the example script gaze_time_subnodes or the code attributes page for ways to add a large list of objects if the GUI doesn't allow you to select all of them.
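The general pattern in that example is to build the list of objects programmatically instead of clicking each item in the GUI. A plain-Python sketch of gathering many object names at once (the node names here are made up; see the gaze_time_subnodes example for how to hand the resulting list to SightLab):

```python
# Suppose the scene exposes many subnode names; filter them in code instead of
# selecting each one manually in the GUI
all_subnodes = ["chair_01", "chair_02", "chair_03", "lamp_01", "rug_01",
                "chair_04", "lamp_02"]

# Build the full list of gaze objects, e.g. every chair and every lamp
gaze_objects = [name for name in all_subnodes
                if name.startswith(("chair", "lamp"))]

print(gaze_objects)
# ['chair_01', 'chair_02', 'chair_03', 'lamp_01', 'chair_04', 'lamp_02']
```

This scales to hundreds of subnodes, which is exactly the case where the GUI selection becomes impractical.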

See this page for all the places to get assets.

If this happens, you can set the flag sightlab.is_GLOBAL = 0.

See the example "gaze_time_subnodes" for how to get past this by setting the list of objects in code.

This usually happens if the participant ID was either left blank or given the same name as an earlier one that is still open in a spreadsheet editor program.

This most likely means there is an underscore in the participant ID name. You can remove the underscore to see the file; underscores will cause this issue.
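One way to avoid both of the problems above (a blank or duplicate participant ID, and underscores in the name) is to sanitize the ID before starting a session. A minimal sketch, assuming you control the ID string before handing it to SightLab; the timestamp fallback format is an arbitrary choice:

```python
from datetime import datetime

def sanitize_participant_id(raw_id):
    """Replace underscores (which stop the file from showing up, per the note
    above) and fall back to a timestamp when the ID is blank."""
    cleaned = raw_id.strip().replace("_", "-")
    if not cleaned:
        # Assumed fallback: a timestamp keeps IDs unique across sessions
        cleaned = datetime.now().strftime("p%Y%m%d%H%M%S")
    return cleaned

print(sanitize_participant_id("sub_07"))  # sub-07
print(sanitize_participant_id(""))        # e.g. p20240101120000
```

A timestamp fallback also sidesteps the duplicate-ID collision with files still open in a spreadsheet editor.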

You can use various optimization methods (see this page). Here is a simple line that could be tried:
env = sightlab.objects[0]  # If using the GUI, add this after the experiment starts in the sightLabExperiment function


Additionally, you can open the model in Inspector, choose the Textures tab on the bottom right, Shift-click to select all textures, then right-click and compress them.

Search Google for the version that supports Python 3.8, then install that version using the Package Manager: https://www.worldviz.com/post/tech-tip-using-the-vizard-package-manager-to-get-python-2-7-compatible-versions-of-python-libraries

You may need to play the sound directly in SightLab/Vizard:
audio = viz.addAudio('someFile.wav')
audio.play()

# Add this code to offset the texture in the z axis

z_offset_value = -1.1  # Adjust as needed


For answers to all technical Vizard questions, please see the Vizard 7 Documentation FAQ or the additional Vizard FAQ page here.