...
Scalable setup within nDisplay
...
Once you’ve got your Scalable calibration correct, you’ll need to enable perspective mode to export the data to Unreal. To do this, click on the Perspective button on the left panel, and then tick the ‘Use Perspective Mode’ button on the menu. You will then need to recalibrate to generate the perspective outputs.
Essentially that is all that’s required, but we will need to calibrate the eye location later on when setting up Unreal correctly for head tracking.
Exporting Data to Unreal
Once your screen has been calibrated using the Scalable system, the master machine will have saved the DataSet to its C:/ drive, in the following folder:
C:/ScalableDisplay/DataSets/DatasetXXXXXXXXXX/LastCalibration
...
You are welcome to copy this file and expand on it; you should save it as a .cfg file. The example below is for a system of three Igloo Media Servers (IMPs) and three projectors:
#####################################################################
# Igloo nDisplay Example Configuration
#
# Note:
# Before use, make sure all settings correspond to your system.
#####################################################################
# Config file header.
[info] version="23"
# Cluster nodes
[cluster_node] id="IMP1" addr="192.168.0.101" window="window1" master="true"
[cluster_node] id="IMP2" addr="192.168.0.102" window="window2"
[cluster_node] id="IMP3" addr="192.168.0.103" window="window3"
# Application windows
[window] id=window1 fullscreen=false WinX=0 WinY=0 ResX=1920 ResY=1080 viewports="viewport1"
[window] id=window2 fullscreen=false WinX=0 WinY=0 ResX=1920 ResY=1080 viewports="viewport2"
[window] id=window3 fullscreen=false WinX=0 WinY=0 ResX=1920 ResY=1080 viewports="viewport3"
# Projection policies
[projection] id=projector1 type="easyblend" file="LastCalibration\ScalableDataOrthographic.pol" origin=easyblend_origin scale=1
[projection] id=projector2 type="easyblend" file="LastCalibration\ScalableDataOrthographic.pol_1" origin=easyblend_origin scale=1
[projection] id=projector3 type="easyblend" file="LastCalibration\ScalableDataOrthographic.pol_2" origin=easyblend_origin scale=1
# Viewports
[viewport] id="viewport1" x=0 y=0 width=1920 height=1080 projection=projector1
[viewport] id="viewport2" x=0 y=0 width=1920 height=1080 projection=projector2
[viewport] id="viewport3" x=0 y=0 width=1920 height=1080 projection=projector3
# Cameras
[camera] id=camera_static loc="X=0,Y=0,Z=0" tracker_id="ViveVRPN" tracker_ch=0
# Scene nodes
[scene_node] id=cave_origin loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"
[scene_node] id=wand loc="X=0,Y=0,Z=1" rot="P=0,Y=0,R=0"
[scene_node] id=easyblend_origin loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"
# Tracker
[input] id="ViveVRPN" type="tracker" addr="openvr/controller/LHR-8708C47F@10.1.5.169:3884" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0" front="Y" right="-X" up="Z"
# General settings
[general] swap_sync_policy=1
# Network settings
[network] cln_conn_tries_amount=10 cln_conn_retry_delay=1000 game_start_timeout=30000 barrier_wait_timeout=5000
# Custom arguments
[custom] SampleArg1=SampleVal1 SampleArg2=SampleVal2
Scalable Config Explained
...
[camera] id=camera_static loc="X=0,Y=0,Z=0" tracker_id="ViveVRPN" tracker_ch=0
This is the camera used by the master node to create the player. It has many optional properties that relate to 3D, tracking, and hierarchy (an illustrative example follows the list below).
parent - ID of the parent component, default is VR root
tracker_id - the ID of the tracking device; by default there is no tracking.
tracker_ch - the ID of the tracking device’s channel (default 0)
eye_swap - swap eyes if in stereo mode; default is false.
eye_dist - distance in meters between the eyes, default is 0.064.
force_offset - forces a mono camera to behave like a stereo camera; eye_offset works for this behaviour too.
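As a hedged sketch only, a head-tracked stereo camera using the properties above might be declared like this (the id and eye distance are illustrative values, not taken from a real install):
# Hypothetical head-tracked stereo camera (illustrative values)
[camera] id=camera_tracked parent=cave_origin loc="X=0,Y=0,Z=0" tracker_id="ViveVRPN" tracker_ch=0 eye_swap=false eye_dist=0.064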
Scene Nodes
[scene_node] id=cave_origin loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"
[scene_node] id=easyblend_origin_1 loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"
These are the objects created in the game world that make up the framework for the player. The first object should always be labelled ‘cave_origin’, as this matches the object placed within the Unreal build. Everything will parent to this object by default, so there is no need to specify a parent unless you require a different structure.
With simple camera systems, a projector is paired to a scene object, which requires you to position or offset that scene object to place the projector correctly. With Scalable displays this is not the case: you only need to create as many scene objects as you have projectors, and the rest is handled by Scalable.
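If you do need a different structure, a minimal sketch of an explicitly parented node might look like this (the node id and offset are purely illustrative):
# Hypothetical child node parented to cave_origin (illustrative values)
[scene_node] id=screen_offset parent=cave_origin loc="X=0,Y=1.5,Z=0" rot="P=0,Y=0,R=0"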
Input
[input] id="ViveVRPN" type="tracker" addr="openvr/controller/LHR-8708C47F@10.1.5.169:3884" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0" front="Y" right="-X" up="Z"
This is where you define the trackers that can be used instead of, or alongside, normal controller formats. The data from this tracker is broadcast across the cluster network, and the tracking software can be installed and run from any of the servers within the cluster.
The example above uses an HTC Vive with a Vive tracker puck, which also requires you to run an OpenVRPN server to convert the Vive position data into something readable by other programs and host it on the network.
type - the specific type of hardware used; options include:
tracker - for a tracking device.
analog - for a device that produces axis data.
button - for a device that produces Boolean button data.
keyboard - for a standard computer keyboard.
addr - the address of the server data. If using OpenVRPN you will need to change the IP address to match the machine hosting the server (port 3884 is the default), and also change the tracker ID, which is available in SteamVR Options → configure trackers (see the example after this list).
loc - Initial offset (meters)
rot - Initial rotation (Euler)
front - axis mapped to forward direction
right - axis mapped to horizontal direction
up - axis mapped to vertical direction
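For example, if your VRPN server were running on 192.168.0.50 and SteamVR reported the puck’s ID as LHR-12345678 (both hypothetical values), the line would become:
# Hypothetical tracker pointing at a different VRPN host (IP and tracker ID are illustrative)
[input] id="ViveVRPN" type="tracker" addr="openvr/controller/LHR-12345678@192.168.0.50:3884" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0" front="Y" right="-X" up="Z"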
Other settings
The rest of the settings are explained in more detail within the example files. 99% of the time the defaults are perfect.
...
The second way is to use SharePoint (a business version of OneDrive), which uploads and shares the files between the machines allowed to access it and presents the same location on all machines. This can also be done with Google Drive. [TODO - how to with a sharedrive.]
The third way is to share a drive on one of the machines within the cluster. To set this up, it’s best to create it on the machine you’re doing development on, or if that is not possible, the machine designated as the master by the config file.
Creating a shared drive (Windows 10)
Make sure that all computers on the network can see each other. This is possible in the Network rollout within the Explorer window. As long as all machines are visible, they should have access to the shared drive. If you cannot see any other computers in this window, you will need to enable network discovery by doing the following:
- Open Start, type ‘Control Panel’, and click Control Panel
- Click Network and Sharing Center (you may need to click the Network and Internet heading first)
- Click Change advanced sharing settings in the upper left side
- Check “Turn on network discovery”
- Check “Turn on file and printer sharing”
- Click Save Changes and continue
You will need to repeat this process on every machine in the cluster, so they can all see and talk to each other. On the master machine, locate the folder you wish to share with all the other machines. The items you will need to share are:
- The Unreal build in its entirety
- The Unreal nDisplay config file
- The Scalable data (should be near the config file)
- The nDisplay Listener.exe (and its config files)
I created a folder on my D drive called ‘nDisplayBuild’ and placed everything in that folder; Test1 and Test2 are the names of my Unreal builds. A hypothetical layout is sketched below.
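This is purely illustrative (the config file name is hypothetical), showing the items listed above gathered in one shared folder:
D:\nDisplayBuild\
    Test1\                  (first Unreal build)
    Test2\                  (second Unreal build)
    IglooExample.cfg        (the nDisplay config file - name illustrative)
    LastCalibration\        (the Scalable data)
    nDisplay Listener.exe   (plus its config files)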
Right-click the folder you would like to share, click Properties, and then click on the Sharing tab.
Click Share…
Your name will be in the list, but you need to add Everyone if it’s not already present. Then add a Read/Write permission level (Read would be fine, but it stops the ability to write Log files)
Click the Share button, and accept the warning that pops up. The folder will now be shared across all the machines, provided they have access credentials for the master machine; this is usually just the username and password you use to log into it. The next step is to map the shared folder we just created as a drive with the same location on all the machines. First, identify a drive letter that is free on all the machines. E is usually free, as most modern machines have two drives, C and D; if it is not free, pick a letter further up the alphabet. It has to be the same drive letter on all machines.
When ready, on each machine (including the master), open an Explorer window, right-click This PC, then select Map network drive. A window will pop up.
Pick your drive letter, then click Browse to bring up the Browse For Folder window, select the root folder that you created earlier, and click OK.
You also have the option to reconnect at sign-in and to connect using different credentials, both of which are extremely important for all the machines (except the credentials option on the master, as they will be the same). These settings allow the machines to restart autonomously without issues.
Once done, click Finish and you will have a network drive underneath your local drives on the This PC menu. It will have a drive letter, and provide a universal location for all machines to access the same files, at the same exact file path when added to all of them.
This is a very common process, and if you run into any issues, there is lots of support online by searching for ‘Windows 10 add shared drive’
Be advised that all of these methods have their drawbacks; only the config file has no issues with being accessed from multiple places.
It would be beneficial to create a script that copies the files from the shared location to a standardised local location on the individual machines whenever they are updated. This prevents errors that could occur when the same files are accessed by different machines.
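As a minimal sketch of that idea (the drive letter, folder names, and script name here are assumptions, not part of the Igloo setup), a small Python script run on each machine could mirror the mapped network drive to a fixed local path:
# sync_build.py - minimal sketch: mirror the shared nDisplay folder to a local path.
# The source (E:\nDisplayBuild) and destination (D:\nDisplayLocal) are illustrative assumptions.
import shutil
from pathlib import Path

SHARED = Path(r"E:\nDisplayBuild")   # the mapped network drive described above
LOCAL = Path(r"D:\nDisplayLocal")    # standardised local copy on each machine

def sync():
    # Remove any old local copy, then copy the shared folder in its entirety.
    if LOCAL.exists():
        shutil.rmtree(LOCAL)
    shutil.copytree(SHARED, LOCAL)
    print(f"Copied {SHARED} -> {LOCAL}")

if __name__ == "__main__":
    sync()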
nDisplay Launcher
Once you have your project deployed successfully to all the computers you've identified in your configuration file, you can use the nDisplayLauncher application to start the project on all computers simultaneously. You should only run this on your master machine or a console machine (it doesn’t have to be a machine with the Unreal project on it; it just needs to be on the same subnet).
...