
Private & Confidential

Unreal nDisplay

About nDisplay

The Unreal Engine supports advanced Igloo structures through a system called nDisplay. This system addresses some of the most important challenges in rendering 3D content simultaneously to multiple displays:

  • It eases the process of deploying and launching multiple instances of your Project across different computers in the network, each rendering to one or more display devices.

  • It manages all the calculations involved in computing the viewing frustum for each screen at every frame, based on the spatial layout of your display hardware.

  • It ensures that the content being shown on the various screens remains exactly in sync, with deterministic content across all instances of the Engine.

  • It offers passive and active stereoscopic rendering.

  • It can be driven by input from VR tracking systems, so that the viewpoint in the displays accurately follows the point of view of a moving viewer in real life.

  • It is flexible enough to support any number of screens in any relative orientation, and can be easily reused across any number of Projects.

Every nDisplay setup has a single master computer, and any number of additional computers.

  • Each computer in the network runs one or more instances of your Project's packaged executable file.

  • Each Unreal Engine instance handles rendering to one or more display devices, such as screens or projectors.

  • For each of the devices an instance of Unreal Engine handles, it renders a single viewpoint on the same 3D scene. By setting up these viewpoints so that their location in the 3D world matches the physical locations of the screens or projected surfaces in the real world, you give viewers the illusion of being present in the virtual world.

  • The master node is also responsible for accepting input from spatial trackers and controllers through connections to Virtual-Reality Peripheral Networks (VRPNs), and replicating that input to all other connected computers.

The image above shows a possible nDisplay network. Like all nDisplay networks, one of its PCs acts as the master node. This master node accepts input into the system from a VRPN server, which relays signals that come from spatial tracking devices and other controller devices. The network also contains several other PCs that run other instances of the Unreal Engine Project. Each of these cluster nodes drives one or more display projectors.

Childish Gambino Creates a Fantasy World for Pharos | Project Spotlight | Unreal Engine

Why Use nDisplay over the Unreal Toolkit Plugin

There are some inherent problems with the Unreal toolkit that are difficult to overcome. These include:

  • Slow performance with large, complex, or unoptimised models.

  • No post-processing available.

  • Issues with the Lumen lighting engine not rendering correctly.

  • Mismatched colours compared to the console view.

  • Limited to cubemap output, which only works in ICE.

nDisplay removes all of these issues, and adds even more features which include:

  • Clustering for complex and highly detailed models.

  • Large scale resolutions

  • True to life colours

  • Optitrack and VRPN support

  • Doesn’t require an external plugin

  • Doesn’t require an update from Igloo every time there’s a new Unreal update.

The only issue with nDisplay at present is that there is no way to import warps generated by the Igloo system, so Scalable is required for curved screens at this stage.

PC Specification for Igloo

The main machine will be the same as the ICE machine. It should be a high-specification machine, as it is the master node and acts as the centre for the models. However, it only needs to output a single display (the console monitor) and does not output anything to the projectors.

It will also be responsible for starting Unreal and telling all the other PCs what to do, as it is the master node in the diagram above.

The individual machines will be used infrequently, and will need to be ‘gamer’ ready. This is because Unreal does not perform at its best on Quadro-based graphics cards. It also means they can be cheaper, off-the-shelf products with a decent warranty. They can also be specified as smaller form-factor machines to fit neatly into a server rack.

A company called MK1 Manufacturing makes a wide range of Lenovo rack-mounting solutions, and could be commissioned to make one for their gaming machine chassis.

Possible solutions could include the LOQ machine and the Legion T5 machine.

Further conversations need to happen with Bill to recommend something suitable for a cluster solution.

Setting up nDisplay within Unreal

Prerequisites

A New Project

Epic Games have kindly given us a template that sets everything up for you. If this is your first time using nDisplay, Igloo highly recommends creating a new project using this template and experimenting with the config file.

To create a new project, open the Epic Games Launcher, click Unreal Engine on the left toolbar, then Library on the top toolbar. Ensure that you have a version of Unreal installed. I would recommend the most recent version if you’re starting from scratch.

Click Next, and on the Select Template menu, click nDisplay and once again click Next.

Finally, select a location for your new project, and whether you would like to include Starter Content and enable Raytracing. I would recommend disabling Raytracing for your first project unless you’re familiar with it.

Upon selecting Create Project, the editor will begin to unpack and create the project at the location you have specified. This might take a long time depending on your computer’s specifications.

It’s a good idea to create the project within a location that is easy to share on a network. This makes it a lot easier to create the cluster further into the nDisplay pipeline.

Existing Project

  1. Open your project, and navigate to the Plugins window. Search for nDisplay and enable the nDisplay plugin. This plugin is included in Unreal 4.26 and newer.

     

  2. Once the editor has re-opened, remove all players and cameras from the scene. The nDisplay system creates its own player and camera; this can be edited later using the blueprints and config files.

  3. Add an nDisplayClusterRootActor from the Place Actors menu into the scene where you would like the player to start. Place it on the floor, as the player setup is generated from the floor based on height values.

Building a project


When you are ready to build your project, it is done in the usual way by clicking File, Package Project, then Windows (64-bit).

You will then be asked to specify a folder in which your completed build will be placed. At this point we recommend a folder that is easily shared. For instance, I’m using:

D:/Unreal/nDisplayTest

Once the build is complete, it is also a good idea to include the nDisplay listener within the build folder. This can be found in your Unreal Engine binaries folder at the following path:

Program Files\Epic Games\UE_4.XX\Engine\Binaries\DotNET\

Copy nDisplayListener.exe from this location and place it in the same location as your built application’s executable file (.exe).
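As a rough illustration only (the exact layout of a packaged build varies by project and engine version, and "MyProject" is a placeholder name), the build folder might end up looking something like this:

D:\Unreal\nDisplayTest\WindowsNoEditor\
    MyProject.exe            (the packaged application)
    nDisplayListener.exe     (copied from Engine\Binaries\DotNET\)
    Engine\
    MyProject\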

Creating the config file

A config file is required to tell each of the machines within the nDisplay cluster how to create the various nDisplay components. Luckily, Epic has given us a lot of examples of how this configuration file should look. They can be found at this location within your engine folder:

\Program Files\Epic Games\UE_4.XX\Templates\TP_nDisplayBP\Content\ExampleConfigs

Please take some time to view how each of them is constructed, and experiment with modifying them.

To preview how it will look within the Unreal Editor, you can load a configuration in the properties of the DisplayClusterRootActor and it will load the asset structure created by the config file. This is a great tool for finding errors.

Config file with Scalable

Within Igloo structures, rather than defining the full setup in the config file, we can create a barebones file and use Scalable to create the screens, blends, and systems.

Luckily, Epic have done the work for us and created an EasyBlend base config file, which can be found within the EasyBlend folder of the ExampleConfigs.

This configuration just handles the player and the base locations for the camera system. We can then use the Scalable editor to generate the required files for the warping and blending.

Instructions for editing the Scalable configuration file can be found further on in this guide.

Using Scalable to generate warp and blend files

Setting up Scalable for the first time

Once you’ve registered for Scalable and purchased a license, or signed up for the free trial license, you will be given a download link for Scalable’s software: Display Manager.

You will also require a suitable camera to create the blends; the list of supported cameras is HERE.

You will need to install Display Manager on every machine and have it launch on startup. It can then be configured from any machine as a single entity. However, it’s best to use the machine that the cameras are plugged into, unless they are GigE cameras (e.g. Basler).

Once you load up the client for the first time, it will ask you for your license file. This may have come with your download, or it may have been emailed to you.

You will be greeted with a wizard-style system with four pages, the first being:

Display Clients

The Display Clients panel presents an interface that allows you to set up the connections to the remote computers. The right side of the panel shows the displays connected to the current system, while the left side shows all available Display Clients.

Use the Display Client on the Local Computer

If the computer running  Scalable Display Manager is the only computer used in the system, select the radio button Use Local Display Clients Only.  No additional setup is required in the Display Clients panel.

Display Clients on Remote Computers

If your Scalable Display Manager configuration will require connecting to one or more remote computers, select the radio button for Use Remote Display Clients.

Set Up Remote Computers

Add a Display Client to the System

Scalable Display Manager automatically detects all Display Clients on the network subnet.  The status of each Display Client is displayed in a colored square to the left of its network identifier.

Display Clients shown in the list on the right will be part of the system.

  1. Click to highlight one of the clients in your system.

  2. Click Assign > to move the clients into the system.

  3. Repeat for all remaining clients.

Scalable Display Manager will output a Scalable mesh file by default. If you would instead like to apply the warp and blend directly in the graphics driver, check the option Apply Warp & Blend in Graphics Driver.

Warping and Blending in the graphics card requires Nvidia Mosaic Mode.  Please check the website for the latest recommended version of the Nvidia Quadro driver.

Projectors Panel

The Projectors panel should accurately reflect the number and resolution of the displays connected to your computer; however, Scalable Display Manager can only auto-detect certain common resolutions. If it is unable to detect the current resolution of your display(s), it will most commonly show a single display at the combined resolution of all your projectors.

To help make sure that the projectors reach an overlap of around 15-20%, you can click Show Overlap Pattern. There are three bands: yellow, green, and white. When the edges of a band from adjacent projectors touch, the overlap matches the level that band represents.

  • Yellow is 15%

  • Green is 20%

  • White is 25%

Here is an example of a 20% overlap setup:

Enter the Projector Arrangement

  1. Select the physical arrangement of the projectors: Tiled, if the projectors are next to each other, or Stacked, if the projectors are on top of each other.

  2. Select the number of projectors in a row.

  3. Select the number of projectors in a column.

  4. Click Redetect Displays to save the settings.

Order the Display Clients

The order of the display clients has a direct effect on the projector numbering. To order the projectors properly, arrange the displays so that the left-most projectors on the blended display appear first in the list. To change the order, use the arrow keys to move the computer IP up or down.

Cameras Panel

Basic Camera Configuration

Verify that the number and type of cameras detected by Scalable Display Manager correspond to your system. If the Automatic type doesn't detect the camera configuration properly, please choose your camera type manually.

Advanced Camera Configuration

The Advanced Camera Configuration allows the user to grab a subset of the total cameras and arrange the order based on serial numbers.

Data Collection

Adjust the Camera Settings

For a proper calibration, it is necessary to focus the camera and adjust its exposure settings.  If the camera captures an image that is too bright it will have difficulty detecting the calibration spot patterns.  Similarly, not focusing the camera will reduce the accuracy of the camera detection and may cause artifacts in the resulting warp and blend.  It is thus important to make sure the camera(s) can see the entire screen and are capturing well-focused and properly exposed images.

Camera Brightness

Scalable Display Manager requires the image to appear with normal saturation in order to properly detect the patterns displayed during calibration. If the image presented to Scalable Display Manager is over or undersaturated, it will result in an error or produce an incorrect geometry calibration. The camera's brightness needs to be set so that there is enough contrast between the light and dark areas of the screen to see the calibration patterns. The image preview window should look similar to how you see the image in real life.

If after manual adjustment the image brightness still has saturation problems, click the Auto Tune button. A series of pictures will be taken to auto-adjust the camera.  If the image is not normally saturated after the Auto Tune, you may need to manually adjust the camera's brightness. Follow the instructions below for your particular camera.

Begin Data Collection

Click Begin Data Collection.

The data collection process will begin by showing a solid white image on each projector, starting with the first projector and going in order to the last. The white image is used to find the location of the projector in the camera image. Make sure that the intended target area of the screen is completely covered by one or more of the projectors.

Next, two images will be displayed on each projector, starting again with the first projector and going in order to the last.  The first image will be a 5-dot pattern which is used to determine the center of the projected image. The second image displayed is the grid pattern which is used to map where the pixels are falling on the screen.

White Pattern

5 Dot Pattern

Grid Pattern


What to do if an Error Occurs

Most errors encountered during calibration are caused by poor camera positioning or improper saturation of the camera image, resulting in the inability of the software to detect the calibration patterns being displayed. When an error occurs, click the link labeled Click here to troubleshoot this error in the error message window. This will open a browser with the suggested solutions for this particular issue.

Data Collection Error Actions

Some of the most common errors encountered during the data collection process have been included in the Error Actions. Error actions allow you to ignore issues with the data collection images which do not inhibit the data collection.

The error action below will be displayed when a camera cannot fully see a projected image. You are given the choice to "End Calibration" or "Ignore and Continue". If you are expecting that the entire projected image will not be seen, click Ignore and Continue. However, if you are not seeing the entire projected image for other reasons, you should click End Calibration and correct the error at its source.

The software cannot discern the difference between a projected image that does not fill the screen and one that fills the screen but spills off.

End Calibration: In this case, the calibration should be stopped so that the camera can be re-positioned to see the entire projected image.

 

Ignore and Continue: The error action is expected and triggered because the projector is overshooting the screen. Continuing will not cause any issues.

 

 

  • Remember this action for this projector for session - Do not stop the calibration for this error on this projector until the software is restarted.

  • Remember this action for this projector forever - Never stop the calibration for this error on this projector.

  • Remember this action for all projectors for session - Do not stop the calibration for this error on any projector until the software is restarted.

  • Remember this action for all projectors forever - Never stop the calibration for this error on any projector.

Projector Visibility

Be careful when editing

Removing a projector from the visibility list for a camera can result in unwanted warps and blends, and will often cause the system to stop calibrating.

Scalable setup within nDisplay

Once you’ve got your Scalable calibration correct, you’ll need to enable Perspective mode to export the data to Unreal. To do this, click the Perspective button on the left panel, then tick the ‘Use Perspective Mode’ option in the menu. You will then need to re-calibrate to generate the perspective outputs.

Essentially, that is all that’s required for standard nDisplay from Scalable’s perspective.
However, if you’re using head tracking, you will need to adjust the eye level to the correct height for the exact center of the room.

Exporting Data to Unreal

Once your screen has been calibrated using the Scalable system, the master machine will have saved the DataSet to its C: drive, in the following folder:

In this folder you will find a myriad of files, most of which are required for the nDisplay system to work.
Unfortunately, due to how these files are created, it’s difficult to set up this location as a shared space, so you will need to copy these files (or rather the LastCalibration folder as a whole) to a location alongside your config file.

Not all the files are required, but it’s easier to copy the entire thing.

Creating Scalable nDisplay config file

This can be created from scratch, by copying an nDisplay config file, or from our sample file below.

You are welcome to copy this file, and expand on it. You should save it as a .cfg.
It’s a 3 Igloo Media Server (IMP) 3 Projector system.
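In place of the original sample attachment, here is a minimal sketch of a three-machine, three-projector Scalable config, assembled from the per-section examples explained below. All IDs, IP addresses, and .pol file paths are placeholders and must be adapted to your own setup; the # lines are explanatory comments in the style of Epic's example configs.

[info] version="23"

# Cluster nodes - one entry per physical machine (IP addresses are placeholders)
[cluster_node] id="node_main" addr="10.1.5.110" window="wnd_main" master="true" sound="true"
[cluster_node] id="node_imp2" addr="10.1.5.111" window="wnd_imp2"
[cluster_node] id="node_imp3" addr="10.1.5.112" window="wnd_imp3"

# One application window per machine, each covering its projector output
[window] id="wnd_main" fullscreen=false WinX=0 WinY=0 ResX=1920 ResY=1080 viewports="vp_1"
[window] id="wnd_imp2" fullscreen=false WinX=0 WinY=0 ResX=1920 ResY=1080 viewports="vp_2"
[window] id="wnd_imp3" fullscreen=false WinX=0 WinY=0 ResX=1920 ResY=1080 viewports="vp_3"

# One viewport per projector
[viewport] id="vp_1" x=0 y=0 width=1920 height=1080 projection="proj_easyblend_1"
[viewport] id="vp_2" x=0 y=0 width=1920 height=1080 projection="proj_easyblend_2"
[viewport] id="vp_3" x=0 y=0 width=1920 height=1080 projection="proj_easyblend_3"

# EasyBlend projection policies pointing at the Scalable calibration data (.pol paths are placeholders)
[projection] id="proj_easyblend_1" type="easyblend" file="E:\nDisplayBuild\LastCalibration\ScalableData1.pol" origin="easyblend_origin_1" scale=1
[projection] id="proj_easyblend_2" type="easyblend" file="E:\nDisplayBuild\LastCalibration\ScalableData2.pol" origin="easyblend_origin_2" scale=1
[projection] id="proj_easyblend_3" type="easyblend" file="E:\nDisplayBuild\LastCalibration\ScalableData3.pol" origin="easyblend_origin_3" scale=1

# Camera and scene hierarchy
[camera] id="camera_static" loc="X=0,Y=0,Z=0"
[scene_node] id="cave_origin" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"
[scene_node] id="easyblend_origin_1" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"
[scene_node] id="easyblend_origin_2" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"
[scene_node] id="easyblend_origin_3" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"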

 

Scalable Config Explained

Header Info

[info] version="23"

This is the config file header; it specifies the config version that your version of Unreal supports. It’s best to leave this at the default "23", otherwise it causes issues.

Cluster Nodes

[cluster_node] id="node_main" addr="10.1.5.110" window="wnd_all" master="true"

A cluster node is a physical machine capable of running the Unreal project. Each cluster node needs a unique ID (you can make this up), a local IP address, and the window it should display.

Additionally you can also specify:
- which node is the master (there can only be one)
- which node has the sound output (master by default)
- The various ports used for synchronization between the machines (ports 41001 → 41003 are used by default)
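For illustration, a master node and one additional render node might be declared as follows; the IDs and IP addresses are placeholders, and the optional sound flag is shown only to indicate where it would go (the master carries the sound output by default):

[cluster_node] id="node_main" addr="10.1.5.110" window="wnd_main" master="true" sound="true"
[cluster_node] id="node_left" addr="10.1.5.111" window="wnd_left"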

Windows

[window] id=wnd_all fullscreen=false WinX=0 WinY=0 ResX=1920 ResY=1080 viewports="vp_1"

This defines the application window for the game. This must cover the entire projection space you are going to use. If, for instance, you have 3 1080p projectors connected to a single machine, the ResX value should be 3x1920 (5760) to span all 3 outputs.

If the machine has a console monitor (a screen where no projection will take place), ensure that it is the first in the screen ordering, and then use the WinX value to adjust the start position of the game canvas.

The viewports field specifies which viewport (explained below) should be rendered to this window. If multiple viewports are required, a comma can be used to separate the viewport IDs, e.g. “vp_1, vp_2”.

The ID should be unique, and must match the ‘window’ field on the cluster node intended to display it.

Additionally, you can also specify ‘fullscreen’, which is useful if you have a single display output (or a Mosaic/Eyefinity output) and can improve game rendering performance.
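For example, a machine with a 1080p console monitor on its first output and three 1080p projectors on the remaining outputs might use a window like this (IDs are placeholders):

[window] id="wnd_three_proj" fullscreen=false WinX=1920 WinY=0 ResX=5760 ResY=1080 viewports="vp_1, vp_2, vp_3"

The WinX value of 1920 shifts the game canvas past the console monitor, while ResX=5760 spans the three projector outputs.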

Viewport

[viewport] id=vp_1 x=0 y=0 width=3840 height=1080 projection=proj_easyblend_1

The viewport is an area of the game window where a frame is rendered. Usually it’s the same size as the entire window; however, multiple viewports are supported for multi-window setups like LED walls.

Its X and Y values denote its position within the window; they are not related to the position on the desktop. So, if you move the window across by one 1080p screen to allow for a console screen, you do not need to add 1920 to the X value of the viewport.

The projection value, explained below, specifies which projection policy will be used.
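Continuing the example above, the 5760 x 1080 window covering three projectors could be split into three viewports like this; note that the x values are relative to the window, not the desktop:

[viewport] id="vp_1" x=0 y=0 width=1920 height=1080 projection="proj_easyblend_1"
[viewport] id="vp_2" x=1920 y=0 width=1920 height=1080 projection="proj_easyblend_2"
[viewport] id="vp_3" x=3840 y=0 width=1920 height=1080 projection="proj_easyblend_3"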

Projection

[projection] id=proj_easyblend_1 type="easyblend" file="D:\LocalCalibration\ScalableData.pol" origin=easyblend_origin scale=1

This is where the Scalable setup differs from a standard nDisplay system. Instead of a ‘simple’ projection policy, which specifies the cameras created by Unreal, we use Scalable and Unreal’s EasyBlend system. This tells Unreal how to create cameras based on the .pol data created by Scalable during setup.

Like the other entries, it requires a unique ID, but additionally it has a file field, an origin, and a scale value.

The file value should be the location of your ScalableData.pol file relative to the location of this config file. It’s also possible to use a complete file path, as in the example above, but be careful to use the same path across all machines.
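For example, a projection entry using a relative path might look like this; the folder and .pol filename are placeholders, so use whatever your LastCalibration folder actually contains:

[projection] id="proj_easyblend_2" type="easyblend" file="LastCalibration\ScalableData.pol" origin="easyblend_origin_2" scale=1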

Camera

[camera] id=camera_static loc="X=0,Y=0,Z=0" tracker_id="ViveVRPN" tracker_ch=0

This is the camera used by the master node to create the player. It has many optional properties that relate to 3D, tracking, and hierarchy:

parent - ID of the parent component; the default is the VR root.
tracker_id - the ID of the tracking device; by default there is no tracking.
tracker_ch - the ID of the tracking device’s channel (default 0).
eye_swap - swap eyes if in stereo mode; default is false.
eye_dist - distance in meters between the eyes; default is 0.064.
force_offset - forces a mono camera to behave like a stereo camera; eye_offset works for this behavior too.
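Putting some of these optional properties together, a head-tracked stereo camera might look like this; the tracker ID must match an [input] entry (see below) and the 1.7 m eye height is only a placeholder:

[camera] id="camera_tracked" parent="cave_origin" loc="X=0,Y=0,Z=1.7" tracker_id="ViveVRPN" tracker_ch=0 eye_swap=false eye_dist=0.064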

Scene Nodes

[scene_node] id=cave_origin  loc="X=0,Y=0,Z=0"   rot="P=0,Y=0,R=0"
[scene_node] id=easyblend_origin_1 loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0"

These are the objects created in the game world that make up the framework for the player. The first object should always be labelled ‘cave_origin’, as this matches the object placed within the Unreal build. Everything will parent to this object by default; there is no need to specify a parent unless you require a different structure.

With simple camera systems, a projector is paired to a scene object, which requires you to position or offset the scene object to place the projector correctly. With Scalable displays this is not the case: you only need to create one scene object per projector, and the rest is handled by Scalable.

Input

This is where you define the trackers that can be used instead of, or alongside, normal controller formats. The data from this tracker is broadcast across the cluster network, and the tracking software can be installed and run from any of the servers within the cluster.

The example at the end of this section uses an HTC Vive with a Vive Tracker puck. This also requires you to run an OpenVRPN server, which converts the Vive position data into something readable by other programs and hosts it on the network.

type - the specific type of hardware used; options include:
tracker for a tracking device.
analog for a device that produces axis data.
button for a device that produces Boolean button data.
keyboard for a standard computer keyboard.
addr - the address of the server data. If using OpenVRPN you will need to change the IP address to match the machine hosting the server (port 3884 is the default), and also change the tracker ID, which is available in SteamVR Options → Configure Trackers.
loc - initial offset (meters)
rot - initial rotation (Euler angles)
front - axis mapped to the forward direction
right - axis mapped to the horizontal direction
up - axis mapped to the vertical direction
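A tracker entry for the Vive setup described above might look like the following sketch; the device name, IP address, port, and axis mapping are placeholders and must match your VRPN server’s configuration:

[input] id="ViveVRPN" type="tracker" addr="Tracker0@10.1.5.110:3884" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0" front="X" right="Y" up="Z"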

Other settings

The rest of the settings are explained in more detail within the example files. 99% of the time the defaults are perfect.

Creating a network storage solution

Making nDisplay easy to update and configure requires some form of shared network storage, such as a NAS (network-attached storage) device. Without it, you would need to manually update each machine with a new version of the Unreal build and configuration. Instead, all the machines can work from one set of files, on one machine, shared with all of them.

There are a few possible ways to achieve this. The simplest is to use a physical NAS. These can be purchased from most PC stores and are essentially a small computer with a lot of storage. They connect to a local network and provide every machine on that network with access to their storage.

The second way is to use SharePoint (a business version of OneDrive), which uploads and shares the files between the machines allowed to access it and presents them at the same location on all machines. This can also be done with Google Drive.

The third way is to share a drive on one of the machines within the cluster. To set this up, it’s best to create it on the machine you’re doing development on, or if that is not possible, the machine designated as the master by the config file.

  1. Make sure that all computers on the network can see each other. This can be checked under Network in the File Explorer window. As long as all machines are visible, they should have access to the shared drive.

    If you cannot see any other computers in this window, you will need to Enable network discovery. This can be done by doing the following:
    - Open Start
    - Type ‘Control Panel’
    - Click Control Panel
    - Click Network and Sharing Center (you may need to click the Network and Internet heading first)
    - Click Change advanced sharing settings on the upper left side.
    - Check “Turn on network discovery”
    - Check “Turn on file and printer sharing”
    - Click Save Changes and continue.

    You will need to repeat this process on every machine in the cluster, so they can all see and talk to each other.

  2. On the Master machine, locate the folder you wish to share with all the other machines. The items you will need to share are:
    - The Unreal build in its entirety
    - The Unreal nDisplay config file
    - The Scalable data (should be near the config file)
    - The nDisplay Listener.exe (and its config files)

    I created a folder on my D drive called ‘nDisplayBuild’ and placed everything in that folder, like so.
    Test1 and Test2 are the names of my Unreal builds.



  3. Right click the folder you would like to share, and click Properties and click on the Sharing tab.

    Click Share…

    Your name will be in the list, but you need to add Everyone if it’s not already present. Then set the permission level to Read/Write (Read would be fine, but it prevents log files from being written).

    Click the Share button, and accept the warning that pops up. The folder will now be shared across all the machines, provided they have access credentials for the master machine. This is usually just the username and password you use to log into it.

  4. The next step is to add the shared folder we just created as a drive with the same location on all the machines. First, identify a drive letter that is not in use on any of the machines. E is usually free, as most modern machines have two drives, C and D; if E is taken, pick a letter further up the alphabet. It has to be the same drive letter on all machines.

    When ready, on each machine (including the master) Open the explorer window, and Right Click This PC, then select Map Network Drive
    This window will pop up.


    Pick your drive letter, and then click browse to bring up the Browse for Folder window, select the root folder that you created earlier, and click OK

    You also have the option to reconnect at sign-in, and to connect using different credentials. Both of these are important for all the machines (except the credentials option on the master, where they will be the same). These settings allow the machines to restart autonomously without issues.

    Once done, click Finish and you will have a network drive underneath your local drives on the This PC menu. It will have a drive letter, and provide a universal location for all machines to access the same files, at the same exact file path when added to all of them.

    This is a very common process, and if you run into any issues, there is lots of support online by searching for ‘Windows 10 add shared drive’

Be advised that all of these methods have their drawbacks; only the config file has no issues with being accessed from multiple places.

It would be beneficial to create a script that copies the files from the shared location to a standardised local location on the individual machines whenever they are updated. This prevents errors caused by the same files being accessed by different machines at once.

nDisplay Launcher

Once you have your Project deployed successfully to all the computers you’ve identified in your configuration file, you can use the nDisplayLauncher application to start the Project on all computers simultaneously. You should only run this on your master machine or a console machine (it doesn’t have to be a machine with the Unreal project on it; it just needs to be on the same subnet).

It can be found in the same place as the nDisplayListener program, within your Unreal Engine binaries folder. It’s a good idea to add this to the Start menu, or create a shortcut, as it’s very prone to crashing.

The Launcher looks like this, with 3 main fields:

  1. Add your packaged Project .exe file to the Applications list.
    Click Add under the Applications list, then browse to and select the .exe file you packaged for your Project. The nDisplay Launcher will add your new application to the list; click its name to select it. This must be at the same location on all the machines in the cluster - the paths must match.

  2. Specify your configuration file. Again, this must be a shared location, with the same address for all machines.

  3. Make sure your application is highlighted (it will say ‘no application is selected’ otherwise), then click Run.

As long as your configuration is correct, you will see it send a run command to all the PCs on the network.

Unreal, with Scalable-mapped displays, should start on all the machines in the cluster.

 

(c) Igloo Vision 2020