
Thread: "Real-time" Iray cluster rendering from Unreal Engine 4

  1. #1
    Join Date
    Oct 2017
    Posts
    2

    Question "Real-time" Iray cluster rendering from Unreal Engine 4

    Hi all,

    I'm an undergraduate student currently completing my honours thesis. My thesis research involves using cluster & cloud computing for physics simulation. Specifically, I'm working on real-time use of a cluster over a network for high fidelity simulation of space engineering (currently simulating a satellite ion thruster array). The simulation software I am developing uses Unreal Engine 4 to allow virtual reality interaction with the user's design/simulation project.

    The cluster is set up with 6 nodes, each with an i7 and a GTX 745. I can successfully run HPL with CPU & GPU utilisation through CUDA. Iray Server also runs without issue on all nodes.

    I'm interested in using Iray to perform some of the simulation rendering for VR, similar to what the 3DS Max Iray plugin provides. I've searched these forums for information on Unreal Engine 4 and real-time rendering, but I'm hoping someone more familiar with Iray can point me in the right direction and explain what I should research. Correct me if I'm wrong, but so far it seems like what I need to do is use Iray to create a lightmap, as real-time rendering of multiple smaller viewports may not be fast enough to provide a pleasant user experience.

    While I'm confident with programming, I'd like to limit the amount of development I have to do with Iray if possible, allowing me to focus on the software I'm developing. If I can use Iray to supplement rather than replace my current local rendering solution, that would also be great.

    tl;dr - I want to use Iray to replace or supplement the rendering done in UE4, using a local ROCKS cluster with each node running Iray Server. Is this feasible? Where should I begin?

    Cheers!

  2. #2
    Join Date
    May 2007
    Location
    Melbourne, Australia
    Posts
    425

    Default

    Some random thoughts.

    - Directly rendering for VR in real time with Iray (whether in Iray Interactive or Iray Photoreal mode) is not going to be feasible, due to the need to achieve at least 90 fps for a good experience in a headset. So I don't see any way to replace UE4 in the scenario you are describing.

    - Rendering panoramic VR domes (in stereo) is totally feasible and works well; however, you can't freely move around the environment then. You can, however, set up stations, and in theory could use UE4 for the navigation between them, so that would be the supplementing use case (there's a small sketch of the stereo panorama projection after this list). You could use your cluster to render these with Iray Server in a queued, offline way if desired.

    - Baking lighting is also possible, though not yet built into Iray. There are examples of how this can be done with the Irradiance Probes feature, though (a generic probe-evaluation sketch also follows this list). This would also be supplementing UE4; however, just baking incoming irradiance really doesn't leverage much of Iray's power given the effort you'd have to spend integrating it.

    - Iray Server only supports clustering for queued renders, not interactive ones. If you want interactive clustering you'd need to use VCA hardware.

    - Iray running on a VCA has a special IQ mode which can enable much faster convergence by exploiting InfiniBand and a cluster-optimised rendering method. With it, you get as many iterations per frame as you have physical machines. The frame rate remains that of a single machine, but you get a more converged image every frame.

    - The hardware you describe, even with 6 nodes, is unfortunately not going to come close to providing an interactive experience with Iray. The GPUs are several generations old and low down the spec pole. You can get an idea of benchmarks on our site here. The GTX 745 should be about 0.75x the performance of a GTX 750 Ti, which we did benchmark, so it would have a score of about 0.855 in our benchmark. A single Quadro P6000 or TITAN X Pascal would therefore give greater performance than the entire cluster in this case (the arithmetic is worked through after this list).
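
    To make the dome option above concrete: a stereo panorama is just an equirectangular image in which every pixel maps to its own camera ray, with each eye's origin offset around a small circle. Here is a minimal sketch of that omni-directional stereo (ODS) projection; this is generic maths, not Iray code, and the function name and y-up convention are my own:

    ```python
    import numpy as np

    def ods_ray(u, v, ipd=0.064, eye=+1):
        """Camera ray for one pixel of an omni-directional stereo panorama.

        u, v : normalised equirectangular image coords in [0, 1)
        ipd  : interpupillary distance in metres (~64 mm is typical)
        eye  : +1 for the right eye, -1 for the left
        Returns (origin, direction); y is up.
        """
        theta = (u - 0.5) * 2.0 * np.pi        # longitude, -pi .. pi
        phi = (0.5 - v) * np.pi                # latitude,  -pi/2 .. pi/2
        # Viewing direction on the unit sphere for this pixel.
        d = np.array([np.cos(phi) * np.sin(theta),
                      np.sin(phi),
                      np.cos(phi) * np.cos(theta)])
        # Each eye sits on a circle of radius ipd/2; offset the origin
        # sideways, perpendicular to the horizontal view direction.
        tangent = np.array([np.cos(theta), 0.0, -np.sin(theta)])
        o = eye * (ipd / 2.0) * tangent
        return o, d
    ```

    The point is that the whole render is view-independent apart from head rotation, which is exactly why it queues so nicely on Iray Server: both eye images can be baked offline on the cluster and the headset just samples them.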
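    On the irradiance-baking point: I don't have the Irradiance Probes example code to hand, so this is only a generic sketch of the consuming side, assuming the bake produces low-order spherical-harmonic coefficients per probe (that data layout is my assumption). The constants are the standard irradiance-convolution ones from Ramamoorthi & Hanrahan:

    ```python
    import numpy as np

    # Band 0/1 convolution constants for turning SH radiance into irradiance.
    A0, A1 = np.pi, 2.0 * np.pi / 3.0
    Y00, Y1 = 0.282095, 0.488603

    def irradiance(sh, n):
        """Diffuse irradiance at unit normal n from one baked probe.

        sh : (4, 3) array of L1 SH radiance coefficients,
             rows ordered L00, L1-1 (y), L10 (z), L11 (x); columns RGB
        """
        basis = np.array([A0 * Y00,
                          A1 * Y1 * n[1],
                          A1 * Y1 * n[2],
                          A1 * Y1 * n[0]])
        return basis @ sh  # RGB irradiance

    # In an engine you would interpolate the coefficient sets of the
    # surrounding probes (e.g. trilinearly on a grid) before evaluating.
    ```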
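    And the benchmark arithmetic spelled out (the GTX 750 Ti reference score here is back-derived from the two figures quoted above, so treat it as indicative only):

    ```python
    gtx750ti_score = 1.14                    # implied by 0.855 / 0.75
    gtx745_score = 0.75 * gtx750ti_score     # ~0.855, as quoted above
    cluster_score = 6 * gtx745_score         # ~5.13 aggregate for 6 nodes

    print(f"GTX 745:        {gtx745_score:.3f}")
    print(f"6-node cluster: {cluster_score:.2f}")
    # A single Quadro P6000 or TITAN X Pascal benchmarks above this
    # aggregate, which is why the cluster won't reach interactive rates.
    ```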

    Personally, I would ask yourself what unique aspect of Iray you are really trying to leverage here. Is physical accuracy (in terms of light transport) critical to the use case? Is the quality achievable from UE4 not sufficient? Would a hybrid approach using UE4 for navigation and offline rendering for VR points be appropriate? It sounds like VR is critical to your concept, and that is what is driving a lot of the issues. Without free-roaming real-time VR in the picture, Iray is a perfectly capable interactive rendering tool, and the same goes for station/point-based VR; it's only fully free-roaming VR that poses a problem.

  3. #3
    Join Date
    Oct 2017
    Posts
    2

    Default

    Thanks for the input, ardenpm!

    That's a shame. I saw this video: https://youtu.be/uAVJ3QsJ0fY?t=317 which shows how Iray can be used for VR, but I couldn't find much more information about integration with something like Unreal. Are they simply rendering a photoreal VR dome?

    What I wanted to investigate was how rendering on the cluster could provide a VR experience to someone using a less powerful computer or handheld device on the same network, in the same way that I'm using the cluster to allow for higher-fidelity physics calculations. While the quality provided by UE4 is sufficient, it would be really cool to view some (almost) real-time rendering performed by the cluster on a handheld device.

    I was also interested in an approach like http://vrcluster.io/, except stitching the views back together for the headset rather than displaying them on different projectors, although I don't think a good VR experience is possible that way.

    From the information you've given me, it seems it may be better to use the GPUs with CUDA to get some extra performance for the physics calculations rather than using them for rendering. Alternatively, using them for VR domes would be useful for the parts of the simulation where interaction is not required; I'll look into this further.

    Cheers!

  4. #4
    Join Date
    May 2007
    Location
    Melbourne, Australia
    Posts
    425

    Default

    That demo used techniques that unfortunately did not make their way into the released Iray product. Most of our customers are using VR domes if they want full, physically accurate rendering since it is the most practical compromise, and something like Unreal if interactivity is more important to them.

    Personally, I have doubts about VR rendering over networks given the latency it introduces (a rough latency budget follows); however, HTC apparently don't feel the same way, given their partnership with Dalian Television and Beijing Cyber Cloud in China to do remote VR rendering. It will be interesting to see if that actually works. There has also been some interesting research into predictive rendering, basically guessing where people are going to move/look and pre-rendering that content, however I haven't seen that commercially deployed.
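
    A back-of-envelope motion-to-photon budget shows where my scepticism comes from. The ~20 ms comfort target is the commonly cited figure; the per-stage costs below are illustrative guesses on my part, not measurements:

    ```python
    budget_ms = 20.0    # commonly cited motion-to-photon comfort target
    stages = {
        "pose sample + send": 1.0,   # illustrative estimates only
        "render one 90 Hz frame": 11.0,
        "video encode": 3.0,
        "client decode": 3.0,
        "display scanout": 2.0,
    }

    network_rtt = budget_ms - sum(stages.values())
    print(f"Left for the network round trip: {network_rtt:.1f} ms")
    # Essentially nothing remains, which is why remote-VR schemes lean on
    # predictive rendering and client-side reprojection to hide latency.
    ```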

    The hybrid approach of Unreal for navigation and domes for when you get where you're going is pretty appealing to me as a middle ground, until the time when we can path trace everything, all the time, at high enough frame rates.
