
Thread: Maximum number of threads?

  1. #1
    Join Date
    Dec 2016
    Posts
    1

    Default Maximum number of threads?

    Hi guys,

    I have 2x Xeon 2698 v4 CPUs, 80 threads in total, and I've noticed that mental ray is only using one Xeon, i.e. only 40 threads, in Maya.

    Will this be updated?

    thx

  2. #2
    Join Date
    Dec 2004
    Location
    Marina Del Rey, California
    Posts
    4,143

    Default

    We generally understand this issue with newer Xeons whose thread count can exceed 64, and we are looking at ways to address it.
    Is this for the recent release of NVIDIA mental ray for Maya?

    [mod: assuming yes, I'll move this to the new NVIDIA mental ray for Maya forum]
    Barton Gawboy

  3. #3
    Join Date
    Jan 2011
    Location
    West Dover, Nova Scotia, Canada
    Posts
    154

    Default

    Just as a brainstorming thought:

    Is it possible to use all 80 threads on the system by setting up mental ray standalone/satellite to run two concurrent ray sessions on the host, mapped to different port numbers on the same computer?

    In theory, if a manual thread limit of 40 was defined for each mental ray standalone process using the ray -threads option shown below, you might be able to better utilize the high number of cores on your workstation:

    -threads N (number of concurrent render threads)

    This approach might be a workable solution if it is possible to combine two locally hosted satellite rendering processes with the new satellite UI options in mental ray for Maya's "Open Render Resource Manager" window (found in the Maya Render Settings > Configuration tab).
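
    As a very rough, untested sketch of the idea (the install path below is just the default Linux location used later in this thread and is an assumption; how each session would get its own port number is exactly the open question here):

    Code:
    # Untested sketch: two mental ray standalone satellite sessions on the same host,
    # each limited to 40 render threads. The install path is an assumption; the port
    # assignment per session still needs a separate mechanism (e.g. services/xinetd).
    nohup /opt/nvidia/mentalray-3.14-for-maya-2017/bin/ray -server 1 -threads 40 &
    nohup /opt/nvidia/mentalray-3.14-for-maya-2017/bin/ray -server 1 -threads 40 &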

    [Attachment: mental ray Render Resource Manager.png]
    Last edited by AndrewHazelden; December 13th, 2016 at 13:10.

  4. #4
    Join Date
    Dec 2005
    Location
    Wherever The Computer Says
    Posts
    2,853

    Default

    This is a (new-ish) Windows thing.

    NUMA Support on Systems With More Than 64 Logical Processors

    On systems with more than 64 logical processors, nodes are assigned to processor groups according to the capacity of the nodes. The capacity of a node is the number of processors that are present when the system starts together with any additional logical processors that can be added while the system is running.
    You can try starting more than one instance and then changing the affinity so it uses the other processor group. This change in how processors are grouped seems to have affected multiple renderers, and not everyone has updated their code to reflect the grouping yet.
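
    For example (a rough, untested sketch, not a verified recipe), the built-in Windows start command has /NODE and /AFFINITY switches, so something along these lines might let you pin one standalone instance per NUMA node / processor group. The ray.exe invocation, thread count, and affinity mask below are assumptions based on the -threads example earlier in the thread:

    Code:
    rem Untested sketch (Windows cmd): one mental ray standalone per NUMA node,
    rem pinned with start /NODE and /AFFINITY. The affinity mask is hexadecimal and
    rem is interpreted relative to the chosen node; FFFFFFFFFF covers 40 logical CPUs.
    rem Run from (or point to) your actual mental ray standalone bin folder.
    start "mi node 0" /NODE 0 /AFFINITY FFFFFFFFFF ray.exe -server 1 -threads 40
    start "mi node 1" /NODE 1 /AFFINITY FFFFFFFFFF ray.exe -server 1 -threads 40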
    "Don't argue with an idiot, they will drag you down to their level and beat you over the head with experience."

  5. #5
    Join Date
    Jan 2011
    Location
    West Dover, Nova Scotia, Canada
    Posts
    154

    Talking

    Quote Originally Posted by Remydrh View Post
    This is a (new-ish) Windows thing.

    You can try starting more than one instance and then changing the affinity so it uses the other processor group. This change in how processors are grouped seems to have affected multiple renderers, and not everyone has updated their code to reflect the grouping yet.
    If you want to apply this same approach for adjusting the NUMA processor affinity settings on Linux, here are a few notes that could be helpful:

    Code:
    # numactl - Control NUMA policy for processes or shared memory 
    # https://linux.die.net/man/8/numactl
    
    # Add numactl to your RHEL/CentOS Linux system
    sudo yum install numactl
    This is just a rough, untested, and theoretical example of how you could try to launch multiple ray processes using numactl. (I haven't explored the correct way to map the custom port numbers in mental ray standalone for each instance, so more work would need to be done to come up with the final launching command.)

    Code:
    # Launch a copy of mental ray standalone for each NUMA node on the same server
    # (a 4 CPU socket server = 8 NUMA nodes; only the first four nodes are shown below,
    #  repeat the same pattern for the remaining nodes)
    # Todo: look into the specific way to do custom port # binding for each instance...
    echo "[mental ray DR] Starting NUMA node instances"
    echo ""
    nohup numactl -l --physcpubind=0-7 /opt/nvidia/mentalray-3.14-for-maya-2017/bin/ray -server 1 -threads 8 &
    nohup numactl -l --physcpubind=8-15 /opt/nvidia/mentalray-3.14-for-maya-2017/bin/ray -server 1 -threads 8 &
    nohup numactl -l --physcpubind=16-23 /opt/nvidia/mentalray-3.14-for-maya-2017/bin/ray -server 1 -threads 8 &
    nohup numactl -l --physcpubind=24-31 /opt/nvidia/mentalray-3.14-for-maya-2017/bin/ray -server 1 -threads 8 &
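
    Before settling on the --physcpubind ranges it's worth confirming the real NUMA layout of the box, since node and CPU numbering varies between systems; numactl can report it directly:

    Code:
    # Print the NUMA topology (node count, CPU IDs per node, memory per node)
    numactl --hardware

    # Show the NUMA policy and allowed CPUs of the current shell
    numactl --show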

  6. #6
    Join Date
    Jan 2011
    Location
    West Dover, Nova Scotia, Canada
    Posts
    154

    Default Numactl and Custom mi-ray Port Numbers

    Hi Bart,

    What is the correct way to set up the xinetd/services-based custom port mapping when multiple instances of mental ray standalone are running on a Linux server? I am asking this in the context of having several numactl-started instances of mental ray running that are bound to specific NUMA nodes on a 4-socket server.

    I'm a bit rusty on this process as the last time I explored mental ray standalone workflows was about 5 years ago with mental cloud direct instances running on Amazon EC2.

    Would editing the /etc/services file to add several mi-ray line entries with custom port numbers of 14170, 14171, 14172, 14173 be enough to do that?

    Code:
    mi-ray 14170/tcp   # Nvidia mental ray
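
    Just to make the question concrete, here is a rough sketch of the kind of entries I mean (the extra service names are invented for illustration; whether duplicate mi-ray names or separate names are required, and whether matching xinetd configs are also needed, is exactly what I'm unsure about):

    Code:
    # Hypothetical /etc/services entries for four local mental ray standalone instances
    # (all names after the first mi-ray line are made up for this example)
    mi-ray    14170/tcp   # Nvidia mental ray - NUMA node 0 instance
    mi-ray-1  14171/tcp   # Nvidia mental ray - NUMA node 1 instance
    mi-ray-2  14172/tcp   # Nvidia mental ray - NUMA node 2 instance
    mi-ray-3  14173/tcp   # Nvidia mental ray - NUMA node 3 instance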
