
XMesh Saver MX Submit Job To Deadline Rollout

 

These controls allow the submission of an XMesh Saver job to Deadline, Thinkbox's network management software.

Please note that Deadline provides an Evaluation mode that can run two network nodes for free without any other feature limitations.

If Deadline is not installed on your system, the following rollout will appear:

If Deadline is installed on your system, the following controls will appear:

Job Base Name:

This text field will contain the 3ds Max scene file name by default. You can enter any job description there. Some other tokens will be appended to this base name by the submitter to describe the nature of the job. Note that you can edit this value in the Deadline Monitor's Job Properties dialog after submission.

Comment:

This text field can be used to provide additional user comments about the content and purpose of the job.

Name:

This text field will be populated with the user's name, taken by default from the Windows login name. Note that you can enter any name there, preferably one that is registered as a Deadline user.

Department:

This text field can be used to provide an optional Department description, useful for sorting and filtering inside the Deadline Monitor.

Pool:

This drop-down list will display all pools available on Deadline, plus a default "none" pool. Pools are useful for scheduling purposes and splitting the hardware resources between projects and tasks in larger companies.

See the Deadline documentation for more information on using Pools.

Group:

This drop-down list will display all machine groups available on Deadline, plus a default "none" group. Groups are useful for making sure only machines with the required hardware and software profile are allowed to work on a job. For example, a Group containing all machines with 3ds Max and XMesh installed would be useful to ensure the caching job will actually be processed without errors.

See the Deadline documentation for more information on using Groups.

Priority:

Both the color slider and the spinner define the numeric priority of the job, in the range from 0 (lowest) to 100 (highest); the color of the slider changes from red through yellow to green as the value increases.

Number of Parallel Tasks / Number of Machines Working Concurrently:

The first spinner controls the number of tasks in the Deadline Job to be created. Each task can be processed by a separate network node, thus allowing for parallel processing of the same scene by multiple machines.

The second spinner controls the maximum number of machines that can be dispatched at the same time. 

When the first value is 1, only one task will be created inside the job and only one network node will process it regardless of the second value. The job will not finish faster than the local workstation would, but it will free up the workstation for more productive work.

When the first value is higher than 1, the Number of Machines Working Concurrently value will determine how many network nodes will actually be allowed to work on the job. If the second number is lower than the Number of Parallel Tasks, it will determine the maximum number of machines. If the second number is higher, the Number of Parallel Tasks will be the actual limit. In other words, the lower of the two values sets the effective concurrency.

For example, a Number of Parallel Tasks of 5 and a Number of Machines Working Concurrently of 2 will allow only 2 machines to process two of the 5 tasks at the same time. If you had 1000 frames to process (1-1000), each task would contain a segment of 200 frames (1-200, 201-400, and so on). The two machines would finish the first two tasks (giving you 400 frames) and then pick up the next two tasks. Whichever of the two finishes first would then pick up the last remaining task, covering frames 801 to 1000.
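The splitting and concurrency logic described above can be sketched as follows. This is a hypothetical illustration, not XMesh or Deadline code; the function names `split_frames` and `active_machines` are made up for this example.

```python
def split_frames(start, end, num_tasks):
    """Split an inclusive frame range into num_tasks contiguous segments."""
    total = end - start + 1
    base, extra = divmod(total, num_tasks)
    segments = []
    frame = start
    for i in range(num_tasks):
        # Distribute any remainder frames across the first few segments.
        count = base + (1 if i < extra else 0)
        segments.append((frame, frame + count - 1))
        frame += count
    return segments

def active_machines(num_tasks, max_concurrent):
    # The effective number of machines working at once is the smaller
    # of the two spinner values.
    return min(num_tasks, max_concurrent)

print(split_frames(1, 1000, 5))
# [(1, 200), (201, 400), (401, 600), (601, 800), (801, 1000)]
print(active_machines(5, 2))
# 2
```

With 5 tasks and a concurrency limit of 2, only two of the five segments are processed at any one time, matching the scheduling behavior described in the example.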

Note that the first frame of each segment (1, 201, 401, 601, 801) will be saved completely and won't be able to reference any previous data files even if the topology is unchanging at that point. Thus, a short sequence saved with many parallel tasks might produce a lot more data on disk than a single sequence save if topology is mostly consistent, but the parallel processing will probably finish faster.