PlateSpin Orchestrate from Novell is an advanced datacenter management solution designed to manage all network resources. It provides the infrastructure that manages groups of ten, one hundred, or thousands of physical or virtual resources.
PlateSpin Orchestrate is equally adept at solving a range of distributed processing problems, from high performance computing (breaking work into many small chunks that can be processed in parallel) to distributed job scheduling. The following figure shows the product’s high-level architecture:
Figure 1-1 PlateSpin Orchestrate Architecture
This section contains information about the following topics:
Agents are installed on all managed resources as part of the product deployment. The agent connects every managed resource to its configured server and advertises to the PlateSpin Orchestrate Server that the resource is available for tasks. This persistent and auto-reestablishing connection is important because it provides a message bus for the distribution of work, collection of information about the resource, per-job messaging, health checks, and resource failover control.
After resources are enabled, PlateSpin Orchestrate can discover, access, and store detailed abstracted information—called “facts”—about every resource. Managed resources, referred to as “nodes,” are addressable members of the Orchestrate Server “grid” (also sometimes called the “matrix”). When integrated into the grid, nodes can be deployed, monitored, and managed by the Orchestrate Server, as discussed in Section 1.2, Understanding PlateSpin Orchestrate Functionality.
An overview of the PlateSpin Orchestrate grid architecture is illustrated in the figure below, much of which is explained in this guide:
Figure 1-2 PlateSpin Orchestrate Server Architecture
For additional information about job architecture, see Job Architecture in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
PlateSpin Orchestrate enables you to monitor your system’s computing resources using the built-in Resource Monitor. To open the Resource Monitor in the Development Client, see “Monitoring Server Resources” in the PlateSpin Orchestrate Administration Guide.
The following entities are some of the key components involved in the Orchestrate Server:
All managed resources, which are called nodes, have an agent with a socket connection to the Orchestrate Server. All resource use is metered, controlled, and audited by the Orchestrate Server. Policies govern the use of resources.
PlateSpin Orchestrate allocates resources by reacting to the load on each resource: as soon as the load rises above a threshold set in a policy, a new resource is allocated, and the load on the original resource consequently drops back to an acceptable level.
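Conceptually, the reaction works like the following minimal Python sketch. This is illustrative pseudologic only, not Orchestrate code; in the product, the threshold and the reaction are driven by policies on the server.

# Conceptual sketch only: when measured load rises above a
# policy-defined threshold, another resource is allocated, which
# lowers the per-resource load back to an acceptable level.
def balance(total_load, resource_count, threshold):
    while total_load / resource_count > threshold:
        resource_count += 1      # allocate one more resource
    return resource_count

balance(total_load=12.0, resource_count=2, threshold=4.0)  # -> 3 resources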
You can also write jobs that perform cost accounting, accruing the cost of a resource up through the job hierarchy on a periodic basis (about every 20 seconds). For more information, see Auditing and Accounting Jobs in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
A collection of jobs, all under the same hierarchy, can cooperate with each other so that when one job offers to give up a resource, it is reallocated to another job of similar priority. Similarly, when a higher priority job becomes overloaded and is waiting on a resource, the system “steals” a resource from a lower priority job, increasing the load on the lower priority job and allocating the resource to the higher priority job. This process satisfies the policy, which specifies that a higher priority job must complete at the expense of a lower priority job.
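Conceptually, the reallocation behaves like this sketch (the data structures and function are illustrative only; the real logic is internal to the Orchestrate Server):

# Conceptual sketch of priority-based resource "stealing." Jobs are
# modeled as dictionaries purely for illustration.
def steal_for(waiting_job, running_jobs):
    donors = [j for j in running_jobs
              if j["priority"] < waiting_job["priority"] and j["resources"]]
    if donors:
        donor = min(donors, key=lambda j: j["priority"])  # lowest priority first
        resource = donor["resources"].pop()        # donor's load increases
        waiting_job["resources"].append(resource)  # higher-priority job proceeds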
PlateSpin Orchestrate users must authenticate to access the system. Access and use of system resources are governed by policies.
A job definition is described in the embedded enhanced Python script that you create as a job developer. Each job instance runs a job that is defined by the Job Definition Language (JDL). Job definitions might also contain usage policies. For more information, see Job Class in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
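For orientation, a minimal JDL job definition looks roughly like the following sketch. The Job and Joblet base classes and the *_started_event handlers follow the conventions documented in the Developer Guide; the job itself is hypothetical.

# Minimal JDL sketch (hypothetical job). Job and Joblet are provided
# by the JDL runtime, so a .jdl script does not import them.
class demo(Job):
    def job_started_event(self):
        # Ask the server to run one joblet on a matching resource.
        self.schedule(demoJoblet)

class demoJoblet(Joblet):
    def joblet_started_event(self):
        # Work performed on the managed resource goes here.
        print "hello from a managed resource"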
Jobs are instantiated at runtime from job definitions that inherit policies from the entire context of the job (such as users, job definitions, resources, or groups). For more information, see JobInfo in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
Policies are XML documents that contain various constraints and static fact assignments that govern how jobs run in the PlateSpin Orchestrate environment.
Policies are used to enforce quotas, job queuing, resource restrictions, permissions, and other job parameters. Policies can be associated with any PlateSpin Orchestrate object. For more information, see Section 1.2.2, Policy-Based Management.
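As an illustrative sketch (the element names are modeled on the policy examples in the Developer Guide, and the facts and values here are hypothetical), a policy might constrain a job to Linux resources and assign a static fact:

<policy>
  <!-- Consider only resources whose OS family fact is "linux". -->
  <constraint type="resource">
    <eq fact="resource.os.family" value="linux" />
  </constraint>
  <!-- Static fact assignment (hypothetical fact name and value). -->
  <fact name="job.timeout" type="Integer" value="300" />
</policy>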
Facts represent the state of any object in the PlateSpin Orchestrate grid. They can be discovered through a job or they can be explicitly set.
Facts control the behavior of a job (or joblet) while it is executing. Facts are also used to detect and return information about that job in various UIs and server functions. For example, a job whose behavior is set through its policy to a specified value might do absolutely nothing except return immediately, at no cost beyond the network latency.
There are three basic types of facts:
Static: Facts that require you to set a value. For example, in a policy, you might set a value to be False. Static facts can be modified through policies.
Dynamic: Facts produced by the PlateSpin Orchestrate system itself. Policies cannot override dynamic facts; they are read-only, and their values are determined by the PlateSpin Orchestrate Server.
Computed: Facts derived from other values, much like the calculated cell of a spreadsheet. Computed facts have logic behind them that derives their values.
For example, you might have two numeric facts that you want expressed in another fact as their average. You could compose a computed fact that averages the two other facts and exposes the result under its own fact name. This enables you to create facts that represent metrics on the system that are not necessarily available in the default set, or that are derived from other dynamic facts.
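The logic behind such a computed fact can be pictured in plain Python as follows; the source fact names are hypothetical, and the actual computed-fact syntax is covered in the Developer Guide.

# Illustrative only: the averaging logic behind a computed fact.
def average_load(facts):
    one_min = facts["resource.loadavg.1min"]    # hypothetical fact name
    five_min = facts["resource.loadavg.5min"]   # hypothetical fact name
    return (one_min + five_min) / 2.0

average_load({"resource.loadavg.1min": 2.0,
              "resource.loadavg.5min": 4.0})    # -> 3.0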
For more information about facts, see Facts in Policy Elements in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
To choose resources for a job, the PlateSpin Orchestrate Server uses resource constraints. A resource constraint is Boolean logic that executes against facts in the system. Based on this evaluation, the server considers only those resources that match the criteria set up by the constraints.
For more detailed information, see Working with Facts and Constraints in the PlateSpin Orchestrate 2.0 Developer Guide and Reference, as well as the JDL constraint definitions listed in the same guide.
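Conceptually, a resource constraint behaves like a Boolean predicate evaluated against each resource’s facts, as in this plain-Python sketch (the fact names and inventory are illustrative, not actual JDL or policy syntax):

# Conceptual sketch: keep only the resources whose facts satisfy the
# constraint. The fact names and values below are illustrative.
resources = [
    {"resource.id": "node1", "resource.os.family": "linux",
     "resource.memory.physical.total": 4096},
    {"resource.id": "node2", "resource.os.family": "windows",
     "resource.memory.physical.total": 2048},
]

def constraint(facts):
    return (facts.get("resource.os.family") == "linux"
            and facts.get("resource.memory.physical.total", 0) >= 2048)

candidates = [r for r in resources if constraint(r)]  # only node1 matches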
Resources, users, job definitions, and virtual machines (VMs) are managed in groups, with group policies that are inherited by the members of each group.
A virtual machine host is a resource that is able to run guest operating systems. Attributes (facts) associated with the VM host control its limitations and functionality within the Orchestrate Server. A VM image is a resource image that can be cloned and/or provisioned. A VM instance represents a running copy of a VM image.
Templates are images that are meant to be cloned (copied) prior to provisioning the new copy. For more information, see Creating a Template from a VM in the PlateSpin Orchestrate 2.0 VM Client Guide and Reference.
The Orchestrate Server manages all nodes by administering jobs (and the functional control of jobs at the resource level by using joblets), which control the properties (facts) associated with every resource. In other words, jobs are units of functionality that dispatch data center tasks to resources on the network, such as management, migration, monitoring, and load balancing.
PlateSpin Orchestrate provides a unique job development, debugging, and deployment environment that expands with the demands of growing data centers.
As a job developer, your task is to develop jobs to perform a wide array of work that can be deployed and managed by PlateSpin Orchestrate.
Jobs, which run on the Orchestrate Server, can provide functions within the PlateSpin Orchestrate environment that might last from seconds to months. Job and joblet code exist in the same script file and are identified by the .jdl extension. The .jdl script contains only one job definition and zero or more joblet definitions. A .jdl script can have only one Job subclass. As for naming conventions, the Job subclass name does not have to match the .jdl filename; however, the .jdl filename is the defined job name, so the .jdl filename must match the .job filename that contains the .jdl script. For example, the job files (demoIterator.jdl and demoIterator.policy) included in the demoIterator example job are packaged into the archive file named demoIterator.job, so in this case, the name of the job is demoIterator.
A job file also might have policies associated with it to define and control the job’s behavior and to define certain constraints to restrict its execution. A .jdl script that is accompanied by a policy file is typically packaged in a job archive file (.job). Because a .job file is physically equivalent to a Java archive file (.jar), you can use the JDK JAR tool to create the job archive.
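For example, using the filenames from the demoIterator example above, the archive could be created with the standard jar command:

jar cf demoIterator.job demoIterator.jdl demoIterator.policy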
Multiple job archives can be delivered as a management pack in a service archive file (SAR), identified by the .sar extension. Typically, a group of related files is delivered this way. For example, the Xen30 management pack is a SAR.
As shown in the following illustration, jobs include all of the code, policy, and data elements necessary to execute specific, predetermined tasks administered either through the PlateSpin Orchestrate Development Client, or from the zos command line tool.
Figure 1-3 Components of a Job (my.job)
Because each job has specific, predefined elements, jobs can be scripted and delivered to any agent, which ultimately can lead to automating almost any datacenter task. Jobs provide the following functionality:
For more information, see Using PlateSpin Orchestrate Jobs in the PlateSpin Orchestrate 2.0 Developer Guide and Reference, as well as the JDL job class definitions in the same guide.
Jobs can be written to control all operations and processes of managed resources. Through jobs, the Orchestrate Server manages resources to perform work. Automated jobs (written in JDL) are broken down into joblets, which are distributed among multiple resources.
By managing many small joblets, the Orchestrate Server can enhance system performance and maximize resource use.
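Continuing the earlier JDL sketch, the fan-out pattern looks roughly like this (a hypothetical job; the argument passing to schedule() is shown schematically rather than as the documented signature):

# Hypothetical fan-out sketch: one joblet per chunk of work. The
# server's broker matches each joblet to an available resource.
class sweep(Job):
    def job_started_event(self):
        for chunk in range(100):
            # Argument passing is schematic, not the documented API.
            self.schedule(sweepJoblet, {"chunk": chunk})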
Jobs can detect demand and monitor health of system resources, then modify clusters automatically to maximize system performance and provide failover services.
Some jobs provide inspection of resources in order to manage assets more effectively. These jobs enable all agents to periodically report basic resource facts and performance metrics. In essence, these metrics are stored as facts consisting of keyword and typed-value pairs, as in the following example:
resource.loadaverage=4.563, type=float
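Viewed as data, each such metric is simply a fact name mapped to a typed value, roughly:

# Rough illustration of metric facts as name/typed-value pairs.
metrics = {
    "resource.loadaverage": (4.563, "float"),
    "resource.cpu.mhz": (800, "integer"),
}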
Jobs can poll resources and automatically trigger other jobs if resource performance values reach certain levels.
The system job scheduler is used to run resource discovery jobs to augment resource facts as demands change on resources. This can be done on a routine, scheduled basis or whenever new resources are provisioned, new software is installed, bandwidth changes occur, OS patches are deployed, or other events occur that might impact the system.
Consequently, resource facts form a capabilities database for the entire system. Jobs can be written that apply constraints to facts in policies, thus providing very granular control of all resources as required. All active resources are searchable and records are retained for all off-line resources.
The following osInfo.job example shows how a job sets operating system facts for specific resources:
resource.cpu.mhz (integer), e.g. "800" (in MHz)
resource.cpu.vendor (string), e.g. "GenuineIntel"
resource.cpu.model (string), e.g. "Pentium III"
resource.cpu.family (string), e.g. "i686"
osInfo.job is packaged as a single cross-platform job and includes the Python-based JDL and a policy to set the timeout. It is run each time a new resource appears, and once every 24 hours to ensure the validity of the resources. For a more detailed review of this example, see osInfo.job in Using PlateSpin Orchestrate Jobs in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
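A condensed sketch of the kind of discovery logic osInfo.job performs follows. Here setFact() is an illustrative stand-in for the JDL fact-setting API, and the platform probing is simplified.

# Hypothetical osInfo-style joblet: probe the local platform and
# record the results as resource facts. Joblet comes from the JDL
# runtime; setFact() is an illustrative stand-in.
import platform

class osInfoJoblet(Joblet):
    def joblet_started_event(self):
        self.setFact("resource.cpu.model", platform.processor())
        self.setFact("resource.os.family", platform.system().lower())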
Jobs can be scheduled to periodically trigger specific system resources based on specific time constraints or events. As shown in the following figure, PlateSpin Orchestrate provides a built-in job scheduler that enables you or system administrators to flexibly deploy and run jobs.
Figure 1-4 The Job Scheduler
For more information, see Resource Selection in Using PlateSpin Orchestrate Jobs in the PlateSpin Orchestrate 2.0 Developer Guide and Reference, and The PlateSpin Orchestrate Job Scheduler in the PlateSpin Orchestrate 2.0 Development Client Reference. See also Job Scheduling and Job in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
Jobs also drive provisioning for virtual machines and blade servers. Provisioning adapter jobs are deployed and organized into appropriate job groups for management convenience. Provisioning adapters are deployed as part of your VMM license.
For more information, see Virtual Machine Management in the PlateSpin Orchestrate 2.0 Developer Guide and Reference, and Section 1.2.1, Resource Virtualization.
The Orchestrate Server is a “broker” that can distribute jobs to every “partner” agent on the grid. Based on assigned policies, jobs have priorities and are executed based on the following contexts:
User Constraints
User Facts
Job Constraints
Job Facts
Job Instance
Resource User Constraints
Resource Facts
Groups
Each object in a job context contains the following elements:
Figure 1-5 Constraint-Based Resource Brokering
For more information, see Working with Facts and Constraints in the PlateSpin Orchestrate 2.0 Developer Guide and Reference.
There are three API interfaces available to the Orchestrate Server:
Orchestrate Server Management Interface: The PlateSpin Orchestrate Server, written entirely in Java using the JMX (Java MBean) interface for management, leverages this API for the PlateSpin Orchestrate Development Client. The Development Client is a robust desktop GUI designed for administrators to apply, manage, and monitor usage-based policies on all infrastructure resources. The Development Client also provides at-a-glance grid health and capacity checks.
For more information, see the PlateSpin Orchestrate 2.0 Development Client Reference.
Figure 1-6 PlateSpin Orchestrate Development Client
Job Interface: Includes a customizable/replaceable Web application and the zos command line tool. The Web-based Server Portal built with this API provides a universal job viewer from which job logs and progress can be monitored. The job interface is accessible via a Java API or CLI, and a subset is also available as a Web Service. The default PlateSpin Orchestrate Server Portal leverages this API; it can be customized, or an alternative J2EE* application can be written.
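For example, a job can be started from the zos tool with a sequence along these lines (exact commands and options are documented with the tool; the job name comes from the demoIterator example above):

zos login myserver.example.com
zos run demoIterator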
PlateSpin Orchestrate Monitoring System: Monitors all aspects of the data center through an open source, Eclipse*-based interface referred to as the PlateSpin Orchestrate VM Client. This interface operates in conjunction with the Orchestrate Server and monitors the following objects:
Deployed jobs that “teach” PlateSpin Orchestrate and provide the control logic that it runs when performing its management tasks
Users and Groups
Virtual Machines
For more information, see the PlateSpin Orchestrate 2.0 VM Client Guide and Reference.
Figure 1-7 The PlateSpin Orchestrate VM Client Interface