August 21, 2008
The information in this Readme file pertains to Novell® ZENworks® Orchestrator, the Novell product that interacts with configuration and storage resource management servers to manage physical “compute” and “storage” resources and the relationships between them. The Orchestrator also manages virtual resources, controlling the entire lifecycle of each virtual machine.
The issues included in this document were identified when ZENworks Orchestrator 1.3 was initially released. This document provides descriptions of limitations of the product or known issues and workarounds, when available. The Novell ZENworks Orchestrator 1.3 Release Notes shipped with the product include additional content that you should be aware of when setting up and using ZENworks Orchestrator. To view the Release Notes, open the Administrator Information page and click the Release Notes link.
NOTE: The Administrator Information page is available after the product is installed. For more information, see Installing the Agent and Clients from the Administrator Information Page (Unsupported) in the Novell ZENworks Orchestrator 1.3 Installation and Getting Started Guide. This guide also provides detailed installation instructions for the product.
This readme includes information organized in the following sections:
The following information was added to this Readme after the initial posting on June 9, 2008:
The issues updated on this date include the following:
The issues added on this date include the following:
The following information is included in this section:
The VM Builder and VM Warehouse patterns have installation dependencies that require the python-xml package to be installed on their server, which is automatically done when you install the VM Builder and VM Warehouse patterns. The ZENworks Orchestrator Server patterns do not have this dependency. However, if you try to configure the ZENworks Orchestrator Server using the provided configuration script without the python library, it fails.
Therefore, if you install the ZENworks Orchestrator Server on a different machine from the VM Builder and VM Warehouse, you must first install the python-xml package on that server (using yast -i python-xml) before you install the ZENworks Orchestrator Server software. The necessary python library is then available for configuring the ZENworks Orchestrator Server.
If you install the ZENworks Orchestrator Server on the same server as the VM Builder and VM Warehouse, there is no issue because the python-xml package is automatically installed with the VM Builder and VM Warehouse patterns.
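As a minimal sketch (assuming a SLES machine where rpm and YaST are available, and using the package and command names given above), the dependency check can be scripted before you start the server installation:

```shell
# Sketch: make sure python-xml is present before installing the
# ZENworks Orchestrator Server on a machine that does not receive the
# VM Builder / VM Warehouse patterns.
ensure_python_xml() {
  if rpm -q python-xml >/dev/null 2>&1; then
    echo "python-xml already installed"
  else
    # Install the dependency so the server configuration
    # script can find the python library it needs.
    yast -i python-xml
  fi
}
```

On a real server, you would call ensure_python_xml once before launching the Orchestrator Server install.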
If the ZENworks Orchestrator Server is installed using a Trial License Key, the expiration date and user/node limitations are not shown in the ZENworks Orchestrator Console’s About box.
Workaround: View the following ZENworks Orchestrator Server log for this information:
/var/opt/novell/zenworks/zos/server/logs/server.log
The RPM agent installer without JRE (novell-zenworks-zos-agent-1.3.0-33604.i586.rpm) linked to on the Administrator Information page (that is, the page displayed at <server_IP_address:8001>) has not been fully tested and is not supported by Novell.
The installer is not labeled as being unsupported, as are the other installers on the page.
The following information is included in this section:
If Network File System (NFS) is used to mount a shared volume across nodes that are running the Orchestrator Agent, the agent cannot properly set the UID on files copied from the datagrid to the managed nodes by using the “default” NFS configuration on most systems.
To address this problem, disable root squashing in NFS so that the agent has the necessary privileges to change the owner of the files it copies.
For example, on a RHEL NFS server or on a SLES NFS server, the NFS configuration is set in /etc/exports. The following configuration is needed to disable root squashing:
/auto/home *(rw,sync,no_root_squash)
In this example, /auto/home is the NFS mounted directory to be shared.
NOTE: The GID is not set for files copied from the datagrid to an NFS mounted volume, whether root squashing is disabled or not. This is a limitation of NFS.
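As a sketch (assuming you edit /etc/exports directly, using the example directory above, and re-export afterward with exportfs -ra or an NFS server restart), the change can be scripted so it is applied only once:

```shell
# Sketch: append a no_root_squash export entry for a shared directory
# if it is not already present in the exports file.
add_no_root_squash() {
  exports_file="$1"   # e.g. /etc/exports
  shared_dir="$2"     # e.g. /auto/home
  if ! grep -q "^$shared_dir " "$exports_file" 2>/dev/null; then
    echo "$shared_dir *(rw,sync,no_root_squash)" >> "$exports_file"
  fi
}
```

Typical usage on the NFS server would be add_no_root_squash /etc/exports /auto/home, followed by exportfs -ra to apply the new options.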
The following information is included in this section:
The uninstall feature in YaST and YaST2 is not supported in this release of ZENworks Orchestrator.
If you create a new VM by using the VM Builder GUI and you specify an ISO as the install source with a 1 GB OS disk, the disk partitioner in the YaST installation reports two disks. By default, the YaST disk partitioner tries to create swap and root on the second disk. This fails because the second disk is actually the ISO install source.
To work around this issue, ignore the second disk in the partitioner and create swap and root on the 1 GB backed disk.
The following information is included in this section:
If you stop the ZENworks Orchestrator Server by using the standard command (/etc/init.d/novell-zosserver stop) prior to an upgrade, the preinstallation script detects that no snapshot was taken of the server, so it restarts the server and then stops it again to take a snapshot before upgrading the server package. If the grid has a large number of objects, the rug command hangs during the upgrade process (that is, the rug command described in Upgrading ZENworks Orchestrator Server Packages Using the rug Command in the Novell ZENworks Orchestrator 1.3 Upgrade Guide).
To ensure a successful upgrade, we recommend that you either keep the Orchestrator Server running during the upgrade or stop it by using the --snapshot flag (for example, /etc/init.d/novell-zosserver stop --snapshot) before the upgrade.
During an upgrade from Orchestrator 1.2 to Orchestrator 1.3, you might see the following error in the agent logs if you upgrade to version 1.3:
05.01 03:28:02 : ERROR : Agent software version mismatch.
05.01 03:28:02 : ERROR : Current agent version: 1.2.0
05.01 03:28:02 : ERROR : Server expecting version: 1.1.0
After a server upgrade to 1.3, both new and existing 1.2 agents incorrectly report that the server expects version 1.1.0. If you see the Server expecting version: 1.1.0 message in the agent.log file, ignore it and upgrade the agents to version 1.3.0.
If you upgrade a ZENworks Orchestrator 1.2 environment having a large number of VMBuilderHosts to a ZENworks Orchestrator 1.3 environment, the new 1.3 VMBuilderHosts Group might not contain as many VMBuilderHosts objects as the 1.2 group did. This issue occurs because of a timeout problem.
To work around the issue, run the vmBuilderDiscovery schedule in the Job Scheduler of the ZENworks Orchestrator Console. The schedule should discover all of the other devices and display them in the console. Another alternative is to wait for the schedule to run automatically at its next scheduled run time (for example, at resource start). A third alternative is to manually add the missing objects as members in the VMBuilderHosts Group in the ZENworks Orchestrator Console.
The vmHostVncConfig job is a built-in component that ships with Orchestrator 1.2 and Orchestrator 1.3. During a server upgrade from 1.2 to 1.3, the built-in 1.3 vmHostVncConfig schedule (disabled by default) overrides the snapshotted version of the 1.2 schedule (enabled by default).
If you want to redeploy or re-merge this job, you need to open the Job Scheduler in the ZENworks Orchestrator Console and enable it. For more information, see Creating or Modifying a Job Schedule in the Novell ZENworks Orchestrator 1.3 Administration Guide.
The following information is included in this section:
Although zosadmin commands exist to start and stop the ZENworks Orchestrator Server, you should not use them to start or stop the ZENworks Orchestrator Server.
Using the zosadmin --start command to start the server does not work because the command fails to find the server instance in the expected location.
We recommend that you start the server as described in Stopping and Starting the ZENworks Orchestrator Server in the Novell ZENworks Orchestrator 1.3 Installation and Getting Started Guide.
The following information is included in this section:
Use of the ZENworks Orchestrator Console in a firewall environment (NAT, in particular) is not supported for this release. The console uses RMI to communicate with the server, and RMI connects back to the initiator on dynamically chosen port numbers. To use the console in a firewall environment, you need to use a remote desktop or VPN product.
This section explains the issues that might occur when users use the Virtual Machine Management capabilities of ZENworks Orchestrator. The following topics are included:
VMWare* Virtual Center 2.0 introduced a new object grouping for clustering that is not supported by the VMWare API adapter currently shipping with ZENworks Orchestrator 1.1. The additional group might cause VM and host object mismatches after the Orchestrator system discovers VM images in VMWare Virtual Center 2.0 and when you try to provision a VM in the cluster grouping.
To work around the issue, manually create a resource group in ZENworks Orchestrator to match the grouping that exists in Virtual Center 2.0. After you create the matching group, add the discovered resource to the new group, and also associate the new group with the VM host(s) that are to be used for provisioning the resource.
If you prepare virtual machines that use LVM as their volume manager on a Red Hat server, the default volume name is the same for each VM, so you cannot prepare a VM with LVM on a host that is also using LVM.
By default, SLES network configuration is set up with FORCE_PERSISTENT_NAMES=yes in /etc/sysconfig/network/config. This results in network device configurations being bound statically to specific MAC addresses.
MAC addresses in VMs are dynamic. In particular, if you clone a VM, you must change the MAC address so that the new VM is unique on the local network segment. If you have a SLES VM configured for DHCP with the default network configuration options, the clone does not appear on the network because its virtual NIC has a different MAC than the hard-coded configuration inside the VM image. The NIC looks like a brand new interface that isn't configured.
You can work around this issue by setting FORCE_PERSISTENT_NAMES=no in /etc/sysconfig/network/config. This setting causes the networking configuration to revert to the traditional mode of assigning eth0 to the first NIC detected by the kernel, eth1 to the second, and so on. This is the preferred mode for VMs because a VM MAC address does not remain static. In addition, a VM is likely to have only one or two virtual NICs, so eth0, eth1, etc. always refer to the same virtual NIC.
The following information is included in this section:
VM names starting with “xen” cause an incorrect disk image path.
Workaround: None.
If you run a VM Builder job from the Eclipse GUI and ZENworks Orchestrator cannot find a resource that meets the job constraints, the Eclipse GUI erroneously displays an Installing status while the job is pending.
When you install a fully virtualized SLES VM by using an ISO, the installation menu times out before a VNC connection to the box can occur to allow an appropriate selection. This premature time-out causes the VM to attempt to boot from the hard drive instead of the ISO image.
To work around this problem, use the mini ISOs created by Novell for various SLES platforms. These ISOs do not time out. When you use these ISOs, you can boot by using the ISO and pointing to a network installation location. This method is also important for installing OS images that use multiple installation CDs, because Orchestrator does not yet have the capacity to alert for a change of CDs during the install.
The mini ISOs are available for download at the Novell downloads Web site.
When you install a paravirtualized SLES VM by using an ISO, the YaST disk partitioner tries to create swap and root on xvdb. This fails because xvdb is actually the ISO install source.
To work around this problem, ignore xvdb in the partitioner and create swap and root on another storage disk.
Workaround: Modify the VM Builder configuration file to “de-configure” that builder node:
In a text editor, open:
/etc/opt/novell/zenworks/vmbuilder/vmb.conf
Locate the following entry:
configured = yes
Change it to read:
configured = no
Save the configuration file.
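The steps above can be sketched as a single scripted edit (assuming the entry appears exactly as "configured = yes" in vmb.conf, as shown above):

```shell
# Sketch: "de-configure" a VM Builder node by flipping the configured flag
# in vmb.conf (normally /etc/opt/novell/zenworks/vmbuilder/vmb.conf).
deconfigure_vm_builder() {
  conf_file="$1"
  sed -i 's/^configured = yes$/configured = no/' "$conf_file"
}
```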
The following information is included in this section:
If a VM resides in the warehouse, you cannot save any configuration changes to that VM from the Orchestrator Console. To save configuration changes, first provision the VM, make the changes in the console, and then check the VM back into the warehouse; the changes are saved when the VM is imported or checked back in.
If you disconnect from the Orchestrator Server during a first-time VM check-in, the network settings are lost.
To work around the issue, either do not disconnect the UI before a first-time check-in is complete, or, after you check in the VM for the first time, check it out again and add the network settings.
The following information is included in this section:
If you try to configure the network settings of a virtual machine through the VCenter provisioning adapter, the settings might not be applied on virtual machines managed by VCenter 2.5 and VCenter 2.0.1.
VM templates cannot be moved across VM hosts through the VCenter provisioning adapter; VCenter 2.x does not allow moving VM templates across VM hosts.
The VCenter provisioning adapter for VCenter 1.x does not work properly if the vcenter_client1x policy is not configured with the correct path of JAVA (JRE) 1.4.2. Even though the job log does not report any error, the following message is logged in the ZENworks Orchestrator Server log file (server.log):
<date and time>: Broker,STATUS: assertion: workflowDone() isProcessingComplete==true jobid=zosSystem.vcenter1x.8
Workaround: Edit the vcenter_client1x policy, which has been automatically associated with the vcenter host, to correctly set the JAVA (JRE) 1.4.2 path for the vcenter PA job in the <fact> tag as follows:
<fact name="java1.4.2"
type="String"
value="location_of_the_JRE_1.4.2"
description="Location of Java VM 1.4.2"/>
If JRE 1.4.2 is installed with the ZENworks Orchestrator Agent, the default location of the JRE on Windows is c:\program files\novell\zos\agent\jre.
Workaround: Ensure that the URL, username, and password have been correctly configured in the vcenter1x policy.
VCenter PA for VCenter 2.x does not work properly if the vcenter_client2x policy is not configured with the correct path of JAVA (JRE) 1.5. Even though the job log does not report any error, the following message is logged in the ZENworks Orchestrator Server log file (server.log):
<date and time>: Broker,STATUS: assertion: workflowDone() isProcessingComplete==true jobid=zosSystem.vcenter2x.8
Workaround: Edit the vcenter_client2x policy, which has been automatically associated with the vcenter host, to correctly set the JAVA (JRE) 1.5 path for the vcenter PA job in the <fact> tag as follows:
<fact name="java1.5.0"
type="String"
value="location_of_the_JRE_1.5"
description="Location of Java VM 1.5.0"/>
If JRE 1.5 is installed with the ZENworks Orchestrator Agent, the default location of the JRE on Windows is c:\program files\novell\zos\agent\jre.
Workaround: Ensure that the URL, username, and password have been correctly configured in the vcenter2x policy.
When you try to shut down the VM host, the VMs running on the host are not automatically shut down. However, the VM host is moved to a state in which it does not accept any provisioning actions.
Workaround: You must manually shut down all the VMs running on the host.
The following information is included in this section:
If you try to suspend a Xen* VM running on a 64-bit host, the operation fails. This is a known bug with Xen tools and will be addressed in the next release of ZENworks Orchestrator.
The following issues and limitations have been identified when users are migrating a Xen VM:
An issue in the Xen virtual machine monitor causes it to incorrectly report success when migrating to a host whose shared storage is unmounted. The VM goes into a shutdown state and a success code of 0 is returned, even though the host does not receive the migration because the storage location is unmounted. Xen should return a different error code stating that the migration was unsuccessful.
The following limitations have been identified in a scenario where users are migrating a Xen VM:
The target machines and the source machines must have identical architecture (64-bit to 64-bit or 32-bit to 32-bit). This is automatically enforced with Constraints.
Both VM hosts (the source and the target) must have shared storage (SAN or iSCSI). This is automatically enforced with Constraints.
The operating system of the host where the VM is created must be the same OS as the host where the guest VM runs. For example, if you build a SLES VM on a RHEL machine, you can run that VM only on a RHEL machine.
The problem occurs because SLES and RHEL use different boot loaders for VM guests (SLES uses domU loader and RHEL uses pygrub). This creates a difference in the handling of the boot partition. This is not automatically enforced.
After you migrate a Xen VM with the Orchestrator Agent installed, the agent occasionally loses its connection with the Orchestrator Server. To work around the problem, restart the agent on the VM to reestablish the connection.
If a user deletes a newly created Xen Virtual Machine that has never been checked into the warehouse (using the vm-install-jobs --delete command), the disk image associated with that VM is not deleted.
The following issues and limitations have been identified when using the Xen provisioning adapter:
The Xen provisioning adapter marks ISO images as “non-moveable” disks. Because of this, ISO images are not moved with the VM on move operations. Later, when the provisioning adapter attempts to start the VM, the server throws an error:
Error: Disk image does not exist
Work around this issue by putting install ISOs on shared storage that can be seen by all hosts. You can also manually remove the install ISO reference from the config.xen file.
A VM host cannot provision a VM that has a different file system than the VM host. The currently supported file systems are ext2, ext3, reiserfs, jfs, xfs, vfat, and ntfs.
Workaround: Load the VM’s file system Linux module on the VM host, or add this support to the Linux kernel if a custom kernel is being used.
Typically, the latest Linux kernels autoload the appropriate module to do the work. However, if you see a line similar to the following in the job log:
[c121] RuntimeError: vmprep: Autoprep of /var/lib/xen/images/min-tmpl-1-2/disk0 failed with return code 1:
vmprep: autoprep: /var/adm/mount/vmprep.3f96f60206a2439386d1d80436262d5e: Failed to mount vm image "/var/lib/xen/images/min-tmpl-1-2/disk0": vmmount: No root device found
Job 'zosSystem.vmprep.76' terminated because of failure. Reason: Job failed
you must manually load the proper kernel module on the VM host to support the VM’s filesystem.
For example, if the VM host uses ext3 and the VM image uses reiserfs, load the proper kernel module onto the VM host to support the VM image's reiserfs file system. Then, on the VM host, run:
modprobe reiserfs
Next, provision the VM.
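As a sketch (assuming standard lsmod and modprobe behavior on the VM host), the check-then-load step can be made idempotent, using the reiserfs example above:

```shell
# Sketch: load the kernel module for the VM's file system on the VM host
# only if it is not already loaded.
ensure_fs_module() {
  fs_module="$1"   # e.g. reiserfs
  if lsmod | grep -q "^$fs_module "; then
    echo "$fs_module already loaded"
  else
    modprobe "$fs_module"
  fi
}
```

Running ensure_fs_module reiserfs on the VM host before provisioning avoids the vmprep mount failure shown above.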
In this documentation, a greater-than symbol (>) is used to separate actions required when navigating menus in a user interface.
A trademark symbol (®, TM, etc.) denotes a Novell trademark; an asterisk (*) denotes a third-party trademark.
Novell, Inc. makes no representations or warranties with respect to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc. reserves the right to revise this publication and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes.
Further, Novell, Inc. makes no representations or warranties with respect to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc. reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classification to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in the U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical biological weaponry end uses. Please refer to http://www.novell.com/info/exports/ for more information on exporting Novell software. Novell assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2008 Novell, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of the publisher.
Novell, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.novell.com/company/legal/patents and one or more additional patents or pending patent applications in the U.S. and in other countries.
For a list of Novell trademarks, see the Novell Trademark and Service Mark list at http://www.novell.com/company/legal/trademarks/tmlist.html.
All third-party products are the property of their respective owners.