The Novell Import Conversion Export utility lets you import data from LDIF or comma-delimited files into eDirectory, export data from eDirectory to LDIF or comma-delimited files, migrate data between LDAP servers, and perform schema compare and update operations.
The Novell Import Conversion Export utility manages a collection of handlers that read or write data in a variety of formats. Source handlers read data; destination handlers write data. A single executable module can be both a source and a destination handler. The engine receives data from a source handler, processes the data, then passes the data to a destination handler.
For example, if you want to import LDIF data into an LDAP directory, the Novell Import Conversion Export engine uses an LDIF source handler to read an LDIF file and an LDAP destination handler to send the data to the LDAP directory server. See Troubleshooting LDIF Files for more information on LDIF file syntax, structure, and debugging.
You can run the Novell Import Conversion Export client utility from the command line, from a snap-in to ConsoleOne®, or from the Import Convert Export Wizard in Novell iManager. The comma-delimited data handler, however, is available only in the command line utility and Novell iManager.
You can use the Novell Import Conversion Export utility from the Import Convert Export Wizard in Novell iManager or from the command line.
Both the wizard and the command line interface give you access to the Novell Import Conversion Export engine, but the command line interface gives you greater options for combining source and destination handlers.
The Novell Import Conversion Export utility replaces both the BULKLOAD and ZONEIMPORT utilities included with previous versions of NDS and eDirectory.
The Import Convert Export Wizard lets you import data from a file on disk, export data to a file on disk, or migrate data between LDAP servers.
For information on using and accessing Novell iManager, see the Novell iManager 2.0.x Administration Guide.
In Novell iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Import Data from File on Disk, then click Next.
Select the type of file you want to import.
Specify the name of the file containing the data you want to import, specify the appropriate options, then click Next.
The options on this page depend on the type of file you selected. Click Help for more information on the available options.
Specify the LDAP server where the data will be imported.
Add the appropriate options, as described in the following table:
Click Next, then click Finish.
In Novell iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Export Data to a File on Disk, then click Next.
Specify the LDAP server holding the entries you want to export.
Use the Advanced Settings to configure additional options for the LDAP source handler. Click Help for more information on the available options.
Add the appropriate options, as described in the following table:
Click Next.
Specify the search criteria (described below) for the entries you want to export.
Click Next.
Select the export file type.
The exported file is saved in a temporary location. You can download this file at the conclusion of the Wizard.
Click Next, then click Finish.
In Novell iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Migrate Data Between Servers, then click Next.
Specify the LDAP server holding the entries you want to migrate.
Use the Advanced Settings to configure additional options for the LDAP source handler. Click Help for more information on the available options.
Add the appropriate options, as described in the following table:
Click Next.
Specify the search criteria (described below) for the entries you want to migrate:
Click Next.
Specify the LDAP server where the data will be migrated.
Click Next, then click Finish.
NOTE: Ensure that the schema is consistent across LDAP Services.
To enter an LDAP port number with more than four digits, such as 10389, with ICE, change the following entry in the c:\Program Files\Novell\Tomcat\webapps\nps\portal\modules\ICEWiz\skins\default\devices\default\ICEWizProfile_inc.jsp file:
<input type=text name="<%= c.var("PROFILE.SERVER_PORT") %>" value="<%= c.var(c.var("PROFILE.SERVER_PORT")) %>" size=8 maxlength=4/>
to
<input type=text name="<%= c.var("PROFILE.SERVER_PORT") %>" value="<%= c.var(c.var("PROFILE.SERVER_PORT")) %>" size=8 maxlength=5/>
You can use the command line version of the Novell Import Conversion Export utility to perform LDIF imports and exports, comma-delimited data imports and exports, data migrations between LDAP servers, schema file imports, and DirLoad (LOAD) template imports.
The Novell Import Convert Export Wizard is installed as part of Novell iManager. Both a Win32* version (ice.exe) and a NetWare® version (ice.nlm) are included in the installation. On Linux, Solaris, AIX, and HP-UX systems, the Import/Export utility is included in the NOVLice package.
The Novell Import Conversion Export utility is launched with the following syntax:
ice general_options
-S[LDIF | LDAP | DELIM | LOAD | SCH] source_options
-D[LDIF | LDAP | DELIM] destination_options
or when using the schema cache:
ice -C schema_options
-S[LDIF | LDAP] source_options
-D[LDIF | LDAP] destination_options
When performing an update using the schema cache, an LDIF file is not a valid destination.
General options are optional and must come before any source or destination options. The -S (source) and -D (destination) handler sections can be placed in any order.
The available source handlers are LDIF, LDAP, DELIM, SCH, and LOAD; the available destination handlers are LDIF, LDAP, and DELIM. The options for each handler are described in the sections that follow.
General options affect the overall processing of the Novell Import Conversion Export engine.
Option | Description |
---|---|
-C | Specifies that you are using the schema cache to perform schema compare and update. |
-l log_file | Specifies a filename where output messages (including error messages) are logged. If this option is not used, error messages are sent to ice.log. On Linux, Solaris, AIX, or HP-UX systems, if you omit this option, error messages are not logged. |
-o | Overwrites an existing log file. If this flag is not set, messages are appended to the log file instead. |
-e LDIF_error_log_file | Specifies a filename where entries that fail are output in LDIF format. This file can be examined, modified to correct the errors, then reapplied to the directory. |
-p URL | Specifies the location of an XML placement rule to be used by the engine. Placement rules let you change the placement of an entry. See Conversion Rules for more information. |
-c URL | Specifies the location of an XML creation rule to be used by the engine. Creation rules let you supply missing information that might be needed to allow an entry to be created successfully on import. For more information, see Conversion Rules. |
-s URL | Specifies the location of an XML schema mapping rule to be used by the engine. Schema mapping rules let you map a schema element on a source server to a different but equivalent schema element on a destination server. For more information, see Conversion Rules. |
-b (NetWare only) | Specifies to not pause for input at the ICE console screen at the end of execution. |
-h or -? | Displays command line help. |
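For example, the following command (a sketch only; the file names, server, and credentials are placeholders) combines the -o, -l, and -e general options with an LDIF source and an LDAP destination, so that the log file is overwritten and any failed entries are captured in failed.ldf for correction and reapplication:

ice -o -l import.log -e failed.ldf -SLDIF -f entries.ldif -DLDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret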
The schema options let you use the schema cache to perform schema compare and update operations.
Option | Description |
---|---|
-C -a | Updates the destination schema (adds missing schema). |
-C -c filename | Outputs the destination schema to the specified file. |
-C -n | Disables schema pre-checking. |
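As a sketch of how the schema options combine with the source and destination handlers (the server names and credentials below are placeholders), the following command compares the schema held by two LDAP servers and adds any missing schema to the destination server:

ice -C -a -SLDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret -DLDAP -s server2.acme.com -p 389 -d cn=admin,c=us -w secret

Replacing -a with -c filename would instead write the destination server's schema to the specified file.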
The source handler option (-S) determines the source of the import data. Only one of the following can be specified on the command line.
Option | Description |
---|---|
-SLDIF | Specifies that the source is an LDIF file. For a list of supported LDIF options, see LDIF Source Handler Options. |
-SLDAP | Specifies that the source is an LDAP server. For a list of supported LDAP options, see LDAP Source Handler Options. |
-SDELIM | Specifies that the source is a comma-delimited data file. For a list of supported DELIM options, see DELIM Source Handler Options. |
-SSCH | Specifies that the source is a schema file. For a list of supported SCH options, see SCH Source Handler Options. |
-SLOAD | Specifies that the source is a DirLoad template. For a list of supported LOAD options, see LOAD Source Handler Options. |
The destination handler option (-D) specifies the destination of the export data. Only one of the following can be specified on the command line.
Option | Description |
---|---|
-DLDIF | Specifies that the destination is an LDIF file. For a list of supported options, see LDIF Destination Handler Options. |
-DLDAP | Specifies that the destination is an LDAP server. For a list of supported options, see LDAP Destination Handler Options. |
-DDELIM | Specifies that the destination is a comma-delimited file. For a list of supported options, see DELIM Destination Handler Options. |
The LDIF source handler reads data from an LDIF file, then sends it to the Novell Import Conversion Export engine.
The LDIF destination handler receives data from the Novell Import Conversion Export engine and writes it to an LDIF file.
The LDAP source handler reads data from an LDAP server by sending a search request to the server. It then sends the search entries it receives from the search operation to the Novell Import Conversion Export engine.
Option | Description |
---|---|
-s server_name | Specifies the DNS name or IP address of the LDAP server that the handler will send a search request to. The default is the local host. |
-p port | Specifies the integer port number of the LDAP server specified by server_name. The default is 389. For secure operations, the default port is 636. |
-d DN | Specifies the distinguished name of the entry to use when binding to the server (the bind DN). |
-w password | Specifies the password of the entry specified by DN. |
-W | Prompts for the password of the entry specified by DN. This option is applicable only for Linux, Solaris, AIX, and HP-UX. |
-F filter | Specifies an RFC 1558-compliant search filter. If you omit this option, the search filter defaults to objectclass=*. |
-n | Does not actually perform a search, but shows what search would be performed. |
-a attribute_list | Specifies a comma-separated list of attributes to retrieve as part of the search. In addition to attribute names, three other special values are accepted. If you omit this option, the attribute list defaults to the empty list. |
-o attribute_list | Specifies a comma-separated list of attributes to be omitted from the search results received from the LDAP server before they are sent to the engine. This option is useful in cases where you want to use a wildcard with the -a option to get all attributes of a class and then remove a few of them from the search results before passing the data on to the engine. For example, -a* -o telephoneNumber searches for all user-level attributes and filters the telephone number from the results. |
-R | Specifies to not automatically follow referrals. The default is to follow referrals with the name and password given with the -d and -w options. |
-e value | Specifies which debugging flags should be enabled in the LDAP client SDK. For more information, see Using LDAP SDK Debugging Flags. |
-b base_DN | Specifies the base distinguished name for the search request. If this option is omitted, the base DN defaults to " " (empty string). |
-c search_scope | Specifies the scope of the search request. Valid values are Base, One, and Sub. If you omit this option, the search scope defaults to Sub. |
-r deref_aliases | Specifies the way aliases should be dereferenced during the search operation. Valid values are Never, Search, Find, and Always. If you omit this option, the alias dereferencing behavior defaults to Never. |
-l time_limit | Specifies a time limit (in seconds) for the search. |
-z size_limit | Specifies the maximum number of entries to be returned by the search. |
-V version | Specifies the LDAP protocol version to be used for the connection. It must be 2 or 3. If this option is omitted, the default is 3. |
-v | Enables verbose mode of the handler. |
-L filename | Specifies a file in DER format containing a server key used for SSL authentication. |
-A | Retrieves attribute names only. Attribute values are not returned by the search operation. |
-t | Prevents the LDAP handler from stopping on errors. |
-m | Treats the LDAP operations as modify operations. |
-x | Treats the LDAP operations as delete operations. |
-k | Uses SSL to connect. |
-M | Enables the Manage DSA IT control. |
-MM | Enables the Manage DSA IT control, and makes it critical. |
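The following command illustrates how several of these options work together (the tree and server names are placeholders, not values from this guide): it performs a one-level search under ou=sales,o=acme for inetOrgPerson entries, retrieves only the cn and mail attributes, and writes the results to an LDIF file.

ice -SLDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret -b ou=sales,o=acme -c One -F objectClass=inetOrgPerson -a cn,mail -DLDIF -f sales.ldif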
The LDAP destination handler receives data from the Novell Import Conversion Export engine and sends it to an LDAP server in the form of update operations to be performed by the server.
For information about hashed password in an LDIF file, see Hashed Password Representation in LDIF Files.
Option | Description |
---|---|
-s server_name | Specifies the DNS name or IP address of the LDAP server that the handler will send update requests to. The default is the local host. |
-p port | Specifies the integer port number of the LDAP server specified by server_name. The default is 389. For secure operations, the default port is 636. |
-d DN | Specifies the distinguished name of the entry to use when binding to the server (the bind DN). |
-w password | Specifies the password of the entry specified by DN. |
-W | Prompts for the password of the entry specified by DN. This option is applicable only for Linux, Solaris, AIX, and HP-UX. |
-B | Use this option if you do not want to use asynchronous LDAP Bulk Update/Replication Protocol (LBURP) requests for transferring update operations to the server. Instead, use standard synchronous LDAP update operation requests. For more information, see LDAP Bulk Update/Replication Protocol. |
-F | Allows the creation of forward references. When an entry is going to be created before its parent exists, a placeholder called a forward reference is created for the entry's parent to allow the entry to be successfully created. If a later operation creates the parent, the forward reference is changed into a normal entry. |
-l | Stores password values using the simple password method of the Novell Modular Authentication Service (NMAS™). Passwords are kept in a secure location in the directory, but key pairs are not generated until they are actually needed for authentication between servers. |
-e value | Specifies which debugging flags should be enabled in the LDAP client SDK. For more information, see Using LDAP SDK Debugging Flags. |
-V version | Specifies the LDAP protocol version to be used for the connection. It must be 2 or 3. If this option is omitted, the default is 3. |
-L filename | Specifies a file in DER format containing a server key used for SSL authentication. |
-k | Uses SSL to connect. |
-M | Enables the Manage DSA IT control. |
-MM | Enables the Manage DSA IT control, and makes it critical. |
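For example, the following sketch (the server name, key file, and credentials are placeholders) imports an LDIF file over SSL and allows forward references so that child entries appearing in the file before their parents can still be created:

ice -SLDIF -f entries.ldif -DLDAP -s server1.acme.com -p 636 -d cn=admin,c=us -w secret -k -L server1key.der -F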
The DELIM source handler reads data from a comma-delimited data file, then sends it to the destination handler.
Option | Description |
---|---|
-f filename | Specifies a filename containing comma-delimited records read by the DELIM source handler and sent to the destination handler. |
-F value | Specifies a filename containing the attribute data order for the file specified by -f. If this option is not specified, you must enter this information directly using -t. See Performing a Comma-Delimited Import for more information. |
-t value | Comma-delimited list of attributes specifying the attribute data order for the file specified by -f. Either this option or -F must be specified. See Performing a Comma-Delimited Import for more information. |
-c | Prevents the DELIM source handler from stopping on errors. This includes errors on parsing comma-delimited data files and errors sent back from the destination handler. When this option is set and an error occurs, the DELIM source handler reports the error, finds the next record in the comma-delimited data file, then continues. |
-n value | Specifies the LDAP naming attribute for the new object. This attribute must be contained in the attribute data you specify using -F or -t. |
-l value | Specifies the path to append the RDN to (such as o=myCompany). If you are passing the DN, this value is not necessary. |
-o value | Comma-delimited list of object classes (if none is contained in your input file) or additional object classes such as auxiliary classes. The default value is inetorgperson. |
-i value | Comma-delimited list of columns to skip. This value is an integer specifying the number of the column to skip. For example, to skip the third and fifth columns, specify -i3,5. |
-d value | Specifies the delimiter. The default delimiter is a comma ( , ). The following values are special-case delimiters: [t] = tab, [q] = quote (a single " as the delimiter). For example, to specify a tab as the delimiter, you would pass -d[t]. |
-q value | Specifies the secondary delimiter. The default secondary delimiter is a single quote (' '). The same special-case values apply: [t] = tab, [q] = quote (a single " as the delimiter). For example, to specify a tab as the secondary delimiter, you would pass -q[t]. |
-v | Runs in verbose mode. |
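As a sketch that ties several of these options together (the file name, attribute list, and container are placeholders), the following command imports a tab-delimited file whose columns are cn, sn, and title, uses cn as the naming attribute, and places the new entries under o=acme:

ice -SDELIM -f /tmp/users.txt -d[t] -t cn,sn,title -ncn -lo=acme -DLDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret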
The DELIM destination handler receives data from the source handler and writes it to a comma-delimited data file.
The SCH handler reads data from a legacy NDS or eDirectory schema file (files with a *.sch extension), then sends it to the Novell Import Conversion Export engine. You can use this handler to implement schema-related operations on an LDAP Server, such as extensions using a *.sch file as input.
The SCH handler is a source handler only. You can use it to import *.sch files into an LDAP server, but you cannot export *.sch files.
The options supported by the SCH handler are shown in the following table.
Option | Description |
---|---|
-f filename | Specifies the full path name of the *.sch file. |
-c | (Optional) Prevents the SCH handler from stopping on errors. |
-v | (Optional) Runs in verbose mode. |
The DirLoad handler generates eDirectory information from commands in a template. This template file is specified with the -f argument and contains the attribute specification information and the program control information.
Attribute Specifications determines the context of new objects.
See the following sample attribute specification file:
givenname: $R(first.txt)
initials: $R(initial.txt)
sn: $R(last.txt)
dn:cn=$A(givenname,%.1s)$A(initials,%.1s)$A(sn),ou=dev,ou=ds,o=novell
objectclass: inetorgperson
telephonenumber: 1-800-$N(1-999,%03d)-$C(%04d)
title: $R(titles.txt)
locality: Our location
The format of the attribute specification file resembles an LDIF file, but allows some powerful constructs to be used to specify additional details and relationships between the attributes.
Unique Numeric Value inserts a numeric value that is unique for a given object into an attribute value.
Syntax: $C[(<format>)]
The optional <format> specifies a print format that is to be applied to the value. Note that if no format is specified, the parentheses must be omitted as well:
$C
$C(%d)
$C(%04d)
The plain $C inserts the current numeric value into an attribute value. This is the same as $C(%d) because "%d" is the default format that the program uses if none was specified. The numeric value is incremented after each object, so if you use $C multiple times in the attribute specification, the value is the same within a single object. The starting value can be specified in the settings file by using the !COUNTER=value syntax.
Random Numeric Value inserts a random numeric value into an attribute value using the following syntax:
$N(<low>-<high>[,<format>])
<low> and <high> specify the lower and upper bounds used when a random number is generated. The optional <format> specifies a print format that is to be applied to the value.
$N(1-999)
$N(1-999,%d)
$N(1-999,%03d)
Random String Value From a List inserts a randomly selected string from a specified list into an attribute value using the following syntax:
$R(<filename>[,<format>])
The <filename> specifies a file that contains a list of values. This can be an absolute or relative path to a file. Several files containing the lists are included with this package. The values are expected to be separated by a newline character.
The optional <format> specifies a print format that is to be applied to a value from the list.
Value From Another Attribute inserts the value of a previously defined attribute into the current attribute value using the syntax $A(<attr-name>[,<format>]), where the optional <format> specifies a print format to be applied to the value:
$A(givenname)
$A(givenname,%s)
$A(givenname,%.1s)
It is important to note that no forward references are allowed. Any attribute whose value you are going to use must precede the current attribute in the attribute specification file. In the example below, the cn as part of the dn is constructed from givenname, initials, and sn; therefore, these attributes must precede the dn in the settings file.
givenname: $R(first.txt)
initials: $R(initial.txt)
sn: $R(last.txt)
dn:o=novell,ou=dev,ou=ds,cn=$A(givenname,%.1s)$A(initials,%.1s)$A(sn)
The dn receives special handling in the LDIF file: no matter what the location of dn is in the settings, it will be written first (as per LDIF syntax) to the LDIF file. All other attributes are written in the order they appear.
Control Settings provide some additional controls for the object creation. All controls have an exclamation point (!) as the first character on the line to separate them from attribute settings. The controls can be placed anywhere in the file.
!COUNTER=300
!OBJECTCOUNT=2
!CYCLE=title
!UNICYCLE=first,last
!CYCLE=ou,BLOCK=10
COUNTER provides the starting value for the unique counter value. The counter value is inserted into any attribute that uses the $C syntax.
OBJECTCOUNT determines how many objects are created from the template.
CYCLE can be used to modify the behavior of pulling random values from the files ($R syntax). This setting has three different forms.
!CYCLE=title
Anytime the list named "title" is used, the next value from the list is pulled rather than randomly selecting a value. After all values have been consumed in order, the list starts from the beginning again.
!CYCLE=ou,BLOCK=10
Each value from list "ou" is to be used 10 times before moving to the next value.
The most interesting variant of the CYCLE control setting is UNICYCLE. It specifies a list of sources that are cycled through in left-to-right order, allowing you to create guaranteed unique values if desired. If this control is used, the OBJECTCOUNT control is used only to limit the number of objects to the maximum number of unique objects that can be created from the lists. In other words, if the lists that are part of UNICYCLE can produce 15000 objects, then OBJECTCOUNT can be used to reduce that number, but not to increase it.
For example, assume that the givenname file contains two values (Doug and Karl) and the sn file contains three values (Hoffman, Schultz, and Grieger). With the control setting !UNICYCLE=givenname,sn and the attribute definition cn: $R(givenname) $R(sn), the following cns are created:
cn: Doug Hoffman
cn: Karl Hoffman
cn: Doug Schultz
cn: Karl Schultz
cn: Doug Grieger
cn: Karl Grieger
Listed below are sample commands that can be used with the Novell Import Conversion Export command line utility to perform LDIF imports, LDIF exports, comma-delimited imports and exports, data migrations between LDAP servers, schema file imports, and LOAD file imports.
To perform an LDIF import, combine the LDIF source and LDAP destination handlers, for example:
ice -S LDIF -f entries.ldif -D LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret
This command line reads LDIF data from entries.ldif and sends it to the LDAP server server1.acme.com at port 389 using the identity cn=admin,c=us, and the password "secret."
To perform an LDIF export, combine the LDAP source and LDIF destination handlers. For example:
ice -S LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w password -F objectClass=* -c sub -D LDIF -f server1.ldif
This command line performs a subtree search for all objects in the server server1.acme.com at port 389 using the identity cn=admin,c=us and the password "password" and outputs the data in LDIF format to server1.ldif.
To perform a comma-delimited import, use a command similar to the following:
ice -S DELIM -f/tmp/in.csv -F /tmp/order.csv -ncn -lo=acme -D LDAP -s server1.acme.com -p389 -d cn=admin,c=us -w secret
This command reads comma-delimited values from the /tmp/in.csv file and reads the attribute order from the /tmp/order.csv file. For each attribute entry in in.csv, the attribute type is specified in order.csv. For example, if in.csv contains
pat,pat,engineer,john
then order.csv would contain
dn,cn,title,sn
The information in order.csv could be input directly using the -t option.
The data is then sent to the LDAP server server1.acme.com at port 389 using the identity cn=admin,c=us, and password "secret".
This example uses the -n option to specify that cn should become the naming attribute used to form the new DN, and the -l option to add the object to the acme organization container.
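For example, using -t instead of -F, the same attribute order could be supplied directly on the command line (a sketch of the equivalent command):

ice -S DELIM -f /tmp/in.csv -t dn,cn,title,sn -ncn -lo=acme -D LDAP -s server1.acme.com -p389 -d cn=admin,c=us -w secret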
To perform a comma-delimited export, use a command similar to the following:
ice -S LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w password -F objectClass=* -c sub -D DELIM -f /tmp/server1.csv -F order.csv
This command line performs a subtree search for all objects in the server server1.acme.com at port 389 using the identity cn=admin,c=us and the password "password" and outputs the data in comma-delimited format to the /tmp/server1.csv file.
To perform a data migration between LDAP servers, combine the LDAP source and LDAP destination handlers. For example:
ice -S LDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w password -F objectClass=* -c sub -D LDAP -s server2.acme.com -p 389 -d cn=admin,c=us -w secret
This particular command line performs a subtree search for all objects in the server server1.acme.com at port 389 using the identity cn=admin,c=us and the password "password" and sends it to the LDAP server server2.acme.com at port 389 using the identity cn=admin,c=us and the password "secret."
To perform a schema file import, use a command similar to the following:
ice -S SCH -f $HOME/myfile.sch -D LDAP -s myserver -d cn=admin,o=novell -w passwd
This command line reads schema data from myfile.sch and sends it to the LDAP server myserver using the identity cn=admin,o=novell and the password "passwd."
To perform a LOAD file import, use a command similar to the following:
ice -S LOAD -f attrs -D LDIF -f new.ldf
In this example, the contents of the attribute file attrs is as follows:
#=====================================================================
# DirLoad 1.00
#=====================================================================
!COUNTER=300
!OBJECTCOUNT=2
#-----------------------------------------------------------------------
# ATTRIBUTE TEMPLATE
# --------------------------------------------------------------------
objectclass: inetorgperson
givenname: $R(first.txt)
initials: $R(initial.txt)
sn: $R(last.txt)
dn: cn=$A(givenname,%.1s)$A(initials,%.1s)$A(sn),ou=$R(ou),ou=dev,o=novell
telephonenumber: 1-800-$N(1-999,%03d)-$C(%04d)
title: $R(titles)
Running the previous command from a command prompt produces the following LDIF file:
version: 1
dn: cn=JohnBBill,ou=ds,ou=dev,o=novell
changetype: add
objectclass: inetorgperson
givenname: John
initials: B
sn: Bill
telephonenumber: 1-800-290-0300
title: Amigo
dn: cn=BobJAmy,ou=ds,ou=dev,o=novell
changetype: add
objectclass: inetorgperson
givenname: Bob
initials: J
sn: Amy
telephonenumber: 1-800-486-0301
title: Pomo
Running the following command from a command prompt sends the data to an LDAP server via the LDAP Handler:
ice -S LOAD -f attrs -D LDAP -s www.novell.com -d cn=admin,o=novell -w admin
If the same template file is used with the following command line, all of the records that were added with the above command will be deleted.
ice -S LOAD -f attrs -r -D LDAP -s www.novell.com -d cn=admin,o=novell -w admin
If you want to use -m to modify, the following is an example of how to modify records:
# ======================================================================
# DirLoad 1.00
# ======================================================================
!COUNTER=300
!OBJECTCOUNT=2
#----------------------------------------------------------------------
# ATTRIBUTE TEMPLATE
# ----------------------------------------------------------------------
dn: cn=$R(first,%.1s)$R(initial,%.1s)$R(last),ou=$R(ou),ou=dev,o=novell
delete: givenname
add: givenname
givenname: test1
replace: givenname
givenname: test2
givenname: test3
If the following command line is used where the attrs file contains the data above:
ice -S LOAD -f attrs -m -D LDIF -f new.ldf
then the results would be the following LDIF data:
version: 1
dn: cn=BillTSmith,ou=ds,ou=dev,o=novell
changetype: modify
delete: givenname
-
add: givenname
givenname: test1
-
replace: givenname
givenname: test2
givenname: test3
-
dn: cn=JohnAWilliams,ou=ldap,ou=dev,o=novell
changetype: modify
delete: givenname
-
add: givenname
givenname: test1
-
replace: givenname
givenname: test2
givenname: test3
-
The Novell Import Conversion Export engine lets you specify a set of rules that describe processing actions to be taken on each record received from the source handler and before the record is sent on to the destination handler. These rules are specified in XML (either in the form of an XML file or XML data stored in the directory) and help you resolve differences in schema, entry placement, and missing required attributes when importing entries from one LDAP directory to another.
There are three types of conversion rules:
Rule | Description |
---|---|
Placement | Changes the placement of an entry. For example, if you are importing a group of users in the l=San Francisco, c=US container but you want them to be in the l=Los Angeles, c=US container when the import is complete, you could use a placement rule to do this. For information on the format of these rules, see Placement Rules. |
Creation | Supplies missing information that might be needed to allow an entry to be created successfully on import. For example, assume that you have exported LDIF data from a server whose schema requires only the cn (commonName) attribute for user entries, but the server that you are importing the LDIF data to requires both the cn and sn (surname) attributes. You could use the creation rule to supply a default sn value (such as " ") for each entry as it is processed by the engine. When the entry is sent to the destination server, it will have the required sn attribute and can be added successfully. For information on the format of these rules, see Create Rules. |
Schema Mapping | If, when you are transferring data between servers (either directly or using LDIF), there are schema differences in the servers, you can use schema mapping to map a schema element on the source server to a different but equivalent schema element on the destination server. For information on the format of these rules, see Schema Mapping Rules. |
You can enable conversion rules in both the Novell eDirectory Import/Export Wizard and the command line interface. For more information on XML rules, see Using XML Rules.
In Novell iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Select the task you want to perform.
Under Advanced Settings, specify the conversion rules (schema mapping, placement, or create rules) that you want the engine to apply.
Click Next.
Follow the online instructions to finish your selected task.
You can enable conversion rules with the -p, -c, and -s general options on the Novell Import Conversion Export executable. For more information, see General Options.
For all three options, URL must be one of the following:
file://[path/]filename
The file must be on the local file system.
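For example, to apply a schema mapping rule stored in sr1.xml in the current directory (a sketch; the rule file, LDIF file, server, and credentials are placeholders), you would pass:

ice -sfile://sr1.xml -SLDIF -f entries.ldif -DLDAP -s server1.acme.com -d cn=admin,c=us -w secret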
The Novell Import Conversion Export conversion rules use the same XML format as DirXML®. For more information on DirXML, see the DirXML Administration Guide.
The <attr-name-map> element is the top-level element for the schema mapping rules. Mapping rules determine how the import schema interacts with the export schema. They associate specified import class definitions and attributes with corresponding definitions in the export schema.
Mapping rules can be set up for attribute names or class names.
The following is the formal DTD definition of schema mapping rules:
<!ELEMENT attr-name-map (attr-name | class-name)*>
<!ELEMENT attr-name (nds-name, app-name)>
<!ATTLIST attr-name
class-name CDATA #IMPLIED>
<!ELEMENT class-name (nds-name, app-name)>
<!ELEMENT nds-name (#PCDATA)>
<!ELEMENT app-name (#PCDATA)>
You can have multiple mapping elements in the file. Each element is processed in the order that it appears in the file. If you map the same class or attribute more than once, the first mapping takes precedence.
The following examples illustrate how to create a schema mapping rule.
Schema Rule 1: The following rule maps the source's surname attribute to the destination's sn attribute for the inetOrgPerson class.
<attr-name-map>
<attr-name class-name="inetOrgPerson">
<nds-name>surname</nds-name>
<app-name>sn</app-name>
</attr-name>
</attr-name-map>
Schema Rule 2: The following rule maps the source's inetOrgPerson class definition to the destination's User class definition.
<attr-name-map>
<class-name>
<nds-name>inetOrgPerson</nds-name>
<app-name>User</app-name>
</class-name>
</attr-name-map>
Schema Rule 3: The following example contains two rules. The first rule maps the source's Surname attribute to the destination's sn attribute for all classes that use these attributes. The second rule maps the source's inetOrgPerson class definition to the destination's User class definition.
<attr-name-map>
<attr-name>
<nds-name>surname</nds-name>
<app-name>sn</app-name>
</attr-name>
<class-name>
<nds-name>inetOrgPerson</nds-name>
<app-name>User</app-name>
</class-name>
</attr-name-map>
Example Command: If the schema rules are saved to an sr1.xml file, the following command instructs the utility to use the rules while processing the 1entry.ldf file and to send the results to a destination file, outt1.ldf.
ice -o -sfile://sr1.xml -SLDIF -f1entry.ldf -c -DLDIF -foutt1.ldf
Create rules specify the conditions for creating a new entry in the destination directory. They support the following elements:
Required Attributes specifies that an add record must have values for all of the required attributes, or else the add fails. The rule can supply a default value for a required attribute. If a record does not have a value for the attribute, the entry is given the default value. If the record has a value, the record value is used.
Matching Attributes specifies that an add record must have the specific attributes and match the specified values, or else the add fails.
Templates specifies the distinguished name of a Template object in eDirectory. The Novell Import Conversion Export utility does not currently support specifying templates in create rules.
The following is the formal DTD definition for create rules:
<!ELEMENT create-rules (create-rule)*>
<!ELEMENT create-rule (match-attr*,
required-attr*,
template?) >
<!ATTLIST create-rule
class-name CDATA #IMPLIED
description CDATA #IMPLIED>
<!ELEMENT match-attr (value)+ >
<!ATTLIST match-attr
attr-name CDATA #REQUIRED>
<!ELEMENT required-attr (value)*>
<!ATTLIST required-attr
attr-name CDATA #REQUIRED>
<!ELEMENT template EMPTY>
<!ATTLIST template
template-dn CDATA #REQUIRED>
You can have multiple create rule elements in the file. Each rule is processed in the order that it appears in the file. If a record does not match any of the rules, that record is skipped and the skipping does not generate an error.
The following examples illustrate how to format create rules.
Create Rule 1: The following rule places three conditions on add records that belong to the inetOrgPerson class. These records must have givenName and Surname attributes. They should have an L attribute, but if they don't, the create rule supplies a default value of Provo for them.
<create-rules>
<create-rule class-name="inetOrgPerson">
<required-attr attr-name="givenName"/>
<required-attr attr-name="surname"/>
<required-attr attr-name="L">
<value>Provo</value>
</required-attr>
</create-rule>
</create-rules>
Create Rule 2: The following create rule places three conditions on all add records, regardless of their base class:
<create-rules>
<create-rule>
<required-attr attr-name="givenName"/>
<required-attr attr-name="Surname"/>
<required-attr attr-name="L">
<value>Provo</value>
</required-attr>
</create-rule>
</create-rules>
Create Rule 3: The following create rule places two conditions on all records, regardless of base class:
<create-rules>
<create-rule>
<match-attr attr-name="uid">
<value>cn=ratuid</value>
</match-attr>
<required-attr attr-name="L">
<value>Provo</value>
</required-attr>
</create-rule>
</create-rules>
Example Command: If the create rules are saved to a cr1.xml file, the following command instructs the utility to use the rules while processing the 1entry.ldf file and to send the results to a destination file, outt1.ldf.
ice -o -cfile://cr1.xml -SLDIF -f1entry.ldf -c -DLDIF -foutt1.ldf
Placement rules determine where an entry is created in the destination directory. They support the following conditions for determining whether the rule should be used to place an entry:
Match Class: If the rule contains any match class elements, an objectClass specified in the record must match the class-name attribute in the rule. If the match fails, the placement rule is not used for that record.
Match Attribute: If the rule contains any match attribute elements, the record must contain an attribute value for each of the attributes specified in the match attribute element. If the match fails, the placement rule is not used for that record.
Match Path: If the rule contains any match path elements, a portion of the record's dn must match the prefix specified in the match path element. If the match fails, the placement rule is not used for that record.
The last element in the rule specifies where to place the entry. The placement rule can use zero or more of the following:
PCDATA uses parsed character data to specify the DN of a container for the entries.
Copy the Name specifies that the naming attribute of the old DN is used in the entry's new DN.
Copy the Attribute specifies the naming attribute to use in the entry's new DN. The specified naming attribute must be a valid naming attribute for the entry's base class.
Copy the Path specifies that the source DN should be used as the destination DN.
Copy the Path Suffix specifies that the source DN, or a portion of its path, should be used as the destination DN. If a match-path element is specified, only the part of the old DN that does not match the prefix attribute of the match-path element is used as part of the entry's DN.
The following is the formal DTD definition for the placement rule:
<!ELEMENT placement-rules (placement-rule*)>
<!ATTLIST placement-rules
src-dn-format (%dn-format;) "slash"
dest-dn-format (%dn-format;) "slash"
src-dn-delims CDATA #IMPLIED
dest-dn-delims CDATA #IMPLIED>
<!ELEMENT placement-rule (match-class*,
match-path*,
match-attr*,
placement)>
<!ATTLIST placement-rule
description CDATA #IMPLIED>
<!ELEMENT match-class EMPTY>
<!ATTLIST match-class
class-name CDATA #REQUIRED>
<!ELEMENT match-path EMPTY>
<!ATTLIST match-path
prefix CDATA #REQUIRED>
<!ELEMENT match-attr (value)+ >
<!ATTLIST match-attr
attr-name CDATA #REQUIRED>
<!ELEMENT placement (#PCDATA |
copy-name |
copy-attr |
copy-path |
copy-path-suffix)* >
You can have multiple placement-rule elements in the file. Each rule is processed in the order that it appears in the file. If a record does not match any of the rules, that record is skipped and the skipping does not generate an error.
The following examples illustrate how to format placement rules. The src-dn-format="ldap" and dest-dn-format="ldap" attributes set the rule so that the name space for the dn in the source and destination is LDAP format.
The Novell Import Conversion Export utility supports source and destination names only in LDAP format.
Placement Example 1: The following placement rule requires that the record have a base class of inetOrgPerson. If the record matches this condition, the entry is placed immediately subordinate to the test container and the left-most component of its source dn is used as part of its dn.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
<placement-rule>
<match-class class-name="inetOrgPerson"></match-class>
<placement>cn=<copy-name/>,o=test</placement>
</placement-rule>
</placement-rules>
With this rule, a record with a base class of inetOrgPerson and with the following dn:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
would have the following dn in the destination directory:
dn: cn=Kim Jones, o=test
Placement Example 2: The following placement rule requires that the record have an sn attribute. If the record matches this condition, the entry is placed immediately subordinate to the test container and the left-most component of its source dn is used as part of its dn.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
<placement-rule>
<match-attr attr-name="sn"></match-attr>
<placement>cn=<copy-name/>,o=test</placement>
</placement-rule>
</placement-rules>
With this rule, a record with the following dn and sn attribute:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
sn: Jones
would have the following dn in the destination directory:
dn: cn=Kim Jones, o=test
Placement Example 3: The following placement rule requires the record to have an sn attribute. If the record matches this condition, the entry is placed immediately subordinate to the test container and its sn attribute is used as part of its dn. The specified attribute in the copy-attr element must be a naming attribute of the entry's base class.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
<placement-rule>
<match-attr attr-name="sn"></match-attr>
<placement>cn=<copy-attr attr-name="sn"/>,o=test</placement>
</placement-rule>
</placement-rules>
With this rule, a record with the following dn and sn attribute:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
sn: Jones
would have the following dn in the destination directory:
dn: cn=Jones, o=test
Placement Example 4: The following placement rule requires the record to have an sn attribute. If the record matches this condition, the source dn is used as the destination dn.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
<placement-rule>
<match-attr attr-name="sn"></match-attr>
<placement><copy-path/></placement>
</placement-rule>
</placement-rules>
Placement Example 5: The following placement rule requires the record to have an sn attribute. If the record matches this condition, the entry's entire DN is copied to the test container.
<placement-rules src-dn-format="ldap" dest-dn-format="ldap">
<placement-rule>
<match-attr attr-name="sn"></match-attr>
<placement><copy-path-suffix/>,o=test</placement>
</placement-rule>
</placement-rules>
With this rule, a record with the following dn and sn attribute:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ
sn: Jones
would have the following dn in the destination directory:
dn: cn=Kim Jones, ou=English, ou=Humanities, o=UofZ, o=test
Placement Example 6: The following placement rule requires that a portion of the record's DN match the prefix o=engineering. If the record matches this condition, the part of the DN that does not match the prefix is copied to the neworg container.
<placement-rules>
<placement-rule>
<match-path prefix="o=engineering"/>
<placement><copy-path-suffix/>o=neworg</placement>
</placement-rule>
</placement-rules>
For example:
dn: cn=bob,o=engineering
becomes
dn: cn=bob,o=neworg
Example Command: If the placement rules are saved to a pr1.xml file, the following command instructs the utility to use the rules while processing the 1entry.ldf file and to send the results to a destination file, outt1.ldf.
ice -o -pfile://pr1.xml -SLDIF -f1entry.ldf -c -DLDIF -foutt1.ldf
The Novell Import Conversion Export utility uses the LDAP Bulk Update/Replication Protocol (LBURP) to send asynchronous requests to an LDAP server. This guarantees that the requests are processed in the order specified by the protocol and not in an arbitrary order influenced by multiprocessor interactions or the operating system's scheduler.
LBURP also lets the Novell Import Conversion Export utility send several update operations in a single request and receive the response for all of those update operations in a single response. This adds to the network efficiency of the protocol.
LBURP works as follows: after binding to the LDAP server and starting an LBURP session, the Novell Import Conversion Export utility sends its update operations to the server as a series of LBURP operation extended requests.
These requests can be sent asynchronously. Each request contains a sequence number identifying the order of this request relative to other requests sent by the client over the same connection. Each request also contains at least one LDAP update operation.
The LBURP protocol lets Novell Import Conversion Export present data to the server as fast as the network connection between the two will allow. If the network connection is fast enough, this lets the server stay busy processing update operations 100% of the time because it never has to wait for Novell Import Conversion Export to give it more work to do.
The LBURP processor in eDirectory also commits update operations to the database in groups to gain further efficiency in processing the update operations. LBURP can greatly improve the efficiency of your LDIF imports over a traditional synchronous approach.
LBURP is enabled by default, but you can choose to disable it during an LDIF import.
To enable or disable LBURP during an LDIF import:
In Novell iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Import Data From File on Disk, then click Next.
Select LDIF from the File Type drop-down list, then specify the name of the LDIF file containing the data you want to import.
Click Next.
Specify the LDAP server where the data will be imported and the type of login (anonymous or authenticated).
Under Advanced Settings, select Use LBURP.
Click Next, then follow the online instructions to complete the remainder of the LDIF Import Wizard.
IMPORTANT: Because LBURP is a relatively new protocol, eDirectory servers earlier than version 8.5 (and most non-eDirectory servers) do not support it. If you are using the Novell eDirectory Import/Export Wizard to import an LDIF file to one of these servers, you must disable the LBURP option for the LDIF import to work.
You can also enable or disable LBURP from the command line by using the -B option of the LDAP destination handler. For more information, see LDAP Destination Handler Options.
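For example, the following sketch (server and credentials are placeholders) adds -B to the LDAP destination handler options to disable LBURP and fall back to standard synchronous LDAP update requests:

ice -SLDIF -f entries.ldif -DLDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret -B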
Refer to NetWare Application Notes on the Novell Developer Portal for more information about migrating the schema between LDAP directories.
In cases where you have thousands or even millions of records in a single LDIF file you are importing, consider the following:
If it's possible to do so, select a destination server for your LDIF import that has read/write replicas containing all the entries represented in the LDIF file. This will maximize network efficiency.
Avoid having the destination server chain to other eDirectory servers for updates. This can severely reduce performance. However, if some of the entries to be updated are only on eDirectory servers that are not running LDAP, you might need to allow chaining to import the LDIF file.
For more information on replicas and partition management, see Managing Partitions and Replicas.
Novell Import Conversion Export maximizes network and eDirectory server processing efficiency by using LBURP to transfer data between the wizard and the server. Using LBURP during an LDIF import greatly improves the speed of your LDIF import.
For more information on LBURP, see LDAP Bulk Update/Replication Protocol.
The amount of database cache available for use by eDirectory has a direct bearing on the speed of LDIF imports, especially as the total number of entries on the server increases. When doing an LDIF import, you might want to allocate the maximum memory possible to eDirectory during the import. After the import is complete and the server is handling an average load, you can restore your previous memory settings. This is particularly important if the import is the only activity taking place on the eDirectory server.
For more information on configuring the eDirectory database cache, see Maintaining Novell eDirectory.
Novell eDirectory uses public and private key pairs for authentication. Generating these keys is a very CPU-intensive process. With eDirectory 8.7.3, you can choose to store passwords using the simple password feature of Novell Modular Authentication Service (NMAS™). When you do this, passwords are kept in a secure location in the directory, but key pairs are not generated until they are actually needed for authentication between servers. This greatly improves the speed for loading an object that has password information.
To enable simple passwords during an LDIF import:
In Novell iManager, click the Roles and Tasks button.
Click eDirectory Maintenance > Import Convert Export Wizard.
Click Import Data From File on Disk, then click Next.
Select LDIF from the File Type drop-down list, then enter the name of the LDIF file containing the data you want to import.
Click Next.
Specify the LDAP server where the data will be imported and the type of login (anonymous or authenticated).
Under Advanced Settings, select Store NMAS Simple Passwords/Hashed Passwords.
Click Next, then follow the online instructions to complete the remainder of the LDIF import wizard.
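From the command line, the -l option of the LDAP destination handler stores password values using the NMAS simple password method. For example (a sketch; the server and credentials are placeholders):

ice -SLDIF -f users.ldif -DLDAP -s server1.acme.com -p 389 -d cn=admin,c=us -w secret -l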
If you choose to store passwords using simple passwords, you must use an NMAS-aware Novell Client™ to log in to the eDirectory tree and access traditional file and print services. NMAS must also be installed on the server. LDAP applications binding with name and password will work seamlessly with the simple password feature.
For more information on NMAS, see the Novell Modular Authentication Service Administration Guide.
Having unnecessary indexes can slow down your LDIF import because each defined index requires additional processing for each entry having attribute values stored in that index. You should make sure that you don't have unnecessary indexes before you do an LDIF import, and you might want to consider creating some of your indexes after you have finished loading the data and have reviewed predicate statistics to see where indexes are really needed.
For more information on tuning indexes, see Index Manager.