Native Table Copy Task Template

An XDM native table copy task template is designed to copy data using a database-specific copy tool (e.g. Db2 Unload/Load, MSSQL Export, Oracle Data Pump).

When a native table copy task is executed, it examines the source and target environment by processing selection rules, exclude rules, mapping rules, and reduction rules. The structures of the source and target tables are compared in order to determine whether there are any conditions that would prevent a successful copy, and whether any special handling of the tables is required. If necessary, a native table copy task is able to re-create tables and dependent objects.

The source and target RDBMS of a native table copy task can be of different types. For example, a native table copy task is able to copy from DB2 for Linux to Oracle for Windows. In this case XDM uses a generic load format (CSV).

Permissions

Task Templates have specific permissions to manage user access. The table below displays the available permissions and their purposes.

For more details about the concept of XDM's permission management, refer to Permission Management.

Permission

Description

ADMINISTRATION

Specifies that the grantee can grant and revoke permissions to and from other users.

A user that creates an object automatically receives the ADMINISTRATION permission on that object.

DELETE

Specifies that the grantee can delete objects of the selected types.

DIAGNOSE

This permission controls access to diagnostic data of a task execution. The diagnostic data consists of the stages, their outputs and the batch reports.

EXECUTE

Specifies that the grantee may execute the object, or schedule the object for later execution. This permission only applies to objects that are executable, i.e. tasks, task templates, workflows, workflow templates, and data shops.

READ

Specifies that the grantee has read permission on the object. The grantee is able to see the object in lists and can see all of the object’s details, such as rules or access permissions.

In addition, the grantee can reference this object. For example, a user who has READ permission on a credential object can refer to this credential object when creating a new database connection.

WRITE

Specifies that the grantee has the permission to change the settings and attributes of an object. This also includes modifying any rule lists that might be associated with the object (for example, the selection rules of a task template).

Properties

The table below documents the available properties for task templates. The 'name' column displays the property name as it can be used in Groovy and Java scripts.
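
As an illustration, here is a minimal Groovy sketch of reading such properties in a script. The template variable and the way the object is exposed to the script context are assumptions made for this example, not documented API:

    // Hypothetical sketch: assumes the script context exposes the task template as 'template'
    // and that its properties are reachable under the names from the 'name' column below.
    if (template.deleteExecutionFilesOnSuccess) {
        println "Working files of '${template.displayName}' are removed after a successful run"
    }
    println "Configured log level: ${template.logLevel}"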

Name

Type

Default

Description

Check uniqueness after modification

checkUniquenessAfterModification

Boolean

true

This option specifies whether XDM checks uniqueness of modified unique key fields. When a modification method of modification type COLUMN modifies a column which is part of a unique key, XDM will check whether all output values of the modification are unique. This only checks the values processed by the task and does not involve other values that might exist in the target table.

Compare evaluator abort on empty table set

cmpevalAbortOnEmptyTableSet

Boolean

false

This flag controls whether the task process should abort if no tables are about to be copied.

Data transfer method

dataTransferMethod

Enum

N

Specifies the method that XDM uses to transport extracted data from the source database server to the target database server when copying full tables.

Local (L)

When the source and target database systems are located on the same database server, it is not necessary to transfer the extracted data. The data will be stored on the local server.

Network (N)

XDM stores data that is extracted from the source tables into the working directory that is specified for the source database server. In a separate step, the extracted data is copied to the target server using either FTP or SFTP (depending on the preferred method that is specified for the target database server). When the stage of the table copy that populates the target tables is running, it reads data from the working directory that is specified for the target database server.

Shared drive (S)

XDM stores the content of the source tables in a directory. This directory is specified as the source shared path for the XDM table copy. It can either be a local directory, or an imported network share on the source database server. In both cases, the directory must also be accessible from the target database server under the path specified as the target shared path for the XDM copy task. When the stage of the table copy that populates the target tables is running, it reads data from that directory. The movement of the data between the source and target database servers is transparent, that is, the shared directory has the same characteristics as a local directory on the source or target database server. The shared drive is only used for tables that are unloaded with a native unload unit, which is usually the case when neither modification nor reduction is used on that table. Prior to version 3.24.21, it was used for all tables. The old behavior can be enforced by defining a custom task parameter legacyPathBehavior and setting it to true.

Deactivate history table for load

deactivateHistoryTableForLoad

Boolean

false

This option allows versioning with history tables to be deactivated before the target load starts. After the load is finished, versioning is reactivated. This option can be used whenever the target system is Db2 LUW.

In order to apply this option the user must have ALTER TABLE authority for all tables with an associated history table (i.e. system period temporal tables).

Deactivate identity value adaption

deactivateIdentityValueAdaption

Boolean

false

When this property is enabled, identity values for auto-generated columns are not checked and adapted. This saves time, but can lead to incorrect identity values. Only check this option if you are sure that no inserted values will conflict with the current identity values of the target tables.

Delete execution files on success

deleteExecutionFilesOnSuccess

Boolean

false

Specifies if working files of a task execution should be deleted in case of a successful task execution. This reduces the required space for a task and should therefore prevent space problems on the server when many tasks exist.

Description

description

String

n/a

An optional description for this object. The description can contain multiple lines to give more context on the configured object. The description is not used in a technical context.

Name

displayName

String

n/a

Specifies the name of the object. The name is used to display and identify the object in lists. The name can contain any valid UTF-8 characters.

Drop option

dropOption

Enum

NOTHING

This option specifies how XDM handles the target tables of a task during structure comparison.

Drop all and recreate (DROP_TB)

XDM generates DROP TABLE statements as part of the generated DDL. All tables that are part of the task are dropped from the target database. They are then recreated and populated with data from the source tables.

Drop nothing and create missing (NOTHING)

XDM does not create DROP statements for any target object. However, it is not required that the target objects already exist. If they do not exist, XDM will create them. If they already exist, then XDM replaces the data within the existing objects.

Drop nothing and create nothing (LEAVE)

XDM does not create DROP statements or CREATE statements for any target tables. If the target tables do not exist, XDM skips the tables that do not exist in the target. For the existing target tables, XDM replaces the data. This option is not available in structure comparison tasks.

Only differing objects (DIFFERING)

XDM drops target tables that differ structurally from the source tables. This option is useful if you want to synchronize the structures of the target tables with the source tables, but you do not want to drop everything unconditionally.

Drop restrict on drop

dropRestrictOnDrop

Boolean

false

In Db2, tables with the attribute RestrictOnDrop cannot be dropped. This is a Db2 security option. When this property is set to true, the RestrictOnDrop attribute is removed and the table is dropped.

Execution platform

executionPlatform

String

default

This property defines whether and on which platform the Dataflow server is being built for a task or workflow execution. The property on the template is overwritten by the property on the task or workflow.

Execute retention period

executionRetentionPeriod

String

-1

Specifies how long executions are kept for the specific task or workflow. If an execution is older than the specified retention period, the XDM service will remove it. If the executions should not be deleted automatically, the retention period must be set to -1. Otherwise, the input must start with P for a date period or with PT for a time period.

A change of the execution retention period will only affect executions created after the change.

A detailed description of the period syntax can be found here.
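
The P/PT prefixes correspond to the ISO-8601 duration syntax also used by java.time, so example values can be sketched and checked with standard classes. This is a small illustrative Groovy sketch; the authoritative syntax is described on the linked page:

    import java.time.Duration
    import java.time.Period

    // Date-based retention periods start with P
    assert Period.parse('P30D').days == 30         // keep executions for 30 days
    assert Period.parse('P2M').months == 2         // keep executions for 2 months

    // Time-based retention periods start with PT
    assert Duration.parse('PT12H').toHours() == 12 // keep executions for 12 hours

    // A value of -1 disables the automatic deletion of executions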

Prepare extract data preview

extractTraceActive

Boolean

false

This option prepares an additional temporary database that allows the extracted data to be previewed in the user interface. When enabled, the data can be investigated in the task execution detail view using the Schema Browser button.

If this option is enabled, additional space and time will be used during the execution of the task. Use this option during task development or debugging, but disable it for productive task usage.

For this option to work, the XDM core service needs access to the task directories. Please ensure that the /xdm/tasks directories of your xdm-dataflow service and your xdm-core service are mounted to the same storage.

Target foreign key deactivation

foreignKeyDeactivation

Enum

DISABLE

There are three options which specify how to deal with foreign keys during the load process:

Disable foreign keys for load (DISABLE)

Foreign keys are deactivated before the load process and reactivated afterwards. This option makes it possible to load data into the target even if foreign keys would block the action, while at the same time not requiring privileges to drop and create objects. However, this option is not valid for Db2 connections, IMS connections, and generic JDBC connections.

Do not deactivate foreign keys for load (NONE)

Foreign keys are left active during the load process. This is recommended when using SQL data transport, and the database user of the task does not have the authority to disable foreign keys.

Not deactivating or disabling foreign keys when not using SQL data transport can lead to errors during the load since the load sequence does not take foreign keys into account. This should therefore only be done if you are sure that all foreign keys will be respected independently of the order in which different tables are loaded.
Drop foreign keys and recreate after load (DROP)

Foreign keys are dropped before the load process and re-created afterwards. This option is not valid for IMS connections and generic JDBC connections. It is recommended to use this option if foreign keys can block the load, but deactivating foreign keys is not possible. Furthermore, this option can improve performance for MSSQL connections because XDM can use the faster truncate function to clear target tables before the load process instead of slower delete statements. Note that privileges to drop and create objects are required when using this option.

Alias

generateAlias

Boolean

true

If you copy a source table or sequence you can specify whether any aliases for these source objects should be generated in the DDL for the target as well. If the property is set to true, DDL for aliases will be generated and executed during task execution.

Comment

generateComment

Boolean

true

If you copy a source table you can specify whether any comments for this source table should be generated in the DDL for the target as well. If the property is set to true, DDL for comments will be generated and executed during task execution.

Foreign key

generateForeignKey

Boolean

true

If you copy a source table you can specify whether the foreign keys for this table should be generated in the DDL for the target as well. If the property is set to true, DDL for foreign keys will be generated and executed during task execution.

Grant

generateGrant

Boolean

true

If you copy a source table you can specify whether the grants for this table should be generated in the DDL for the target as well. If the property is set to true, DDL for grants will be generated and executed during task execution.

Handle multi table tablespaces

handleMultiTableTablespaces

Enum

ABORT

Controls how XDM should react if data should be loaded into a Db2 for z/OS table when there are other tables in the same tablespace. This operation can be problematic, since the other tables would be overwritten by the usual LOAD REPLACE operation. Therefore, special consideration is required of the user.

This property has no effect if the target system is not Db2 for z/OS, or if there are no other tables in the target tablespace (except for the APPEND option).

ABORT (ABORT)

If there are other tables in the same tablespace the task aborts with a return code 8 and an error message is issued. In this case the data in the tablespace is not affected. If there are no other tables in the tablespace, data is loaded with LOAD REPLACE.

APPEND (APPEND)

The task execution uses LOAD RESUME to load data into the table. Existing data in the target table will not be truncated. Other tables in the same tablespace will not be affected.

This option also affects the LOAD operations for tables not in multi table tablespaces. It can be used to append the data to the existing data in the target database.
REPLACE (REPLACE)

The task will use LOAD REPLACE. If there are other tables in the same tablespace, their contents will be overwritten. The task execution writes a warning message to the log and terminates with return code 4.

RESUME (RESUME)

The task execution clears the target tables with the TRUNCATE command and uses LOAD RESUME to load data into the table. Other tables in the same tablespace will not be affected.

Incompatible action

incompatibleAction

Enum

ABORT

Defines the XDM behavior when source and target table structures are deemed to be incompatible.

Abort task (ABORT)

The task execution terminates immediately when one or more tables are incompatible.

Skip table (SKIP)

The task execution copies only those tables that are compatible. Incompatible tables will be skipped.

Performance supervision

jdbcProfiling

Boolean

false

Activates performance supervision for the associated task template (all task types are supported). If this option is activated, certain warnings in the task logs are activated and profiling statistics for JDBC queries are logged. They are available through a task execution export in the task folder files/performanceSupervision.

Log level

logLevel

Loglevel

INFO

Controls the granularity of the log messages for the XDM task issued directly by XDM. The number of log messages decreases in the following order: Trace → Info → Warning → Error

To keep the size of the output small, set the log level to the default value Info, unless there are errors that you need to diagnose which require more detail than log level Info provides. The log size is set to 10 MB by default. To modify the log size, use a custom parameter with the name modifiedLogSize; the value must be the desired size in bytes. Once the log size is reached, the last log entry notifies the user that the limit has been reached and hints that the full log can be downloaded by exporting the task execution.

It is also possible to override the log level for specific program parts. This can be done by adding a property with the prefix logger.level. followed by the package name to the global property map via a task stage hook. For example, to set a specific log level for all components of the package de.ubs.xdm.batch.subsetapply, set the property logger.level.de.ubs.xdm.batch.subsetapply to the required log level inside a task stage hook. Supported values are info, warning, error, verbose and debug. An example of how to set a property in a task stage hook can be found here. Typically, the support team will instruct you to set a package log level when a detailed analysis is necessary.
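
A hedged Groovy sketch of such a task stage hook follows; the globalProperties map name is an assumption made for this example, while the property key and values follow the pattern described above:

    // Hypothetical task stage hook body: assumes the hook exposes the global
    // property map as 'globalProperties'. Key pattern: logger.level.<package>
    globalProperties.put('logger.level.de.ubs.xdm.batch.subsetapply', 'debug')
    // Other supported package-level values: info, warning, error, verbose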

Possible values are:

Error (error)

The coarsest granularity. Only severe error messages that cause the task to terminate are written to the log file.

Info (info)

The most important messages about the progress of the task are written to the log file.

Trace (verbose)

Additional messages are written to the log file, but without messages intended for debugging purposes.

Warning (warning)

The log file contains warning messages that may or may not require user attention.

Modification set

modificationSet

ModificationSet

n/a

The property contains a list of modification sets that can be used to modify data during the task execution. For every modification set, it is also stored whether it is selected or not. Only selected modification sets are shown in the view mode of the object.

Modification validation

modificationValidation

Boolean

false

When this property is enabled, values affected by modification methods are checked to determine whether they can be inserted into the target. The task aborts if this is not possible.

MVS use internal load format

mvsUseInternalLoadFormat

Boolean

true

If this is active, the internal load format will be used instead of the delimited load format. This may enhance the load performance and can be used to transport binary data, which would be truncated using the delimited format.

MVS use spanned load format

mvsUseSpannedLoadFormat

Boolean

false

If this is active, all LOBs will be transported inline in spanned datasets instead of using the PDS format. This option increases transport and load performance when GENERIC processing is active for a table. This flag is only evaluated if the MVS internal load format is used.

Load without database logging

nonrecoverableMode

Boolean

true

Specifies whether the target DBMS logs all changes that are made to the target tables when XDM populates them with data from the source tables. Writing log records ensures that the target environment is able to perform a recovery operation if required. However, it also means that a large number of log records will be created in the target environment, which may not be desirable. If you want to minimize the number of log records that are written in the target, consider setting this option to true and taking a fresh backup copy of the target environment after the copy process finishes.

If the property is set to true, no log records will be written for the target tables while they are populated. Otherwise, the target DBMS is instructed to write log records for all changes that XDM makes to the contents of the target tables.

If the LUW database is configured as a highly available database, this flag must be set to false, and in the connection the parameter Backup path for COPY YES option must contain a valid path for the LOAD statement to be successful. Otherwise, the error message SQL2032N The "COPY FLAG" parameter is not valid occurs.

Parallel modification

parallelModification

Boolean

true

The parallel modifications flag determines whether modifications to tables should be processed in parallel (simultaneously) or sequentially (one after the other). Setting the flag to true enables parallel processing, which is faster but only suitable if there are no dependencies between the modification rules. Setting it to false enforces sequential processing, ensuring that modifications are applied in the correct order when dependencies exist.

Dependencies Between Modification Rules:

Dependencies occur when one modification rule relies on the results of another. If such dependencies exist, sequential processing is necessary to ensure data integrity and correctness.

Examples of Dependencies:

Data Collection Dependency:

  • Modification 1: Gathers information from Table A and stores it in a central location.

  • Modification 2: Uses the collected information from the central location to modify Table B.

  • Dependency: Modification 2 depends on the data collected by Modification 1, requiring sequential processing.

Data Obfuscation and Usage:

  • Modification Rule A: Obfuscates (masks) names in Table A.

  • Modification Rule B: Creates email addresses in Table B based on the names.

  • Dependency: Modification Rule B relies on the names being obfuscated by Modification Rule A. If processed in parallel, email addresses might be created using unmasked names.

Summary: Use the parallel modifications flag to control the order of table modifications:

True

Process tables in parallel for faster performance, suitable when no dependencies exist.

False

Process tables sequentially to respect dependencies between modification rules, ensuring accurate and consistent modifications.

Source connection

sourceConnection

Connection

n/a

Specifies the connection that is used as source connection in the task. A source connection must be set in either the task template or the task. If it is set in both, the connection set in the task is used.

Source mount point

sourceMountPoint

String

n/a

The mount point on the source database server for a shared drive. When copying data between two different database servers, using a shared drive is one of the available transport options to move the data from the source database server to the target database server. This property specifies the full path under which the shared drive is accessible from the source database server.

Source working directory

sourceWorkingDirectory

String

n/a

With this option it is possible to use a specific working directory on the source database server. If a file path is specified here, it is used independently of the path defined on the database server under working directory / qualifier. If nothing is specified here, the default is to use the working directory from the underlying storage location.

Stop source

stopSource

Enum

TABLESPACE

Specifies whether the source tables are stopped while the copy process is running.

Database (DATABASE)

The whole database is stopped during the copy process. This means that tablespaces are stopped, even if they are not involved in the copy process.

false (NO)

Nothing is stopped during the copy process.

Tablespace (TABLESPACE)

Only tablespaces involved in the copy process are stopped.

Choose Tablespace when the source database contains both table spaces that participate in the copy process and table spaces that do not participate. Use Database when you want to copy most or all of the source tables. Stopping an entire database will prevent access to all table spaces inside the database, even if these table spaces are not part of the actual copy process.

Tags

tags

Tag

n/a

Contains the tags that apply to this object. These tags can be used in the search to find objects quickly and effortlessly.

Target connection

targetConnection

Connection

n/a

Specifies the connection that is used as the target in the task. A target connection must be set in either the task template or the task. If it is set in both, then the connection set in the task is used.

Target mount point

targetMountPoint

String

n/a

The mount point on the target database server for a shared drive. When copying data between two different database servers, using a shared drive is one of the available transport options to move the data from the source database server to the target database server. This option specifies the full path under which the shared drive is accessible from the target database server.

Target working directory

targetWorkingDirectory

String

n/a

With this option it is possible to use a specific working directory on the target database server. If a file path is specified here, it is used independently of the path defined on the database server under working directory / qualifier. If nothing is specified here, the default is to use the working directory from the underlying storage location.

Task execution report

taskExecutionReport

TaskExecutionReport

n/a

This list defines which reports are created at the end of an execution and are displayed in the task execution. The list includes all defined reports to which the current user has READ access. The selected reports are generated at the end of a task or workflow execution.

If a workflow template calls one or more tasks, only the reports that were defined on the workflow template are generated. The reports that have been specified for the respective tasks are not generated at the end of the execution.

Toleration mode

tolerationMode

Enum

1

If a source table differs in structure from a target table, you have to specify how differences are treated while the data is copied from source to target. The values are still written to the target database as they are; whether this results in a truncation or an error message depends on how the database handles it. To implement a proper data truncation, it is necessary to use a modification rule on the respective column.

Allow truncation (2)

Target columns are tolerated even if copying the data can lead to truncation or errors in the target.

Prevent truncation (1)

Only columns that can hold equally large or larger values are tolerated in the target.

Strict (0)

No differences in structure of source and target table are allowed. If source and target table differ in structure, the copy process will stop.

Trace data configuration

traceDataConfiguration

String

n/a

This property specifies a list of one or more values that should be traced during XDM task processing. If multiple values are specified, they must be separated by semicolons. The specified values are reported in a trace data report at the end of each task stage in which a value was detected. XDM reports the rows of a table in which one of the specified values occurs in any of the row’s columns, whenever that row is part of an INSERT, UPDATE, DELETE, or SELECT action.

If a specified value is detected in a stage as part of one of the above named actions, then the row in which it occurs is reported in the trace data report at the end of that stage. In the event that a specified value is affected by a modification method, the row in which it occurs is reported in the trace data modification report.
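
For illustration, a trace data configuration with three values could look as follows. The values and the template variable are purely illustrative:

    // Hypothetical sketch: trace three values, separated by semicolons.
    // Any INSERT, UPDATE, DELETE, or SELECT touching a row that contains one of
    // these values in any column is listed in the trace data report of the stage.
    template.traceDataConfiguration = 'Miller;4711;ACME Corp'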

Delimiter

useDelimiter

Boolean

true

Specifies whether a double quote delimiter should be used in DDL and SQL statements to identify objects such as schemas, tables, etc.

Unquoted identifiers are case-insensitive. This means that they will recognize customer, Customer, and CUSTOMER as the same object. However, quoted identifiers are case-sensitive. This leads to treating "CUSTOMER" and "customer" as entirely different objects.

You should keep in mind that XDM uses double quote delimiters by default. Turning this feature off can lead to usability issues. If you decide not to use double quote delimiters, you should be aware of how the database you are dealing with handles identifiers. For example, Db2 converts unquoted identifiers to uppercase, whereas PostgreSQL converts them to lowercase. Therefore, if you want to select a table named customer in Db2, this property should be set to true. Otherwise, XDM will look for a table named CUSTOMER, which either would not exist or would be a different table than the one you are looking for.

You should avoid mixing tasks with and without double quote delimiters.
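
The effect of the delimiter on identifier resolution can be sketched with two statements; the folding behavior for Db2 and PostgreSQL is the one described above:

    // With useDelimiter = false, identifiers are issued unquoted:
    def unquoted = 'SELECT * FROM customer'
    // Db2 folds the name to CUSTOMER, PostgreSQL folds it to customer.

    // With useDelimiter = true (the default), identifiers are quoted and kept as-is:
    def quoted = 'SELECT * FROM "customer"'
    // Both Db2 and PostgreSQL resolve this to the exact lower-case name customer.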

Using aliased tables

usingAliasedTables

Boolean

false

This property enables a task to use an alias on a table as a replacement for that table. This is useful in the following scenario: you have defined a uniform application model that names tables in the rules, but in a certain environment the tables are named differently and have an alias with the original name. XDM then creates a virtual table object in the task process under the name of the alias, with the column list of the underlying table.

When Using aliased tables is set to true, it is not possible to generate DDL with XDM, because the DDL generator cannot recognize whether a table is an aliased table and would create wrong DDL. Therefore, the Drop option must be set to Drop nothing and create nothing in this case. Otherwise, an error occurs and the task execution is aborted.

Using viewed tables

usingViewedTables

Boolean

false

Using viewed tables controls whether views should be handled like tables. If a view references another view or an alias, XDM will follow the chain of references to a depth of 5 steps by default. You can control the selection depth by setting a custom parameter objSel_viewSelectionDepth with your preferred value.

Views can be used as a source in most cases, even if they are based on multiple tables or use functions to modify column values. As a target, only views that are based on a single table and reflect the columns without modifying any value can be used. Please refer to the definitions of deletable views, insertable views, and updatable views in the documentation of the database system. When inserting through a view, the view receives the DELETE, UPDATE, and INSERT statements, and the database system evaluates whether the operation is permitted.

This property is currently in prototype state and only works with Db2 for z/OS and Db2 for LUW. It is not guaranteed that the property can handle all use cases.

Actions

The available actions are described below. Some actions apply to the list, while others are specific to selected task templates.

List Actions

The following actions are available on the task templates list. If the action is disabled a tooltip will provide the exact reason for the deactivation. The required permissions are described in detail for each action.

  • Bulk Create Permission

  • Bulk Delete

  • Bulk Export

  • Create

  • List History

Creates a new permission on the selected objects. The result list shows whether the permission could be granted on the respective object. Only permissions that exist on the underlying object can be granted.

A permission in the result list can have one of three states:

CREATED

The permission was successfully granted on the object.

MERGED

The permission already existed on the object and was merged with the new permission.

SKIPPED

The permission could not be granted because the ADMINISTRATION permission is missing on the object.

The following permissions are required on the list:

  • ADMINISTRATION

  • READ

Delete the selected objects.

The following options are available:

Cascade

Recursively delete dependent objects.

When using cascade, dependent objects are deleted first, with cascade enabled for them as well. A cascade deletion is thus a recursive operation that deeply searches for dependent objects and deletes them first. Only the first object requires a confirmation; the dependent objects are deleted without confirmation, but only if the user has the DELETE permission.

This feature is only available in development mode. More information about development mode can be found in the chapter User Settings. It should be used with caution.

An object in the result list can have two different states, these are:

DELETED

The object was deleted.

NOT_DELETED

The object could not be deleted. This may be because the executing user does not have the DELETE permission on the object, or because the object is still referenced by others. A detailed reason can be determined with the help of the error message. If the object is still in use, the referencing objects are also displayed.

The following permissions are required on the list:

  • DELETE

  • READ

Exports the selected objects.

YAML

Generates a YAML file containing all the object’s settings. The user has the option to download the export file, or to paste the content in the import dialog. The YAML export is particularly suitable for importing the exported objects again via the XDM UI.

ZIP

This export writes several individual YAML-files. Each YAML-file is stored in a directory according to its type. For example, when exporting a native table backup task template named 'A backup template', a YAML-file 'A backup template.yaml' is created inside the directory /TaskTemplate/native-table-backup-task-template/ of the ZIP-file. This kind of export is suitable for usage in git-repositories together with XDM’s configuration as code feature.

Related and dependent objects can optionally be included in the export. The export dialog has the following options:

Include dependent objects

Dependent objects belong only to the exported object, for example rules and tasks.

Include permissions

The permissions of each exported object, if the object supports permissions. Some objects, such as rules, do not have permissions.

Include referenced objects

Referenced objects exist on their own and are used by the exported object, for example connections and environments.

Include objects that depend on referenced objects

Also includes the dependent objects of the referenced objects, e.g. the rules of a modification set or the rules in an application model version.

Objects on which the user does not have READ permission are not exported. This includes dependent and referenced objects. However, the reference to such an object is still exported. For example, a connection object would refer to its credential, even if the user does not have READ permission on the credential. The definition of the credential object itself will not be part of the export file. This can lead to issues during the import, because the connection cannot be created without an existing credential.

The following permissions are required on the list:

  • READ

Creates a new object in the current list. Depending on the object type, either a popup dialog is shown for the most important settings, or the complete object is shown in edit mode. The dialog provides the option to create the object and remain in the current list, or to switch to the newly created object in edit mode to perform further changes.

The following permissions are required on the list:

  • CREATE

The history list tracks all modifications made to objects within it. A new record is added each time an object is created, edited, or deleted. A record indicates who made the change, which object was affected, and when the change was made.

For more information about the concept of the history refer to the history concepts.

The following permissions are required on the list:

  • READ

Object Actions

The following actions are available on specific task templates. In order to execute the action, the user must possess the necessary permissions for the object. The permissions required for each action are described individually. If the user does not have these permissions, the action will be disabled and the tooltip will provide the exact reason for the deactivation.

  • Check

  • Delete

  • Duplicate

  • Edit

  • Event List

  • Export

  • Object History

  • Permission Check

  • Usage

  • Uses

This action validates the object and its dependencies, reporting configuration errors that could cause issues during task or workflow execution. The validation will cascade through the child objects of the checked objects and objects referenced by them.

For instance, if an installed application of an environment is checked, the check will process the application model, the specified version, the connection, modification sets, and involved modification methods. If an object has rules, all active rules will be checked. The modeling connection and version, including their modification sets and methods, will also be checked. Deactivated objects will not be included in recursive checks, but can be checked individually if the check is executed on the object itself.

Checks often require additional information from the context of the objects being checked, such as necessary connections or custom parameter values. The check will gather information from the objects being checked and use it to perform checks on child objects. Any required additional information must be provided before the check begins. The check prompts the user to provide this missing information.

Database object checks

For all rules that reference database objects such as tables, columns, etc., the check verifies that those objects exist in the database system. If a connection can be inferred from the context, then this connection is used. If no connection is available in the context, it must be specified before the check is executed.

Connection checks

For objects which configure access to external systems, such as connections or storage locations, the configuration check verifies that access can be established using the given credentials. Furthermore, additional operations on database connections are performed to check whether the credential user has the necessary authorization to access relevant database objects. In particular, the credential user’s permission to read source tables and write to target tables is verified. Similarly, for storage locations the check verifies that the credential user has permission to write to the working directory.

Code checks

For all entities containing code segments, such as modification methods or condition scripts, the syntax of the code is checked. However, this does not check whether all necessary variables are likely to be available at run time.

The following permissions are required:

  • READ

Delete the object. If the object is still used by another entity, an error message is displayed, and the object is not deleted. The delete operation must be confirmed in a separate popup.

The following options are available:

Cascade

Recursively delete dependent objects.

When using cascade, dependent objects are deleted first, with cascade enabled for them as well. A cascade deletion is thus a recursive operation that deeply searches for dependent objects and deletes them first. Only the first object requires a confirmation; the dependent objects are deleted without confirmation, but only if the user has the DELETE permission.

This feature is only available in development mode. More information about development mode can be found in the chapter User Settings. It should be used with caution.

The following permissions are required:

  • DELETE

  • READ

Creates an exact copy of the current object with a different display name in the same list. Users can decide whether they want to copy child objects such as rules, permissions, or tasks. It is only possible to select complete classes of objects, not individual child objects. Copied child objects preserve their display name. The default is to copy all child objects.

The following permissions are required:

  • CREATE

  • READ

Opens the current entity in edit mode.

The following permissions are required:

  • READ

  • WRITE

This list shows all registered events for the object. It includes events that are specific to the object or to its type.

The following permissions are required:

  • READ

This action allows XDM objects to be exported in different formats so that they can be imported into another environment, either from the export file or via CasC.

Refer to configuration of export for more information.

Related and dependent objects can optionally be included in the export. The export dialog has the following options:

Include dependent objects

Dependent objects belong only to the exported object, for example rules and tasks.

Include permissions

The permissions of each exported object, if the object supports permissions. Some objects, such as rules, do not have permissions.

Include referenced objects

Referenced objects exist on their own and are used by the exported object, for example connections and environments.

Include objects that depend on referenced objects

Also includes the dependent objects of the referenced objects, e.g. the rules of a modification set or the rules in an application model version.

Include implicit created objects

Implicitly created objects are tasks or workflows that were automatically created for execution. These objects are not exported by default, but can be included by setting this flag. When exporting implicitly created objects, make sure that the Include dependent objects flag is also enabled.

Objects on which the user does not have READ permission are not exported. This includes dependent and referenced objects. However, the reference to an object will be exported.

For example, a connection object would refer to its credential, even if the user does not have READ permission on the credential. The definition of the credential object itself will not be part of the export file. This can lead to issues during the import, because the connection cannot be created without an existing credential.

The following permissions are required:

  • READ

The history displays all changes made to the respective XDM object, including any changes made to its rules.

Each change record includes information about the operation performed (e.g. CREATE, UPDATE, DELETE), the timestamp, and the user responsible for the change.

For more information about the concept of the history refer to the history concepts.

The following permissions are required:

  • READ

The check verifies that the current user has the authorization to access the object. The check can also be performed for a specific user or role, if needed. By default, the check is performed using the current user’s credentials. It is then applied to child and referenced objects.

Additional permission checks are applied when these can be inferred from the context in which the check was started. For example, if the check is performed on a table copy task, the referenced source and target connections are checked to determine whether the given identity has source or target usage permission respectively.

The following permissions are required:

  • READ

The Usage List shows all objects that refer to the current object. It provides an overview of the relationships and makes it easy to track these relationships.

The following permissions are required:

  • READ

The Uses List shows all objects that the current object uses. It provides an overview of the relationships and makes it easy to track these relationships.

The following permissions are required:

  • READ

Child objects

License Options

This object is available if at least one of the following license options is enabled:

  • TASK_TYPE:NATIVE_TABLE_COPY_TASK

  • TASK_TYPE:TABLE_COPY_TASK

The object is also available if the license package is at least: ESSENTIAL.

Prerequisites

Before an XDM native table copy task template can be completely configured, you have to set up a connection with a storage location and corresponding credentials.

If the source database is PostgreSQL and the target database is not, or vice versa, you should use a compatibility table copy task template.

Required permissions

Users and Groups

During execution a native table copy task requires four different users:

  • Source operating system user, defined in the storage location credential of the source connection,

  • Source database user, defined in the credential of the source connection,

  • Target operating system user, defined in the storage location credential of the target connection and,

  • Target database user, defined in the credential of the target connection.

Authorization for the source operating system user

The operating system user on the source database server must have authority to perform the following actions:

  • Create a working directory on the source database server or shared drive (via FTP or SSH)

  • Write to the working directory (via FTP, FTPS or SSH)

  • Read from the working directory (via FTP, FTPS or SSH)

  • Delete the working directory (via FTP, FTPS or SSH)

For Oracle databases, the following authority is also required:

  • Execute Data pump

  • Execute SQL Loader jobs

The process owner for Data Pump jobs is typically not the user who invokes the job. Most commonly it is the user oracle. The Data Pump process owner must be made a member of the XDM administration group on the target operating system.

When using reduction or modification in the task, the data will be written directly in the target working directory. In this case, it is necessary to have authority to perform the following actions:

  • Create files in the target working directory on the target database server or the target shared drive mount point (via FTP or SSH)

  • Write to the target working directory (via FTP, FTPS or SSH)

  • Read from the target working directory (via FTP, FTPS or SSH)

Additional authorization for members of the XDM administration group

When using an XDM administration group for the source operating system, additional authorization is required if you are working with DB2 LUW or Oracle as database systems.

For DB2 LUW, members of this group must be:

  • Instance fenced user of the source database (for UNLOAD of source database tables)

  • Operating system user to copy files from the source database server.

For Oracle, members of this group must be:

  • Instance user of the source database (in order to use the Data Pump utility on the source database)

  • Operating system user to copy files from the source database server.

When extracting data and copying files to the target server, the group of the data files will be set to the administration group and the permissions for the data files will be set to 770 by XDM.

Authorization for the source database user

  • Select information from database catalog

  • Read from data tables (SELECT authority)

Additional authorization for the source database user on a DB2 z/OS system

  • Authorization to call SYSPROC.DSNUTILU/V in order to execute utilities on the source system. Section Customizing SYSPROC.DSNUTILU/V in DB2 z/OS explains how SYSPROC.DSNUTILU/V has to be customized to work with XDM.

  • Authorization to call SYSPROC.ADMIN_COMMAND_DB2 in order to execute DB2 commands on the source system

  • Authorization to run the UNLOAD utility on the selected tables. For UNLOAD one of the following privileges must be held:

    • Ownership of the table,

    • SELECT authority on the table,

    • UNLOAD authority on the table,

    • DBADM authority for the database,

    • DATAACCESS authority,

    • SYSADM authority.

Additional authorization for the source database user on a DB2 LUW system

  • Authority to use the stored procedure SYSPROC.ADMIN_CMD for EXPORT statements

  • Read access to the administrative view SYSIBMADM.TBSP_UTILIZATION to determine the sizes of tablespaces

  • System monitor authority (SYSMON) to retrieve additional information from the database catalog (optional, but recommended)

Additional authorization for the source database user on an Oracle system

  • Role EXP_FULL_DATABASE

Additional authorization for the source database user on a MS SQL system

  • The user must be a database user; a domain user is not sufficient.

  • The user must have authority to execute BCP utility with the EXEC xp_cmdshell command.

    The system procedure xp_cmdshell must be activated on the database server. If this is not possible or not desired for security reasons, it is possible to use local BCP instead. See the local BCP property reference documentation for further information.

Authorization for the target operating system user

The operating system user on the target database server must have authority to perform the following actions:

  • Create a working directory on the target database server (via FTP, FTPS or SSH).

  • Write to the working directory on the target database server (via FTP, FTPS or SSH).

  • Read from the working directory (via FTP, FTPS or SSH).

  • Delete the working directory (via FTP, FTPS or SSH).

For Oracle, the following additional authority is required:

  • Execute Data pump

  • Execute SQL Loader jobs

Additional authorization for members of the XDM administration group

When using an XDM administration group for the target operating system, some additional authorization is required if you are working with DB2 LUW or Oracle as database system.

For DB2 LUW members of this group must be:

  • An instance user of the target database in order to use the LOAD utility.

  • An instance fenced user of the target database in order to be a file owner.

  • The target operating system user of the task to copy files to the target server.

For Oracle members of this group must be:

  • An instance user of the target database who is the owner of the data files in order to use the Data Pump and SQL Loader utilities of target database

  • The target operating system user of the task to copy files to the target server

When copying files to the target server, the owning group of the data files will be set to the administration group and the permissions for the data files will be set to 770 by XDM.

Authorization for the target database user

For the target database user the following privileges are required:

  • SELECT authority for the database catalog

  • INSERT/UPDATE/DELETE authority to write into the database tables

  • Authority to execute DDL (only required if database objects are to be created)

Additional authorization for the target database user on a DB2 z/OS system

  • Authorization to execute the REPAIR utility with the NOCHECKPEND option and

  • Authorization to execute the LOAD utility on the database. For LOAD one of the following privileges must be held:

    • Ownership of the table,

    • LOAD privilege on the table,

    • DBADM or DBCTRL authority for the database,

    • DATAACCESS authority or

    • SYSCTRL or SYSADM authority.

  • If system-period temporal tables are copied, the user also needs authorization for:

    • ALTER TABLE <table_name> DROP VERSIONING and

    • ALTER TABLE <table_name> ADD VERSIONING.

Additional authorization for the target database user on a DB2 LUW system

  • Authority to use the stored procedure SYSPROC.ADMIN_CMD for IMPORT statements

  • Execute SET INTEGRITY PENDING statements.

  • System monitor authority (SYSMON) to retrieve additional information from the database catalog (optional, but recommended).

Additional authorization for the target database user on an Oracle system

  • Role IMP_FULL_DATABASE

Additional authorization for the target database user on an MS SQL system

  • The user must be a database user; a domain user is not sufficient.

  • Authority to execute BCP utility with the EXEC xp_cmdshell command.

    The system procedure xp_cmdshell must be activated on the database server. If this is not possible or not desired for security reasons, it is possible to use local BCP instead. See the local BCP property reference documentation for further information.

Space requirement considerations

  • Enough physical free space on the target database management system.

  • Enough allocated free space in the target database.

  • Enough physical free space on the source database server.

  • Enough physical free space on the target database server.

  • Enough physical free space on the shared drive, if used.

Restrictions

  • Oracle only: When using modification or reduction rules on a table, XDM uses SQL Loader to load data into the target table. If the table contains LOB columns or XML columns, the SQL Loader is not able to load LOBs and XML into the target column. In this case, use a compatibility table copy task template.

    If the LOBs or XML can be omitted in the target system, it is possible to remove the LOBs or XML by using a modification rule.

XDM Permissions to execute a task

The user needs the following permissions to successfully start the task execution:

  • READ permission on the task and task template,

  • EXECUTE permission on the task,

  • SOURCE_USAGE on the source connection,

  • TARGET_USAGE on the target connection,

  • READ permission on any storage location linked in the connections if table copy tools (UNLOAD/LOAD) are used,

  • READ permission on any credential used in the connections or storage locations,

  • READ permission on any modification set linked to the task, task template, or connection,

  • READ permission on any modification method used in the linked modification sets, and

  • READ permission on any custom parameter used in the task template.

Task procedure

XDM table copy tasks have six stages. This procedure description applies to both compatibility table copy tasks and native table copy tasks.

Before executing Stage 1, XDM creates all necessary files and folders in the task directory with the Tailoring process.

Tailoring

All necessary files and folders are created in the task directory.

Stage 1

The structure of the source connection is stored in task files. XDM collects all tables that match at least one selection rule together with their dependent objects. Tables and columns that match any exclude rule will not be in the task’s selection set.

Stage 2

The structure of the target connection is stored in task files. XDM fetches all objects for the target environment matching a table from the source environment, in accordance with mapping rules. Exclude rules with scope Target can further reduce the selection set.

Stage 3

XDM compares the structures in the source and target connections and generates DDL for new objects in the target connection, if required.

Stage 4

XDM executes the DDL that was created in Stage 3 of the task execution, if any.

Stage 5

XDM extracts the data from the source tables and copies it into a file on the source server. Afterwards, these files will be moved to the target server, if necessary. If no reduction rules are set on a table, XDM will extract all rows. During the extraction, all modification rules with scope Source are processed. Those rules can be taken from the definition of a modification set, either on the task template or on the connection.

Stage 6

XDM loads the data into the target tables. Additionally, all modification rules with scope Target are processed. Those rules can be taken from the definition of a modification set, either on the task template or on the connection. After the execution, working files are deleted to save storage space.