Db2 z/OS Connection
Defines a JDBC connection and the additional information required to work with a Db2 z/OS instance.
Permissions
Connections have specific permissions to manage user access. The table below displays the available permissions and their purposes.
For more details about the concept of XDM's permission management, refer to Permission Management.
| Permission | Description |
|---|---|
| ADMINISTRATION | Specifies that the grantee can grant and revoke permissions to and from other users. A user that creates an object automatically receives the ADMINISTRATION permission on that object. |
| APPLY_SQL | Specifies that the grantee can apply DDL/DML statements against the database connection. |
| BROWSE | Specifies that the grantee is allowed to see the contents of database tables when using the schema browser, and when inspecting the output of an XDM task that provides a data preview. This permission only applies to connections. |
| DELETE | Specifies that the grantee can delete objects of the selected types. |
| READ | Specifies that the grantee has read permission on the object. The grantee is able to see the object in lists and can see all of the object’s details, such as rules or access permissions. In addition, the grantee can reference this object. For example, a user who has READ permission on a connection can reference that connection in a task template. |
| SOURCE_USAGE | Specifies that the object can be used as the source for an XDM task. This permission only applies to environments and connections. If a user does not have the SOURCE_USAGE permission on an object, it cannot be selected as the source of a task. |
| TARGET_USAGE | Specifies that the object can be used as the target for an XDM task. This permission only applies to environments and connections. If a user does not have the TARGET_USAGE permission on an object, it cannot be selected as the target of a task. |
| WRITE | Specifies that the grantee has the permission to change the settings and attributes of an object. This also includes modifying any rule lists that might be associated with the object (for example, the selection rules of a task template). |
Properties
The table below documents the available properties for connections. The 'Name' column shows the property name as it can be used in Groovy and Java scripts; where the UI uses a different label, it is given in parentheses.
| Name | Type | Default | Description |
|---|---|---|---|
| connectionPoolSize (Interactive connection pool size) | Number | 2 | Specifies the number of simultaneous connections to the database system. This setting controls how many connections can be open at the same time. The connection pool is used by the schema browser to manage concurrent SQL requests. |
| credential | Credential | n/a | Specifies the credential used by a connection or storage location to authenticate XDM with a database or server, respectively. The user in the connection must have the necessary permission to access the specified object. Exactly which privileges are required depends on the task type for which this connection is to be used; details can be found in the respective task descriptions. |
| dataExtractGenericFetchSize (Data extract generic fetch size) | Number | 400 | Controls how many rows are fetched from the database in one network operation when using the generic extract unit. This unit is used for modification, reduction, and for SQL data transport in table copy tasks. |
| description | String | n/a | An optional description for this object. The description can contain multiple lines to give more context on the configured object. It is not used in a technical context. |
| displayName | String | n/a | Specifies the name of the object. The name is used to display and identify the object in lists and can contain any valid UTF-8 characters. |
| extractPackageSize | Number | 500 | Defines how many data sets are extracted together in one package during the subset extract. The package size is the number of keys that are stored together. It affects processing speed and memory usage when extracting data in row level processing tasks: a larger package size can enhance processing speed, but uses more memory. It also influences the number of keys that are fetched with a single query; if the package size is too large, the resulting queries can become too long for the source database system. |
| jdbcBatchActivated | Boolean | true | Controls whether JDBC batch mode is used for the given database system. If JDBC batch mode is activated, multiple rows are bundled and sent as one batch to the database system when applying the data. Using batch mode can improve performance, but requires more memory on both the execution server and the database server. |
| jdbcBatchSize | Number | 500 | Controls how many rows are packed into one unit when the subset apply batch mode is active. Valid values are all integer values greater than one. The maximum value is limited by the capacity of your execution server and the database server; typically, this maximum is around 5000 rows per batch statement. The setting is only used when subset apply batch mode is active. A plain JDBC illustration of fetch size and batch size follows the table. |
| loadTemplateSkeletonZOS | String | n/a | The LOAD utility uses a TEMPLATE statement to define the dataset containing the data to be loaded. Additional templates can be used to define the work datasets used by the LOAD utility. The work datasets can also be defined in the procedure of the WLM environment used for DSNUTILU, but using templates in the skeleton is the recommended way for newer Db2 versions. The location and allocation of these datasets can be defined using the load template skeleton. To define the work datasets, add TEMPLATE statements with the corresponding names to the skeleton. For the template defining the record dataset, the variables ${loadTemplateName} and ${file} are replaced by XDM. An example skeleton is shown below the table. |
| modificationSets | ModificationSet | n/a | Contains a list of modification sets that can be used to modify data during task execution. For every modification set it is also stored whether it is selected or not; only selected modification sets are shown in the view mode of the object. NOTE: Modification sets that are set in a connection are applied to every task that uses this connection. The settings can be supplemented in the task template. |
| storageLocation | StorageLocation | n/a | The storage location used for the connection. This setting is only required if the connection is to be used for native table copy tasks, native table icebox tasks, or database clone tasks. If the connection is only used in compatibility table copy tasks, compatibility table icebox tasks, or any type of row level processor task, this field does not need to be specified. |
| tags | Tag | n/a | Contains the tags that apply to this object. Tags can be used in the search to find objects quickly. |
| threadCount | Number | 4 | The number of parallel threads used to extract data from the source environment and load it into the target environment in table copy tasks. The minimum value is 1; when set to 1, objects are processed sequentially. The ideal number depends on the CPU and I/O capacity of the database server. Try different degrees of parallelism until you find the number of threads that yields the highest throughput. |
| unloadTemplateSkeletonZOS | String | PRIMARY=50 SECONDARY=100 CYL | The UNLOAD utility uses a TEMPLATE statement to define the dataset containing the extracted data. Additional templates can be used to define the work datasets used by the UNLOAD utility. The work datasets can also be defined in the procedure of the WLM environment used for DSNUTILU, but using templates in the skeleton is the recommended way for newer Db2 versions. The location and allocation of these datasets can be defined using this template skeleton. For the UNLOAD, it can be important to define additional allocation parameters for the dataset holding the extracted data; this dataset is pre-allocated with the settings defined in the MVS allocation FTP site command, but those settings can be overwritten here. To define the work datasets, add TEMPLATE statements with the corresponding names to the skeleton. For the template defining the record dataset, the variables ${loadTemplateName} and ${file} are replaced by XDM. An example skeleton is shown below the table. |
| url | String | n/a | A JDBC URL used to connect to the database. The URL typically has a format like jdbc:<driver>://<server-address>:<port>/<database>[:<options>]; the precise settings depend on the database server. For Db2 for z/OS this is typically jdbc:db2://<host>:<port>/<location>. |
| zosSiteCommandPartitioned (MVS allocation FTP site command for LOB datasets) | String | PRIMARY=50 SECONDARY=100 CYL | Specifies the allocation parameters for the LOAD files as partitioned data sets on the z/OS system via FTP. The commands are MVS SITE subcommands. By default, the value is set to PRIMARY=50 SECONDARY=100 CYL. |
| zosSiteCommandSequential (MVS allocation FTP site command) | String | PRIMARY=10 SECONDARY=10 CYL | Specifies the allocation parameters for the LOAD files as sequential data sets on the z/OS system via FTP. The commands are MVS SITE subcommands. By default, the value is set to PRIMARY=10 SECONDARY=10 CYL. |

Example load template skeleton (loadTemplateSkeletonZOS):

TEMPLATE ${loadTemplateName}
DSN('${file}')
TEMPLATE TSYSUT1
DSN('&US..XDM.SYSUT1.&UQ.') UNIT 3390 SPACE (100,300) CYL
TEMPLATE TSORTOUT DSN('&US..XDM.SORTOUT.&UQ.') UNIT 3390 SPACE (100,300) CYL

Example unload template skeleton (unloadTemplateSkeletonZOS):

TEMPLATE ${loadTemplateName}
DSN('${file}') UNIT 3390 SPACE (100,300) CYL
TEMPLATE TSYSUT1
DSN('&US..XDM.SYSUT1.&UQ.') UNIT 3390 SPACE (100,300) CYL
TEMPLATE TSORTOUT DSN('&US..XDM.SORTOUT.&UQ.') UNIT 3390 SPACE (100,300) CYL
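The JDBC-related tuning properties above map directly onto standard JDBC concepts. The following Java sketch is purely illustrative and is not XDM code: it shows what dataExtractGenericFetchSize and jdbcBatchActivated/jdbcBatchSize correspond to in plain JDBC. The URL, credentials, and the table MYSCHEMA.MYTABLE are placeholder assumptions.

import java.sql.*;

// Illustrative sketch only (not XDM code): how the fetch size and batch size
// properties translate into plain JDBC calls. URL, credentials and the table
// MYSCHEMA.MYTABLE are placeholders.
public class Db2TuningSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:db2://zoshost:446/DB2LOC"; // typical Db2 for z/OS JDBC URL (placeholder values)
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {

            // dataExtractGenericFetchSize: rows fetched per network round trip
            try (Statement st = con.createStatement()) {
                st.setFetchSize(400);
                try (ResultSet rs = st.executeQuery("SELECT C1 FROM MYSCHEMA.MYTABLE")) {
                    while (rs.next()) {
                        // process one row
                    }
                }
            }

            // jdbcBatchActivated / jdbcBatchSize: rows bundled into one batch before execution
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO MYSCHEMA.MYTABLE (C1) VALUES (?)")) {
                int batchSize = 500;
                int inBatch = 0;
                for (int value = 0; value < 2000; value++) {
                    ps.setInt(1, value);
                    ps.addBatch();
                    if (++inBatch == batchSize) {
                        ps.executeBatch();
                        inBatch = 0;
                    }
                }
                if (inBatch > 0) {
                    ps.executeBatch();
                }
            }
        }
    }
}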
Actions
The available actions are described below. Some actions apply to the list, while others are specific to selected connections.
List Actions
The following actions are available on the connections list. If an action is disabled, a tooltip provides the exact reason for the deactivation. The required permissions are described in detail for each action.
-
Bulk Create Permission
-
Bulk Delete
-
Bulk Export
-
Create
-
List History
Creates a new permission on the selected objects. The result list shows whether the permission could be granted on the respective object. Only permissions that exist on the underlying object can be granted.
A permission in the result list can have one of three states:
- CREATED
-
The permission was successfully granted on the object.
- MERGED
-
The granted permission already exists on the object and was merged with the new permission.
- SKIPPED
-
The permission could not be granted because of a missing administration permission on the object.
The following permissions are required on the list:
-
ADMINISTRATION
-
READ
Delete the selected objects.
The following options are available:
- Cascade
-
Recursively delete dependent objects.
NOTE: When using cascade, dependent objects are deleted first, themselves with cascade enabled. A cascade deletion is therefore a recursive operation that deeply searches for dependent objects and deletes them first. Only the first object requires a confirmation; dependent objects are deleted without confirmation, but only if the user has the DELETE permission. This feature is only available in development mode (more information can be found in the chapter User Settings) and should be used with caution.
An object in the result list can have one of two states:
- DELETED
-
The object was successfully deleted.
- NOT_DELETED
-
The object could not be deleted. This may be because the executing user does not have the DELETE permission on the object, or because the object is still referenced by other objects. The error message provides the detailed reason. If the object is still in use, the referencing objects are also displayed.
The following permissions are required on the list:
-
DELETE
-
READ
Exports the selected objects.
- YAML
-
Generates a YAML file containing all the object’s settings. The user has the option to download the export file, or to paste the content in the import dialog. The YAML export is particularly suitable for importing the exported objects again via the XDM UI.
- ZIP
-
This export writes several individual YAML files. Each YAML file is stored in a directory according to its type. For example, when exporting a native table backup task template named 'A backup template', a YAML file 'A backup template.yaml' is created inside the directory /TaskTemplate/native-table-backup-task-template/ of the ZIP file. This kind of export is suitable for usage in Git repositories together with XDM’s configuration as code feature.
Related and dependent objects can optionally be included in the export. The export dialog has the following options:
- Include dependent objects
-
Dependent objects, such as rules and tasks, belong only to the exported object.
- Include permissions
-
The permissions of each exported object, if the object supports permissions. Some objects, such as rules, do not have permissions.
- Include referenced objects
-
Referenced objects, such as connections and environments, exist on their own and are used by the exported object.
- Include objects that depend on referenced objects
-
Also includes the dependent objects of the referenced objects, for example the rules of a modification set or the rules in an application model version.
NOTE: Objects on which the user does not have READ permission are not exported. This includes dependent and referenced objects. However, the reference to an object will be exported. For example, a connection object would refer to the credential, even if the user does not have READ permission on the credential. The definition of the credential object itself will not be part of the export file. This can lead to issues during the import, because the connection cannot be created without an existing credential.
The following permissions are required on the list:
-
READ
Creates a new object in the current list. Depending on the object type, either a popup dialog is shown for the most important settings, or the complete object is shown in edit mode. The dialog provides the option to create the object and remain in the current list, or to switch to the newly created object in edit mode to perform further changes.
The following permissions are required on the list:
-
CREATE
The history list tracks all modifications made to objects within it. A new record is added each time an object is created, edited, or deleted. A record indicates who made the change, which object was affected, and when the change was made.
For more information about the concept of the history refer to the history concepts.
The following permissions are required on the list:
-
READ
Object Actions
The following actions are available on specific connections. In order to execute the action, the user must possess the necessary permissions for the object. The permissions required for each action are described individually. If the user does not have these permissions, the action will be disabled and the tooltip will provide the exact reason for the deactivation.
-
Apply SQL
-
Check
-
Delete
-
Duplicate
-
Edit
-
Event List
-
Export
-
Object History
-
Permission Check
-
Usage
-
Uses
The apply SQL action allows the user to execute DDL or DML statements against the selected connection.
The action can be used either to modify the contents of existing tables with INSERT, UPDATE, or DELETE statements, or to create and drop objects with CREATE or DROP statements.
The action does not support SELECT statements, because they return a result set. To execute a SELECT statement, refer to the Browse data action.
Required permissions
By default, the Apply SQL action is only present for the user that created the connection object. Other users that want to execute SQL statements need the following privileges on the connection:
-
READ
-
APPLY_SQL
Keep in mind that these statements are executed as the user specified by the credential of the connection.
If that user is allowed to execute DELETE or even DROP statements against the database, data can be lost by executing an incorrect statement. The APPLY_SQL permission should therefore only be granted to users who are authorized to modify and maintain the database.
Invocation parameter
- Input File
-
The SQL statements can be provided by an existing XDM file object. The file must be of type SQL or TEXT and contain SQL statements delimited by the specified separator. Besides an XDM file object, a local file can also be used; it must be stored on the client’s local computer and likewise contain SQL statements delimited by the specified separator.
- Separator
-
Controls the delimiter that indicates the end of an SQL statement. By default, a semicolon is used to indicate the end of an SQL statement.
- Continue On Error
-
Flag to indicate that SQL errors should be logged but should not abort the execution. The execution proceeds even if an SQL error is encountered. By default, the execution of SQL statements terminates after the first error. The behaviour of this option and the separator is sketched after this parameter list.
- Ignore Failed Drops
-
Flag to indicate that a failed DROP statement can be ignored.
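The following Java sketch is a hypothetical illustration of the separator and Continue On Error semantics described above; it is not XDM's implementation, and the method and variable names are assumptions chosen for the example.

import java.sql.*;

// Hypothetical illustration (not XDM's implementation): the script is split on
// the configured separator and each statement is applied; with continueOnError,
// failures are reported but do not stop the run.
public class ApplySqlSketch {
    static void applyScript(Connection con, String script, String separator,
                            boolean continueOnError) throws SQLException {
        for (String part : script.split(java.util.regex.Pattern.quote(separator))) {
            String sql = part.trim();
            if (sql.isEmpty()) {
                continue;
            }
            try (Statement st = con.createStatement()) {
                st.execute(sql); // DDL or DML; SELECT statements are not supported here
            } catch (SQLException e) {
                if (!continueOnError) {
                    throw e; // default behaviour: stop at the first error
                }
                System.err.println("Statement failed, continuing: " + e.getMessage());
            }
        }
    }
}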
The following permissions are required:
-
APPLY_SQL
-
READ
This action validates the object and its dependencies, reporting configuration errors that could cause issues during task or workflow execution. The validation will cascade through the child objects of the checked objects and objects referenced by them.
For instance, if an installed application of an environment is checked, the check will process the application model, the specified version, the connection, modification sets, and involved modification methods. If an object has rules, all active rules will be checked. The modeling connection and version, including their modification sets and methods, will also be checked. Deactivated objects will not be included in recursive checks, but can be checked individually if the check is executed on the object itself.
Checks often require additional information from the context of the objects being checked, such as necessary connections or custom parameter values. The check gathers information from the objects being checked and uses it to perform checks on child objects. Any required additional information must be provided before the check begins; the check prompts the user to provide this missing information.
- Database object checks
-
For all rules that reference database objects such as tables or columns, the check verifies that those objects exist in the database system. If a connection can be inferred from the context, that connection is used. If no connection is available in the context, it must be specified before the check is executed.
- Connection checks
-
For objects that configure access to external systems, such as connections or storage locations, the configuration check verifies that access can be established using the given credentials. Furthermore, additional operations on database connections are performed to check whether the credential user has the necessary authorization to access the relevant database objects. In particular, the credential user’s permission to read source tables and write to target tables is verified. Similarly, for storage locations the check verifies that the credential user has permission to write to the working directory. A minimal sketch of such a connectivity probe is shown after this list.
- Code checks
-
For all entities containing code segments, such as modification methods or condition scripts, the syntax of the code is checked. However, this does not verify whether all necessary variables will be available at run time.
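As an illustration of the connection check described above, the probe below is a minimal Java sketch under assumed names; XDM's actual checks are more extensive (for example, verifying read and write authorization on the involved tables).

import java.sql.*;

// Minimal sketch of a connectivity probe (assumption, not XDM's actual check):
// open a connection with the credential and run a trivial query against the
// Db2 dummy table to confirm basic SELECT authorization.
public class ConnectionCheckSketch {
    static boolean canConnect(String url, String user, String password) {
        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1 FROM SYSIBM.SYSDUMMY1")) {
            return rs.next();
        } catch (SQLException e) {
            System.err.println("Connection check failed: " + e.getMessage());
            return false;
        }
    }
}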
The following permissions are required:
-
READ
Delete the object. If the object is still used by another entity, an error message is displayed, and the object is not deleted. The delete operation must be confirmed in a separate popup.
The following options are available:
- Cascade
-
Recursively delete dependent objects.
NOTE: When using cascade, dependent objects are deleted first, themselves with cascade enabled. A cascade deletion is therefore a recursive operation that deeply searches for dependent objects and deletes them first. Only the first object requires a confirmation; dependent objects are deleted without confirmation, but only if the user has the DELETE permission. This feature is only available in development mode (more information can be found in the chapter User Settings) and should be used with caution.
The following permissions are required:
-
DELETE
-
READ
Creates an exact copy of the current object with a different display name in the same list. Users can decide whether they want to copy child objects such as rules, permissions, or tasks. Only complete classes of objects can be selected, not individual child objects. Copied child objects preserve their display names. The default is to copy all child objects.
The following permissions are required:
-
CREATE
-
READ
Opens the current entity in edit mode.
The following permissions are required:
-
READ
-
WRITE
This list shows all registered events for the object. It includes events that are specific to the object or to its type.
The following permissions are required:
-
READ
This action allows exporting XDM objects in different formats so that they can be imported into another environment, either through the import function or via configuration as code (CasC).
Refer to configuration of export for more information.
Related and dependent objects can optionally be included in the export. The export dialog has the following options:
- Include dependent objects
-
Dependent objects, such as rules and tasks, belong only to the exported object.
- Include permissions
-
The permissions of each exported object, if the object supports permissions. Some objects, such as rules, do not have permissions.
- Include referenced objects
-
Referenced objects, such as connections and environments, exist on their own and are used by the exported object.
- Include objects that depend on referenced objects
-
Also includes the dependent objects of the referenced objects, for example the rules of a modification set or the rules in an application model version.
- Include implicit created objects
-
Implicitly created objects are tasks or workflows that were automatically created for execution. These objects are not exported by default, but can be included by setting this flag. When exporting implicit objects, make sure that the Include dependent objects flag is also enabled.
NOTE: Objects on which the user does not have READ permission are not exported. This includes dependent and referenced objects. However, the reference to an object will be exported. For example, a connection object would refer to the credential, even if the user does not have READ permission on the credential. The definition of the credential object itself will not be part of the export file. This can lead to issues during the import, because the connection cannot be created without an existing credential.
The following permissions are required:
-
READ
The history displays all changes made to the respective XDM object, including any changes made to its rules.
Each change record includes information about the operation performed (e.g. CREATE, UPDATE, DELETE), the timestamp, and the user responsible for the change.
For more information about the concept of the history refer to the history concepts.
The following permissions are required:
-
READ
The check verifies that the current user has the authorization to access the object. The check can also be performed for a specific user or role, if needed. By default, the check is performed using the current user’s credentials. It is then applied to child and referenced objects.
Additional permission checks are applied when these can be inferred from the context in which the check was started. For example, if the check is performed on a table copy task, the referenced source and target connections are checked to determine whether the given identity has source or target usage permission respectively.
The following permissions are required:
-
READ
The Usage List shows all objects that refer to the current object. It provides an overview of the relationships and makes it easy to track these relationships.
The following permissions are required:
-
READ
The Uses List shows all objects that the current object uses. It provides an overview of the relationships and makes it easy to track these relationships.
The following permissions are required:
-
READ
License Options
This object is available if the following license option is enabled:
-
DATABASE_TYPE:DB2_ZOS
Appendix
Using SYSPROC.DSNUTILU/V in Db2 z/OS
When using Db2 z/OS in native table copy or native table icebox tasks, XDM uses the Db2 stored procedure SYSPROC.DSNUTILU/V.
This section explains how SYSPROC.DSNUTILU/V must be configured for XDM.
The work datasets used by the LOAD utility should be configured using additional TEMPLATE statements, which can be defined via the Load Template Skeleton property; detailed information can be found there.
Alternatively, the procedure of the WLM environment can be customized by adding the required DD cards, as described in the following section.
Customizing SYSPROC.DSNUTILU/V invocations by adding required DD cards
When loading data during the execution of a task, XDM calls the Db2 stored procedure SYSPROC.DSNUTILV or SYSPROC.DSNUTILU via JDBC to execute a LOAD statement.
You must make sure SYSPROC.DSNUTILU/V is operational in your Db2 subsystem.
Refer to the IBM Db2 for z/OS SQL Reference, Chapter Procedures that are supplied with Db2, section DSNUTILU/V stored procedure, for information on how to configure SYSPROC.DSNUTILU and the associated WLM environment correctly.
To check which WLM environment is associated with SYSPROC.DSNUTILU/V, execute the following query in your Db2 environment:
For Db2 z/OS 12 or higher:
SELECT WLM_ENVIRONMENT FROM SYSIBM.SYSROUTINES
WHERE SCHEMA = 'SYSPROC' AND NAME = 'DSNUTILV'
For Db2 z/OS 11 or lower:
SELECT WLM_ENVIRONMENT FROM SYSIBM.SYSROUTINES WHERE SCHEMA = 'SYSPROC' AND NAME = 'DSNUTILU'
If SYSPROC.DSNUTILU/V is not operational, XDM will be unable to load data to the target.
In this case use compatibility table copy or compatibility table icebox tasks.
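For orientation, the following Java sketch shows how a utility statement can be submitted through SYSPROC.DSNUTILU over JDBC. The parameter order (utility id, restart value, utility statement, return code) follows the IBM documentation; the connection data and the LOAD statement itself are placeholders, and XDM's actual invocation may differ.

import java.sql.*;

// Hedged sketch: calling SYSPROC.DSNUTILU via JDBC (placeholder values throughout).
public class DsnutiluSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:db2://zoshost:446/DB2LOC";
        String utilityStatement =
            "TEMPLATE TLOAD DSN('HLQ.XDM.LOAD.DATA') " +
            "LOAD DATA INDDN TLOAD REPLACE INTO TABLE MYSCHEMA.MYTABLE";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             CallableStatement cs = con.prepareCall("{CALL SYSPROC.DSNUTILU(?, ?, ?, ?)}")) {
            cs.setString(1, "XDMLOAD");            // utility id
            cs.setString(2, "NO");                 // restart value: start a new utility
            cs.setString(3, utilityStatement);     // utility control statement(s)
            cs.registerOutParameter(4, Types.INTEGER);
            boolean hasResultSet = cs.execute();
            if (hasResultSet) {
                try (ResultSet rs = cs.getResultSet()) {
                    while (rs.next()) {
                        // each row contains a line of the utility output; consume it
                        // before reading the output parameter
                    }
                }
            }
            System.out.println("DSNUTILU return code: " + cs.getInt(4));
        }
    }
}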
Supported DDL Objects
The following table lists the supported object types for DDL generation:
| Object Type | Description / Features |
|---|---|
| Alias | Creation of alias objects (CREATE ALIAS). |
| Database | Creation of databases (CREATE DATABASE). |
| Constraint | Supports CHECK, PRIMARY KEY, UNIQUE, FOREIGN KEY. Added via ALTER TABLE statements. |
| Drop | Drop statements for Tablespace, Index, Table, View, Sequence. Uses the appropriate DROP statement for each object type. |
| Grant | Grants privileges on tables, views, sequences. Supported privileges: ALTER, INDEX, TRIGGER, DELETE, INSERT, REFERENCES, SELECT, UPDATE. |
| Index | Creation of indexes with options: UNIQUE, partitioning, additional columns, EXCLUDE NULL KEYS, partitioned indexes. |
| Sequence | Creation of sequences with parameters: data type, start value, increment, min/max, CYCLE, CACHE, ORDER. Optional comment support. |
| Table | Creation of tables and user-defined types (DISTINCT TYPE). Supports temporal tables (PERIOD BUSINESS_TIME/SYSTEM_TIME), generated/hidden columns. |
| Tablespace | Creation of tablespaces, including partitioning, size, StorageGroup, compression, cache, etc. |
| View | Creation of views with arbitrary SQL text. Optional comment support. |