DataStage Term | Description |
DataStage Administrator | Client program used to manage DataStage projects. |
DataStage server | Server engine component to which the DataStage client programs connect. |
DataStage Director | Client program used to run and monitor DataStage jobs. |
DataStage Designer | Graphical design tool that developers use to design and develop DataStage jobs. |
DataStage Manager | Client program used to view and manage the contents of the DataStage repository. Refer to the "DataStage Manager Guide." |
Stage | Component that represents a data source or a processing step in a DataStage job. |
Source | Refers to the origin from which data is extracted, for example STEPS. |
Target | Refers to the destination of the data, for example the file output by DataStage to be loaded into AML. |
Category | Category names used to categorize the jobs in DataStage. |
Container | Reusable grouping of stages that is called as a common process. |
Job | Program that defines how to extract, transform, and integrate data and load it into the target database. |
Job templates | Templates that serve as models for jobs that perform similar processing. |
Job parameters | Variables that are included in the job design (for example, a file name or password). |
Job sequence | Control job that starts and runs other jobs. |
Scratch disk | Disk space used to store data such as virtual records. |
Table definition | Definition that describes the required data, including information about the associated tables and columns. Also referred to as metadata. |
Partitioning | DataStage mechanism that splits data so that large volumes can be processed at high speed. |
Parallel engine | Engine that controls DataStage jobs running across multiple nodes. |
Parallel jobs | DataStage job type that supports parallel processing. |
Project | Collection of the jobs and components required for development and execution, and the entry point into DataStage from a client. Each DataStage project must be licensed. |
Metadata | Data about data. For example, a table definition that describes the columns that make up the data. |
Link | Connects the stages of a job to form a data flow or a reference lookup. |
Routine | Commonly used functions. |
Column definition | Defines the columns to be included in a data table. Contains the name and data type of each column. |
Environmental parameters | Variables that are included in the job design, for example a file name or password. |
DB2 stage | Stage that reads from and writes to a DB2 database. |
FileSet stage | Stage that stores data in a collection of files. |
Lookup Stage | Stage that performs lookup processing against reference data such as a DataStage file, text file, or table. |
LookupFileSet Stage | Stage that stores data as a lookup table. |
Sequential Stage | Stage used to manipulate text files. |
Custom Stage | Stage that implements, in the C language, processing that cannot be implemented with the stages provided as standard in DataStage. |
Copy Stage | Stage that copies a data set. |
Stage Generator | Stage that generates a dummy data set. |
DataSet stage | Stage that handles data files used by the parallel engine. |
Funnel Stage | Stage that copies data from multiple data sets into a single data set. |
Filter Stage | Stage that extracts records that meet specified conditions from the input data set. |
Merge Stage | Stage that joins records from more than one input. |
LinkCorrector Stage | Stage that collects data that was previously split. |
RemoveDuplicate Stage | Stage that removes duplicate records from a data set. |
ChangeCapture Stage | Stage that compares two data sets and captures the differences between their records. |
RowImport Stage | Stage that imports columns from a string or binary column. |
RowExport Stage | Stage that exports columns of other types to a string or binary column. |
Transformer Stage | Stage that edits columns and performs type conversions. |
Modify Stage | Stage that converts columns to a specified data type and converts NULL values to a specified value. |
XML input stage | Stage that reads XML files and extracts the required elements from the XML data. |
Sort Stage | Stage that sorts data in ascending or descending order. |
Join Stage | Stage that joins records from more than one input. |
RowGenerator Stage | Stage that adds rows to an existing data set. |
ColumnGenerator Stage | Stage that adds columns to an existing data set. |
Aggregator stage | Stage that aggregates data. |
Pivot Stage | Stage that converts multiple columns of a record into multiple rows. |
Peek Stage | Stage that discards records. |
Stream link | Link that represents the flow of data. |
Reference Links | Input link that represents reference data. |
Reject link | Link that outputs data that does not meet the specified criteria. |
Integer type | Data type that represents integer values. |
Decimal type | Data type that represents numbers containing a decimal point. |
NULL value | Special value indicating an unknown value. Not the same as 0, a blank, or an empty string. |
DataSet | Collection of data. |
SQLStatement | Statement used to manipulate the data in a table. |
TWS | Abbreviation of Tivoli Workload Scheduler, a product used to create job nets. |
Hash | One of the data-partitioning methods provided by the DataStage partitioning function. Partitioning is performed using a hash value. (A minimal sketch of the Hash, Modulus, and Same methods appears after this table.) |
Modulus | One of the data-partitioning methods provided by the DataStage partitioning function. Partitioning is performed by dividing the specified value by the number of partitions and using the remainder. |
Same | One of the data-partitioning methods provided by the DataStage partitioning function. The partitioning output by the previous stage is used as the input partitioning without re-partitioning. |
Job Properties | Property sheet used to configure settings for a job. |
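
The Hash, Modulus, and Same partitioning methods described above can be pictured with a short sketch. The following Python snippet is illustrative only and is not DataStage code; the record layout, the key column name, and the partition count are assumptions made for the example.

```python
# Illustrative sketch only -- not DataStage code. The key column name
# ("customer_id") and the partition count are assumptions for this example.
from zlib import crc32

NUM_PARTITIONS = 4  # assumed number of parallel partitions (nodes)

records = [
    {"customer_id": 101, "amount": 250},
    {"customer_id": 102, "amount": 400},
    {"customer_id": 205, "amount": 125},
    {"customer_id": 310, "amount": 980},
]

def hash_partition(record, key="customer_id"):
    # Hash: the partition number is derived from a hash value of the key column.
    return crc32(str(record[key]).encode()) % NUM_PARTITIONS

def modulus_partition(record, key="customer_id"):
    # Modulus: the numeric key value is divided by the number of partitions
    # and the remainder is used as the partition number.
    return record[key] % NUM_PARTITIONS

def same_partition(previous_partition):
    # Same: the partitioning produced by the previous stage is reused unchanged.
    return previous_partition

for rec in records:
    print(rec["customer_id"],
          "hash ->", hash_partition(rec),
          "modulus ->", modulus_partition(rec))
```

The sketch only shows how individual records map to partition numbers; in DataStage the partitioning method is chosen on a stage's partitioning settings rather than written as code.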