
Kafka Support

We are using vendor supplied version of Replicate, currently version 4.0.9.132.
I want to play around with Apache Kafka integration, but I can't find anywhere to configure this.
I am guessing that Kafka integration is not available in this version.
Is anyone able to tell me the minimum version we would have to upgrade to in order to integrate with Kafka?
Note: apologies if this is in the wrong topic; feel free to move it. I couldn't find a Kafka-specific topic to use.
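For reference, once you are on a Kafka-capable release, a quick way to confirm that messages are actually landing on a topic is a minimal consumer sketch like the one below (this uses the third-party kafka-python package; the broker address and topic name are placeholders, not values from this post):

from kafka import KafkaConsumer  # pip install kafka-python

# Placeholder broker and topic; substitute your own environment's values.
consumer = KafkaConsumer(
    "replicate.demo.topic",
    bootstrap_servers="broker-host:9092",
    auto_offset_reset="earliest",  # start from the beginning of the topic
    consumer_timeout_ms=10000,     # stop iterating after 10s with no messages
)

for message in consumer:
    # Print a short preview of each message received on the topic.
    print(message.topic, message.partition, message.offset, message.value[:200])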

Aborted without detailed error even though trace mode on

Hi,

I'm doing a PoC with Attunity CDC for Oracle but it has aborted without a detailed error.

When I go on "Collect Diagnostics", I have:

"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager BUILD TABLE LIST Enter","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager CONNECT Enter","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager CONNECT Exit","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager GET TABLES Enter","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager GET TABLES Exit","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager BUILD INDEXES Enter","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager BUILD INDEXES Exit","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Metadata Manager BUILD TABLE LIST Enter","engine","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T: ** Debug / Announce input parameters. **Component Type: Component Name: at: 'oracle_source_allocate(...)'","source","",""
"11/2/2017 12:06:05 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T: ** Debug / Announce input parameters. **Prepare at: 'oracle_source_prepare(...)'","source","",""
"11/2/2017 12:06:07 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Sorter CONSTRUCTOR Enter","sorter","",""
"11/2/2017 12:06:07 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Sorter CONSTRUCTOR Exit","sorter","",""
"11/2/2017 12:06:07 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T: ** Debug / Announce input parameters. ** at: 'oracle_source_set_stream(...)'","source","",""
"11/2/2017 12:06:07 PM","TRACE","xxxx","RUNNING","IDLE","ORACDC000T:Or acle CDC Capture start","source","",""
"11/2/2017 12:06:07 PM","TRACE","xxxx","RUNNING","INIT","ORACDC000T:Th e Sorter RUN Enter","sorter","",""
"11/2/2017 12:06:07 PM","TRACE","xxxx","RUNNING","IDLE","ORACDC000T:Or acle CDC Merger was allocated","source","",""

And in my xdbcdc_trace table I have this; even with trace mode on, I didn't get a detailed error:
timestamp type node status sub_status status_message source text_data binary_data
2017-11-02 12:05:24.4460000 ERROR xxxxxxx ORACDC205E:The Oracle CDC instance MichelOracleCDC was aborted. service NULL NULL
2017-11-02 12:05:32.5800000 ERROR xxxxxxx ORACDC205E:The Oracle CDC instance MichelOracleCDC was aborted. service NULL NULL
2017-11-02 12:05:38.6890000 ERROR xxxxxxx ORACDC205E:The Oracle CDC instance MichelOracleCDC was aborted. service NULL NULL
2017-11-02 12:05:46.8160000 ERROR xxxxxxx ORACDC205E:The Oracle CDC instance MichelOracleCDC was aborted. service NULL NULL
2017-11-02 12:05:54.9590000 ERROR xxxxxxx ORACDC205E:The Oracle CDC instance MichelOracleCDC was aborted. service NULL NULL
2017-11-02 12:06:03.0710000 ERROR xxxxxxx ORACDC205E:The Oracle CDC instance MichelOracleCDC was aborted. service NULL NULL
2017-11-02 12:06:11.2140000 ERROR xxxxxxx ORACDC205E:The Oracle CDC instance MichelOracleCDC was aborted. service NULL NULL
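For reference, a quick way to pull just the ERROR rows back out of xdbcdc_trace is a small pyodbc sketch like this one (a hedged example only: the CDC database name MichelOracleCDC comes from the rows above, while the server name, trusted-connection details, and the cdc schema placement of the trace table are assumptions to verify against your install):

import pyodbc

# Assumed connection details; adjust the server and authentication to your setup.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=localhost;DATABASE=MichelOracleCDC;Trusted_Connection=yes;"
)
cursor = conn.cursor()
# Fetch the most recent ERROR rows from the trace table shown above.
cursor.execute(
    "SELECT TOP 50 [timestamp], [type], status_message "
    "FROM cdc.xdbcdc_trace WHERE [type] = 'ERROR' "
    "ORDER BY [timestamp] DESC"
)
for row in cursor.fetchall():
    print(row.timestamp, row.type, row.status_message)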


best regards,
Michel

Replicate 5.5 via ODBC Horrible Throughput

Hello everyone!
I'm hoping that there are other shops out there using Replicate 5.5 with DB2z V11 as the source who can help us solve our throughput issues. Here's some background:

When we first started using Replicate (older version 5.3.3.45 MVS 32-bit) it was remarkably slow with horrible throughput ranging from a couple hundred records per second to maybe 3000 records per second. On million and billion row tables the bulk load process took forever to say the least. This older version of Replicate connected to DB2z natively using a daemon on the mainframe. To speed up the process, our Architect figured out that we could bind certain packages that replicate was using with DEGREE ANY (turning on parallelism) and our throughput jumped not only into the tens of thousands but over 100 thousand records per second at times.

Fast forward to earlier this year when we installed Replicate 5.5 and connected via ODBC. It's like we've started over again in the throughput department. On a "good" load we maybe get 2100 records per second. Sometimes we don't even get 1000 records per second! We did some research to see if we could turn parallelism on and discovered that the following packages are being used with this new version:

SYSLH200
SYSSH100
SYSSH200
SYSSTAT

We bound those packages with DEGREE ANY but this time our throughput has not increased. Here are additional details of our environment:

Replicate: V5.5.0.345, 363_r4db2
ODBC: 11.1.0.1527
Source: DB2z V11 non data sharing, z/OS V2.2
Target: Teradata 15.10


Does anyone know how and what we need to configure so that we can get back to throughput of at least tens of thousands of records per second? Any ideas are greatly appreciated.
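One ODBC-side sanity check that might be worth running is whether the special register CURRENT DEGREE is actually 'ANY' on connections coming in through ODBC; below is a minimal pyodbc sketch (the DSN name DB2Z and the credentials are placeholders):

import pyodbc

# Placeholder DSN and credentials for the DB2 for z/OS source.
conn = pyodbc.connect("DSN=DB2Z;UID=myuser;PWD=mypassword")
cursor = conn.cursor()
# CURRENT DEGREE controls query parallelism for dynamic SQL:
# 'ANY' allows parallelism, '1' disables it.
cursor.execute("SELECT CURRENT DEGREE FROM SYSIBM.SYSDUMMY1")
print("CURRENT DEGREE =", cursor.fetchone()[0])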

Thanks!
Candace

HR - TRAVEL MANAGEMENT -- New Data Types

If your organization uses the Travel Management portion of the HR module, then you may want to add some or all of the data types listed below to your Gold Client configuration. To view the details about creating these data types, view the series of screen captures that follow below.
This topic was updated in Oct 2014 with additional details, such as new tables and new Data Type linkages.
HR - QUICKTRIP TRAVEL
HR - TRAVEL MASTER DATA
HR - TRAVEL PLANS
HR - TRAVEL REQUEST
HR - TRIP DETAILS
LINK - TRIP RESULTS TO FI DOCS

Creating Data Types (defining table and field relations)
Data type: HR – TRIP DETAILS

[Screen capture: Capture19.PNG]

The tables below all use common fields PERNR-PERNR as seen in the following screen capture:
PTRV_ADMIN
PTRV_BTRGSUM
PTRV_CCC_RUNS
PTRV_KMSUM
PTRV_ME_CCC_RUN

[Screen capture: Capture20.PNG]
The tables below all use common fields PERNR-PERNR and REINR-REINR as seen in the following screen capture
PTRV_ARCHIVE
PTRV_HEAD
PTRV_NOT_CH_TR
PTRV_TRIPCHN_46C
PTRV_TRIP_DELETE

[Screen capture: Capture21.PNG]
The tables below all use common fields PERNR-PERNR, REINR-REINR, and PERIO-PERIO as seen in the following screen capture
FITV_HINZ_WERB_B
FITV_HINZ_WERB_S
PTRV_SADD
PTRV_SBACKLOG
PTRV_SCOS
PTRV_SHDR
PTRV_SREC
PTRV_TRIP_CHAIN

[Screen capture: Capture22.PNG]

The tables below all use common fields PERNR-PERNR, REINR-REINR, and HDVRS-HDVRS as seen in the following screen capture:
PTRV_ARCH_HEAD

[Screen capture: Capture23.PNG]
The tables below all use common fields PERNR-PERNR, REINR-REINR, PDVRS-PDVRS, and PERIO-PERIO as seen in the following screen capture
PTRV_ARCH_PERIO
PTRV_PERIO

[Screen capture: Capture24.PNG]

Data type: LINK - TRIP RESULTS TO FI DOCS

[Screen capture: Capture25.PNG]

Table PTRV_ROT_AWKEY uses the common fields below:

[Screen capture: Capture26.PNG]

Data type: HR - TRAVEL MASTER DATA

[Screen capture: Capture27.PNG]

Common field relations for table PA0017-PA0017 are:

[Screen capture: Capture28.PNG]

All other table relations use only PERNR as seen in this screen capture:

[Screen capture: Capture29.PNG]

Data type: HR - TRAVEL REQUEST

[Screen capture: Capture30.PNG]

Common field relations for table FTPT_REQ_HEAD are:

[Screen capture: Capture31.PNG]

All other table relations use only PERNR and REINR as seen in this screen capture:

[Screen capture: Capture32.PNG]

Data type: HR - TRAVEL PLANS

[Screen capture: Capture33.PNG]

The tables below all use common fields PERNR-PERNR and REINR-REINR as seen in the following screen capture:
FTPT_TM_MEMO
FTPT_VARIANT

[Screen capture: Capture34.PNG]

The tables below all use common fields PERNR-PERNR, PLANNR-PLANNR, and REINR-REINR as seen in the following screen capture:
FTPT_PLAN
FTPT_PLANHISTORY

[Screen capture: Capture35.PNG]

The tables below all use common fields PERNR-PERNR, PLANNR-PLANNR, REINR-REINR, VARIANT-VARIANT, and VARIANTVRS-VARIANTVRS as seen in the following screen capture. Note: some fields will use VARIANTE-VARIANT and that is okay.
FTPT_CAR
FTPT_CAR_PREF
FTPT_FARE_COMP
FTPT_FARE_NOTE
FTPT_FLIGHT
FTPT_FLIGHT_FARE
FTPT_FLIGHT_FCMP
FTPT_FLIGHT_LEG
FTPT_FLIGHT_PREF
FTPT_FLIGHT_TST
FTPT_FLIGHT_TSTK
FTPT_HOTEL
FTPT_HOTEL_PREF
FTPT_ITEM
FTPT_OTHER
FTPT_PNR
FTPT_PNR_ADDRESS
FTPT_PNR_NAME
FTPT_PNR_OSI
FTPT_PNR_PHONE
FTPT_PNR_REMARKS
FTPT_PNR_SSR
FTPT_SERVICE
FTPT_SYNC_DATA
FTPT_TRAIN
FTPT_TRAIN_PREF
FTPT_VAR_INFO

[Screen capture: Capture36.PNG]

The common field relation for all tables is QTNR, as seen in this screen capture:

[Screen capture: Capture37.PNG]

Creating Data Type Links

From HR - TRIP DETAILS, add any of the data type links in the screen capture below that your organization may require

[Screen capture: Capture38.PNG]

Linkage details can be seen in the screen capture below for linking to data type HR – PAYROLL CLUSTER (PCL1). Notice that a concatenation of fields PERNR, REINR, PERIO, and PDVRS is required for passing a value to PCL1-SRTFD (when the RELID = 'TE'; this is not necessary for RELID = 'TC').

[Screen capture: Capture39.PNG]
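Purely as an illustration of that concatenation, here is a small sketch (the zero-padded field widths are assumptions; verify the actual lengths of PERNR, REINR, PERIO, and PDVRS in SE11 for your release):

# Hypothetical illustration of building a PCL1-SRTFD value by concatenating
# PERNR + REINR + PERIO + PDVRS; the widths below are assumptions, not SAP facts.
def build_srtfd(pernr: str, reinr: str, perio: str, pdvrs: str) -> str:
    return (pernr.zfill(8)     # personnel number (assumed width 8)
            + reinr.zfill(10)  # trip number (assumed width 10)
            + perio.zfill(3)   # trip period (assumed width 3)
            + pdvrs.zfill(2))  # period version (assumed width 2)

print(build_srtfd("12345", "67", "1", "1"))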

Linkage details can be seen in the screen capture below for linking to data type LINK – TRIP RESULTS TO FI DOCS

[Screen capture: Capture40.PNG]

Linkage details can be seen in the screen capture below for linking to data type FI - FINANCE DOCUMENTS

[Screen capture: Capture41.PNG]

Linkage details can be seen in the screen capture below for linking to data type LINK TO WORKFLOW. Notice that a concatenation of fields PERNR and REINR is required for passing a value to SWW_WI2OBJ-INSTID.

[Screen capture: Capture42.PNG]

In addition to the linkages above, the various HR data types can be linked together so that they are part of the same data export. Our standard config has them linked using this flow: HR - TRAVEL PLANS > HR - TRAVEL REQUEST > HR - TRIP DETAILS; however this flow could be changed to match your organization's specific requirements. The linkage details for this specific flow are visible in the screen captures below.

[Screen capture: Capture43.PNG]

Create data type for Packing Instructions

If your organization generates Packing Instructions data in your system and wants to use Gold Client to copy this data, then the set of configuration changes below is relevant. Follow these instructions to align your configuration with what we have in our golden version.

Step 1: In your primary Source system (typically Production), go to ZGOLD > Configuration > Framework > Data Echo
Step 2: Create a new data type named CA - PACKING INSTRUCTIONS and input these header details:

  • Datatype Kind: M (master data)
  • File ID: CAPACK
  • Data Snap Rec Limit: 1000

Step 3: Open the new data type and select the 'Tables' maintenance view. Add the three table relations in the screen capture, and be sure to add the common fields as well.

[Screen capture: Capture44.PNG]

Step 4: Select data type CA - PACKING INSTRUCTIONS and create a copy of it (use the 'Copy' function located on the toolbar). Call the new data type CA - SUBORD PACKING INSTRUCTIONS and assign to it the file ID of CA_SPI.
Step 5: Create a new data type named LINK - PACK INSTRUCTIONS TO TEXT and input these header details:

  • Datatype Kind: Z (other)
  • File ID: LPI2TX
  • Data Snap Rec Limit: 1000

Step 6: Open the new data type and select the 'Tables' maintenance view. Add the lone table relation in the screen capture, and be sure to add the common fields as well.

[Screen capture: Capture45.PNG]

Step 7: Select data type LINK - PACK INSTRUCTIONS TO TEXT and create a copy of it (use the 'Copy' function located on the toolbar). Call the new data type LINK - PI TO SUBORDINATE PI and assign to it the file ID of LPI2PI.

Now that the four new data types have been created, it is necessary to link them together and to link in other data types as well.
Step 8: Open data type CA - PACKING INSTRUCTIONS and select the 'Linkages' maintenance view. Add the three linkages in the screen capture, and be sure to add the linkage details as well.
Note: Some of these linkage details require the entry of an asterisk (*) in the 'Prv. Field' column, which indicates that concatenation is being used. Double-click on such a row to make the bottom sub-screen visible so that it is possible to input the concatenation details.

[Screen capture: Capture46.PNG]
[Screen capture: Capture47.PNG]

Step 9: Open data type LINK - PACK INSTRUCTIONS TO TEXT and select the 'Linkages' maintenance view. Add the link in the screen capture, and be sure to add the linkage details as well.

[Screen capture: Capture48.PNG]

Step 10: Open data type LINK - PI TO SUBORDINATE PI and select the 'Linkages' maintenance view. Add the link in the screen capture, and be sure to add the linkage details as well.

[Screen capture: Capture49.PNG]

Step 11: Open data type CA - SUBORD PACKING INSTRUCTIONS and select the 'Linkages' maintenance view. Add the link in the screen capture, and be sure to add the linkage details as well.

[Screen capture: Capture50.PNG]

Save changes and exit the framework.

Add data type for copying New GL data objects

If your organization is using the New General Ledger functionality, you might need to create a new data type to be able to copy the New GL data objects. To deploy these configuration changes into your own system, follow the steps below.

Note: this topic has been updated as of Dec 26, 2014 in regards to updates that have been deployed to our standard configuration.

The scope of this update is to remove tables FAGL_SPLINFO and FAGL_SPLINFO_VAL from data type FI - GL ITEMS (FAGLFLEX) and add them to their own data type (named FI - GL SPLIT OPEN ITEMS), and then to add this new data type as a link from FI - FINANCE DOCUMENTS. The purpose behind this change is to better optimize the data export process. The screen captures below now represent how our standard configuration has been updated.



Step 1: In your production system, go to ZGOLD > Configuration > Framework > Data Echo

Step 2: Create a new data type named FI - GL ITEMS (FAGLFLEX) and add the table relations shown in the screen captures below.

[Screen capture: Capture51.PNG]

Step 3: Add the common fields for each table relation using the details outlined below

FAGLCOFITRACE < FAGLFLEXA

[Screen capture: Capture53.PNG]

FAGLFLEXA < FAGLFLEXA, and FAGLFLEXP < FAGLFLEXA

[Screen capture: Capture54.PNG]

FAGL_BSBW_HISTRY < FAGLFLEXA

[Screen capture: Capture55.PNG]

Step 4: Create a new data type named FI - GL SPLIT OPEN ITEMS and add the table relations shown in the screen captures below.

[Screen capture: Capture56.PNG]

Step 5: Add the common fields for each table relation using the details outlined below

[Screen capture: Capture57.PNG]

Step 6: Locate data type FI – FINANCE DOCUMENTS and select the ‘Linkages’ maintenance view. Add links to data types FI – GL ITEMS (FAGLFLEX) and FI - GL SPLIT OPEN ITEMS as recipient data types. Use the details shown in the screen captures below to link these data types together. When these changes are complete, save your work and exit the Data Echo framework.

[Screen capture: Capture58.PNG]

Data type updates relevant to Variant Configuration

If your organization creates sales orders which use variant configuration, the change outlined in this forum topic may be relevant to you. This change should be of particular interest to any customers who have copied sales orders to a target client and then tried to change the data but end up encountering this error:
"Internal error in communication between configuration and sales doc. GET_CONFIG_MODE".
UPDATED March 29, 2016: Our team has recently worked with other customers who have Configuration data assigned to Production Orders, Purchase Requisitions, and Planned Orders. Having learned this, we have updated our standard configuration accordingly and have updated this posting to include those details here. Freely apply any or all of these config changes as relevant to your system.

SALES ORDER CONFIGURATION
1. In your source system – typically Production – go into the Data Echo Framework and locate data type LINK - SALES ORDER ITEMS
2. Select the data type to open the underlying maintenance views and select ‘Linkages’
3. Add a new link to recipient data type CA - INSTALLED BASE; use the linkage details that can be viewed in this screen capture
Note: This change may now leave you with an active link to both CA - INSTALLED BASE and SD - INSTALLED BASE; we suggest that you delete the link to SD - INSTALLED BASE. If you don't want to delete the link to this data type, you should at least set it as inactive, as it will otherwise create an unnecessary redundancy. Although CA - INSTALLED BASE and SD - INSTALLED BASE were similar at one point, we now use only CA - INSTALLED BASE within our configuration and have deleted SD - INSTALLED BASE from our configuration altogether.

[Screen capture: Capture59.PNG]

PRODUCTION ORDERS CONFIGURATION
1. In your source system – typically Production – go into the Data Echo Framework and create a new data type named LINK - PROD ORDR TO INSTALL BASE using the details in this screen capture

[Screen capture: Capture60.PNG]
2. Select this new data type to open the underlying maintenance views and select ‘Linkages’
3. Add a new link to recipient data type CA - INSTALLED BASE using the linkage details that can be viewed in this screen capture

[Screen capture: Capture61.PNG]

4. Select data type PP - PRODUCTION ORDERS to open the underlying maintenance views and select ‘Linkages’
5. Add a new link to recipient data type LINK - PROD ORDR TO INSTALL BASE using the linkage details that can be viewed in this screen capture

[Screen capture: Capture62.PNG]

PURCHASE REQUISITIONS CONFIGURATION
1. In your source system – typically Production – go into the Data Echo Framework and select data type MM - PURCHASING REQUISITIONS to open the underlying maintenance views and select ‘Linkages’
2. Add a new link to recipient data type CA - INSTALLED BASE using the linkage details that can be viewed in this screen capture

[Screen capture: Capture63.PNG]

3. You may choose to repeat this process for data type MM - PURCHASE REQS W/PURCH DOCS

PLANNED ORDERS CONFIGURATION
1. In your source system – typically Production – go into the Data Echo Framework and select data type MM - PURCHASING REQUISITIONS to open the underlying maintenance views and select ‘Linkages’
2. Add a new link to recipient data type CA - INSTALLED BASE using the linkage details that can be viewed in this screen capture

[Screen capture: Capture64.PNG]

Add Tables to SD - SALES DOCUMENTS

If your company runs the SD module, you may want to add this configuration to Gold Client so you can include the data in your exports. To verify whether you use it, go to your Production system (or a recent copy of Production) and execute transaction SE16 or SE16N for tables MSKA, MSKAH, MSSA, MSSAH, EBEW, EBEWH, VBSS, and VBSK to check whether there are any records. If data exists, update data type SD - SALES DOCUMENTS in the Data Echo framework to include each table that has entries in Production. See the attached screenshots for how the configuration should look.
Note: this forum topic updated October 10, 2014...
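If you would rather script the record-existence check than step through SE16N table by table, a hedged sketch using the pyrfc package and the standard RFC_READ_TABLE function module follows (the connection parameters are placeholders, and your system may restrict who can call RFC_READ_TABLE):

from pyrfc import Connection

# Placeholder connection parameters for the Production system.
conn = Connection(ashost="sap-host", sysnr="00", client="100",
                  user="myuser", passwd="mypassword")

for table in ["MSKA", "MSKAH", "MSSA", "MSSAH", "EBEW", "EBEWH", "VBSS", "VBSK"]:
    # Ask for at most one row; an empty DATA list means the table has no records.
    result = conn.call("RFC_READ_TABLE", QUERY_TABLE=table, ROWCOUNT=1)
    print(table, "has records" if result["DATA"] else "is empty")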

MSKA with VBAP as its source; use common fields VBELN and POSNR

Note: using only VBELN and POSNR should be sufficient as these fields are part of a secondary index delivered by SAP

[Screen capture: Capture65.PNG]

MSKAH with VBAP as its source; use common fields MATNR, POSNR, SOBKZ, VBELN and POSNR

[Screen capture: Capture66.PNG]

MSSA with VBAP as its source; use common fields MATNR, POSNR, SOBKZ, VBELN and POSNR

[Screen capture: Capture67.PNG]

MSSAH with VBAP as its source; use common fields MATNR, POSNR, SOBKZ, VBELN and POSNR

[Screen capture: Capture68.PNG]

EBEW with VBAP as its source; use common fields VBELN and POSNR
Note: using only VBELN and POSNR should be sufficient as these fields are part of a secondary index delivered by SAP

[Screen capture: Capture69.PNG]

EBEWH with VBAP as its source; use common fields MATNR, BWKEY (from WERKS), POSNR, SOBKZ, VBELN and POSNR
Note: the common field relation BWKEY-WERKS will need to be added manually as the 'Common Fields' utility will not propose it

[Screen capture: Capture70.PNG]

VBSS with VBAK as its source; use common field VBELN

[Screen capture: Capture71.PNG]

VBSK with VBSS as its source; use common field SAMMG

[Screen capture: Capture72.PNG]

If you have any variations of data types SD - SALES DOCUMENTS in your Gold Client configuration such as SD - SALES DOCS TO RETURNS, SD - SALES DOCS TO CREDIT MEMO, SD - SALES DOCS TO DEBIT MEMO, SD - SALES DOCUMENTS FROM PO, etc., you should add these same table relations to those data types for consistency.





Addition of PM master data tables

If your organization uses the Plant Maintenance module, some or all of the configuration enhancements detailed below may be relevant if you wish to copy this data from one system to another. Those tables included in these enhancements are listed here:
Table       Description
IHPA        Plant Maintenance: Partners
IRLOT       Reference Functional Location (Table)
IRLOTX      Reference Functional Location: Short Texts
IHGNS       Permit Segment for Plant Maintenance
IHSG        Object-Related Permits in Plant Maintenance
T357G       Permits
T357G_T     Text for Table 357GT
IMEL        Entry List for Measurement Documents
IMEP        Items in the Entry List
IMPH        Measurement and Counter Reading Transmission History
IMPTT       Measuring Point (Table)
IMRG        Measurement Document

Enhancements to the Client Construct framework should be made as follows:
Step 1: Add tables IHPA, IRLOT, and IRLOTX to data type FUNCTIONAL LOCATIONS

[Screen capture: Capture73.PNG]

NOTE: Be aware that table IHPA has been included within the Client Construct framework for some time now as part of master data type PROJECT SYSTEMS; however, the data in this table aligns more closely with the data in master data type FUNCTIONAL LOCATIONS. To avoid copying the data in table IHPA redundantly, we recommend that you either inactivate it within the PROJECT SYSTEMS data type (and consider adding the reason why in the ‘Exclusion Reason’ field) or delete it outright.

Step 2: Create a new master data type named PERMITS and add tables IHSG, IHGNS, T357G, and T357G_T.

  • Tables that begin with the letter ‘T’ are typically SAP configuration tables; however SAP classifies these specific tables as application data.


[Screen capture: Capture74.PNG]

Step 3: Create a new master data type named MEASUREMENT READINGS & COUNTERS and add tables IMEL, IMEP, IMPH, and IMPTT.

[Screen capture: Capture75.PNG]

If you want to be able to copy subsets of these various Plant Maintenance data objects then enhancements to the Data Echo framework should be made as per the instructions that follow.

Step 1: Add table IHPA to the following data types: PM - FUNCTIONAL LOCATIONS, CA – EQUIPMENT DATA, and CA – EQUIP WITH MEASUREMENT DATA, and be sure to add the common field using the details displayed in the screen captures that follow

[Screen capture: Capture76.PNG]
[Screen capture: Capture77.PNG]

Step 2: Add tables IRLOT, IRLOTX, JEST, and JSTO to new data type PM – REF FUNCTIONAL LOCATIONS using the details displayed in the two screen captures that follow

[Screen capture: Capture78.PNG]

Step 3: Add tables IHSG, IHGNS, T357G, and T357G_T to new data type PM – PERMITS using the details displayed in the screen captures that follow

[Screen capture: Capture79.PNG]

Step 4: Add tables IMPTT, IMPH, CABN, and CABNT to new data type PM – MEASURING POINTS using the details displayed in the four screen captures that follow.

[Screen capture: Capture80.PNG]

Step 5: Add table IMRG to new data type PM - MEASURING DOCUMENTS using the details displayed in the screen capture that follows.

[Screen capture: Capture81.PNG]

Step 6: Add tables IMEL and IMEP to new data type PM – MEASUREMENT DOC ENTRY LISTS using the details displayed in the two screen captures that follow.

[Screen capture: Capture82.PNG]

Step 7: Save changes one final time and exit the framework

Add Revenue Recognition (VBREV*) tables to the Data Echo framework

If your organization generates data in the Revenue Recognition tables (see list below) and needs to copy this data using Gold Client, you may want to check your existing configuration against our standard as we have enhanced this data type over time. If you wish to align this data type in your config with our current standard, follow the steps below. If you're not sure whether this data exists in your system, use t-code SE16 or SE16N to query these tables.
VBREVAC
VBREVC
VBREVE
VBREVK
VBREVR

Step 1: In your primary Source client (typically Production), go to ZGOLD > Configuration > Framework > Data Echo
Step 2: Create a new data type using the details that follow:
Data Type: FI - REVENUE RECOGNITION
Datatype Kind: T (transactional)
File ID: FI_RR_
Data Snap Rec Limit: 1000
Step 3: Open the data type, select the 'Tables' maintenance view, and add the five table relations below; be sure to add the common fields for each relation as seen in the following screen captures
VBREVAC - VBREVE
VBREVC - VBREVE
VBREVE - VBREVE
VBREVK - VBREVE
VBREVR - VBREVE

[Screen capture: Capture83.PNG]

Step 4: Open the data type and select the 'Linkages' maintenance view to add the data type links in the screen capture below, and be sure to add the linkage details as well

[Screen capture: Capture84.PNG]

Note: your organization's CO-PA data type will not be named CO - PA (CE1S001) but instead will use the CE1* table name that is specific to your company's operating concern.

[Screen capture: Capture85.PNG]

Note: This linkage detail requires the entry of an asterisk (*) in the 'Prv. Field' column, which indicates that concatenation is being used. Double-click on this row to make the bottom sub-screen visible so that it is possible to input the concatenation details.

[Screen capture: Capture86.PNG]
[Screen capture: Capture87.PNG]

Step 5: You can now add the FI - REVENUE RECOGNITION data type as a 'child' from multiple 'parent' data types. We suggest adding it from the list of data types below, but your organization can add the config wherever relevant.
SD - BILLING DOCUMENTS
SD - CONTRACTS
SD - RMA
SD - SALES DOCS TO CREDIT MEMO
SD - SALES DOCS TO DEBIT MEMO
SD - SALES DOCS TO RETURNS
SD - SALES DOCUMENTS
SD - SALES DOCUMENTS FROM PO
To add the link, open the respective data type and select the 'Linkages' maintenance view. Add a link to FI - REVENUE RECOGNITION, and be sure to add the linkage details as well.
Note: the linkage details to FI - REVENUE RECOGNITION are the same for all data types above which consists of VBREVE VBELN = VBELN as visible in the screen capture below

[Screen capture: Capture88.PNG]

Config enhancements for Asset Master financial records

If your organization uses data type CA - ASSET MASTER to copy asset records and you also want the related financial documents to be included, then the changes outlined here are potentially relevant. We have recently enhanced our standard configuration to copy FI-CO objects that are related to Asset transactions like acquisition, transfer, and retirement (however these enhancements do not involve depreciation postings). If you want to take advantage of some or all of these enhancements, follow the details listed below.
This posting was updated on December 16, 2014 with additional enhancements that were applied to our standard configuration for asset depreciation postings. Because of this additional content, this forum topic has been divided into two sections: the first for FI-CO objects related to depreciation, and the second for FI-CO objects related to other Asset transactions (except depreciation).

Section 1: FI-CO for Asset depreciation transactions
Check that your configuration for data type CA - ASSET MASTER has a link to LINK - DEPRECIATION TO FI; our standard has this link set as 'Inactive' but to include the related Financial documents, you will need to activate this link.

[Screen capture: Capture89.PNG]

The enhancements made to our standard configuration are specific to those data types added as links from LINK - DEPRECIATION TO FI; add whichever data type linkages are relevant to your organization. Important: many of the data type linkages detailed below contain an asterisk (*) in the Provider DT Field column; this means that concatenation is in use for that specific row. You must double-click this line and then add the entries that are displayed in the very last sub-screen. In most cases this consists of BUKRS (Company Code) and GJAHR (Fiscal Year) but be aware that there are exceptions.

CO - LINE ITEMS

[Screen capture: Capture90.PNG]


CO - PA (CE1S001)
Note: Your CO-PA table will not use CE1S001 but rather the table unique to your organization's Operating Concern.

[Screen capture: Capture91.PNG]

FI - CONSOLIDATIONS

[Screen capture: Capture92.PNG]

FI - FINANCE DOCUMENTS

[Screen capture: Capture93.PNG]

FI - GMIA
Note: our standard has this link set as 'inactive' but if you want Grants Mgmt data to be included, set it as active

[Screen capture: Capture94.PNG]

FI - MATERIAL LEDGER
Note: our standard has this link set as 'inactive' but if you want Material Ledger records to be included, set it as active

[Screen capture: Capture95.PNG]

FI - PCA (GLIDXA)

[Screen capture: Capture96.PNG]

FI - PROFIT CENTER DOCS

[Screen capture: Capture97.PNG]

FM - FUNDS

Note: our standard has this link set as 'inactive' but if you want Funds Mgmt data to be included, set it as active

[Screen capture: Capture98.PNG]

Section 2: FI-CO for Asset transactions (other than depreciation)

Check that your configuration for data type CA - ASSET MASTER has a link to LINK - ASSETS TO FI; our standard has this link set as 'Inactive' but to include the related Financial documents, you will need to activate this link.

[Screen capture: Capture99.PNG]

The enhancements made to our standard are in relation to data type LINK - ASSETS TO FI and the linkages from this data type; add whichever data type linkages are relevant to your organization.

CO - LINE ITEMS

[Screen capture: Capture100.PNG]

CO - PA (CE1S001)

Note: Your CO-PA table will not use CE1S001 but rather the table unique to your organization's Operating Concern. Also, our standard has this link set as 'inactive' but if you want CO-PA data to be included, set it as active.

[Screen capture: Capture101.PNG]

FI - CONSOLIDATIONS

Note: our standard has this link set as 'inactive' but if you want Consolidations data to be included, set it as active

[Screen capture: Capture102.PNG]

FI - GMIA

Note: our standard has this link set as 'inactive' but if you want Grants Mgmt data to be included, set it as active

[Screen capture: Capture103.PNG]
[Screen capture: Capture104.PNG]
[Screen capture: Capture105.PNG]

Enhancements for copying Work Orders and related data objects

If your organization creates Work Orders (Maintenance Orders), then some or perhaps all of the changes documented in this forum posting may be relevant.
The Gold Client standard configuration has contained two data types named PM - WORK ORDERS and PP - MAINTENANCE ORDERS for quite some time. These two data types share the primary object (the Work Order) and have a few other similarities, but each also has some unique content. Ideally, these two data types should collect the data relevant to both (no uniqueness, basically), and so our standard has now been enhanced by essentially merging them. Some additional enhancements were included during this time as well. If you are interested in updating your Gold Client configuration, please follow the changes documented below.
Note: the config changes here are quite numerous. If you have questions or require support, please submit a support ticket.

Section 1: Add new tables to QM - QUALITY NOTIFICATIONS
1.1: Add table relations IHPA-QMEL, JCDS-QMEL, QNOTIF_HEADER_DS-QMEL, and QNOTIF_TASK_DS-QMEL using the details in the screen captures below; be sure to include the common fields for each table relation

[Screen capture: Capture.PNG]
[Screen capture: Capture1.PNG]

Section 2: Repair and add new data type links to QM - QUALITY NOTIFICATIONS
2.1: Correct the linkage details for CA - OBJECT STATUS so that Recipient Table JSTO is used instead of JEST

[Screen capture: Capture2.PNG]

Section 3: Create five new data types using the details displayed in the screen captures below; be sure to include the common fields for each table relation
3.1: PM - MAINTENANCE ORDER HEADER

Failure to use Azure SQL DB as source

Hi All,
I am currently having an issue using Azure SQL DB as a source; the same DB works as a target with no issue.

When it is configured as a source, clicking "Test Connection" produces the error message below:

  • SYS-E-HTTPFAIL, Request failed. Application status is RetCode: SQL_ERROR SqlState: 42S02 NativeError: 208 Message: [Microsoft][SQL Server Native Client 11.0][SQL Server]Invalid object name 'sys.servers'. Line: 1 Column: -1 [122502] ODBC general error.


If I ignore the "Test Connection" error and use it as a source anyway, clicking "Table Selection" produces the error message below:

  • SYS-E-HTTPFAIL, Request failed. Application status is Command get_owner_list failed when creating the stream component. [122517] Table error.
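For what it's worth, the first error can be reproduced outside of Replicate; the sketch below (pyodbc, with placeholder server, database, and credentials) issues the same kind of query against sys.servers that the connection test appears to run:

import pyodbc

# Placeholder Azure SQL Database connection details.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver.database.windows.net;DATABASE=yourdb;"
    "UID=youruser;PWD=yourpassword;Encrypt=yes;"
)
cursor = conn.cursor()
try:
    cursor.execute("SELECT name FROM sys.servers")
    print([row.name for row in cursor.fetchall()])
except pyodbc.ProgrammingError as exc:
    # Against Azure SQL DB this is expected to fail with error 208,
    # "Invalid object name 'sys.servers'", matching the message above.
    print(exc)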

Adding config for (SD) Quotations and Inquiries

We have recently added configuration to the Data Echo framework to support the copying of SD quotations and inquiries in relation to sales documents. To clarify, it's possible to use Gold Client to copy SD quotations and inquiries without having to deploy these config changes -- just use data type SD - SALES DOCUMENTS to copy these objects; however if there is a need to copy a sales order and include in that data copy the related inquiry or quotation, then this change is relevant. Apply the configuration details below to your ECC systems.
A total of four new data types need to be created, two for Quotations and two for Inquiries. If your organization only creates one of these two objects, freely limit your config changes to just the one relevant object.

QUOTATIONS
Step 1: In your primary Source client (typically Production) go to ZGOLD > Configuration > Framework > Data Echo
Step 2: Create a new data type named LINK - SALES DOCS TO QUOTATIONS
Note: the recommended File ID is LSD2QO

[Screen capture: Capture18.PNG]
Step 3: Locate the new data type at the end of the Data Type list and open it; select the 'Tables' maintenance view
Step 4: Add table relation VBFA-VBFA and common fields

[Screen capture: Capture19.PNG]

Step 5: Within the list of data types, select the one named SD - SALES DOCUMENTS and click the 'Copy' button located on the toolbar
Step 6: Name the new data type SD - QUOTATIONS FROM SALES DOCS

[Screen capture: Capture20.PNG]

Step 7: Locate the new data type at the end of the Data Type list and open it
Step 8: In the Data Type header area, remove the check from the 'Primary' field and the value from the 'Primary Date Field' (Note: these settings were inherited from copying SD - SALES DOCUMENTS but are not needed here and so should be removed)

[Screen capture: Capture21.PNG]

Step 9: Select the 'Linkages' maintenance view and then update the data type links to be active or inactive as displayed in the screen capture. Any other data type links not in this screen capture can also be set as inactive.

[Screen capture: Capture22.PNG]

Step 10: Open data type LINK - SALES DOCS TO QUOTATIONS and select the 'Linkages' view
Step 11: Add a link to data type SD - QUOTATIONS FROM SALES DOCS and then add the linkage details: VBAK VBELN = VBELV

[Screen capture: Capture23.PNG]

Step 12: Open data type SD - SALES DOCUMENTS and select the 'Linkages' view
Step 13: In this final step, add a link to data type LINK - SALES DOCS TO QUOTATIONS and then add the linkage details:
VBFA VBELN = VBELN AND VBFA VBTYP_V = 'B'

[Screen capture: Capture24.PNG]

INQUIRIES
Step 1: In your primary Source client (typically Production) go to ZGOLD > Configuration > Framework > Data Echo
Step 2: Create a new data type named LINK - SALES DOCS TO INQUIRIES
Note: the recommended File ID is LSD2NQ

[Screen capture: Capture25.PNG]

Step 3: Locate the new data type at the end of the Data Type list and open it; select the 'Tables' maintenance view
Step 4: Add table relation VBFA-VBFA and common fields

[Screen capture: Capture26.PNG]

Step 5: Within the list of data types, select the one named SD - SALES DOCUMENTS and click the 'Copy' button located on the toolbar
Step 6: Name the new data type SD - INQUIRIES FROM SALES DOCS

[Screen capture: Capture27.PNG]

Step 7: Locate the new data type at the end of the Data Type list and open it
Step 8: In the Data Type header area, remove the check from the 'Primary' field and the value from the 'Primary Date Field' (Note: these settings were inherited from copying SD - SALES DOCUMENTS but are not needed here and so should be removed)

[Screen capture: Capture28.PNG]

Step 9: Select the 'Linkages' maintenance view and then update the data type links to be active or inactive as displayed in the screen capture. Any other data type links not in this screen capture can also be set as inactive.

[Screen capture: Capture29.PNG]

Step 10: Open data type LINK - SALES DOCS TO INQUIRIES and select the 'Linkages' view
Step 11: Add a link to data type SD - INQUIRIES FROM SALES DOCS and then add the linkage details: VBAK VBELN = VBELV

[Screen capture: Capture30.PNG]

Step 12: Open data type SD - SALES DOCUMENTS and select the 'Linkages' view
Step 13: In this final step, add a link to data type LINK - SALES DOCS TO INQUIRIES and then add the linkage details:
VBFA VBELN = VBELN AND VBFA VBTYP_V = 'A'

[Screen capture: Capture32.PNG]

Attunity and AWS RDS - SQL Server

Hi,

Is it possible to use Attunity to extract data from an AWS RDS SQL Server instance to an on-premises Kafka topic?
Your response would be helpful.

Regards,
Bérenger

How many tables can a replication task handle for a single CloudBeam AMI setting?

How many tables can a replication task handle for a single CloudBeam AMI setting?


I'm having problems replicating an Oracle DB to a Redshift cluster. We've tried 2 configurations, and neither works so far.


Config 1:
Single task consisting of all 125 Oracle tables
1 Attunity Replicate Server (Windows Server 2012 (64-bit); CPU INTEL® Xeon® CPU E5-2460 @2.50 GHz, 2.49 GHz (3 processors); RAM 8 GB)
1 Attunity CloudBeam AMI (m4.large)
dedicated 10 GB network bandwidth between Replicate Server and CloudBeam AMI


Config 2:
2 Tasks; tables split across the 2 tasks
1 Attunity Replicate Server (Windows Server 2012 (64-bit); CPU INTEL® Xeon® CPU E5-2460 @2.50 GHz, 2.49 GHz (3 processors); RAM 8 GB)
2 Attunity CloudBeam AMI (m4.large)
dedicated 10 GB network bandwidth between Replicate Server and CloudBeam AMI


The result for Config 1: only 90% of the replication task finished; the rest shows a Fatal Error message. For Config 2, all full-load tasks finished, but some tables in Task 2 were stuck in the queue forever.


Do you have recommendations on how we should set up or configure this for our use case? Neither configuration we tried seems to work.


Thank you.

Express license: Target database File is not licensed under the Express license.

Hey everybody,

I installed and started Attunity Replicate Express, and when I try to replicate a DB2 database/table into MySQL or even into a file, it pops up an error:

  • Task cannot run.
  • Express license: Target database File is not licensed under the Express license.


The following is written in the log file:


00006812: 2017-11-08T14:33:46 [SERVER ]I: Attunity Replicate Server Log (V5.0.2.49 EX383.exxeta-de.local Microsoft Windows 64-bit, PID: 7376) started at Wed Nov 08 14:33:46 2017 (logger.c:475)
00006812: 2017-11-08T14:33:46 [SERVER ]I: Licensed to Attunity Replicate Express users (software license acceptance implied)Express license: You are running the Express Edition with reduced functionality (131 days remaining) (logger.c:478)
00006812: 2017-11-08T14:33:46 [SERVER ]I: The server is listening on TCP port 3550 (server.c:1353)
00006812: 2017-11-08T14:33:46 [SERVER ]I: Client session (ID 19789) allocated (dispatcher.c:241)
00004172: 2017-11-08T14:33:46 [REST_API ]I: Agent was started successfully (listening on port 3552). (atctl_serve.c:409)
00008120: 2017-11-08T14:36:36 [REST_API ]E: DSP-E-NOHNDLR, No handler for the url entered </attunityreplicate/rest/servers/local/keepalive> Base general error. (dispatcher_misc.c:531)
00010044: 2017-11-08T14:36:36 [REST_API ]E: DSP-E-NOHNDLR, No handler for the url entered </attunityreplicate/rest/servers/local/keepalive> Base general error. (dispatcher_misc.c:531)
00010040: 2017-11-08T14:36:36 [REST_API ]E: DSP-E-NOHNDLR, No handler for the url entered </attunityreplicate/rest/servers/local/keepalive> Base general error. (dispatcher_misc.c:531)
00005124: 2017-11-08T14:36:36 [REST_API ]E: DSP-E-NOHNDLR, No handler for the url entered </attunityreplicate/rest/servers/local/keepalive> Base general error. (dispatcher_misc.c:531)
00010856: 2017-11-08T14:36:36 [REST_API ]E: DSP-E-NOHNDLR, No handler for the url entered </attunityreplicate/rest/servers/local/keepalive> Base general error. (dispatcher_misc.c:531)
00006812: 2017-11-08T14:37:17 [SERVER ]I: Additional properties = '(null)' (db2luw_endpoint_imp.c:556)
00006812: 2017-11-08T14:37:17 [SERVER ]I: DB2LUW connection string: DRIVER={IBM DB2 ODBC DRIVER};HOSTNAME=172.17.0.2;SERVICENAME=50000;DATABASE=SAMPLE;UID=db2inst1;PWD=***; (db2luw_endpoint_imp.c:183)
00006812: 2017-11-08T14:37:17 [SERVER ]E: RetCode: SQL_ERROR SqlState: IM002 NativeError: 0 Message: [Microsoft][ODBC Driver Manager] The data source name was not found and no default driver was specified [122502] ODBC general error. (ar_odbc_conn.c:434)
00006812: 2017-11-08T14:37:17 [SERVER ]E: Cannot connect to DB2 LUW Server [122505] Fatal error has occurred (db2luw_endpoint_imp.c:564)
[... the same IM002 / "Cannot connect to DB2 LUW Server" error pair repeats many times between 14:41:46 and 15:48:45 ...]
00006812: 2017-11-08T15:48:52 [INFRASTRUCTURE ]I: Allocated 20 ODBC connection handles (ar_odbc_func.c:193)
[... the same error pair repeats several more times between 15:48:52 and 15:55:26 ...]
00006812: 2017-11-08T16:07:41 [SERVER ]I: DB2 server version: 10.05.0005 (db2luw_endpoint_capture.c:63)
00006812: 2017-11-08T16:07:41 [SERVER ]I: DB2 client version: 11.01.0000 (db2luw_endpoint_capture.c:64)
00006812: 2017-11-08T16:07:41 [SERVER ]I: Parameter 'Max Buffer Size for Read' = 64 KB (db2luw_endpoint_capture.c:154)
00006812: 2017-11-08T16:07:41 [SERVER ]I: Parameter 'Events Poll Interval' = 5 (db2luw_endpoint_capture.c:165)
00006812: 2017-11-08T16:07:41 [SERVER ]I: Parameter 'CCSIdMapping' = NULL (db2luw_endpoint_capture.c:184)
00006812: 2017-11-08T16:07:41 [SERVER ]I: Parameter 'Report DB2 Read Log Time' = -1 (db2luw_endpoint_capture.c:194)
00006812: 2017-11-08T16:12:31 [SERVER ]I: Parameter 'Events Poll Interval' = 5 (db2luw_endpoint_capture.c:165)
00006812: 2017-11-08T16:21:15 [SERVER ]E: Failed to load the library 'sqlncli11.dll' [720126] The specified module could not be found. (system.c:448)
00006812: 2017-11-08T16:21:15 [STREAM_COMPONENT]E: Error Initializing endpoint. Failed to load SQL Server Native Client 11.0 [121914] API library cannot be loaded. (sqlserver_api.c:118)
00006812: 2017-11-08T16:21:15 [STREAM_COMPONENT]E: Stream component initialization function has failed for component 'SQL Server', type 'SQL Server'. [121914] API library cannot be loaded. (streamcomponent.c:908)
[... the same three-line error group repeats four more times between 16:21:42 and 16:22:11 ...]
00006812: 2017-11-08T16:23:03 [SERVER ]E: Missing database name [120401] Endpoint initialization failed. (odbc_endpoint_imp.c:1895)
00006812: 2017-11-08T16:23:11 [SERVER    ]I: Going to connect to ODBC connection string: DRIVER={MySQL ODBC 5.2 Unicode Driver};SERVER=127.0.0.1;PORT=3306;DATABASE=test;UID=root; (odbc_endpoint_imp.c:2112)
00006812: 2017-11-08T16:37:03 [STREAM_COMPONENT]E: Getting DB object ('testFile') info from repository failed [120500] requested object was not found in the repository. (endpointshell.c:2218)
00006812: 2017-11-08T16:37:03 [STREAM_COMPONENT]E: create_stream_handle failed [120500] requested object was not found in the repository. (endpointshell.c:2184)
00006812: 2017-11-08T16:37:03 [STREAM_COMPONENT]E: Failed getting stream handle [120500] requested object was not found in the repository. (endpointshell.c:2323)
00006812: 2017-11-08T16:37:03 [STREAM_COMPONENT]E: Command get_table_list failed when creating the stream component. [120500] requested object was not found in the repository. (endpointshell.c:2443)
00006812: 2017-11-08T16:37:57 [SERVER ]I: File source using '65001' codepage (file_imp.c:145)
00006812: 2017-11-08T16:45:49 [SERVER ]I: Parameter 'Events Poll Interval' = 5 (db2luw_endpoint_capture.c:165)

Has anyone had the same problem and can help?
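Both failures above point to client-side driver configuration on the Replicate machine rather than to the databases themselves: SQLSTATE IM002 means no ODBC data source with the configured name exists (or it exists only for the other bitness), and the sqlncli11.dll error means SQL Server Native Client 11.0 is not installed. A minimal diagnostic sketch, assuming Python with the pyodbc package is available on that host, to list what the machine actually has:

# Diagnostic sketch (assumes the pyodbc package is installed).
# Lists the ODBC drivers and DSNs visible to this process, to confirm
# that the DSN used by the DB2 LUW endpoint and the driver
# "SQL Server Native Client 11.0" actually exist on this host.
import pyodbc

print("Installed ODBC drivers:")
for driver in pyodbc.drivers():
    print("  " + driver)

print("Configured DSNs:")
for dsn, driver in pyodbc.dataSources().items():
    print("  %s -> %s" % (dsn, driver))

Run it with the same bitness as the Replicate service (64-bit), since 32-bit and 64-bit ODBC driver and DSN registrations are kept separately on Windows.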

Oracle to SQL Migration - ORA-27091: unable to queue I/O ORA-17510

Hi,


Would anyone have any advice on why I am receiving the Oracle error below when attempting to migrate data from an Oracle 12c database (Oracle redo logs are on ASM, Solaris O/S) to SQL Server 2016?


The Oracle endpoint has all the ASM details, uses the Binary Reader method, has all the required privileges, and tests successfully. I have tried multiple times, both with the redo logs copied to an alternate folder and without copying them; all attempts fail with the same error. The Oracle DBAs have checked ASM and found no issues, and there are no database integrity issues.


I am using Replicate Express.


00006164: [SOURCE_CAPTURE ]E: OCI error 'ORA-27091: unable to queue I/O
ORA-17510: An attempt to do I/O of size 512 to block 0 is beyond file size 131072. Logical block size: 512.
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 422
ORA-06512: at line 1' [122307] OCI error. (oradcdc_io.c:673)
00006164: [SOURCE_CAPTURE ]E: Failed to read from ASM file with thread id '1' from block number '0', size '512' [20014] Internal error (oradcdc_io.c:675)
00006164: [SOURCE_CAPTURE ]E: Failed to read from Redo log +ORADATA/TSTG/tstg_redo01c.log [20014] Internal error (oradcdc_redo.c:808)
00006164: [SOURCE_CAPTURE ]D: Close Redo log '+ORADATA/TSTG/tstg_redo01c.log' (oradcdc_redo.c:222)


Thanks for your advice.


Cheers
Mick
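For what it is worth, ORA-17510 reports a 512-byte read at block 0 running past the end of the file, which can indicate a mismatch between the block size the Binary Reader assumes and the sector size of the ASM disk group holding the redo logs. A minimal diagnostic sketch to compare the two, assuming the python-oracledb driver; the user, password, and DSN are placeholders, not values from the post:

# Diagnostic sketch (assumes the python-oracledb package; credentials
# and DSN below are placeholders).
import oracledb

conn = oracledb.connect(user="system", password="change_me",
                        dsn="dbhost/TSTG")
cur = conn.cursor()

# Redo log block size as the database reports it.
cur.execute("SELECT group#, thread#, blocksize, bytes FROM v$log")
for group, thread, blocksize, size_bytes in cur:
    print("redo group %s thread %s: blocksize=%s bytes=%s"
          % (group, thread, blocksize, size_bytes))

# Sector size of the ASM disk groups.
cur.execute("SELECT name, sector_size FROM v$asm_diskgroup")
for name, sector_size in cur:
    print("diskgroup %s: sector_size=%s" % (name, sector_size))

If the two disagree (for example 512-byte redo logs on a 4K-sector disk group), that mismatch would be the first thing to raise with Attunity support.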

CDC Apply On Disk getting very large

After finishing the full load on 125 tables (approx. 4 billion records), which took about 15-17 hours to complete without any error, the task switched to CDC and has been running in CDC mode since.

I noticed a few things on my CDC Monitor:
1. The incoming changes figure is getting very large: 17 million changes (about 2.4 million transactions) as of the time this thread was created.
2. Those 17 million changes are building up under Apply On Disk. The monitor also shows 2.4 million transactions being applied (until target commit).
3. Apply latency actually continues to drop: it was 11 hours about 2 hours before this thread was created and is down to 3 hours now.

The attached screenshot depicts the above.

[Attached screenshot: attunity_screenshot.JPG, 45.6 KB]

Questions:
- What is actually happening?
- Is this normal behavior after a very long full load duration?
- Is there some tuning that can be done?

Thanks!

Kind regards,

R

SQLServer replication problem

$
0
0
Hi All,


I have installed Attunity Replicate Express and SQL Server 2016 (SP1 CU5) on a Windows Server 2016 VM in an Azure environment.
I am trying to replicate from SQL Server 2016 to SQL Server 2016 with no success (currently the source and the target database are the same).


Thanks in advance for any response.


The error message is as follows:


.
.
.
00004228: 2017-11-14T08:41:53 [SOURCE_UNLOAD ]I: NLS configuration sampled: Associated code page=1250 (sqlserver_endpoint_imp.c:1828)
00005640: 2017-11-14T08:41:53 [TARGET_LOAD ]I: Bulk is set to ignore max row size warnings (sqlserver_endpoint_imp.c:1351)
00005640: 2017-11-14T08:41:53 [TARGET_LOAD ]I: Going to connect to server localhost database Omdw (sqlserver_endpoint_imp.c:1394)
00000956: 2017-11-14T08:41:53 [SOURCE_CAPTURE ]E: SqlStat: Can't retrieve exception Information. [120102] Stream Component recoverable error. (sqlserver_log_processor.c:3703)
00005640: 2017-11-14T08:41:53 [SOURCE_CAPTURE ]I: NLS configuration sampled: Associated code page=1250 (sqlserver_endpoint_imp.c:1828)
00000956: 2017-11-14T08:41:53 [SOURCE_CAPTURE ]E: Unknown '0' native error detected while SQL_ERROR is flagged / SQLSTATE is not empty. [120102] Stream Component recoverable error. (sqlserver_log_processor.c:3704)
00000956: 2017-11-14T08:41:53 [SOURCE_CAPTURE ]E: sqlserver_capture_source_loop(...) encountered an unexpeceted error. [120102] Stream Component recoverable error. (sqlserver_endpoint_capture.c:749)
00000956: 2017-11-14T08:41:53 [SOURCE_CAPTURE ]E: Error executing source loop [120102] Stream Component recoverable error. (streamcomponent.c:1489)
00002180: 2017-11-14T08:41:53 [TASK_MANAGER ]E: Task error notification received from subtask 0, thread 0 [120102] Stream Component recoverable error. (replicationtask.c:2047)
00002180: 2017-11-14T08:41:53 [TASK_MANAGER ]I: Task 'SQL1 to SQL2' encountered a recoverable error (repository.c:4411)
00005360: 2017-11-14T08:41:53 [SORTER ]I: Final saved task state. Stream position 0000002E:00006C30:0006, Source id 1, next Target id 1, confirmed Target id 0 (sorter.c:556)
00000956: 2017-11-14T08:41:53 [TASK_MANAGER ]E: Stream component failed at subtask 0, component st_0_SQLServer [120102] Stream Component recoverable error. (subtask.c:1345)
00000956: 2017-11-14T08:41:53 [SOURCE_CAPTURE ]E: Stream component 'st_0_SQLServer' terminated [120102] Stream Component recoverable error. (subtask.c:1510)
00002180: 2017-11-14T08:41:57 [TASK_MANAGER ]I: Subtask #0 ended (replicationtask_util.c:933)
00002180: 2017-11-14T08:41:57 [TASK_MANAGER ]I: Subtask #1 ended (replicationtask_util.c:933)
00002180: 2017-11-14T08:41:58 [SERVER ]I: Stop server request received internally (server.c:2577)
00002180: 2017-11-14T08:41:58 [TASK_MANAGER ]I: Task management thread terminated (replicationtask.c:2802)
00005608: 2017-11-14T08:41:59 [SERVER ]I: Client session (ID 31869) closed (dispatcher.c:194)
00005608: 2017-11-14T08:41:59 [UTILITIES ]I: The last state is saved to file 'C:\Program Files\Attunity\Replicate\data\tasks\SQL1 to SQL2/StateManager/ars_saved_state_000002.sts' at Tue, 14 Nov 2017 08:41:53 GMT (1510648913548913) (statemanager.c:673)
00000776: 2017-11-14T08:41:59 [SERVER ]I: The process stopped (server.c:2701)
00000776: 2017-11-14T08:41:59 [SERVER ]I: Closing log file at Tue Nov 14 08:41:59 2017 (logger.c:1972)
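The generic "Can't retrieve exception Information" from SOURCE_CAPTURE often hides a prerequisite problem on the source database: as far as I know, Replicate's SQL Server source needs the database in the FULL recovery model and either MS-CDC or MS-Replication configured so that the transaction log stays readable. A quick sanity-check sketch, assuming the pyodbc package; the connection string is a placeholder, since the source database name is not shown in the post:

# Sanity-check sketch (assumes the pyodbc package; the connection
# string below is a placeholder, not taken from the post).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)
cur = conn.cursor()

# FULL recovery model plus MS-CDC (is_cdc_enabled) or MS-Replication
# (is_published) is what keeps the log available for capture to read.
cur.execute("SELECT name, recovery_model_desc, is_cdc_enabled, "
            "is_published FROM sys.databases")
for name, model, cdc_enabled, published in cur:
    print("%s: recovery=%s cdc=%s published=%s"
          % (name, model, cdc_enabled, published))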