Channel: Attunity Integration Technology Forum

Issue with SELECT ... FOR UPDATE db2 LUW

Hi!
We have an issue in our DB2 11.1 database with a SELECT ... FOR UPDATE statement that is executed many times per second; it generates many locks and the applications hang.
SQL_TEXT:
SELECT flag1, flag2, flag3, status1, status2, status3, status4, status5, status_ext, device_fitness, cassette_fitness, configid, last_cmdtype, last_txntype, oar_screen, lastmsg_time, trace, tpdu, emv_identifier, language, pcode, respcode, acct_num, acct_num2, tvn, luno, cap_date, msg_coord_num, oar_line, model_specific, node, cassette_dispense, statetablename, statetablever, fault, severity, availdate, availtime, cmdmbid, balancerid, site_id, shclog_id, last_modify_dttime FROM atmdevicestate WHERE institutionid = ? AND group_name = ? AND unit = ? FOR UPDATE


The ATMDEVICESTATE table only has 1063 rows and two indexes:

INDEX ATMDEVICESTATE_1 ("LASTMSG_TIME" ASC,"GROUP_NAME" ASC,"UNIT" ASC) COMPRESS YES
UNIQUE INDEX ATMDEVICESTATE_IX ("INSTITUTIONID" ASC, "GROUP_NAME" ASC,"UNIT" ASC) COMPRESS YES


Is there a way to prevent the locks generated by the application's SELECT ... FOR UPDATE, perhaps with some instance or database parameter? How else can we avoid these locks?

The top SQL statements by executions are the SELECT ... FOR UPDATE, the UPDATE ... WHERE CURRENT OF cursor, and a plain SELECT, all three against the same table, ATMDEVICESTATE:
EXECUTIONS TIME_SECONDS TEXT
-------------------- -------------------- ------------------------------------------
4617270 0 SELECT flag1, flag2, flag3, status1, status2, status3, status4, status5, status_ext, device_fitness, cassette_fitness, configid, last_cmdtype, last_txntype, oar_screen, lastmsg_time, trace, tpdu, emv_identifier, language, pcode, respcode, acct_num, acct_num2, tvn, luno, cap_date, msg_coord_num, oar_line, model_specific, node, cassette_dispense, statetablename, statetablever, fault, severity, availdate, availtime, cmdmbid, balancerid, site_id, shclog_id, last_modify_dttime FROM atmdevicestate WHERE institutionid = ? and group_name = ? and unit = ? FOR UPDATE
4616826 0 UPDATE atmdevicestate SET flag1 = ?,flag2 = ?,flag3 = ?,status1 = ?,status2 = ?,status3 = ?,status4 = ?,status5 = ?,status_ext = ?,device_fitness = ?,cassette_fitness = ?,configid = ?,last_cmdtype = ?,last_txntype = ?,oar_screen = ?,lastmsg_time = ?,trace = ?,tpdu = ?,emv_identifier = ?,language = ?,pcode = ?,respcode = ?,acct_num = ?,acct_num2 = ?,tvn = ?,luno = ?,cap_date = ?,msg_coord_num = ?,oar_line = ?,model_specific = ?,node = ?, cassette_dispense = ?, statetablename = ?, statetablever = ?, fault = ?, severity = ?, availdate = ?, availtime = ?,cmdmbid = ?, balancerid = ?, site_id = ?, shclog_id = ?, last_modify_dttime = ? WHERE CURRENT OF SQL_CURSH200C4
3174610 0 SELECT
flag1, flag2, flag3, status1, status2, status3, status4, status5, status_ext, device_fitness, cassette_fitness, configid, last_cmdtype, last_txntype, oar_screen, lastmsg_time, trace, tpdu, emv_identifier, language, pcode, respcode, acct_num, acct_num2, tvn, luno, cap_date, msg_coord_num, oar_line, model_specific, node, cassette_dispense, statetablename, statetablever, fault, severity, availdate, availtime, cmdmbid, balancerid, site_id, shclog_id, last_modify_dttime FROM atmdevicestate WHERE institutionid = ? and group_name = ? and unit = ?


We think it is a programming issue, but the application team says the problem is with db2sync, since their analysis shows high commit/checkpoint times in the database.

The IBM team said there is no way to avoid these locks. We are reviewing isolation levels, but we do not know which isolation level is recommended to reduce or avoid blocking on the application side.
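For reference, DB2 LUW lets the isolation level be lowered per session or per statement; a sketch follows. Whether it helps here depends on the application, since SELECT ... FOR UPDATE acquires U locks regardless of the isolation level.

```sql
-- Sketch: Cursor Stability (CS) is the weakest level that still returns only
-- committed data; with the CUR_COMMIT database configuration parameter
-- enabled, plain readers no longer wait on uncommitted writers. Note this
-- does not remove the U locks taken by SELECT ... FOR UPDATE itself.
SET CURRENT ISOLATION = CS;
```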

-> The history file is less than 5 mb
-> Lock waits are not so high in the snapshots
-> Query execution times are good


IBM asked for the following to be executed at the moment the issue occurs:
-> db2mon.pl and db2fodc -hang full
I understand this is to detect something that could be tuned in the DB2 engine.

We know that the application worked well with Oracle, where it did not have this locking issue.

ATMDEVICESTATE is the table that suffers the most OVERFLOWs and UPDATEs. Here are the statistics since the database was last activated:

Start Date Start Time
2018/06/05 02:25:15

TABLE_NAME PAGES OVERFLOWS
--------------- ----------- -----------
ATMDEVICESTATE 12 1,192,483


The database page size is 4096, but the table was moved to a 32K tablespace; nevertheless, OVERFLOWs still appear at the top.

The ATMDEVICESTATE table has only 1063 rows, two indexes, and VARCHAR columns (lengths 10, 32, 50, 64, and 256). We have considered rebuilding the table and changing VARCHAR to CHAR (the VARCHAR(256) would become CHAR(255)); monitoring shows the VARCHAR(256) column has so far reached a maximum length of 116.
Do you think this could help? Any other ideas?
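The rebuild idea can be sketched as below. The column name is hypothetical (the post does not say which column has which length), and some DB2 levels may reject the in-place ALTER, in which case a table rebuild (e.g. via ADMIN_MOVE_TABLE) achieves the same.

```sql
-- Sketch: fixed-length columns keep the row size constant on UPDATE, so a
-- longer value can no longer force the row out of its page (the overflow
-- records visible in the monitoring output).
ALTER TABLE atmdevicestate ALTER COLUMN some_varchar_col SET DATA TYPE CHAR(255);
REORG TABLE atmdevicestate;  -- rebuild the table and clear existing overflow pointers
```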
I will also review the combination of FOR READ ONLY with USE AND KEEP UPDATE LOCKS.
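That combination would look roughly like the following (column list abbreviated; WITH RS is shown as one possible isolation clause):

```sql
-- Sketch: a read-only cursor that still takes U locks on the qualifying row,
-- so two sessions intending to update the same row serialize without the
-- cursor being updatable.
SELECT flag1, flag2, flag3 /* ... remaining columns ... */
FROM atmdevicestate
WHERE institutionid = ? AND group_name = ? AND unit = ?
FOR READ ONLY WITH RS USE AND KEEP UPDATE LOCKS;
```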

Not able to Login to Replicate Console

I installed Attunity Replicate for the first time on Linux 7 as the root user.

The rpm installed successfully, and I can see Attunity Replicate running on its ports:
ps -ef | grep -i attunity
user1 28003 1 0 13:13 ? 00:00:00 /opt/attunity/replicate/bin/repctl -d /opt/attunity/replicate/data service start port=3550 rest_port=3552
user1 28004 28003 0 13:13 ? 00:00:02 /opt/attunity/replicate/bin/repctl -d /opt/attunity/replicate/data service start port=3550 rest_port=3552


However, when I try to log in to the console, I get a certificate error and cannot log in. I tried logging in as "user1", the user specified during installation, and as root as well. Please help.

https://<hostname>:3552/attunityreplicate



Getting a rejecting new connections error, 1500 running processes

I have a job that's failing. I've already increased the max server threads number in the config file and even rebooted the whole server, and it's still giving me this error. Am I missing something? And why does it say there are 1500 processes running when I set the max to 600?

-Erik

SQL Server - uniformly mapped across partitions

Hi,

I'm having trouble with CDC on some tables; as a demo environment I'm using Microsoft's test database, WideWorldImporters.
Some tables won't capture any data and give the error: Table 'Sales.InvoiceLines' is not uniformly mapped across partitions. Therefore - it is excluded from CDC

Is there any way to solve this? I don't really understand what the problem is.
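I don't know the exact rule Replicate applies, but the message suggests it checks whether all indexes/partitions of the table live on the same data space. A query like this against the standard SQL Server catalog views shows the mapping for the affected table:

```sql
-- Sketch: list each index of Sales.InvoiceLines and the data space
-- (filegroup or partition scheme) it is mapped to; a mix of data spaces
-- here would match the "not uniformly mapped" complaint.
SELECT i.name       AS index_name,
       ds.name      AS data_space,
       ds.type_desc AS data_space_type
FROM sys.indexes i
JOIN sys.data_spaces ds ON ds.data_space_id = i.data_space_id
WHERE i.object_id = OBJECT_ID('Sales.InvoiceLines');
```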

Kind regards and thanks,
Robrecht

WebHDFS HA incorrect test results

Hi guys,
I am trying to set up a Hadoop endpoint that is HA. I selected WebHDFS and High Availability and then Test Connection, and it keeps responding with:

SYS,GENERAL_EXCEPTION,Failed to allocate hdfs file factory. Base general error.,failed to create hadoop file factory Base general error. Invalid Hadoop configuration; both name nodes (ua0hdp1nn101 and ua0hdp1nn102) are active; This is considered bad configuration according to hadoop documentation Failed to detect Hadoop active name node; invalid configuration

Is this the correct way to set this up?

Thank You
Jamie

use of downstream mining database as source

hi,

Is it possible to use an Oracle downstream mining database server as the source, or does Attunity need to connect to the primary server to read the redo logs?

How to do filter on Source table using rownum

How do I filter a source table using rownum in Attunity Replicate 5.5.0.283?
I tried $ROWNUM < 1000 in the filter; it failed with "no such column".

ORA-03137: malformed TTC packet from client rejected: [kpoal8Check-3] [32768] [0] [0x

hi ,
I am getting the error below when I start the process; please advise.

ORA-03137: malformed TTC packet from client rejected: [kpoal8Check-3] [32768] [0] [0x000000000] [789544] [] [] [] [1022307]

00008040: 2018-08-01T04:35:18 [AT_GLOBAL ]I: Task Server Log (V5.5.0.283 amil-db.com Microsoft Windows Server 2012 (build 9200) 64-bit, PID: 3416) started at Wed Aug 01 04:35:18 2018 (at_logger.c:2426)
00008040: 2018-08-01T04:35:18 [AT_GLOBAL ]I: Licensed to Attunity Replicate Express users (software license acceptance implied)Express license: You are running the Express Edition with reduced functionality (0 days remaining) (at_logger.c:2429)
00008040: 2018-08-01T04:35:18 [SERVER ]I: Client session (ID 18557) allocated (dispatcher.c:274)
00008040: 2018-08-01T04:35:18 [TASK_MANAGER ]I: Task 'SIT_LOAD_ENTITY' running CDC only in fresh start mode (replicationtask.c:1048)
00008852: 2018-08-01T04:35:18 [TASK_MANAGER ]I: Task Id: ef7bd421-5295-444d-a7b2-aac16f27e676 (replicationtask.c:2578)
00008852: 2018-08-01T04:35:18 [METADATA_MANAGE ]I: Going to connect to Oracle server (description=(address=(protocol=tcp)(host=22.X.X.X )(port=1521))(connect_data=(sid=XXX))) with username QQ_XXX (oracle_endpoint_imp.c:958)
00008852: 2018-08-01T04:35:19 [METADATA_MANAGE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00008852: 2018-08-01T04:35:19 [METADATA_MANAGE ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00008852: 2018-08-01T04:35:19 [TASK_MANAGER ]I: Creating threads for all components (replicationtask.c:1590)
00008852: 2018-08-01T04:35:20 [TASK_MANAGER ]I: Threads for all components were created (replicationtask.c:1738)
00008852: 2018-08-01T04:35:20 [TASK_MANAGER ]I: Task initialization completed successfully (replicationtask.c:2637)
00006680: 2018-08-01T04:35:20 [SOURCE_CAPTURE ]I: Use any Oracle Archived Log Destination (oracle_endpoint_imp.c:723)
00006680: 2018-08-01T04:35:20 [SOURCE_CAPTURE ]I: Oracle CDC uses LogMiner access mode (oracle_endpoint_imp.c:732)
00006680: 2018-08-01T04:35:20 [SOURCE_CAPTURE ]I: retry timeout is '120' minutes (oracle_endpoint_imp.c:879)
00006680: 2018-08-01T04:35:20 [SOURCE_CAPTURE ]I: Scale is set to 10 for NUMBER Datatype (oracle_endpoint_imp.c:901)
00006680: 2018-08-01T04:35:20 [SOURCE_CAPTURE ]I: Retry interval is set to 5 (oracle_endpoint_imp.c:909)
00006680: 2018-08-01T04:35:20 [SOURCE_CAPTURE ]I: Oracle database version is 12.1.0.2.0 (oracle_endpoint_conn.c:546)
00006680: 2018-08-01T04:35:20 [SOURCE_CAPTURE ]I: Oracle compatibility version is 12.1.0.2.0 (oracle_endpoint_conn.c:86)
00006680: 2018-08-01T04:35:21 [SOURCE_CAPTURE ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00006680: 2018-08-01T04:35:21 [SOURCE_CAPTURE ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00004764: 2018-08-01T04:35:21 [TARGET_APPLY ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00004764: 2018-08-01T04:35:21 [TARGET_APPLY ]I: Restore bulk state. Last bulk last record id - '0', last applied record id - '0', target confirmed record id - '0' (endpointshell.c:1413)
00004764: 2018-08-01T04:35:21 [TARGET_APPLY ]I: Working in bulk apply mode (endpointshell.c:1420)
00004764: 2018-08-01T04:35:22 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00008852: 2018-08-01T04:35:22 [TASK_MANAGER ]I: All stream components were initialized (replicationtask.c:2444)
00006680: 2018-08-01T04:35:22 [SOURCE_CAPTURE ]I: Oracle capture start time: now (oracle_endpoint_capture.c:638)
00008852: 2018-08-01T04:35:22 [TASK_MANAGER ]I: Starting subtask #1 (replicationtask_util.c:874)
00008852: 2018-08-01T04:35:22 [TASK_MANAGER ]I: Starting subtask #2 (replicationtask_util.c:874)
00008852: 2018-08-01T04:35:22 [TASK_MANAGER ]I: Starting subtask #3 (replicationtask_util.c:874)
00008852: 2018-08-01T04:35:22 [TASK_MANAGER ]I: Starting subtask #4 (replicationtask_util.c:874)
00008852: 2018-08-01T04:35:22 [TASK_MANAGER ]I: Starting subtask #5 (replicationtask_util.c:874)
00006680: 2018-08-01T04:35:23 [SOURCE_CAPTURE ]I: Used Oracle archived Redo log destination id is '1' (oracdc_merger.c:543)
00007816: 2018-08-01T04:35:23 [SOURCE_UNLOAD ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00007816: 2018-08-01T04:35:23 [SOURCE_UNLOAD ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00004072: 2018-08-01T04:35:23 [TARGET_LOAD ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00006680: 2018-08-01T04:35:23 [SOURCE_CAPTURE ]I: No opened transactions (oracle_endpoint_capture.c:852)
00008444: 2018-08-01T04:35:23 [SORTER ]I: Transaction consistency reached (sorter_transaction.c:262)
00008852: 2018-08-01T04:35:23 [TASK_MANAGER ]I: Starting replication now (replicationtask.c:2193)
00004072: 2018-08-01T04:35:24 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00008444: 2018-08-01T04:35:24 [SORTER ]I: Start collecting changes for table id = 1 (sorter_transaction.c:1881)
00008852: 2018-08-01T04:35:24 [TASK_MANAGER ]I: Start loading table 'SS'.'INST' (Id = 1) by subtask 1. Start load timestamp 0005725D407CD0C0 (replicationtask_util.c:1040)
00006680: 2018-08-01T04:35:24 [SOURCE_CAPTURE ]I: New Log Miner boundaries in thread '1' : First REDO Sequence is '175', Last REDO Sequence is '175' (oracdc_reader.c:633)
00008852: 2018-08-01T04:35:24 [TASK_MANAGER ]E: Task error notification received from subtask 0, thread 1 [1020401] (replicationtask.c:2272)
00008852: 2018-08-01T04:35:24 [TASK_MANAGER ]W: Task 'SIT_LOAD_ENTITY' encountered a fatal error (repository.c:4773)
00004764: 2018-08-01T04:35:24 [TARGET_APPLY ]E: ORA-03137: malformed TTC packet from client rejected: [kpoal8Check-3] [32768] [0] [0x000000000] [789544] [] [] [] [1022307] (oracle_endpoint_apply.c:2256)
00008444: 2018-08-01T04:35:24 [SORTER ]I: Final saved task state. Stream position timestamp:2018-08-01T10:34:33, Source id 1, next Target id 1, confirmed Target id 0 (sorter.c:652)
00004764: 2018-08-01T04:35:24 [TARGET_APPLY ]E: Cannot create Special table [1022307] (endpointshell.c:2282)
00004764: 2018-08-01T04:35:24 [TARGET_APPLY ]E: Cannot create Exception table [1022307] (endpointshell.c:2467)
00004764: 2018-08-01T04:35:24 [TARGET_APPLY ]E: Error executing command [1020401] (streamcomponent.c:1644)
00004764: 2018-08-01T04:35:24 [TASK_MANAGER ]E: Stream component failed at subtask 0, component st_0_QQSIT [1020401] (subtask.c:1350)
00004764: 2018-08-01T04:35:24 [TARGET_APPLY ]E: Stream component 'st_0_QQSIT' terminated [1020401] (subtask.c:1513)
00004956: 2018-08-01T04:35:25 [SOURCE_UNLOAD ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00004956: 2018-08-01T04:35:25 [SOURCE_UNLOAD ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00003796: 2018-08-01T04:35:26 [TARGET_LOAD ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00003796: 2018-08-01T04:35:27 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00008040: 2018-08-01T04:35:27 [SERVER ]E: GEN-E-MESSAGE, Unexpected EOF detected: requested to read 16 bytes, actually read 0 bytes [1020601] (ar_net.c:734)
00008120: 2018-08-01T04:35:28 [SOURCE_UNLOAD ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00008120: 2018-08-01T04:35:28 [SOURCE_UNLOAD ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00006016: 2018-08-01T04:35:28 [TARGET_LOAD ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00006016: 2018-08-01T04:35:28 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00006704: 2018-08-01T04:35:29 [SOURCE_UNLOAD ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00006704: 2018-08-01T04:35:29 [SOURCE_UNLOAD ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00009148: 2018-08-01T04:35:29 [TARGET_LOAD ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00009148: 2018-08-01T04:35:30 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00006084: 2018-08-01T04:35:31 [SOURCE_UNLOAD ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00006084: 2018-08-01T04:35:31 [SOURCE_UNLOAD ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00009144: 2018-08-01T04:35:31 [TARGET_LOAD ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00009144: 2018-08-01T04:35:32 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00008852: 2018-08-01T04:35:32 [TASK_MANAGER ]I: Subtask #0 ended (replicationtask_util.c:937)
00008852: 2018-08-01T04:35:32 [TASK_MANAGER ]I: Subtask #1 ended (replicationtask_util.c:937)
00008852: 2018-08-01T04:35:32 [TASK_MANAGER ]I: Subtask #2 ended (replicationtask_util.c:937)
00008852: 2018-08-01T04:35:32 [TASK_MANAGER ]I: Subtask #3 ended (replicationtask_util.c:937)
00008852: 2018-08-01T04:35:32 [TASK_MANAGER ]I: Subtask #4 ended (replicationtask_util.c:937)
00008852: 2018-08-01T04:35:32 [TASK_MANAGER ]I: Subtask #5 ended (replicationtask_util.c:937)
00008852: 2018-08-01T04:35:32 [SERVER ]I: Stop server request received internally (server.c:2956)
00008852: 2018-08-01T04:35:32 [TASK_MANAGER ]I: Task management thread terminated (replicationtask.c:3089)
00008040: 2018-08-01T04:35:32 [SERVER ]I: Client session (ID 18557) closed (dispatcher.c:200)
00008040: 2018-08-01T04:35:32 [UTILITIES ]I: The last state is saved to file 'C:\Program Files\Attunity\Replicate\data\tasks\SIT_LOAD_ENTITY/StateManager/ars_saved_state_000001.sts' at Wed, 01 Aug 2018 10:35:27 GMT (1533119727024017) (statemanager.c:601)
00004360: 2018-08-01T04:35:32 [SERVER ]I: The process stopped (server.c:3079)
00004360: 2018-08-01T04:35:32 [AT_GLOBAL ]I: Closing log file at Wed Aug 01 04:35:32 2018 (at_logger.c:2288)

Failed to write file

Hi,
I am getting the following error loading data into Hadoop; can you help me with this?

00008596: 2018-08-02T07:18:28 [SOURCE_CAPTURE ]I: Source endpoint 'IBM DB2 for z/OS' is using provider syntax '<default>' (provider_syntax_manager.c:610)
00003104: 2018-08-02T07:18:28 [SORTER ]I: Start collecting changes for table id = 1 (sorter_transaction.c:1992)
00005596: 2018-08-02T07:18:28 [TASK_MANAGER ]I: Start loading table 'HUNG1'.'MVRAE' (Id = 1) by subtask 1. Start load timestamp 000572747C13D2C0 (replicationtask_util.c:707)
00008612: 2018-08-02T07:18:29 [SOURCE_UNLOAD ]I: resolve_table_orig_db: table_def orig_db_id of table 'HUNG1.MVRAE' = 01C3015E (db2z_endpoint_metadata.c:288)
00008596: 2018-08-02T07:19:53 [AT_GLOBAL ]E: Json doesn't start with '{' [1003001] (at_cjson.c:1740)
00008612: 2018-08-02T07:19:53 [SOURCE_UNLOAD ]I: Unload finished for table 'HUNG1'.'MVRAE' (Id = 1). 2147208 rows sent. (streamcomponent.c:3303)
00008596: 2018-08-02T07:19:53 [AT_GLOBAL ]E: Error parsing Json [1000251] (at_protobuf.c:1334)
00008596: 2018-08-02T07:19:53 [COMMUNICATION ]E: failed to write file, got unexpected http status code 400 (Bad Request) (expected status code: 201): <html><head><title>Apache Tomcat/6.0.48 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 400 - Data upload requests must have content-type set to 'application/octet-stream'</h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u>Data upload requests must have content-type set to 'application/octet-stream'</u></p><p><b>description</b> <u>The request sent by the client was syntactically incorrect.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/6.0.48</h3></body></html> [1001801] (at_curl_http_client.c:413)
00008596: 2018-08-02T07:19:53 [FILE_FACTORY ]E: Failed to upload file <R:\Replicate\data\tasks\MVR_DB2_HADOOP\data_files\1\0\FL-0-20180802-1418331518.csv> to HADOOP target </apps/hive/warehouse/insurance_staging.db/MVRAE/FL-0-20180802-1418331518.csv> [1001900] (at_hadoop_client.c:776)
00005596: 2018-08-02T07:19:53 [TASK_MANAGER ]W: Table 'HUNG1'.'MVRAE' (subtask 1 thread 1) is suspended (replicationtask.c:2171)
00008596: 2018-08-02T07:19:53 [FILE_FACTORY ]E: Failed to upload <R:\Replicate\data\tasks\MVR_DB2_HADOOP\data_files\1\0\FL-0-20180802-1418331518.csv> to </apps/hive/warehouse/insurance_staging.db/MVRAE/FL-0-20180802-1418331518.csv> [1000722] (at_hadoop_ff.c:556)
00008596: 2018-08-02T07:19:53 [FILE_FACTORY ]E: failed to write entire file [1000722] (at_universal_fs_object.c:1199)
00008596: 2018-08-02T07:19:53 [FILE_FACTORY ]E: Write entire file failed: source = 'R:\Replicate\data\tasks\MVR_DB2_HADOOP\data_files\1\0\FL-0-20180802-1418331518.csv' target = '/apps/hive/warehouse/insurance_staging.db/MVRAE/FL-0-20180802-1418331518.csv' open type = 3 [1000731] (at_universal_fs_object.c:932)
00008596: 2018-08-02T07:19:53 [TARGET_LOAD ]E: Failed to upload file from R:\Replicate\data\tasks\MVR_DB2_HADOOP\data_files\1\0\FL-0-20180802-1418331518.csv to HDFS path /apps/hive/warehouse/insurance_staging.db/MVRAE/FL-0-20180802-1418331518.csv. [1000731] (hadoop_utils.c:847)
00008596: 2018-08-02T07:19:53 [TARGET_LOAD ]E: Failed to load file '1'. [1000731] (hadoop_load.c:1312)
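The Tomcat error page embedded in the log says the gateway only accepts uploads with Content-Type: application/octet-stream, which points at the HTTP layer rather than the data itself. A manual WebHDFS upload can verify what the gateway accepts; the host, port, and paths below are hypothetical:

```
# Sketch: two-step WebHDFS create (-L follows the NameNode's redirect to a
# DataNode); the explicit octet-stream header mirrors what the gateway demands.
curl -i -L -X PUT -H "Content-Type: application/octet-stream" \
  -T sample.csv \
  "http://namenode.example.com:50070/webhdfs/v1/tmp/sample.csv?op=CREATE&user.name=hdfs"
```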

Thank You
Jamie


Not able to perform CDC, it is getting stopped- Attunity 5.5.0.283

00005616: 2018-08-06T01:24:06 [TARGET_LOAD ]I: Load finished for table 'ISE'.'S_I_T' (Id = 7). 0 rows received. 0 rows skipped. Volume transfered 0 (streamcomponent.c:3084)
00005900: 2018-08-06T01:24:06 [TASK_MANAGER ]I: Table 'ISE'.'S_I_T' (Id = 7) Loading finished by subtask 2. 0 records transferred. (replicationtask.c:1950)
00005900: 2018-08-06T01:24:06 [TASK_MANAGER ]I: Subtask #2 ended (replicationtask_util.c:937)
00002336: 2018-08-06T01:24:07 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00005900: 2018-08-06T01:24:07 [TASK_MANAGER ]I: Subtask #3 ended (replicationtask_util.c:937)
00000308: 2018-08-06T01:24:08 [SOURCE_UNLOAD ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00000308: 2018-08-06T01:24:08 [SOURCE_UNLOAD ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00008896: 2018-08-06T01:24:08 [TARGET_LOAD ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00008896: 2018-08-06T01:24:08 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00005900: 2018-08-06T01:24:08 [TASK_MANAGER ]I: Subtask #4 ended (replicationtask_util.c:937)
00003908: 2018-08-06T01:24:09 [SOURCE_UNLOAD ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00003908: 2018-08-06T01:24:09 [SOURCE_UNLOAD ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00005524: 2018-08-06T01:24:10 [TARGET_LOAD ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00005524: 2018-08-06T01:24:10 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00005900: 2018-08-06T01:24:11 [TASK_MANAGER ]I: Subtask #5 ended (replicationtask_util.c:937)
00007512: 2018-08-06T01:27:25:801731 [INFRASTRUCTURE ]I: The log level for 'SOURCE_CAPTURE' has been changed from 'INFO' to 'VERBOSE'. (at_logger.c:2656)
00007512: 2018-08-06T01:27:25:801731 [SOURCE_CAPTURE ]T: Oracle CDC retry counter exceeded, retry once more in debug mode (oracdc_merger.c:1062)
00007512: 2018-08-06T01:27:25:801731 [SOURCE_CAPTURE ]V: Failed to fetch from thread '1' (oracdc_merger.c:1069)
00007512: 2018-08-06T01:27:25:801731 [SOURCE_CAPTURE ]V: Waiting 5 seconds before retry (oracdc_merger.c:878)
00007512: 2018-08-06T01:27:30:803150 [SOURCE_CAPTURE ]V: Reading next record in thread '1' (oracdc_reader.c:2022)
00007512: 2018-08-06T01:27:30:803150 [SOURCE_CAPTURE ]T: Set position in LOG MINER (oracdc_reader.c:1283)
00007512: 2018-08-06T01:27:30:953166 [SOURCE_CAPTURE ]T: Start LogMiner Session in thread '1' for: StartScn '0000000010AEE1B3', MaxScn '000000001141054E', LastTransactionScn '0000000000000000', ScnNotFoundError '0' (oracdc_reader.c:858)
00007512: 2018-08-06T01:27:32:182263 [SOURCE_CAPTURE ]T: Archived Redo log for the sequence 2919 does not exist, thread 1 (oracdc_reader.c:559)
00007512: 2018-08-06T01:27:32:182263 [SOURCE_CAPTURE ]T: Failed to set position in thread '1' (oracdc_reader.c:2038)
00007512: 2018-08-06T01:27:32:183293 [SOURCE_CAPTURE ]V: Going to execute the statement 'select status from v$thread where thread#= :thread' for thread '1' (oracdc_merger.c:74)
00007512: 2018-08-06T01:27:32:273278 [SOURCE_CAPTURE ]T: Oracle CDC stopped [1022301] (oracdc_merger.c:1056)
00007512: 2018-08-06T01:27:32:273278 [SOURCE_CAPTURE ]E: Oracle CDC stopped [1022301] (oracdc_merger.c:1056)
00007512: 2018-08-06T01:27:32:282272 [SOURCE_CAPTURE ]T: Error executing source loop [1022301] (streamcomponent.c:1564)
00007512: 2018-08-06T01:27:32:282272 [SOURCE_CAPTURE ]T: Stream component 'st_0_TCQA' terminated [1022301] (subtask.c:1513)
00007512: 2018-08-06T01:27:32:282272 [SOURCE_CAPTURE ]T: Free component st_0_TCQA (oracle_endpoint.c:49)
00007512: 2018-08-06T01:27:32:282272 [SOURCE_CAPTURE ]E: Error executing source loop [1022301] (streamcomponent.c:1564)
00005900: 2018-08-06T01:27:32:282272 [TASK_MANAGER ]E: Task error notification received from subtask 0, thread 0 [1022301] (replicationtask.c:2272)
00007512: 2018-08-06T01:27:32:282272 [TASK_MANAGER ]E: Stream component failed at subtask 0, component st_0_TCQA [1022301] (subtask.c:1350)
00005900: 2018-08-06T01:27:32:299272 [TASK_MANAGER ]W: Task 'GA_CA_LOAD_ENTITY_TC' encountered a fatal error (repository.c:4773)
00007512: 2018-08-06T01:27:32:282272 [SOURCE_CAPTURE ]E: Stream component 'st_0_TCQA' terminated [1022301] (subtask.c:1513)
00010108: 2018-08-06T01:27:32:313274 [SORTER ]I: Final saved task state. Stream position timestamp:2018-07-31T09:35:50, Source id 1, next Target id 1, confirmed Target id 0 (sorter.c:652)
00006428: 2018-08-06T01:27:32:318273 [SOURCE_CAPTURE ]T: Free component Utility Source (oracle_endpoint.c:49)
00005900: 2018-08-06T01:27:32:586295 [TASK_MANAGER ]I: Subtask #0 ended (replicationtask_util.c:937)
00005900: 2018-08-06T01:27:32:596296 [SERVER ]I: Stop server request received internally (server.c:2956)
00005900: 2018-08-06T01:27:32:596296 [TASK_MANAGER ]I: Task management thread terminated (replicationtask.c:3089)
00007908: 2018-08-06T01:27:34:56423 [SERVER ]I: Client session (ID 6834) closed (dispatcher.c:200)
00007908: 2018-08-06T01:27:34:64425 [UTILITIES ]I: The last state is saved to file 'C:\Program Files\Attunity\Replicate\data\tasks\GA_CA_LOAD_ENTITY_TC/StateManager/ars_saved_state_000002.sts' at Mon, 06 Aug 2018 07:27:32 GMT (1533540452313276) (statemanager.c:601)
00007180: 2018-08-06T01:27:34:72419 [SERVER ]I: The process stopped (server.c:3079)
00007180: 2018-08-06T01:27:34:73420 [AT_GLOBAL ]I: Closing log file at Mon Aug 06 01:27:34 2018 (at_logger.c:2288)

Library error: not able to add Oracle as endpoint

While creating an Oracle endpoint and specifying the connection details, I get the library error below:



  • Stream component initialization function has failed for component 'Oracle', type 'Oracle'. Failed to load.
  • Cannot load <libclntsh.so.12.1, libclntsh.so.11.1, libclntsh.so.10.1, >: Success Failed to load. Cannot load <libclntsh.so.10.1>: libclntsh.so.10.1: cannot open shared object file: No such file or directory Failed to load. Cannot load <libclntsh.so.11.1>: libclntsh.so.11.1: cannot open shared object file: No such file or directory Failed to load. Cannot load <libclntsh.so.12.1>: libclntsh.so.12.1: cannot open shared object file: No such file or directory Failed to load.



I have added the Oracle home library directory to LD_LIBRARY_PATH, and the attunity user can access the library file libclntsh.so.12.1, but it still gives the error.

On the Unix host I log in as the "attrep" user; however, I log in to the console as the admin user.
Can you tell me whether the error is due to this?
I will create a new thread if this is not resolved by a reply here.
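A note that may help: the Replicate server process resolves libclntsh from its own environment, so a check run as the OS user that owns the service is more telling than one run from the console. Below is a minimal pre-flight sketch, assuming a POSIX shell; the function name and the ORACLE_HOME usage in the comment are my own illustration, not a Replicate feature.

```shell
# check_oracle_lib DIR -- report whether an Oracle client library
# (libclntsh.so.*) is present in DIR; prints "found" or "missing".
check_oracle_lib() {
    dir="$1"
    if ls "$dir"/libclntsh.so.* >/dev/null 2>&1; then
        echo "found"
    else
        echo "missing"
    fi
}

# Hypothetical usage, run as the service owner (here "attrep"):
#   check_oracle_lib "$ORACLE_HOME/lib"   # should print "found"
#   export LD_LIBRARY_PATH="$ORACLE_HOME/lib:$LD_LIBRARY_PATH"
```

If this prints "found" for the service user but the endpoint still fails, the variable is likely not set in the environment the server process actually starts with.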



Full load is working but CDC is not happening

Full load is working, but CDC is not happening: if I change data in the source, the change is not applied to the target. Could you please advise?

00002912: 2018-08-06T07:04:25 [UTILITIES ]I: The state is restored from file 'C:\Program Files\Attunity\Replicate\data\tasks\GR_C_LOAD_ENTITY_T_SI/StateManager/ars_saved_state_000002.sts' saved at Mon, 06 Aug 2018 13:04:09 GMT (1533560649412044) (statemanager.c:995)
00002912: 2018-08-06T07:04:25 [TASK_MANAGER ]I: Task 'GR_C_LOAD_ENTITY_T_SI' running full load and CDC in resume mode (replicationtask.c:1048)
00008648: 2018-08-06T07:04:25 [TASK_MANAGER ]I: Task Id: ce2a2297-342c-7f40-8c09-4a8a6d096e5f (replicationtask.c:2578)
00008648: 2018-08-06T07:04:25 [METADATA_MANAGE ]I: Going to connect to Oracle server (description=(address=(protocol=tcp)(host=X.X.X.X)(port=1521))(connect_data=(sid=xxx))) with username IFL_I (oracle_endpoint_imp.c:958)
00008648: 2018-08-06T07:04:26 [METADATA_MANAGE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00008648: 2018-08-06T07:04:26 [METADATA_MANAGE ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00008648: 2018-08-06T07:04:26 [TASK_MANAGER ]I: Creating threads for all components (replicationtask.c:1590)
00008648: 2018-08-06T07:04:26 [TASK_MANAGER ]I: Threads for all components were created (replicationtask.c:1738)
00008648: 2018-08-06T07:04:26 [TASK_MANAGER ]I: Task initialization completed successfully (replicationtask.c:2637)
00005720: 2018-08-06T07:04:26 [SOURCE_CAPTURE ]I: Use any Oracle Archived Log Destination (oracle_endpoint_imp.c:723)
00005720: 2018-08-06T07:04:26 [SOURCE_CAPTURE ]I: Oracle CDC uses LogMiner access mode (oracle_endpoint_imp.c:732)
00005720: 2018-08-06T07:04:26 [SOURCE_CAPTURE ]I: retry timeout is '120' minutes (oracle_endpoint_imp.c:879)
00005720: 2018-08-06T07:04:26 [SOURCE_CAPTURE ]I: Scale is set to 10 for NUMBER Datatype (oracle_endpoint_imp.c:901)
00005720: 2018-08-06T07:04:26 [SOURCE_CAPTURE ]I: Retry interval is set to 5 (oracle_endpoint_imp.c:909)
00005720: 2018-08-06T07:04:27 [SOURCE_CAPTURE ]I: Oracle database version is 12.1.0.2.0 (oracle_endpoint_conn.c:546)
00005720: 2018-08-06T07:04:27 [SOURCE_CAPTURE ]I: Oracle compatibility version is 12.1.0.2.0 (oracle_endpoint_conn.c:86)
00005720: 2018-08-06T07:04:27 [SOURCE_CAPTURE ]I: Standby database role is used. (oracle_endpoint_conn.c:136)
00005720: 2018-08-06T07:04:27 [SOURCE_CAPTURE ]I: SUPPLEMENTAL_LOG_DATA_PK is set (oracle_endpoint_conn.c:146)
00002984: 2018-08-06T07:04:27 [TARGET_APPLY ]I: Target endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:585)
00002984: 2018-08-06T07:04:27 [TARGET_APPLY ]I: Restore bulk state. Last bulk last record id - '0', last applied record id - '0', target confirmed record id - '0' (endpointshell.c:1413)
00002984: 2018-08-06T07:04:27 [TARGET_APPLY ]I: Working in bulk apply mode (endpointshell.c:1420)
00002984: 2018-08-06T07:04:28 [SOURCE_CAPTURE ]I: Source endpoint 'Oracle' is using provider syntax 'Oracle' (provider_syntax_manager.c:579)
00009428: 2018-08-06T07:04:28 [SORTER ]I: Start the task using saved state. Start source from stream position timestamp:2018-08-06T11:37:59 and id 1. Confirmed target id is 0, next target id is 1 (sorter.c:448)
00009428: 2018-08-06T07:04:28 [SORTER_STORAGE ]I: Swap files were loaded. Next target id to be assigned is 1. Next swap file id is 1 (transaction_storage.c:563)
00008648: 2018-08-06T07:04:28 [TASK_MANAGER ]I: All stream components were initialized (replicationtask.c:2444)
00005720: 2018-08-06T07:04:29 [SOURCE_CAPTURE ]I: Used Oracle archived Redo log destination id is '1' (oracdc_merger.c:543)
00005720: 2018-08-06T07:04:29 [SOURCE_CAPTURE ]I: The Capture process starts from the local time '2018-08-06 05:37:59' (UTC '2018-08-06 11:37:59)' (oracdc_merger.c:739)
00005720: 2018-08-06T07:04:29 [SOURCE_CAPTURE ]I: No opened transactions (oracle_endpoint_capture.c:852)
00009428: 2018-08-06T07:04:29 [SORTER ]I: Transaction consistency reached (sorter_transaction.c:262)
00008648: 2018-08-06T07:04:29 [TASK_MANAGER ]I: Starting replication now (replicationtask.c:2193)
00005720: 2018-08-06T07:04:30 [SOURCE_CAPTURE ]I: New Log Miner boundaries in thread '1' : First REDO Sequence is '180', Last REDO Sequence is '180' (oracdc_reader.c:633)
00009428: 2018-08-06T07:14:29 [SORTER ]I: Task is running (sorter.c:583)
00009428: 2018-08-06T07:24:29 [SORTER ]I: Task is running (sorter.c:583)
00009428: 2018-08-06T07:34:29 [SORTER ]I: Task is running (sorter.c:583)
00009428: 2018-08-06T07:44:30 [SORTER ]I: Task is running (sorter.c:583)
00009428: 2018-08-06T07:54:30 [SORTER ]I: Task is running (sorter.c:583)
00009428: 2018-08-06T08:04:31 [SORTER ]I: Task is running (sorter.c:583)
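One quick triage step for a log like the one above is to compare the capture-side heartbeats against target-apply activity. Here is a small sketch, assuming a POSIX shell and the exact log phrases shown above; the function name is my own.

```shell
# scan_cdc_log FILE -- count SORTER "Task is running" heartbeats and
# TARGET_APPLY lines in a Replicate task log. Many heartbeats with few
# or no apply lines suggests capture is alive but no source changes
# are reaching the target.
scan_cdc_log() {
    file="$1"
    heartbeats=$(grep -c 'Task is running' "$file" || true)
    applies=$(grep -c 'TARGET_APPLY' "$file" || true)
    echo "heartbeats=$heartbeats apply_lines=$applies"
}
```

If only heartbeats accumulate, it is worth verifying supplemental logging on the source tables and that the changes were made after the capture start position printed in the log (here 2018-08-06T11:37:59 UTC).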

Partioned Tables Metadata Query

We recently upgraded our version of Replicate to 6.1.0.402 and have noticed an issue with a full-load task. The source table is partitioned, and the task hangs on the query listed below; eventually the task suspends the table. If I run the query in SSMS, it takes over 11 minutes to return a response, so I'm not sure what to do. We have no control over how the table is partitioned, since it comes from a third party. Is there a way to increase the timeout so that the task will wait longer? This appears to be a change in this release: the same task runs fine on an older version of Replicate, which doesn't appear to use this query.

SELECT
    pf.name AS PartitionFunctionName, c.name AS PartitionKey
FROM sys.dm_db_partition_stats AS pstats
INNER JOIN sys.partitions AS p
    ON pstats.partition_id = p.partition_id
INNER JOIN sys.destination_data_spaces AS dds WITH (NOLOCK)
    ON pstats.partition_number = dds.destination_id
INNER JOIN sys.data_spaces AS ds
    ON dds.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
    ON dds.partition_scheme_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
INNER JOIN sys.indexes AS i
    ON pstats.object_id = i.object_id AND pstats.index_id = i.index_id AND dds.partition_scheme_id = i.data_space_id AND i.type <= 1
INNER JOIN sys.index_columns AS ic
    ON i.index_id = ic.index_id AND i.object_id = ic.object_id AND ic.partition_ordinal > 0
INNER JOIN sys.columns AS c
    ON pstats.object_id = c.object_id AND ic.column_id = c.column_id
WHERE pstats.object_id = OBJECT_ID(N'[schema].[tablename]', N'table')
GROUP BY pf.name, c.name
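For reproducing the slowness outside Replicate, a simple bounded run from the command line can tell you whether the query itself ever completes within a given budget. This is a sketch assuming GNU coreutils `timeout`; the sqlcmd invocation in the comment is a hypothetical example (sqlcmd's own -t flag sets its query timeout in seconds). Note this only helps diagnose the query; it does not change Replicate's internal timeout.

```shell
# run_with_limit SECONDS CMD... -- run CMD, killing it if it exceeds
# SECONDS; exit status is non-zero on timeout or command failure.
run_with_limit() {
    secs="$1"; shift
    timeout "$secs" "$@"
}

# Hypothetical usage to time the metadata query outside Replicate:
#   run_with_limit 900 sqlcmd -S myserver -d mydb -t 900 -i partition_metadata.sql
```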

DB2 LUW Source: missing db2ReadLog in Apply Changes mode

Hi,
I've set up Attunity Replicate Express on RHEL7,
Source: DB2 10.5,
Target: MSSQL 2014

Initial load works fine, but as soon as it finishes and Attunity tries to query the logs for changes, I get an error:
110494284: 2018-08-09T15:11:57 [INFRASTRUCTURE ]E: Failed to get symbol <db2ReadLog>: /opt/odbc_cli/clidriver/lib/libdb2o.so: undefined symbol: db2ReadLog [1000124] (at_loader.c:152)
The rest of the functionality works fine.
I applied the most recent Fix Pack for the IBM DB2 ODBC driver 10.5, still no luck.

Where should I look next?
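The error suggests the library the driver loads does not define db2ReadLog. The lightweight IBM CLI/ODBC driver typically lacks the log-read API that a full Data Server Client install provides, so it is worth checking which library is in play and what it defines. A sketch of that check, assuming a POSIX shell with binutils `nm`; the helper name is my own:

```shell
# has_dynamic_symbol LIB SYMBOL -- succeed if shared library LIB defines
# SYMBOL in its dynamic symbol table (undefined "U" references, like the
# one in the error message, are excluded).
has_dynamic_symbol() {
    lib="$1"; sym="$2"
    nm -D "$lib" 2>/dev/null \
        | awk '$(NF-1) != "U" {print $NF}' \
        | sed 's/@.*//' \
        | grep -qx "$sym"
}

# Usage against the driver from the error message:
#   has_dynamic_symbol /opt/odbc_cli/clidriver/lib/libdb2o.so db2ReadLog \
#       && echo defined || echo missing
```

If it reports "missing", installing the full DB2 client (rather than the standalone CLI driver) and pointing Replicate at its libraries is the direction to investigate; IBM's documentation can confirm which package ships db2ReadLog.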

Tomcat Connection Pool to Oracle RAC

Hi all, our application connects to an Oracle RAC cluster using the Tomcat connection pool. One node in the cluster had an issue and went into a hung state.
To ensure the application is not impacted, we would like deeper visibility into the status of the connection pool, for example whether all connections to a particular RAC node are having an issue. Among the available health-check parameters, we could not find a way to identify which group of connections to a given RAC node is affected.
If we could, we could take corrective action based on it (either manually or by automating the replacement of pool connections to a working RAC node).
Let us know if anybody has faced such an issue.
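Independent of the pool's own metrics, one way to see how established connections are distributed across RAC nodes is to count TCP peers on the application server. This is a sketch assuming a POSIX shell; the `ss` pipeline in the comment and the 1521 listener port are assumptions to adapt to your environment.

```shell
# count_peers -- read peer "host:port" addresses on stdin and print a
# per-host connection count, busiest host first.
count_peers() {
    awk -F: '{print $1}' | sort | uniq -c | sort -rn
}

# Hypothetical usage on the app server (adjust the listener port):
#   ss -tn state established '( dport = :1521 )' | awk 'NR>1 {print $4}' | count_peers
```

A node whose count suddenly drops to zero, or whose connections pile up while others stay steady, is a candidate for targeted pool eviction.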
Thanks,
Lianamelissa.

Library Error when adding Oracle as Endpoint

While creating an Oracle endpoint and specifying the connection details, I get the library error below:

  • Stream component initialization function has failed for component 'Oracle', type 'Oracle'. Failed to load.
  • Cannot load <libclntsh.so.12.1, libclntsh.so.11.1, libclntsh.so.10.1, >: Success Failed to load. Cannot load <libclntsh.so.10.1>: libclntsh.so.10.1: cannot open shared object file: No such file or directory Failed to load. Cannot load <libclntsh.so.11.1>: libclntsh.so.11.1: cannot open shared object file: No such file or directory Failed to load. Cannot load <libclntsh.so.12.1>: libclntsh.so.12.1: cannot open shared object file: No such file or directory Failed to load.




I have added the Oracle home library directory to LD_LIBRARY_PATH, and the attunity user can access the library file libclntsh.so.12.1, but it still gives the error.

On the Unix host I log in as the "attrep" user; however, I log in to the console as the admin user.
Can you tell me whether the error is due to this?
I will create a new thread if this is not resolved by a reply here.

Visual Studio SSIS/SSDT and Attunity OCI Error

Just a heads-up. We struggled with getting the Attunity drivers to work correctly with existing and new SSIS/SSDT packages at our shop. We had new installations of Visual Studio 2017 with SSDT, the Oracle 11g client, and Attunity 2.0 and 5.0 (32/64-bit) set up. Legacy SSIS packages would not validate due to OCI errors, and new package work would not allow the use of Attunity connections. Since Oracle data access via SQL*Plus and Toad was not impacted, the problem was between Visual Studio and Attunity.

We finally stumbled on our solution: once everything was set up, we needed to go back and "touch" something in the Oracle client via the Oracle Universal Installer. A co-worker removed an excess Oracle Home; in my case, I uninstalled the Oracle 12c client. In both cases the OCI error was resolved. Something about touching the Oracle client last in the installation/setup process made it play nice with Attunity in the Visual Studio environment.

Hope this might save you some grief.

Change processing for large tables

Hi,

Is there any limitation on change processing for DB2 LUW tables in terms of table size or number of records? We have a new setup, and in one task we have added about 8 tables; 7 of them are refreshing fine without any issues. One table is fully loaded, but no incremental changes are being applied to it. This table has about 20 million records. I've attached the full-load and change-processing snapshots.

Regards,

Vishal

Attached images: full_load.JPG (64.1 KB), change_processing.JPG (54.2 KB)

Confluent Platform 5.0

I'd like to know when Confluent Platform 5.0 (Kafka) will be supported in Replicate.