Channel: Attunity Integration Technology Forum

Attunity Oracle destination error output problem

My SSIS package has just one data flow task, with an Oracle Attunity source and an Attunity destination. My source table has just one column (for testing) of data type VARCHAR2(12 CHAR). The destination table has the same column but with a reduced size of VARCHAR2(10 CHAR). When I execute this I expect to get a truncation error, as I am trying to insert a 12-character string into a target column of size 10; a simple test case.

I configured and enabled a data viewer on the Oracle Destination Error Output to redirect rows on failure or truncation. When I execute this I can see the records redirected to the error output, but with an incorrect ErrorCode and ErrorCode description.

I replaced the Oracle Attunity destination with an OLE DB destination and executed the data flow. I can see the expected error code and description now. I have attached screenshots for reference.

Am I doing something wrong or is this expected?
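For reference, a minimal sketch (outside SSIS) of what Oracle itself raises for this test case; the connection details and table name are hypothetical placeholders:

import cx_Oracle

# Hypothetical credentials/DSN; trunc_target has one VARCHAR2(10 CHAR) column
conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb")
cur = conn.cursor()
try:
    # Insert a 12-character string into the 10-character column
    cur.execute("INSERT INTO trunc_target (col1) VALUES (:1)", ["ABCDEFGHIJKL"])
except cx_Oracle.DatabaseError as e:
    # Oracle raises ORA-12899 (value too large for column); this is the
    # truncation condition the destination's error output should describe
    print(e)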

Skip virtual columns in Oracle destination

Hi,
My Oracle destination table has a virtual column. In SSIS, is it possible to skip this column when using Attunity connector 4.0? I get the error message "ORA-54013: INSERT operation disallowed on virtual columns".
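One usual fix is to leave the virtual column unmapped in the destination's column mappings. As an aid, a hedged sketch that discovers which columns are actually insertable by querying Oracle's data dictionary (connection details, schema, and table name are hypothetical):

import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb")  # hypothetical
cur = conn.cursor()
# ALL_TAB_COLS flags generated columns with VIRTUAL_COLUMN = 'YES'
cur.execute(
    """
    SELECT column_name
      FROM all_tab_cols
     WHERE owner = :owner AND table_name = :tab
       AND virtual_column = 'NO' AND hidden_column = 'NO'
     ORDER BY column_id
    """,
    owner="MYSCHEMA", tab="MY_TABLE",
)
print([row[0] for row in cur])  # map only these columns in the destination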

Oracle as Source and Snowflake as target: __CT Table keeps growing

Hi,

I have been working with Attunity for the past few weeks on a POC license. My source is Oracle and my target is Snowflake. Everything looks good: we were able to achieve full load and change data capture, where the primary table in the target is always in sync with the table in Oracle and all the I/U/D operations show up in the __CT table.

My 2 main concerns here are:
1. The header__timestamp column with Datetime(6) won't show the value in milliseconds.

2018-09-18 18:46:04.000
2018-09-18 18:46:04.000
2018-09-18 18:47:59.000
2018-09-18 18:47:59.000

This won't help us because we have thousands of transactions per second; unless we have milliseconds up to Timestamp(FF6) precision, it's impossible for downstream to run queries.

2. The __CT table in the target (Snowflake) keeps growing. I would like to age out data in the target so that I keep no more than 15 days; on the 16th day, the 1st day's data should be gone. How do I achieve this with the limited SQL transformations in Global Transformations?
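On concern 2: if Global Transformations can't express a retention window, one option is a purge scheduled outside Replicate. A minimal sketch against Snowflake, assuming the change table is MY_TABLE__CT and header__timestamp is populated (account, credentials, and all object names are placeholders):

import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount", user="etl_user", password="...",
    warehouse="ETL_WH", database="TARGET_DB", schema="TARGET_SCHEMA",
)
# Keep a rolling 15-day window of change records; run daily from cron
# or a Snowflake TASK
conn.cursor().execute(
    "DELETE FROM MY_TABLE__CT "
    "WHERE header__timestamp < DATEADD(day, -15, CURRENT_TIMESTAMP())"
)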

Error Log Tables Location

Attention gurus:

We have been using Replicate with Teradata as the target DB. Once in a while, dozens of error tables (something like my_table_59007E330000_E1) are created in the same database as our production tables. These tables are marginally useful for troubleshooting, but is there a setting in Replicate to change the default database for error logs, say to a separate log DB? Alternatively, how about collecting error logs in just ONE table per object?

Thank you!
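Pending a native setting, one stopgap is to inventory and periodically drop the error tables. A hedged sketch; the _E1 suffix is inferred from the name quoted above, and the host, credentials, and database name are placeholders:

import teradatasql

with teradatasql.connect(host="tdhost", user="dba", password="...") as conn:
    cur = conn.cursor()
    # '!' escapes the underscore so it is matched literally, not as a wildcard
    cur.execute(
        "SELECT TableName FROM DBC.TablesV "
        "WHERE DatabaseName = 'PROD_DB' "
        "AND TableName LIKE '%!_E1' ESCAPE '!'"
    )
    for (name,) in cur.fetchall():
        print(f"DROP TABLE PROD_DB.{name};")  # review before executing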

Windows Server 2012 R2 and SQL Server SP1 - Upgrading Teradata Driver to 16.2 - Compatibility

TL;DR: My main concern is whether the Attunity V4 connector for Teradata is compatible with Teradata Drivers and Utilities version 16.2.

Hello, I'm on Windows Server 2012 R2 Standard Edition with SQL Server 2016 SP1. I also have Teradata Driver 15.10, Teradata Transport Utilities 15.10, and Attunity V3 installed.
I need to upgrade the Teradata Driver and Utilities to Teradata version 16.2, and I may also have to upgrade the SQL Server Attunity connector for Teradata to version 4; I'm not sure yet on that one.
My question is: has someone else out here with the same system setup performed this same upgrade without issues or incompatibility problems? Any pitfalls I should be made aware of, by chance?

Thank you

Installing R1 (edge) on Ubuntu

Which version of R1 listed under Downloads on https://attunity.force.com should I use for an Ubuntu machine? This machine will be one of two EDGEs connected to a Windows Server CENTER. Thanks

Server Requirements

Hello, I recently tried the Attunity SSIS Connector for Oracle v3.0 on my local workstation and it worked great. Now I need to plan how to get all this working on a production server, and I need to tell my SQL Server Enterprise team what I need. I am using the following on my local workstation, though of course some of these apps won't be needed on the server:

-SQL Server 2014 (Express or Developer)
-SQL Server Data Tools for Visual Studio 2013
-Attunity SSIS Connector for Oracle v 3.0
-I also have Oracle SQL Developer, which no doubt installed some drivers.

My goal is to pull some data out of a production Oracle database into SQL Server for some reporting applications, and also to pull data out of Oracle into Excel spreadsheets in .xlsb format.

I had to update my tnsnames.ora file with an entry for the Oracle data source to get the Attunity Oracle Source data flow item to work. I'm not going to have a tnsnames.ora file on a production SQL Server!

What are my options and what do I need to tell the SQL Server team I need to get this to work in a production SQL Server environment?

Help appreciated

Thanks,
Frank
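One option that avoids shipping a tnsnames.ora to the server: Oracle clients also accept a full connect descriptor or an EZConnect string (host:port/service_name) in place of a TNS alias, and the Attunity Oracle connection manager reportedly takes these directly (worth verifying on your version). A sketch of the same idea from Python; the host, port, and service name are hypothetical:

import cx_Oracle

# Build a descriptor programmatically instead of relying on tnsnames.ora
dsn = cx_Oracle.makedsn("orahost.example.com", 1521, service_name="ORCLPDB1")
# Equivalent EZConnect form: "orahost.example.com:1521/ORCLPDB1"
conn = cx_Oracle.connect("report_user", "...", dsn)
print(conn.version)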

Strange Issue w/ Kicking Off Tasks

This may not be the right place to post this question, but I'm wondering if anyone else is experiencing the same issues we are with our Replicate install.

We have been working in a Dev instance of Replicate 6.2.0.255 (previously we were on 6.1.0.XXX; I forget which version specifically, but I think it was before the current release of 6.1.0.442). We were testing our migration process by exporting and importing jobs, and we kept seeing a recurring error where the task would immediately fail and throw "Task Server Initialization Failed." No useful logs would be generated by the failed task.

During a lot of looking around online (with no luck) and some manual digging on our managed server (Windows Server 2016), I noticed that there were subfolders for each of our tasks in the directory D:\Attunity\Replicate\data\tasks\, where we had installed the app. I thought there might be some strange caching issue, so I tried to delete the folder for our failed task, only to have it immediately recreate itself as a blank directory with the same name. The same thing happened when I renamed the folder. When I exported the failing task, edited the JSON string to use a different name, and re-imported it, the task ran as intended. This leads me to believe that we either have a corrupted installation of Replicate or are unearthing a caching bug.

Has anyone else in this forum seen this, or does anyone have thoughts on it?
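A sketch of the JSON rename workaround described above, in case it helps others; the file name and the key path are assumptions, so check your own export for where the task name actually lives:

import json

with open("failing_task.json") as f:
    task = json.load(f)

# Hypothetical key path; Replicate exports may nest the name differently
task["cmd.replication_definition"]["tasks"][0]["task"]["name"] = "my_task_v2"

with open("renamed_task.json", "w") as f:
    json.dump(task, f, indent=2)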

Issues with Oracle Attunity 5.0 and SSIS 2.0 (SSDT 15.8.1) in VS 15.8.7 Enterprise

I have a problem in VS 2017 15.8.7. I can't create an MSORA connection manager, as there is no option available, and there are no Attunity components in the SSIS toolbox.

If I try to open an existing package with the Attunity connector, I get this:
  • Severity Code Description Project File Line Suppression State Error Error loading oracle_fact_customer_order.dtsx: Failed to create COM Component Categories Manager due to error 0x80070005 "Access is denied.". oracle_fact_customer_order.dtsx 1
  • Severity Code Description Project File Line Suppression State Error Error loading oracle_fact_customer_order.dtsx: The component metadata for "Get Oracle fact_orders" could not be upgraded to the newer version of the component. The PerformUpgrade method failed. oracle_fact_customer_order.dtsx 1
  • Severity Code Description Project File Line Suppression State Error Error loading oracle_fact_customer_order.dtsx: The component is missing, not registered, not upgradeable, or missing required interfaces. The contact information for this component is "Oracle Source;Microsoft Connector for Oracle by Attunity; Attunity Ltd.; All Rights Reserved; http://www.attunity.com;6". oracle_fact_customer_order.dtsx 1


I have VS 2015 installed as well, and it's working just fine. MS, please stop monkeying around with the tool. I just spent 2 days trying to figure this out.

iSeries DB2 journals

Hi, first post so apologies in advance if I break any rules.

Just getting to grips with Replicate using iSeries journals as a source. We're having problems tying up deletes, as there is no before-imaging and obviously the after-image is blank.
I don't believe RRNs are stable enough, and the iSeries guys are reluctant to turn on before-imaging due to the impact on the journals. I would be grateful if anyone has any ideas or experiences that might help.

Colin

Setting IsSorted from BIML on a Teradata Source Connector.

I am using the Teradata Source Connector by Attunity. I am trying to build a Source Connector to a Teradata database with BIML.
I need to set the IsSorted property on my output. How do I do this from BIML? Can it be done with a CustomProperty on my CustomComponent?

/Andreas

MySQL source "Read next binary log event timed-out"

We are using MySQL db as a source and SQL Server as a destination.

Replication seems to run great for hours, with a full load taking only 11 minutes, but after a few hours it gets this timed-out error and quits replicating.

Stopping and resuming the task gets replication working again, but it needs to be done manually.

Ideally, I would like to resolve the timed-out issue, but is there any way to trigger an automatic restart of the task upon errors?


error:
00004680: 2018-11-12T13:04:25 [SOURCE_CAPTURE ]I: Read next binary log event timed-out. (mysql_endpoint_capture.c:932)


All servers are running in AWS; the source is in the Oregon region, and the Attunity and destination servers are in the Ohio region (67 ms latency).

Any help would be appreciated.
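On the automatic-restart question: Replicate's built-in error-handling retry settings are worth checking first; failing that, a watchdog along the lines of the Enterprise Manager REST API could resume the task. The endpoint paths and response fields below are assumptions, not a documented contract, so verify them against your AEM version:

import time
import requests

BASE = "https://aem-host/attunityenterprisemanager/api/v1"  # hypothetical
AUTH = ("aem_user", "...")
TASK = f"{BASE}/servers/repsrv1/tasks/mysql_to_mssql"  # hypothetical path

while True:
    r = requests.get(TASK, auth=AUTH)
    if r.ok and r.json().get("state") == "ERROR":
        # Hypothetical resume call; adjust to the real API's action syntax
        requests.post(TASK, params={"action": "run",
                                    "option": "RESUME_PROCESSING"}, auth=AUTH)
    time.sleep(300)  # poll every 5 minutes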

Get net changes occasionally does not capture some rows - why is nolock used?

I have a problem where, every few weeks, the get_net_changes function does not pick up some change data capture rows that are within the LSN range. Then I have variances from the source system and either need to do a full load or update the data in the source so that it is captured again.

I'm wondering if it is perhaps because the functions use WITH (NOLOCK). Is there a reason this hint is used? Would we get deadlocks without it? Couldn't this be a problem, since with the hint there is the potential for missed rows or dirty reads?

But I'm hesitant to just go through the functions and remove the hint as I don't want to mess them up.

Scott
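One way to test the dirty-read theory without editing the generated functions is to read the underlying change table directly, with default locking semantics, for the same LSN range, and compare counts against the function's output (fn_cdc_get_all_changes is used here because its row counts line up more directly than net changes). The DSN and the capture instance dbo_MyTable are hypothetical:

import pyodbc

conn = pyodbc.connect("DSN=MyCdcDb;Trusted_Connection=yes")  # hypothetical
cur = conn.cursor()

cur.execute("SELECT sys.fn_cdc_get_min_lsn('dbo_MyTable'), sys.fn_cdc_get_max_lsn()")
from_lsn, to_lsn = cur.fetchone()

# Function output (its body uses WITH (NOLOCK) internally)
cur.execute(
    "SELECT COUNT(*) FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(?, ?, 'all')",
    from_lsn, to_lsn)
via_function = cur.fetchone()[0]

# Underlying change table read directly; __$operation 3 (update before-image)
# is excluded because the 'all' row filter above omits it too
cur.execute(
    "SELECT COUNT(*) FROM cdc.dbo_MyTable_CT "
    "WHERE __$start_lsn BETWEEN ? AND ? AND __$operation <> 3",
    from_lsn, to_lsn)
direct = cur.fetchone()[0]

print(via_function, direct)  # a mismatch would support the dirty-read theory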

Exclude change tables in downstream replication

We are using Attunity to replicate from numerous on-prem sources to AWS Aurora Postgres database. In this task, we have enabled Store Changes to capture the history tables. We are also using Attunity to replicate from this Aurora database to another Aurora database as well as to Snowflake. For these steps, we do not wish to replicate the change tables from the first Aurora instance. In the Table Selection, we told it to include all tables in the schema and then exclude all tables ending in __ct.
Example:
Include caps_cmsi.% (Tables)
Exclude caps_cmsi.%__ct% (Tables)

This resulted in the job excluding all tables that contain "ct" anywhere in the name. We exported the job, edited the json to remove the last % in the exclusion, and imported the job.
Example:
Include caps_cmsi.% (Tables)
Exclude caps_cmsi.%__ct (Tables)

This caused it to exclude all tables that end in "ct". The exclusion is ignoring the double underscore. How can we get Attunity to exclude the history tables from the source while not excluding tables with "ct" in the name?
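The behavior described is consistent with SQL LIKE semantics, where '_' is itself a single-character wildcard: '%__ct' reads as "anything, then any two characters, then ct", i.e. any name ending in "ct". A quick sketch of that reading (the LIKE-to-regex translation is an assumption about how Replicate interprets selection patterns):

import re

def like_to_regex(pattern: str) -> str:
    """Translate a LIKE-style pattern: '%' = any run, '_' = any one char."""
    return "^" + re.escape(pattern).replace("%", ".*").replace("_", ".") + "$"

tables = ["orders", "orders__ct", "product", "facts", "fact"]
for pat in ("%__ct%", "%__ct"):
    rx = re.compile(like_to_regex(pat), re.IGNORECASE)
    print(pat, "->", [t for t in tables if rx.match(t)])
# %__ct% -> ['orders__ct', 'product', 'facts', 'fact']
# %__ct  -> ['orders__ct', 'product', 'fact']

If Replicate's pattern syntax supports an escape character the way SQL LIKE's ESCAPE clause does, escaping the two underscores would make them literal; otherwise an explicit table list may be the safest route.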

An existing connection was forcibly closed by the remote host

A few months back, I started getting the error below when running jobs. These can be upload or download jobs. I have created new EC2 instances to run the jobs from, and new CloudBeam servers, and I still get this error frequently, even on servers that have a single job running on a dedicated CloudBeam server. I don't seem to be able to find any corresponding errors on either the CloudBeam or the EC2 instance running the client. Generally the errors are descriptive, and working out the cause is straightforward. This one, however, has me stumped.

04:48:11 LFA-E-RQUERY, Cannot query file propeties on server
-LFA-E-MSGRCV, Cannot receive LFA message
-SFM-E-RECV, failed to receive message
-LFA-E-LNKERR, Link error: NET-E-RECV, network recv error
-CRP-E-RECV, recv operation failed
-CRP-E-INTMSG, W_10054 An existing connection was forcibly closed by the remote host.
04:48:11 R1-E-Failed to transfer files

W_10054 An existing connection was forcibly closed by the remote host.

Generally speaking, the errors I encounter are pretty easy to track down. The error below, however, I am getting intermittently on larger LFA jobs. This can be on an EC2 instance brought up for the specific job with a dedicated CloudBeam server, or on EC2 instances and CloudBeams that have run hundreds of jobs. The source in S3 hasn't changed either. I had never seen this error until about 6 months ago; recently it has become more common. I have been unable to relate this error to any event viewer log or anything else. Is there a common cause for this?

22:55:26 LFA-E-RTRFIL, Cannot retrieve file
-LFA-E-MSGRCV, Cannot receive LFA message
-SFM-E-RECV, failed to receive message
-LFA-E-LNKERR, Link error: NET-E-RECV, network recv error
-CRP-E-RECV, recv operation failed
-CRP-E-INTMSG, W_10054 An existing connection was forcibly closed by the remote host.
22:55:26 R1-E-Failed to transfer files

Error when using DBF as a source

Hi all!

I'm getting an error when using ODBC as a source endpoint with DBF files as the source.

I used the Microsoft Access dBASE Driver:

SYS-E-HTTPFAIL, Command get_owner_list failed when getting the list.
SYS,GENERAL_EXCEPTION,Command get_owner_list failed when getting the list.,Failed to get owners '' RetCode: SQL_ERROR SqlState: HYC00 NativeError: 106 Message: [Microsoft][ODBC dBASE Driver]Optional feature not implemented


I also tried another driver, Devart ODBC for xBase:

SYS,GENERAL_EXCEPTION,Command get_owner_list failed when getting the list.,Failed to get owners '' RetCode: SQL_ERROR SqlState: HYC00 NativeError: 0 Message: [Devart][ODBC]Optional feature not implemented
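HYC00 "Optional feature not implemented" points at the catalog call the endpoint issues to list owners. A quick sketch to reproduce the probe outside Replicate (the DSN name is hypothetical):

import pyodbc

conn = pyodbc.connect("DSN=MyDbfFiles")  # hypothetical dBASE/xBase DSN
cur = conn.cursor()
try:
    # Roughly what a get_owner_list-style call exercises: the ODBC
    # catalog functions (SQLTables)
    for row in cur.tables():
        print(row.table_cat, row.table_schem, row.table_name)
except pyodbc.Error as e:
    print(e)  # expect the same HYC00 if the driver lacks catalog support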

MySQL Connection Error: No valid MySQL ODBC 5.x Unicode Driver is installed.

Hi Team,

I am getting the error below even after installing the driver; I am not able to connect to MySQL.

No valid MySQL ODBC 5.x Unicode Driver is installed. The required ODBC driver or driver manager is not properly installed and configured.
I installed the Attunity Express Edition for one month.

ODBC driver: mysql-connector-odbc-5.3.10-1.el7.x86_64
MySQL version: 5.6.41, MySQL Community Server (GPL), Linux x86_64

I am doing CDC from MYSQL to Hadoop.

Please see the attached screenshot.

Please help me with this. Thanks.
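A quick check worth running on the Linux machine: list the driver names the ODBC manager actually exposes, since the registered name (typically in /etc/odbcinst.ini with unixODBC) must be one Replicate recognizes as a MySQL ODBC 5.x Unicode driver. A minimal sketch:

import pyodbc

# Prints the registered driver names, e.g. 'MySQL ODBC 5.3 Unicode Driver';
# installing the RPM alone is not enough if no odbcinst.ini entry was created
for name in pyodbc.drivers():
    print(name)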

MySQL Connection Error: No valid MySQL ODBC 5.x Unicode Driver is installed.

Hi Team,

I am getting the error below even after installing the driver; I am not able to connect to MySQL.

No valid MySQL ODBC 5.x Unicode Driver is installed. The required ODBC driver or driver manager is not properly installed and configured.

I installed the Attunity Express Edition 6.2 one-month trial.

ODBC driver: mysql-connector-odbc-5.3.10-1.el7.x86_64
MySQL version: 5.6.41, MySQL Community Server (GPL), Linux x86_64

I installed Attunity Replicate Express Edition trial version 6.2 on a Linux machine.

I restarted Replicate after the MySQL ODBC driver was installed, and I added the new path to the LD_LIBRARY_PATH entries as well.

But I am still getting the same error.

I am doing CDC from MySQL to Hadoop. Please see the attached screenshot.

Please help me with this. Thanks.

HDFS Target Error: A call to Kerberos utility kinit failed

Hi Team,

Please help me with this keytab error. When I test the Hadoop target connection, I get the errors below.

-16861166: 2018-11-28T16:34:28 [INFRASTRUCTURE ]E: Failed to renew kerberos ticket [1002300] (at_kerberos.c:371)
-16861166: 2018-11-28T16:53:51 [SERVER ]I: Driver 'MySQL ODBC 5.2 Unicode Driver' is installed and will be used (mysql_endpoint_imp.c:548)
-16861166: 2018-11-28T16:53:51 [SERVER ]I: ODBC additional properties = '(null)' (mysql_endpoint_imp.c:614)
-16861166: 2018-11-28T16:53:51 [SERVER ]I: Connecting to MySQL through ODBC connection string: DRIVER={MySQL ODBC 5.2 Unicode Driver};SERVER=10.21.51.76;port=3306;UID=root;PWD= ***;DB=;CHARSET=binary;initstmt=SET time_zone='+00:00';Option=74448896;NO_LOCALE=1; (mysql_endpoint_imp.c:701)
-16861166: 2018-11-28T16:53:51 [SERVER ]I: Source endpoint 'Mysql' is using provider syntax 'MySQL' (provider_syntax_manager.c:610)
-16861166: 2018-11-28T16:55:59 [SERVER ]E: Missing ODBC host. [1020401] (hadoop_imp.c:696)
-16861166: 2018-11-28T16:56:08 [INFRASTRUCTURE ]E: A call to Kerberos utility kinit failed: commad arguments are '-kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hcdl_np@NPHCDLRIL.COM', exit status is 127, error message is /usr/bin/kinit: relocation error: /usr/bin/kinit: symbol krb5_get_init_creds_opt_set_pac_request, version krb5_3_MIT not defined in file libkrb5.so.3 with link time reference [1002300] (at_kerberos.c:481)
-16861166: 2018-11-28T16:56:08 [INFRASTRUCTURE ]E: Failed to renew kerberos ticket [1002300] (at_kerberos.c:371)
-16861166: 2018-11-28T16:56:20 [INFRASTRUCTURE ]E: A call to Kerberos utility kinit failed: commad arguments are '-kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hcdl_np@NPHCDLRIL.COM', exit status is 127, error message is /usr/bin/kinit: relocation error: /usr/bin/kinit: symbol krb5_get_init_creds_opt_set_pac_request, version krb5_3_MIT not defined in file libkrb5.so.3 with link time reference [1002300] (at_kerberos.c:481)
-16861166: 2018-11-28T16:56:20 [INFRASTRUCTURE ]E: Failed to renew kerberos ticket [1002300] (at_kerberos.c:371)
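The relocation error suggests /usr/bin/kinit is resolving libkrb5.so.3 from a directory on LD_LIBRARY_PATH (possibly Replicate's bundled libraries) instead of the system one. A hedged diagnostic sketch that re-runs the exact kinit from the log with LD_LIBRARY_PATH stripped; the keytab and principal are copied from the log above:

import os
import subprocess

# Drop LD_LIBRARY_PATH so the loader falls back to the system libkrb5
env = {k: v for k, v in os.environ.items() if k != "LD_LIBRARY_PATH"}
result = subprocess.run(
    ["/usr/bin/kinit", "-kt",
     "/etc/security/keytabs/hdfs.headless.keytab",
     "hdfs-hcdl_np@NPHCDLRIL.COM"],
    env=env, capture_output=True, text=True,
)
print(result.returncode, result.stderr)  # 0 here would confirm the shadowing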