Channel: Attunity Integration Technology Forum

Does Attunity CDC on MS SQL Server 2012 Enterprise Edition support Oracle 12c?

Does the Attunity CDC that ships with MS SQL Server 2012 Enterprise Edition support Oracle 12c?

We are currently on Microsoft SQL Server 2012 (SP3-CU10) (KB4025925) - 11.0.6607.3 (X64)
We've been using CDC on MS SQL Server 2012 for Oracle 11g for over a year without any issue, but we just upgraded our Oracle database to version 12c. It works for a few seconds but then stalls immediately with an ABORTED status.


ORACDC205E: The Oracle CDC instance CDC was aborted.

Replication getting errors and stopping

Hi Support Team,

Can you please help with this replication issue?



  1. What are the limitations of Attunity replication?
  2. How can we replicate online transaction data from source to target?
  3. During testing I am performing DML operations on the source and trying to replicate them to the target, but the changes are not being replicated.



In the log file I have seen some errors, such as insufficient-privilege errors. Please let me know the workaround to avoid these errors.

BLOB, CLOB and similar large object data types

Does anyone have, or can anyone point me to, a list of all data types Attunity interprets as a LOB data type?

clob, blob, xml, image, nvarchar, nchar, ntext, varbinary


Others?

Problems trying to execute SSIS package containing MS Connector for Oracle

My issue is that I have a very simple SSIS package using the Microsoft Oracle Connector which executes fine within Visual Studio 2015 on my laptop, but throws numerous errors when deployed to SQL Server or when I attempt to run it from the command line. I've tried various links I've found here and elsewhere, but I still cannot get this to work.


I am a complete newbie with these tools; I am coming from an Oracle and .NET background, so I may be missing some basics here.


Laptop setup is: Visual Studio 2015, ODP.NET 64-bit, 64-bit Oracle client, MS Oracle Connector by Attunity 5.0 64-bit, SQL Server Data Tools for VS 2015, SQL Server Management Studio 2016 - 13.0.16106.4.


On the database server: SQL Server version is 2016 (SP1) - 13.0.4001.0 (X64)
Developer Edition (64-bit) on Windows Server 2012 R2 Standard 6.3 <X64>, Oracle 64-bit client, SSIS, and MS Oracle Connector by Attunity 5.0 64-bit version.


I'm attempting to populate 3 SQL Server tables from 3 corresponding Oracle tables.


I have a very simple Integration Services project in VS 2015: one control flow item with three data flows. All three run a simple SELECT against the same Oracle database using a connection manager to that DB; each one reads a different table and populates a different SQL Server table (using the OLE DB Destination object). All three Oracle tables are in the same database instance, and the SQL Server tables are in the same database.


The project runs fine from my laptop within the VS IDE, all 3 feeds run in parallel and populate the corresponding SQL server tables as expected.


However, I'm getting numerous error messages when attempting to execute the package using other methods. Googling around has not helped to clarify things; I need to understand what these error messages mean and how to resolve them.


* DTEXEC - I copied the .dtsx file for the above project to the C:\TEMP folder on the server where SQL Server resides, and I ran the 64-bit dtexec utility as follows:


F:\Program Files\Microsoft SQL Server\130\DTS\Binn>dtexec /file c:\temp\package.dtsx > c:\temp\dtexec_errors.txt


Please see images below (or attachment) for errors I'm receiving.


https://i.stack.imgur.com/l8yeP.png
https://i.stack.imgur.com/DJHzS.png



* SSISDB catalog - I created the SSISDB catalog under the 'Integration Services Catalogs' folder in SSMS. Within Visual Studio I right-clicked the package and selected Deploy. After deployment, I right-clicked the package in SSMS and picked 'Execute'. I received several errors similar to what is shown in the screenshots.
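
For reference, here is a minimal sketch (an illustration, not Attunity-specific) of executing a catalog-deployed package and pulling its error messages back out of SSISDB with pyodbc. The folder, project, package and server names are placeholders, not values taken from this post.

Code:

# Minimal sketch: execute a package deployed to the SSISDB catalog and read
# back its error messages. Folder, project, package and server names are
# placeholders (assumptions).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=SSISDB;Trusted_Connection=yes", autocommit=True)
cur = conn.cursor()

# Create an execution for the deployed package and fetch its execution_id.
cur.execute("""
    SET NOCOUNT ON;
    DECLARE @exec_id BIGINT;
    EXEC catalog.create_execution
         @folder_name = N'MyFolder', @project_name = N'MyProject',
         @package_name = N'Package.dtsx', @use32bitruntime = 0,
         @execution_id = @exec_id OUTPUT;
    SELECT @exec_id AS execution_id;
""")
exec_id = cur.fetchone()[0]

# Run it synchronously; if the package fails, this call typically raises.
try:
    cur.execute("""
        EXEC catalog.set_execution_parameter_value @execution_id = ?,
             @object_type = 50, @parameter_name = N'SYNCHRONIZED',
             @parameter_value = 1;
        EXEC catalog.start_execution @execution_id = ?;
    """, exec_id, exec_id)
except pyodbc.Error as exc:
    print("Execution reported a failure:", exc)

# Pull the error-level messages (message_type 120) recorded for the execution.
cur.execute("""
    SELECT message_time, message
    FROM catalog.operation_messages
    WHERE operation_id = ? AND message_type = 120
    ORDER BY message_time
""", exec_id)
for message_time, message in cur.fetchall():
    print(message_time, message)

Re-running the same execution with @use32bitruntime = 1 is a quick way to check whether the errors are related to a 32-bit vs 64-bit provider mismatch.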

Thanks in advance for any advice, pointers, or assistance. Also note that from Oracle to SQL Server I plan to do little to no transformation, and this solution will eventually be used for possibly upwards of 200 Oracle tables; the total data volume would be around 600-700 GB. If there is a better or easier way that can be automated and gets similar high performance, I'm open to hearing it.
CDC Hung in Processing Status

Our Attunity Oracle Change Data Capture instance has been in Processing status, with the details in Currently Processing unchanged, for almost 12 hours now, despite many changes being made on the Oracle side. I restarted the server, but this changed nothing. When I attempt to use Collect Diagnostics, I get errors: "Waiting for Oracle diagnostics. No response received from Oracle CDC instance. Continue waiting or skip the Oracle diagnostics and continue without it?" and "The snap-in is not responding. To return to MMC and check the status of the snap-in, click Cancel. If you choose to end the snap-in immediately, you will lose any unsaved data. To end the snap-in now, click End Now." I queried the xdbcdc_state and xdbcdc_trace tables and there was nothing noteworthy in them. Does anyone have any ideas on what might be the problem? I do not want to reset and lose the changes that have been made in Oracle since this hung.
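
For anyone repeating the same check, a minimal sketch of querying those state and trace tables with pyodbc; the server and CDC database names are placeholders, and it is assumed the tables live in the cdc schema of the CDC database, as they normally do.

Code:

# Minimal sketch: look at the CDC instance state and recent trace rows.
# Server and CDC database names are placeholders (assumptions).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=MyOracleCdcDb;Trusted_Connection=yes")
cur = conn.cursor()

# Current instance state (status/sub-status/error columns, if any).
cur.execute("SELECT * FROM cdc.xdbcdc_state")
print([col[0] for col in cur.description])
for row in cur.fetchall():
    print(list(row))

# A handful of recent trace rows.
cur.execute("SELECT TOP (50) * FROM cdc.xdbcdc_trace")
for row in cur.fetchall():
    print(list(row))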


Additional information: when I attempt to stop the service, I get an error popup saying there was an error while attempting to stop the service. If I manually stop the service from services.msc, the Attunity UI does not reflect that the service is stopped; it still shows the status as Processing.

AWS Attunity Replicate AMI connection issues

For the AWS Attunity Replicate AMI we have created an EC2 instance with a new IAM role and a security group.

For ingestion from a SQL Server source to an AWS Redshift destination, the AMI requires three connections: SQL Server source, S3 staging, and Redshift destination.
Out of these three connections, I am facing issues connecting to S3 staging.
The UI asks for an Access Key and a Secret Key, which we can get only if a user is created. We access the AWS account as federated users. I tried using temporary access and secret keys, but the connection doesn't seem to work. Do we need to create a user with a permanent access key and secret key for this connection, or is there a way around this?

The error I get is

SYS-E-HTTPFAIL, Failed to connect Network Error has Occurred.
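
A side note that may explain the failure (an assumption on my part, not something confirmed by Attunity): credentials issued to federated users via STS are temporary and only work together with a session token, while a form that asks for just an access key and secret key generally expects long-lived IAM user credentials. A small boto3 sketch contrasting the two; the role ARN and IAM user name are placeholders.

Code:

# Minimal sketch contrasting temporary (federated/STS) credentials with a
# long-lived IAM user access key. Role ARN and user name are placeholders
# (assumptions).
import boto3

# Credentials obtained through STS (what federated users get) always include a
# session token; an access key + secret key alone is not enough to use them.
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/replicate-s3-staging",  # placeholder
    RoleSessionName="replicate-staging")
creds = assumed["Credentials"]
print("temporary key:", creds["AccessKeyId"],
      "session token present:", bool(creds["SessionToken"]),
      "expires:", creds["Expiration"])

# A dedicated IAM user, by contrast, can be issued a permanent access key pair
# (no session token), which is what a two-field access/secret key form expects.
iam = boto3.client("iam")
iam.create_user(UserName="replicate-s3-staging-user")                # placeholder
key = iam.create_access_key(UserName="replicate-s3-staging-user")["AccessKey"]
print("permanent key:", key["AccessKeyId"])

Whichever user ends up holding the permanent key would presumably be scoped down to just the staging bucket.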

Looking forward to hearing back from you.

Thanks.


Account Lockout Every day (multiple times)

$
0
0
Hello RepliWeb R-1 gurus. Let me give you a back story and see if you can help me out.

I have an alias account to log in to servers; let's call it account ABC. I created some download and distribution templates using the ABC account. It's an administrator on all servers, but it's personal to me. Once I tested the templates using my alias account (ABC), I changed the properties of the templates to use the proper service accounts. A few weeks went by and it was time to change my ABC password. I changed it... but now my ABC account gets locked out by the RepliWeb server.
Checked all templates, both Run As and during the scheduled times; none are running as my ABC account.
Verified there were no Scheduled Tasks (Windows); nothing there.
Ensured the setting to log in to R-1 (Never save password (do not ask again)) was checked.

Finally I found this in the registry:
HKey_Users\GUID\Software\RepliWeb\R1
- CenterSettings has my ABC account and password in a thread (encrypted PW)
- Password - had some encrypted too
- user - had my ABC account

I exported those keys just in case, then I cleared them out.

Am I missing something somewhere that is keeping my credentials? I keep getting locked out multiple times a day.

Distribution style and sort order in Redshift

Is there a way to define the sort keys and distribution style for the destination table in Redshift using Attunity Replicate? I did not come across anything I can use to do that.
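
Not something Replicate is confirmed to support here, but one possible workaround to experiment with is pre-creating the target table in Redshift with the desired distribution style and sort key, and configuring the task so it does not drop and recreate existing target tables. A minimal sketch of that DDL via psycopg2; connection details, table and column names are all placeholders.

Code:

# Minimal sketch: pre-create a Redshift target table with an explicit
# distribution style and sort key. Connection details, table and column names
# are placeholders (assumptions).
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="admin", password="...")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS public.orders (
        order_id    BIGINT        NOT NULL,
        customer_id BIGINT        NOT NULL,
        order_ts    TIMESTAMP     NOT NULL,
        amount      DECIMAL(12,2)
    )
    DISTSTYLE KEY
    DISTKEY (customer_id)
    SORTKEY (order_ts)
""")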

Assistance with constant lockout

My cyber threat center is showing that Program Files\RepliWeb\RDS\Common\rw_srv.exe and Program Files\RepliWeb\RDS\Controller\Bin\rw_eraser.exe are constantly locking out my account. I had initially stored my password but have since removed it. Any ideas?

Upgraded SSIS issue with Attunity 5.0

Getting this error:
[Oracle Source [31]] Error: The AcquireConnection method call to the connection manager xxxx failed with error code 0x80004005. There may be error messages posted before this with more information on why the AcquireConnection method call failed.

Background:
I have 40 packages in an SSIS project in VS 2012 on Windows 7. That machine failed, and the new machine has Windows 10 and VS 2017. I installed the Attunity 4.0 and 5.0 connectors, copied the project to the new machine, and changed the target SQL Server from 2012 to 2016. The project-level connection managers can all connect (MSORA and OLEDB). When I run any Data Flow task bringing data from Oracle, I get the above error. I have several Oracle client installs. I've tried various fixes, such as turning on DelayValidation, pointing ORACLE_HOME to different install locations, and using hostname:port/service_name instead of TNS.

I created a new SSIS project, duplicated one Data Flow task from the original project, and it works fine.

It seems as though there is a mismatch somewhere in the connection, as all connections test successfully but the Data Flow task fails while attempting to connect. I don't want to have to recreate all 40 packages. Note that the original project works fine on another user's Windows 7 machine.

Any help would be greatly appreciated.

How does CDC react when changes to previously loaded tables are made?

Here's what we're confronted with:

1. A full load was completed for multiple tables in the database (the tables contained LOB columns).
2. The task was stopped and, using table transformations, we removed all LOB columns from all tables.
3. Using the advanced run options, we started the task from the current point in time.
4. The task continues to fail each time there is an LOB column change.

The question is:

What is driving CDC: the table/column definitions captured at the time of the load (with LOB columns), or does CDC recognize the changes to the tables (which now have no LOB columns)?

This continuous failure is causing some serious issues with our team and the source DB team.

Any and all help would be greatly appreciated!

Bob
Attunity developer at Ford Motor Co.
3132484410
rgille22@ford.com

Batch Tuning - Limit file creation interval to defined time

I have a task set up with an Oracle Database source and a File target. The task is creating csv/gzip files and dropping the change files in the correct directory, but it's creating a file every 1-3 minutes. I'd like to batch and only create a file every 15-25 minutes. Is this possible? I'm using the batch tuning settings under Task Settings > Change Processing > Change Processing Tuning, as below, to try to obtain this. I'm attaching a screenshot of my settings.

Apply batched changes in intervals:
Longer than (seconds): 900
But less than (seconds): 1500

Issue with incremental load from RDS

Hi Folks,

I'd like to report an issue we have with Attunity Replicate. We are using an instance from the AWS Marketplace (with the supplied hourly licence), per:
Code:

00003576: 2018-01-19T02:50:08 [AT_GLOBAL      ]I:  Task Server Log (V6.0.0.238 EC2AMAZ-ALOK7HJ Microsoft Windows Server 2012  (build 9200) 64-bit, PID: 2940) started at Fri Jan 19 02:50:08 2018  (at_logger.c:2470)
00003576: 2018-01-19T02:50:08 [AT_GLOBAL      ]I:  Licensed to AWS Marketplace - Attunity Replicate Hourly, permanent license, sources: (Oracle,SQLServer,MySQL,PostgreSQL,DB2LUW), targets: (Oracle,SQLServer,MySQL,PostgreSQL,Teradata,Redshift,Hadoop), all hosts  (at_logger.c:2473)

We are attempting to replicate data from a MySQL RDS instance to a Redshift instance. When we set up the job, the full load works fine; it reports finding:
Code:

00006920: 2018-01-18T10:34:31 [SOURCE_CAPTURE  ]I:  Set position to initial context 'now'  (mysql_endpoint_capture.c:3080)
00006920: 2018-01-18T10:34:31 [SOURCE_CAPTURE  ]I:  Setting position in binlog 'mysql-bin-changelog.009678' at 9178452  (mysql_endpoint_capture.c:785)

at the end of the full load, it notes:
Code:

00006920: 2018-01-19T02:45:04 [SOURCE_CAPTURE  ]I:  > ROTATE_EVENT  (mysql_endpoint_capture.c:2959)
00004872: 2018-01-19T02:46:08 [SORTER          ]I:  Final saved task state. Stream position mysql-bin-changelog.009874Am:143093:-1:143124:42408507220456:mysql-bin-changelog.009874Am:142947, Source id 4802602, next Target id 12401841, confirmed Target id 12400325  (sorter.c:655)
00004008: 2018-01-19T02:46:12 [TASK_MANAGER    ]I:  Subtask #0 ended  (replicationtask_util.c:937)
00004008: 2018-01-19T02:46:12 [TASK_MANAGER    ]I:  Task management thread terminated  (replicationtask.c:3105)
00005016: 2018-01-19T02:46:12 [SERVER          ]I:  Client session (ID 7426) closed  (dispatcher.c:200)
00005016: 2018-01-19T02:46:12 [UTILITIES      ]I:  The last state is saved to file 'C:\Program Files\Attunity\Replicate\data\tasks\Afterpay/StateManager/ars_saved_state_000001.sts' at Thu, 18 Jan 2018 15:46:11 GMT (1516290371809835)  (statemanager.c:601)

Notice the additional characters at the end of the binlog filename? We feel they could be the root of the problem.

Upon starting the incremental load, we see:
Code:

00000908: 2018-01-19T02:50:09 [SORTER          ]I:  Start the task using saved state. Start source from stream position mysql-bin-changelog.009874Am:143093:-1:143124:42408507220456:mysql-bin-changelog.009874Am:142947 and id 4802602. Confirmed target id is 12400325, next target id is 12401841  (sorter.c:449)
00003900: 2018-01-19T02:50:09 [SOURCE_CAPTURE  ]I:  Resume TABLE_MAP at file 'mysql-bin-changelog.009874Am', pos '142947'  (mysql_endpoint_capture.c:3173)
00003900: 2018-01-19T02:50:09 [SOURCE_CAPTURE  ]I:  Setting position in binlog 'mysql-bin-changelog.009874Am' at 142947  (mysql_endpoint_capture.c:785)
00003900: 2018-01-19T02:50:09 [SOURCE_CAPTURE  ]I:  System var 'binlog_checksum' = 'CRC32'  (mysql_endpoint_capture.c:272)
00003900: 2018-01-19T02:50:09 [SOURCE_CAPTURE  ]I:  Position was set in binlog 'mysql-bin-changelog.009874Am' at 142947  (mysql_endpoint_capture.c:810)
00003900: 2018-01-19T02:50:09 [SOURCE_CAPTURE  ]I:  Error 1236 (Could not find first log file name in binary log index file) reading binlog. Try reconnect  (mysql_endpoint_capture.c:999)

So... obviously it just fails. We attempted to switch to the official MySQL ODBC drivers, but sadly the AWS Marketplace image isn't licensed for that feature, so we can only use the supplied driver.

The problem is a little hard to visualize in the text above, as I believe the additional characters are in fact Unicode. If someone from Attunity can supply an email address, I'm happy to send in the full logs.

FWIW - We have considered modifying the state file with a hex editor to remove the extra data after the filename, but presumably we would be stuck if we stopped/started the process again as the state file would get overwritten with bad data again.
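
Before reaching for a hex editor, a minimal read-only sketch (assuming local access to the task's StateManager folder shown in the log above) that dumps the bytes around each occurrence of the binlog filename, so the extra trailing characters become visible as escaped bytes:

Code:

# Read-only sketch: show the bytes around each occurrence of the binlog file
# name inside the saved state file. The path comes from the log excerpt above.
from pathlib import Path

sts = Path(r"C:\Program Files\Attunity\Replicate\data\tasks"
           r"\Afterpay\StateManager\ars_saved_state_000001.sts")
data = sts.read_bytes()

marker = b"mysql-bin-changelog."
idx = data.find(marker)
while idx != -1:
    # Filename plus the next couple of dozen bytes, printed in escaped form.
    print(idx, data[idx: idx + len(marker) + 24])
    idx = data.find(marker, idx + 1)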

If anyone has any suggestions or advice, please share.


Redshift Endpoint Setting - for performance

Notes:

For our Redshift endpoint, on the Advanced tab, the Max File Size (MB) needs to be set to 1000, NOT 1.

The correct number is 1000.

Two-way deletion

I have the following requirements:

Copy file from centre to edge
Allow adding files to edge (that do not get deleted by repliweb)
Files deleted from centre also get deleted from edge
Files deleted from edge also get deleted from centre

A mirror job with a reinitialise time of 0 achieves the first three requirements.

Is there a way of achieving the fourth? Can you have two replication jobs, one each way? Or could it be scripted?
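
On the scripting option, a rough sketch of one way it could work (assuming both content roots are reachable from a single host, e.g. as UNC shares; the paths are placeholders): keep a snapshot of each side's file list and replay deletions in both directions on every run.

Code:

# Rough sketch: replay deletions in both directions between the centre and
# edge content roots. Paths are placeholders (assumptions); run periodically.
import json
from pathlib import Path

CENTRE = Path(r"\\centre-server\content")      # placeholder
EDGE = Path(r"\\edge-server\content")          # placeholder
SNAPSHOT = Path("last_seen.json")              # per-side listings from last run


def listing(root):
    """Relative paths of every file currently under root."""
    return {p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file()}


centre_now, edge_now = listing(CENTRE), listing(EDGE)
if SNAPSHOT.exists():
    prev = json.loads(SNAPSHOT.read_text())
    prev_centre, prev_edge = set(prev["centre"]), set(prev["edge"])
else:
    prev_centre, prev_edge = set(), set()

# A file present last run but gone now was deleted on that side; mirror the
# deletion on the other side (requirements 3 and 4). Files only ever added at
# the edge are never touched (requirement 2).
for rel in prev_centre - centre_now:           # deleted at centre -> drop at edge
    (EDGE / rel).unlink(missing_ok=True)
for rel in prev_edge - edge_now:               # deleted at edge -> drop at centre
    (CENTRE / rel).unlink(missing_ok=True)

# Record what each side looks like now, for the next run to compare against.
SNAPSHOT.write_text(json.dumps({"centre": sorted(listing(CENTRE)),
                                "edge": sorted(listing(EDGE))}))

The mirror job would still handle the first requirement; the script only deals with deletions, and it would need to run between mirror passes so a deletion is not re-copied before it can be propagated.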

Thanks in advance
Phil

Linux AMI

Hi!
We're trying the Attunity Replicate Windows AMI to incrementally load a Redshift DB from an RDS instance. So far, it has been an interesting journey.

As our stack is Linux based, we wonder if you plan to release a Linux AMI with Attunity.

Thanks,

Andres

One to Many Configuration

Hey Everyone,


I was wondering how others have been handling one-to-many replication?


I understand the basics of replicating to the FileChannel and from that to the target environments. However, I'm conflicted about how to handle deletion of old FileChannel files when the "Delete processed files" flag is unchecked on the endpoint.

I have heard from one of the Attunity engineers that you can replicate from the source to multiple FileChannel locations, using one for each target environment. This would certainly be our preference, but I haven't been able to work out how to configure it.

The alternative is scripting something to manually remove files when they are no longer needed, which I would prefer to avoid if there are other options.

Thank you!

fn_cdc_get_net_changes exponentially slow with high volume

Hi,

I'm having a problem where one of the tables I capture via fn_cdc_get_net_changes is occasionally extremely slow when the volume is high. I understand that higher volume will lead to slower performance, but it seems to be exponential.
For example, 100 rows may process in seconds, while 1,000 may take 20-30 minutes. This makes me think it is a query-plan issue or something similar.

I'm using an SSIS data flow with the CDC Source component in "Net with merge" mode. It's always the same table that is an issue. I don't have the exact statistics, but when I looked at the reads one time when it was taking a long time, I think they were in the tens of millions. The cleanup job runs at night and purges all but the last few hours of data. The table typically sees a few hundred thousand updates in a day, though they are generally clumped together at a few different times.

I didn't want to mess with the function, of course, or turn on Query Store on the CDC database (I'm afraid of any additional overhead).

The table is pretty narrow, with a primary key that consists of 6 columns (vendor design) and a few numeric values. When there is a large volume of updates, it's usually the same rows being updated dozens or a hundred times in seconds. For example, maybe only 100 rows are being updated, but each row is getting 50 updates, so there are 5,000 rows in the CDC table yet I expect the Net mode to give me 100 rows.

Without providing the table definition or the query plan just yet, are there any general recommendations for how to deal with performance in this mode? Should I consider using All instead of Net and making my own process to merge the latest values? Would that just create much more work for SSIS, since a lot more rows would be going through?
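
On the "All plus a custom merge" idea, a rough sketch of what that could look like: read all changes for an LSN range and keep only the last image per key, which is essentially what the Net functions compute. The capture instance name (dbo_MyTable), the key columns, and the connection details are placeholders, not taken from this post.

Code:

# Rough sketch: read all captured changes for an LSN range and keep only the
# newest image per key. Capture instance name, key columns and connection
# details are placeholders (assumptions).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=MyDb;Trusted_Connection=yes")
cur = conn.cursor()

cur.execute("""
    WITH all_changes AS (
        SELECT c.*,
               ROW_NUMBER() OVER (
                   PARTITION BY c.key_col1, c.key_col2          -- placeholder keys
                   ORDER BY c.__$start_lsn DESC, c.__$seqval DESC) AS rn
        FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(
                 sys.fn_cdc_get_min_lsn('dbo_MyTable'),
                 sys.fn_cdc_get_max_lsn(),
                 N'all') AS c
    )
    SELECT *            -- __$operation on the surviving row: 1=delete, 2=insert, 4=update
    FROM all_changes
    WHERE rn = 1
""")
for row in cur.fetchall():
    print(list(row))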

Thanks,
Scott