Channel: Attunity Integration Technology Forum

Issues while using Oracle as a source

I am facing a few issues while using Oracle as a source:

1. I get an error for oraxml12.dll:

:26:03 [SERVER ]E: Failed to load the library 'oraxml12.dll' [720126] (system.c:448)



00005044: 2017-02-08T12:26:03 [SERVER ]I: Failed to load OraXML library, XMLType will not be supported (oracle_endpoint.c:194)


00005044: 2017-02-08T12:26:03 [INFRASTRUCTURE ]T: Alloc repo D:\data\tasks\Oracle to Hadoop dev/Oracle to Hadoop dev.repo (repository.c:5923)


00005044: 2017-02-08T12:26:03 [INFRASTRUCTURE ]T: Free repository D:\data\tasks\Oracle to Hadoop dev/Oracle to Hadoop dev.repo (repository.c:397)


00005044: 2017-02-08T12:26:03 [INFRASTRUCTURE ]T: Syntax object for 'Oracle' provider was not found in the repository (dbprops.c:415)


00005044: 2017-02-08T12:26:03 [INFRASTRUCTURE ]T: The 'Oracle' provider syntax was not found in the repository. (dbprops.c:1007)


00005044: 2017-02-08T12:26:03 [INFRASTRUCTURE ]T: The default 'Oracle' provider syntax will be used instead. (dbprops.c:1013)


00005044: 2017-02-08T12:26:03 [METADATA_MANAGE ]I: Going to connect to Oracle server (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dwcmsdevx.stage.shutterfly.com)(PORT=3115))(CONNECT_DATA=(SERVICE_NAME=KAFKATST))) with username replicate_user (oracle_endpoint_imp.c:922)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Going to prepare the statement 'select value from v$nls_parameters where parameter = 'NLS_CHARACTERSET'' (oracle_endpoint_conn.c:582)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Going to prepare the statement 'select value from v$nls_parameters where parameter = 'NLS_NCHAR_CHARACTERSET'' (oracle_endpoint_conn.c:592)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Going to prepare the statement 'select 1 from dba_tablespaces where ENCRYPTED = ' '' (oracle_endpoint_conn.c:894)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Column compress_for may be used (oracle_endpoint_conn.c:903)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Going to prepare the statement 'select 1 from all_tables where owner = ' ' and compress_for = ' '' (oracle_endpoint_conn.c:801)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Column compress_for may be used (oracle_endpoint_conn.c:810)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Going to prepare the statement 'select 1 from all_views where owner='SYS' and view_name='ALL_ENCRYPTED_COLUMNS'' (oracle_endpoint_conn.c:926)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Table all_encrypted_columns may be used (oracle_endpoint_conn.c:935)


00005044: 2017-02-08T12:26:05 [METADATA_MANAGE ]T: Going to prepare the statement 'select 1 from sys.enc$' (oracle_endpoint_conn.c:939)


2. I am also getting an error about ALTER TABLE privileges. I want to know why the user needs ALTER TABLE privileges in Oracle.
3. Supplemental logging error: all the privileges are granted at the database level; should they be granted at the user level too? (See the sketch below.)
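
For reference, here is a minimal, hedged sketch of the statements that usually come into play for items 2 and 3, assuming the replication user replicate_user from the log above and a hypothetical table hr.employees; verify the exact requirements against your Replicate version and your DBA standards:

Code:

-- Hedged sketch only, not an official setup script.
-- Database-level (minimal) supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Table-level primary-key supplemental logging for a replicated table
-- (hr.employees is a hypothetical example):
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- The ALTER privilege is what lets the replication user issue the table-level
-- statement above against tables it does not own:
GRANT ALTER ANY TABLE TO replicate_user;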

Thanks,
Aditi

Jobs marked as "Complete" but not all files copied successfully

Hello,
I have recently noticed that sometimes a job will be marked as complete even though only 99% of the files were transferred. I am unsure how to investigate or troubleshoot this, and would appreciate it if anybody could point me in the right direction.

We use Repliweb to copy files from a Windows share to a bucket in AWS S3. The files in the Windows share are not being changed or touched during the Repliweb transfer.

In the general report, I noticed that there were some weird errors. Here is the report from an example job. (Some information has been replaced with "****")



Controller Version: R-1 6.1.149.128077 (x64) (2.186)
Build Date: May 26 2014 16:22:48
Controller Installation directory is <C:\Program Files\RepliWeb\RDS>
Controller Operating System is Microsoft Windows Server 2008 R2 Standard Edition, 64-bit Service Pack 1 (build 7601)
Controller Source Directory: <\\*********>
Satellite Target Directory: <*********> (Amazon S3)
Synchronization type: Mirror

Replication includes: Content Files

13:44:31 Starting Satellite <*********> information retrieval
Satellite access credentials: User: <*********>
Satellite installation directory is <C:\Program Files\RepliWeb\RDS>
Satellite Operating System is Microsoft Windows Server 2008 R2 Datacenter Edition, 64-bit Service Pack 1 (build 7601)
Satellite Version: R-1 6.1.149.128077 (x64) (2.186)
Build Date: May 26 2014 16:21:22
13:44:37 Finished Satellite information retrieval

13:44:37 Starting license validation
13:44:37 Finished license validation

13:44:37 Creating target directory
Directory is <*********>

13:44:41 Starting Controller and Satellite snapshot generation
Controller snapshot generation time was 2 seconds
Satellite snapshot generation time was less than a second
13:44:47 Finished Controller and Satellite snapshot generation

13:44:47 Starting Comparative snapshot generation
Comparison properties:
Synchronization type: Mirror
13:44:47 Finished Comparative snapshot generation

2 directories will be created on the target
6137 file(s) (~26515 Mbytes) will be copied from the source to the target

13:44:48 No files to delete on target before transfer

13:44:49 Starting target directories creation

13:44:51 Finished target directories creation


13:44:53 Starting file transfer
Using Large File Accelerator
Using SSL authentication and encryption
Using 20 concurrent transfer(s)
Set to abort on file access errors
Set to use maximum 4 concurrent sessions per file

13:51:02 LFA-E-MTRSFD, An error occurred during transfer. Transferred 41 out of 62 file(s) scanned. First file error was:
13:51:02 R1-E-Failed to transfer files
================================================


13:51:25 Continuing replication of "*********"

13:51:29 Recovering previous transfer to target operation
13:54:00 LFA-E-MTRSFD, An error occurred during transfer. Transferred 60 out of 141 file(s) scanned. First file error was: LFA-E-LNKERR, Link error: NET-E-SEND, network send error
-CRP-E-SEND, send operation failed
-CRP-E-SEND2, buffer size <3960389>, bytes_sent <0>, curr_bytes_sent <-1>
-SYS-E-SYSMSG, W_10053 An established connection was aborted by the software in your host machine.
13:54:00 R1-E-Failed to transfer files
================================================


13:54:22 Continuing replication of "*********"

13:54:26 Recovering previous transfer to target operation
13:56:57 LFA-E-MTRSFD, An error occurred during transfer. Transferred 60 out of 142 file(s) scanned. First file error was: LFA-E-LNKERR, Link error: NET-E-SEND, network send error
-CRP-E-SEND, send operation failed
-CRP-E-SEND2, buffer size <3936372>, bytes_sent <0>, curr_bytes_sent <-1>
-SYS-E-SYSMSG, W_10053 An established connection was aborted by the software in your host machine.
13:56:57 R1-E-Failed to transfer files
================================================


13:57:29 Continuing replication of "*********"

13:57:33 Recovering previous transfer to target operation
13:59:52 LFA-E-MTRSFD, An error occurred during transfer. Transferred 60 out of 161 file(s) scanned. First file error was: LFA-E-LNKERR, Link error: NET-E-SEND, network send error
-CRP-E-SEND, send operation failed
-CRP-E-SEND2, buffer size <3351477>, bytes_sent <0>, curr_bytes_sent <-1>
-SYS-E-SYSMSG, W_10053 An established connection was aborted by the software in your host machine.
13:59:52 R1-E-Failed to transfer files
================================================


14:00:23 Continuing replication of "*********"

14:00:28 Recovering previous transfer to target operation
14:02:57 LFA-E-MTRSFD, An error occurred during transfer. Transferred 60 out of 162 file(s) scanned. First file error was: LFA-E-LNKERR, Link error: NET-E-SEND, network send error
-CRP-E-SEND, send operation failed
-CRP-E-SEND2, buffer size <3285580>, bytes_sent <0>, curr_bytes_sent <-1>
-SYS-E-SYSMSG, W_10053 An established connection was aborted by the software in your host machine.
14:02:57 R1-E-Failed to transfer files
================================================


14:03:37 Continuing replication of "*********"

14:03:41 Recovering previous transfer to target operation
14:06:18 LFA-E-MTRSFD, An error occurred during transfer. Transferred 60 out of 121 file(s) scanned. First file error was: LFA-E-LNKERR, Link error: NET-E-SEND, network send error
-CRP-E-SEND, send operation failed
-CRP-E-SEND2, buffer size <4387405>, bytes_sent <0>, curr_bytes_sent <-1>
-SYS-E-SYSMSG, W_10053 An established connection was aborted by the software in your host machine.
14:06:18 R1-E-Failed to transfer files
================================================


14:07:03 Continuing replication of "*********"

14:07:07 Recovering previous transfer to target operation
14:09:27 Finished files transfer
6116 file(s) copied



14:09:33 No files to delete on target after transfer

14:09:34 Starting Satellite cleanup
14:09:34 Finished Satellite cleanup

14:09:34 R1-S-Replication completed successfully.

How to increase the throughput of records being read from Oracle Source DB

Hi Experts,

Could you please help us understand how we can increase the throughput while reading records from the source Oracle database?
The throughput we get varies wildly, with peaks and troughs such as 20k, 4k, 12k, 0, 9k, 0, 21k, 2k, etc.

The source view has 23 million records and the target is Azure SQL DW.
We have been able to successfully replicate the data only when we chose to create a .csv file of size 2 GB on the blob storage.

The task has failed for all file sizes between 200 and 1000 MB with the error 'Error in request handler'.

Your response would be really helpful.

Regards,
Akanksha

[Teradata Destination [2]] Error: An error returned from error tables. Invalid date.

Hello,

I am using the Microsoft connector for Teradata by Attunity 2.0 in SSIS to load a Teradata table from a Teradata table. I am using fast load. My Teradata destination component is rejecting rows if the row contains fields with an invalid date value. How can I configure the Teradata destination component to allow the row to be inserted into the target table regardless of invalid date values?

Below are the full errors I am getting from the package execution results:

[Teradata Destination [2]] Error: An error returned from error tables. Invalid date.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Teradata Destination" (2) failed with error code 0x80004005 while processing input "Teradata Destination Input" (25). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

Any help is greatly appreciated! Thank you in advance.

Helper scripts to Migrate tasks between environments, like DEV to QA, or QA to PROD

Migrating Replicate tasks from one environment to the next can be a huge topic requiring some consulting.
To discuss this properly we have to take NAMING CONVENTIONS and SOURCE CONTROL into account.

In the simplest form, the same task name and endpoint names are used for a given task no matter which environment it runs in.

The 'trick' here is that each environment has its own details behind the endpoints (EPs).
For example, the source EP on the QA server points to the QA database, using QA credentials.
To 'promote' a task from DEV to QA (and onwards to PROD), all one has to do is EXPORT the task definition on one server, strip the endpoints from the JSON, and IMPORT it on the next server.
Replicate 5.1.2 makes this really easy, no longer requiring server access, using the EXPORT button on the TASKS or TASK DESIGN page, and IMPORT on the TASKS page.
Using a reasonable editor (anything but WORD or NOTEPAD) which is JSON syntax aware (Notepad++, UltraEdit, TextPad...), it is easy enough to collapse the verbose individual sections and remove 'databases'.

And yet it is still a bit tricky to get right: "Do I need to remove that comma before it or not? Yes, if it was the last section, as 'databases' mostly is. Where is my closing curly?"
It is especially tricky when developers are not allowed to touch a production server, and the final deployment step is executed by 'operations' or 'deployment' resources who are unlikely to be JSON or Replicate savvy.

The attached scripts initially came into existence to help with just that: cleanly and repeatably strip the 'databases' section from a JSON task file.

Typically, more edits are desirable, as most customers settle on a naming convention where tasks and endpoints are identifiable by environment.
(Making the EPs unique prevents accidental replacement if an incorrect JSON file is ever imported.)
For example, source "S_ORA_personnel_east_QA", target "T_PDW_personnel_QA", and task "personnel_east_QA", with the JSON file named accordingly.
Now you need to RENAME the task to move it between systems: an easy, single-location edit, but still an edit.
And one probably has to rename the endpoints in a couple of places and remove 'databases', to let the 'new' task latch on to pre-defined endpoints on the target server.
The result might be "S_ORA_personnel_east_PROD", "T_PDW_personnel_PROD" and "personnel_east_PROD"

Note: I am a strong believer in separating responsibilities for the core tasks and the databases.
Do not let 'random' imports destroy your carefully crafted EP definitions.
Luckily, they probably wouldn't work anyway, as typically the encrypted password can only be decrypted on the same server against the same mk.dat.
Keep entering usernames/passwords for the (notably production) databases separate from the task design.

The attached scripts were created with those challenges in mind.

Here is the "Help" for the PowerShell implementation:

Code:

PS C:\> C:\scripts\modify_replicate_json.ps1 help


modify_replicate_json.ps1 - Hein van den Heuvel - Attunity - 18-Feb-2017


    This script is used to process a task JSON from one environment to be imported
    on an other environment, for example from DEV to QA to PROD.


    Typically the Endpoints must NOT be carried over, as the existing endpoints on
    the targeted server will be used, with a different target database (server)
    and different credentials.


    We also expect that the endpoint name as well as the taskname will need to be
    changed according to the naming standards in place to match the new environment.


    This scripts can adapt the following elements in a Replication Definition


    - Remove the Databases section (Endpoints) should it exist;
          failing to remove it could make existing endpoints inoperable on import.
    - Sort explicit included tables by owner,name
    - Remove original UUID if present.
    - Change Task Name
    - Adaptation of End-Points to the provided new source and target names
    - Remove Tasks section
    - List top-level sections. Typically:
          tasks, databases, replication_environment, notifications, error_behavior, scheduler
    - Remove specified top-level section
    - Add or Replace Explicitly selected table


    IF the switch "Reference" is used, then NO content change is made.
    The purpose of this is to generate a template that goes through the same
    formatting, for easy comparisons with tools like WinDiff.


    Options:


    -InputJson          Required input file, with single task or full Replication_Definition
    -OutputJson         Name of output Json file formatted similar to Replicate, but not exactly the same.
    -SourceEP           New Name for Source Endpoint (aka "database") IF single task.
    -TargetEP           New Name for Target Endpoint IF single task.
    -TaskName           TaskName to act on, if not the first and only task.
    -RenameTaskTo       New Name for Task IF single task.
    -SectionName        Name of json section to act on.
    -AddTables_CSV_file Name of CSV file with tables to ADD to the current tables in task
    -NewTables_CSV_file Name of CSV file with tables to REPLACE the current tables in task with


    -- Switches --


    -Reference          Generate unchanged contents with script-specific formatting for comparisons
    -NotSortTables      Stop the script from sorting the explicitly included table list.
    -DeleteEP           Remove the "databases" section from the InputJson
    -DeleteTasks        Remove the "tasks" section from the InputJson
    -DeleteSection      Request removal of the json section identified by -SectionName


    -TaskOverview       List Tasks in the json file.
    -EPOverview         List the Databases (End Points) section in the json file
    -SectionOverview    List the sections in the json file, useful as quick verification.
    -help               This text

And here is much the same for Perl (on Linux you may have to obtain JSON.pm):

Code:

C:\>perl \scripts\modify_replicate_json.pl -h


  modify_task_json.pl    Hein van den Heuvel, Attunity, Feb-19 2017


Usage:
        perl \scripts\modify_replicate_json.pl [-o output] [-l tables.csv] task.json
        -h for help




    This script can be used to set the same "explicit_included_tables" to all
    the Attunity Replicate task definitions provided in the task file, based on a simple CSV table list.
    Should be easy enough to tweak this to change other sections.


    The main input file is created using the 5.1.2(or later) GUI EXPORT button or:
        # repctl exportrepository [task=...]


    The main output file usage is as source for the 5.1.2 (or later) GUI IMPORT button or:
        # repctl importrepository json_file=...


    The table file is a simple CSV list with owner,table_name[,rows] entries.
    Rows is an optional number for "estimated_size".
    Any line not starting with a 'word' is considered a comment and is ignored.
    NO header line, unless commented out.


    When no output is provided it just provides an overview listing for the
    input json file, with tasknames and matching tablenames.


    Requesting an output without providing a table list is useful to get
    an exact reference file layout to compare against (WinDiff, Ultra-Compare,
    Beyond-Compare), as the pretty-printed generated json, while functionally the
    same, will have a different layout from the original source.


    Note 1: "-d databases" for Delete EndPoint recommended when renaming EndPoints in a task.


    Note 2: Unfortunately the perl JSON.PM module does NOT preserve ORDER for(nested) hash elements.
            The output JSON will have randomly ordered elements.
            That will work fine, but it is impossible to compare the text.
            Best suggestion is to import and export to get predictable formatting.


    Arguments:
                    Input file specification. Mandatory
    Options with parameters:
            -o      Output file specification.
            -l      list file containing owner,table_names
            -n      task Name match string (Perl Regex), default all.
            -N      task Name SKIP string (Perl Regex), default none.
            -i      Case insensitive Duplicate test for ADD
            -p      property name for Property Overview (optional)
            -r      Rename Task
            -s      Rename SOURCE EndPoint
            -t      Rename TARGET EndPoint
            -d      Comma-separated list of sections to delete, such as 'tasks' or 'databases'


    Switches:
            -A      Add tables, default is to replace.
            -T      Task Overview
            -E      EndPoint Overview (aka databases)
            -P      Properties (Sections) overview.
            -R      Reference Run, no functional changes, just formatting for file comparisons.
            -S      Do NOT sort tables (owner.name)
            -H      Display this Help text


And here is a sample output for the "end-point overview" sub-function:

Code:

PS C:\> C:\scripts\modify_replicate_json.ps1 'C:\Program Files\Attunity\Replicate\Data\imports\Replication_Definition.json' -EPOverview


    EndPoint Name              Role   Lic  Type        Used By...
    -------------------------  ------ ---  ----------  -----------------
    A_sql_server_target        TARGET Yes  SQL_SERVER
    DSC_FIS_BSP_STMP           TARGET Yes  HADOOP      FIS_BSP_STMP_T1
    GIFTCARDS                  SOURCE Yes  MYSQL       GIFTCARDS_GIFTCARDS
    Hadoop_target              TARGET Yes  HADOOP
    Kafka                      TARGET Yes  KAFKA
    MySQL_To_FC                SOURCE Yes  MYSQL       MySQL_to_MSSQL
    NETEZZA_DBGCARDS01         TARGET Yes  NETEZZA     GIFTCARDS_GIFTCARDS
    New Endpoint Connection    TARGET Yes  SQL_SERVER
    New Endpoint Connection 2  SOURCE Yes  RMS
    Oracle_Source              SOURCE Yes  ORACLE      Oracle_Source, Oracle_to_SqlServer, mark-test, oraHR-to-oraHein


Give it a whirl?

Let us know if it helped you, and what (generically usable) improvements would be desirable.

Cheers,
Hein
Attached Files

Attunity connectivity with Amazon Web Services

Hello Experts,

Could you please let me know whether it is possible to connect to the below storage services/databases using Attunity? I know we have the option to connect to Amazon Redshift, but I am not sure about the other options.
Could you please give your valuable inputs?

Database

  1. Amazon Redshift --> Possible
  2. Amazon Relational Database Service (RDS)
  3. Amazon Aurora
  4. Amazon DynamoDB (NoSQL Database)
  5. Amazon ElastiCache


Storage


  1. Amazon S3
  2. Amazon Elastic Block Store
  3. Amazon Elastic File System
  4. Amazon Glacier
  5. AWS Storage Gateway


Stream Processing System


  1. Apache Kafka on AWS
  2. Amazon Kinesis Streams


Thanks!
Chetan

Error in recreating external table in sqldw azure

Hi Experts,

We have a scenario where we occasionally come across an issue with the creation of external tables in Azure SQL DW.

We were trying to perform data replication from Oracle to Azure SQL DW using a .csv file size of 500 MB when the task errored out with the below error:

"RetCode: SQL_ERROR SqlState: 42S01 NativeError: 2714 Message: [Microsoft][SQL Server Native Client 11.0][SQL Server]There is already an object named 'xxxxxxxxxxxxxxx' in the database. Line: 1 Column: -1".

It then tried to drop and recreate this external table but could not do so.

Please note: the same task completed successfully for other file sizes, such as 250 MB, 150 MB, etc.
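
If the leftover external table ever needs to be cleared manually before re-running the task, a hedged sketch along these lines (the object name below is a placeholder for the one reported in the error) can be run against the SQL DW:

Code:

-- Hedged sketch; 'xxxxxxxxxxxxxxx' stands in for the object named in the error.
IF EXISTS (SELECT 1 FROM sys.external_tables WHERE name = 'xxxxxxxxxxxxxxx')
    DROP EXTERNAL TABLE [xxxxxxxxxxxxxxx];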

Any pointers to the cause of this error would be helpful.

Regards,
Akanksha

Table and column list of replicated objects?

I am fairly new to Attunity Replicate; is there somewhere I can extract the metadata? We are interested in listing all the tables and columns included in our tasks.
In the JSON file I see only the columns that were explicitly added or removed, but I'd like the complete list of columns that are replicated for a given table/task.
Any help would be great!

Replicate Data: Hadoop as source - change capture options

The documentation mentions that Hadoop as a source end point does not support change capture. What are folks in the community doing if they need this functionality?

ACXAPI_PING function

Hello All,

Could you please give me some information on the response structure of the ACXAPI_PING function call for the query adapter?

Kind regards
Sergejus

SQL Server 2016 to Teradata 15.10 - Attunity Connector, 32 or 64 bit?

Hello, I'm on SQL Server 2016 SP1 and I connect to Teradata 15.10; the SSIS packages (designed in Visual Studio/SSDT) are set up to use Attunity 4.0. I have installed the Teradata 64-bit drivers and the Attunity connectors. For the SSIS packages that connect to Teradata, the job step that calls the package must be marked as 32-bit in the job's Advanced settings, or it will fail. Is it possible to have a 64-bit pathway between SQL Server and Teradata using Attunity? Thanks, JPQ

Replication to Distribution Job

Is there a way (or not) to modify/clone a replication job into a distribution job?

Moving files from Source to Target

I have a client that wants to move files from a source to a target and then, upon completion, delete those files from the source. How would I modify the job to do that?

Creating CLI Distribution Jobs

I am in dire need of assistance to develop a CLI script to create distribution jobs from my existing replication jobs. I'm stuck with "command line syntax errors".

Attunity Replicate can't connect to Cloudbeam (for Azure SQL DW)

We are having difficulties connecting to Attunity Cloudbeam from Attunity Replicate server.

Setup:
1. Attunity Replicate is installed behind the corporate firewall. Port 5746 (for CloudBeam) and port 1433 (for Azure SQL DW) have been opened, and connectivity can be verified through TELNET.
2. Attunity CloudBeam has been provisioned in an Azure VM using the Marketplace template.
3. The password has been set in Attunity CloudBeam.
4. From Attunity Replicate we can connect to Azure SQL DW. The Attunity CloudBeam and Azure Blob Storage details have been entered correctly.
5. When testing the connection, we get the below error.


00007184: 2017-03-14T21:26:31 [SERVER ]I: Going to connect to server mbie-poc-dbsrv-dev1.database.windows.net database mbie-poc-dw-dev1 (cloud_imp.c:1405)
00007184: 2017-03-14T21:26:33 [SERVER ]I: Connected to server mbie-poc-dbsrv-dev1.database.windows.net database mbie-poc-dw-dev1 successfully. (cloud_imp.c:1423)
00007184: 2017-03-14T21:26:34 [INFRASTRUCTURE ]E: Http request for url </attunitytransfer/rest/agent/pre_secure_link?request=64d0a24b-fb3d-fa4e-9ded-d664f99fde5b> failed: status = 403, message = <Invalid plugin root url; only '/agent','/transfer' root url plugins are supported> Http request failed. (at_csdk.c:806)
00007184: 2017-03-14T21:26:34 [INFRASTRUCTURE ]E: pre encryption step failed Failed to encrypt csdk link: pre-encryption step failed (at_csdk.c:1067)
00007184: 2017-03-14T21:26:34 [INFRASTRUCTURE ]E: Post connect callback failed. [1000310] Post connect callback failed. (at_http_client.c:915)
00007184: 2017-03-14T21:26:34 [INFRASTRUCTURE ]E: - [123002] Failed to connect to Attunity CloudBeam AMI. (ar_cifta.c:487)
00007184: 2017-03-14T21:26:34 [INFRASTRUCTURE ]E: -Http request for url </attunitytransfer/rest/agent/pre_secure_link?request=64d0a24b-fb3d-fa4e-9ded-d664f99fde5b> failed: status = 403, message = <Invalid plugin root url; only '/agent','/transfer' root url plugins are supported> [123002] Failed to connect to Attunity CloudBeam AMI. (ar_cifta.c:487)
00007184: 2017-03-14T21:26:34 [INFRASTRUCTURE ]E: -Http request for url </attunitytransfer/rest/agent/pre_secure_link?request=64d0a24b-fb3d-fa4e-9ded-d664f99fde5b> failed: status = 403, message = <Invalid plugin root url; only '/agent','/transfer' root url plugins are supported>; pre encryption step failed; Post connect callback failed. [1000310] [123002] Failed to connect to Attunity CloudBeam AMI. (ar_cifta.c:487)
00007184: 2017-03-14T21:26:34 [INFRASTRUCTURE ]E: failed to connect [123002] Failed to connect to Attunity CloudBeam AMI. (ar_cifta.c:487)
00007184: 2017-03-14T21:26:34 [SERVER ]E: Failed to start VM Image Client [123002] Failed to connect to Attunity CloudBeam AMI. (cloud_imp.c:1495)
00007184: 2017-03-14T21:26:34 [SERVER ]E: Failed to start VM Image Client [123002] Failed to connect to Attunity CloudBeam AMI. (cloud_imp.c:1495)

Looks like a product fault, as this has been tested in a different environment with no firewall and still fails with the same error.

Replicate Version is 5.0.3.38; Cloudbeam version is 5.0.0.28

Regards,
Govind

"Cannot create Oracle NLS environment for charset 873 and ncharset 1000"

When we test the connection for the Oracle endpoint, we get the below message:


"Cannot create Oracle NLS environment for charset 873 and ncharset 1000"

===

Please check the following:

- Check to see if you have an oci.dll under the ~\SYSTEM32 directory; if so, try to remove it and test the connection again.

Other symptoms that usually accompany this condition:
- the NLS error appears but no ORA-xxxx code can be found in ~\data\logs\repsrv.log
- the ocitest tool returns garbage data

Notes about ocitest: "ocitest -?" shows

Usage: ocitest [[charset=837] [ncharset=1000=OCI_UTF16ID [n_inits=0]]]
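
For comparison, the database's own character set values can be checked directly with a query along these lines (similar to the NLS lookups Replicate issues when connecting); this is only a hedged pointer for matching against the charset/ncharset IDs in the error:

Code:

-- Hedged reference query: shows what the database reports for its character sets.
SELECT parameter, value
  FROM v$nls_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');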

Cannot find the service.url file, make sure the service is running.

When using the Attunity shortcut icon, you may get the message:

Cannot find the service.url file, make sure the service is running.

This can happen when your data folder is installed on another drive/folder location.

===

To correct the issue:

Right-click the Attunity Console shortcut icon.

Edit the Target,

so in the below example:

"C:\Program Files\Attunity\Replicate\bin\RepUiCtl.exe" SERVICE BROWSE

edit it to include the data folder (in this case "R:\Program Files\Attunity\Replicate\data"):

"C:\Program Files\Attunity\Replicate\bin\RepUiCtl.exe" -d "R:\Program Files\Attunity\Replicate\data" SERVICE BROWSE

Save the setting; the Attunity Console shortcut icon should now work.

CDC goes into LOGGER state when the database log switch happens

Hi,

We are facing a problem when trying to set up the CDC service on an Oracle DB. The service runs fine when set up; however, it goes into the LOGGER state as soon as a log switchover takes place.

Can you please let us know the fix for this? It is currently impacting our project go-live.

Thanks.

API library cannot be loaded. Failed to load the library 'libsapnwrfc.so'

Got this error when testing an SAP source:

API library cannot be loaded. Failed to load the library 'libsapnwrfc.so'

ORA-01861: Literal does not match format string

Using SSIS in VS 2012 with the Attunity driver version 2.0 to retrieve data from an Oracle 11g database. The SQL works in SQL Developer, but the Oracle error "ORA-01861: literal does not match format string" is thrown when the same SQL is run from SSIS. The offending part is in the WHERE clause; here is the part that causes the error: "where (TO_DATE(CONFINEMENT_START,'DD-MON-YY') >= TO_DATE('01-JAN-1753','DD-MON-YYYY') or CONFINEMENT_START is null)". CONFINEMENT_START is a date, but without the TO_DATE around it the query misses records that should be returned. The database NLS_DATE_FORMAT is "26-APR-17". The format mask on the TO_DATE doesn't make sense, but it is the only solution we have found that works directly in Oracle.
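
For illustration only (the column name is taken from the quoted snippet, the table name is hypothetical, and this is a hedged suggestion rather than the poster's final fix): ORA-01861 generally means a literal and its format mask disagree, and wrapping a DATE column in TO_DATE forces an implicit character conversion that depends on NLS settings. Comparing the DATE column directly against a literal whose mask matches it avoids the double conversion:

Code:

-- Hedged sketch, not the poster's final fix.
SELECT *
  FROM some_table   -- hypothetical table; CONFINEMENT_START is from the post
 WHERE CONFINEMENT_START >= TO_DATE('01-JAN-1753', 'DD-MON-YYYY')
    OR CONFINEMENT_START IS NULL;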


It turns out the TO_CHAR we originally used in the SSIS package was correct, so the issue is solved for us, but there is still the question of why the Attunity driver does not allow what works in SQL Developer and SQL*Plus.