As of December 1, 2020, Focal Point is retired and repurposed as a reference repository. We value the wealth of knowledge that's been shared here over the years. You'll continue to have access to this treasure trove of knowledge, for search purposes only.
We are migrating from DB2 to Azure SQL. For this, I am writing several data flows that transfer the data using Extended Bulk Load, as that gives the best performance. I now see that several flows fail with an error on the data. The failing records are shown in the log, and examining them makes clear the cause is tab characters (x'09') inside character fields.
After further analysis, I see that the intermediate files iDM creates are tab-delimited files with unquoted strings, so it seems obvious that a string containing a tab character will mess up the record.
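For what it's worth, the corruption is easy to reproduce outside of iDM. A small illustration (Python, with made-up values; the point is only the file format):

```python
import csv
import io

# One record whose middle field contains an embedded tab (x'09').
row = ["1001", "Smith\tJones", "NL"]

# Unquoted tab-delimited output, like the intermediate file described above:
# once written, the embedded tab is indistinguishable from a field separator.
line = "\t".join(row)
fields = line.split("\t")
print(len(fields))  # 4 -- the three-column record now parses as four fields

# With quoted strings the round trip is lossless.
buf = io.StringIO()
csv.writer(buf, delimiter="\t", quoting=csv.QUOTE_ALL).writerow(row)
parsed = next(csv.reader(io.StringIO(buf.getvalue()), delimiter="\t"))
print(len(parsed), parsed[1] == "Smith\tJones")  # 3 True
```

So quoting the strings in the intermediate file would make the embedded tabs harmless, which is exactly the fix I am after.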
By the way, I am on iDM version 8.2.02M (the latest); the source is DB2 tables and the target is Azure SQL, for which we use the SQL Server ODBC adapter.
Is there any way to get around this? I have tried both a normal data flow and a direct load flow, with the same result. Is there some parameter to set so it will use quoted strings in the tab-delimited file?
I do not have these problems when bulk loading with DB2 as the target, although I must say that a few years back I did have similar problems there; after some cases I opened, the DB2 bulk load was adjusted to use tab-delimited intermediate files with quoted strings. I guess that was done only for DB2 and not yet for the SQL Server ODBC adapter, so maybe I have to open a case again to get it adjusted.
But it will take some time before a better version is available, so any suggestions for a workaround?
The only workaround I can think of now is to add source transformations on suspicious columns to strip the tab characters, but you never know whether next week a tab character will turn up in a column I did not yet suspect, and adding that kind of source transformation to every character column is not realistic.
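One could at least find the suspect columns up front instead of guessing. A sketch that builds a DB2 query listing rows with a tab in any character column (the table and column names are made up; I believe DB2 supports LOCATE and CHR, but verify against your version):

```python
# Hypothetical table and column names -- in practice the column list could
# be read from the DB2 catalog (SYSCAT.COLUMNS) instead of hard-coded.
table = "MYSCHEMA.CUSTOMERS"
char_columns = ["CUST_NAME", "ADDRESS", "REMARKS"]

# One predicate per character column: true when the column contains x'09'.
predicate = " OR ".join(f"LOCATE(CHR(9), {col}) > 0" for col in char_columns)
query = f"SELECT * FROM {table} WHERE {predicate}"
print(query)
```

Running the generated query against the source would show which columns actually carry tabs today, though of course it cannot guarantee anything about next week's data.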
Martin.
WebFocus 8206M, iWay DataMigrator, Windows, DB2 Windows V10.5, MS SQL Server, Azure SQL, Hyperstage, ReportCaster
I also noticed another issue with the unquoted strings in the tab-delimited files used for the bulk load. If I have a blank string " " in my source table, it ends up as a NULL value in my target, and that is certainly not the same.
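Quoting would fix this too: a quoted blank " " cannot be confused with an empty field that a loader maps to NULL. A quick illustration (Python, my own example values):

```python
import csv
import io

# A record whose middle field is a single blank -- not NULL, not empty.
row = ["1001", " ", "NL"]

# With quoting the blank is written explicitly as " ", so it stays distinct
# from an empty unquoted field that a loader might map to NULL.
buf = io.StringIO()
csv.writer(buf, delimiter="\t", quoting=csv.QUOTE_ALL).writerow(row)
parsed = next(csv.reader(io.StringIO(buf.getvalue()), delimiter="\t"))
print(parsed[1] == " ")  # True -- the blank round-trips unchanged
```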
Martin
I tried this last week: DB_EXPR(REPLACE("FIELD_WITH_TAB", CHR(9), ' ')) and it indeed works. Next week, however, there could be a tab character in another column, and I do not want to apply this source transformation to every character column. Nor does it solve the problem that a space value " " in the source is converted into a NULL value in the target, which is not the same.

I did find out that "insert from memory" does not have the same problem as the bulk load, as it apparently does not use an intermediate hold file. I opened a case, and they indicated it should give the same performance, but I doubt that with large files. Nevertheless, if a bulk load option is offered, it should work properly; otherwise it should be fixed or not offered.
In the current production release 8.2.03 (aka 7.7.08), when using Extended Bulk Load to MS SQL Server (and selected other databases), there is an option to set the column delimiter to something other than the default TAB character.
Also there is a new adapter specifically for MS Azure SQL Data Warehouse.
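If you do switch the delimiter, it is worth checking first that the candidate character never occurs in the source strings. A small sketch (the sample values and candidate delimiters are made up; in practice the sample would come from querying the actual character columns):

```python
# Illustrative sample of source string values (made up for this sketch).
sample_values = ["Smith\tJones", "O'Brien", "A|B Services"]

# Candidate delimiters: pipe, semicolon, and the ASCII unit separator x'1F',
# which almost never occurs in business data.
candidates = ["|", ";", "\x1f"]

# Keep only candidates that appear in none of the sampled values.
safe = [c for c in candidates if not any(c in v for v in sample_values)]
print(safe)  # [';', '\x1f'] -- pipe is ruled out by "A|B Services"
```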
Posts: 397 | Location: New York City | Registered: May 03, 2007
That is good news. I will certainly download and test this. I just looked at our download page, but 82M is still listed as the latest version there. When will 8.2.03 / 7.7.08 be available for download?
Martin.
It is my understanding that 8203 will be generally available by the end of April. If you are on InfoResponse, an e-mail is usually sent announcing availability when it is officially released. On your download page it will still be listed as 82M, but the gen number will be different.
Thank you for using Focal Point!
Chuck Wolff - Focal Point Moderator WebFOCUS 7x and 8x, Windows, Linux All output Formats
Posts: 2127 | Location: Customer Support | Registered: April 12, 2005
I read the release notes for 8.2.03, but concerning the adapter for Azure SQL database, they refer to using the SQL Server ODBC adapter (MSODBC). I have already been using that adapter for several months; even in our 7.7.06M version it worked fine with Azure SQL database. So what is new or improved in this adapter concerning Azure SQL databases? Or is it now just officially announced that it can be used?
Martin.