As of December 1, 2020, Focal Point is retired and repurposed as a reference repository. We value the wealth of knowledge that's been shared here over the years. You'll continue to have access to this treasure trove of knowledge, for search purposes only.
I am using data flows in Data Migrator and I need to read several large fixed-length text files. The first line of each file contains a sequence number, and to process the files in the correct order I need to read the sequence number from each file.
At the moment I read each whole file and filter on the record type and some other attributes. Obviously this is a very inefficient process, but I cannot find any other way. I assume others have encountered the same situation and found a solution.
So my question is:
How can I efficiently (time-wise and resource-wise) read the first line of a fixed-length text file?
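For comparison, outside of Data Migrator reading only the first line is cheap because the rest of the file is never touched. A minimal Python sketch, in which the field offset and length of the sequence number are assumptions:

```python
def read_sequence_number(path, offset=0, length=8):
    """Read only the first line of a fixed-length file and extract the
    sequence number field (offset and length are assumed, not from the thread)."""
    with open(path, "r") as f:
        first_line = f.readline()  # reads one line, not the whole file
    return int(first_line[offset:offset + length])

# Example: determine the processing order for a set of files.
# ordered = sorted(["fileA.txt", "fileB.txt"], key=read_sequence_number)
```

Because `readline` stops at the first newline, the cost is independent of the file size.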
Etienne
You'll want to do more than just type the value; for example, you could read the last sequence number processed from a different file and compare the two values.
When you are done click the Save icon to save the procedure.
In a process flow, drag the stored procedure between Start and the Data Flow. Delete the existing arrow connecting them, then add connections from Start to the procedure and from the procedure to the Data Flow.
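The comparison Clif describes could be sketched as follows; a hedged Python illustration in which the checkpoint file name and the 8-character sequence field are assumptions, not part of the thread:

```python
def next_in_sequence(data_file, checkpoint_file, seq_length=8):
    """Return True if data_file's header sequence number is exactly one
    greater than the last number recorded in checkpoint_file."""
    with open(data_file) as f:
        current = int(f.readline()[:seq_length])  # sequence number from line 1
    try:
        with open(checkpoint_file) as f:
            last = int(f.read().strip())          # last number processed
    except FileNotFoundError:
        last = 0                                  # nothing processed yet
    return current == last + 1
```

A procedure like this would gate the Data Flow in the process flow: only when the check passes does the flow run, and afterwards the checkpoint file is updated with the new number.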
Posts: 397 | Location: New York City | Registered: May 03, 2007
@Martin, thanks for your suggestion. I forgot to mention I already tried this.
The problem is that I need to join some attributes from the first line with a table. What DM does is first perform the join with all records in the file and only then take the first line. The cause is that a join can be placed before the final SQL object but not after it, while "rows to retrieve" can only be set on the final SQL object.
Etienne, if you want to stay with flows and no coding, one possibility is to first build a simple flow that includes only the fixed-length file, set "rows to retrieve" to 1, and load a new target (preferably in the same database as the tables you have to join with later on). Then, in a second flow, join with this new table, and use a process flow to run the two flows one after the other. Martin.
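Martin's two-flow approach, staging the header row into a table and then joining against it, can be mimicked in plain SQL. A sketch using Python's sqlite3 module, with all table and column names invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Staging table that flow 1 would load with just the header row.
cur.execute("CREATE TABLE file_header (seq INTEGER, rec_type TEXT)")
# Lookup table the header must be joined with (used by flow 2).
cur.execute("CREATE TABLE batches (seq INTEGER, description TEXT)")
cur.execute("INSERT INTO batches VALUES (42, 'nightly load')")

# Flow 1: "rows to retrieve = 1" -> only the first line lands in staging.
first_line = "00000042H"  # hypothetical header record from the fixed-length file
cur.execute("INSERT INTO file_header VALUES (?, ?)",
            (int(first_line[:8]), first_line[8]))

# Flow 2: the join now touches one staged row instead of the whole file.
cur.execute("""SELECT b.description
               FROM file_header h JOIN batches b ON h.seq = b.seq""")
print(cur.fetchone()[0])
```

The point of the staging step is exactly the one Martin makes: once the single header row lives in the same database as the lookup tables, the later join is trivial.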
WebFocus 8206M, iWay DataMigrator, Windows, DB2 Windows V10.5, MS SQL Server, Azure SQL, Hyperstage, ReportCaster
@Martin, thanks. I agree, it looks like that is the only option when you want to stick with data flows. It seems a bit silly though to do it like that.
The way our organization organized its (software) processes will result in lots of small flows and small jobs.
So we have to weigh the stored-procedure alternative described by Clif against what you just described. I will first check whether the performance of the full join is acceptable; if not, I will discuss with my colleagues which option to choose.