As of December 1, 2020, Focal Point is retired and repurposed as a reference repository. We value the wealth of knowledge that's been shared here over the years. You'll continue to have access to this treasure trove of knowledge, for search purposes only.
I am using data flows in Data Migrator and I need to read several large fixed length text files. The first line of each file contains a sequence number. In order to process all files in the correct order I need to read these sequence numbers for all the files.
At the moment I read the whole file and filter on the record type and some other attributes. Obviously this is a very inefficient process, but I cannot find any other way. I assume others have encountered the same situation and found solutions for this.
So my question is:
How can I efficiently (time-wise and resource-wise) read the first line of a fixed-length text file?
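For reference, outside of Data Migrator the underlying operation is cheap: reading only the first line of a file costs the same regardless of file size. A minimal Python sketch (the file-name pattern and helper name are illustrative, not part of the original post):

```python
import glob

def first_lines(pattern):
    """Return the first line of each file matching pattern.

    Reading a single line is O(1) in the file size, so large
    fixed-length files cost no more than small ones.
    """
    result = {}
    for path in sorted(glob.glob(pattern)):
        with open(path, "r") as f:
            result[path] = f.readline().rstrip("\n")
    return result
```

The same idea is what the answers below try to express within Data Migrator's flow model.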
Etienne
You'll want to do more than just type the value; you could for example read in the last sequence number processed from a different file, and compare the two values.
When you are done, click the Save icon to save the procedure.
In a process flow, drag the stored procedure between Start and the Data Flow. Delete the existing arrow connecting them, then add connections from Start to the procedure and from the procedure to the Data Flow.
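The logic Clif describes, comparing the sequence number on a file's first line with the last processed value recorded elsewhere, can be sketched in Python as follows. The field position (first five bytes) and file names are assumptions for illustration, not from the original thread:

```python
def needs_processing(data_file, state_file):
    """Compare the sequence number on the first line of data_file
    with the last processed value recorded in state_file."""
    with open(data_file) as f:
        # Assumption: the sequence number occupies the first 5 bytes.
        current = int(f.readline()[:5])
    try:
        with open(state_file) as f:
            last = int(f.read().strip())
    except FileNotFoundError:
        last = 0  # no state recorded yet: process everything
    return current > last
```

In Data Migrator itself this comparison would live inside the stored procedure, which then decides whether the downstream data flow should run.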
@Martin, thanks for your suggestion. I forgot to mention I already tried this.
The problem is that I need to combine some attributes on the first line with a table. What DM does is first perform the join against all records in the file and only then take the first line. This is because you can place a join before the final SQL object but not after it, and "Rows to retrieve" can be set only on the final SQL object.
Etienne, if you want to stay with flows and avoid coding, one possibility is to first build a simple flow that includes only the fixed-length file, set "Rows to retrieve" to 1, and write to a new target (preferably in the same database as the tables you need to join with later). Then, in a second flow, join with this new table. Use a process flow to run the two flows one after the other. Martin.
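Martin's two-flow idea, stage one row per file into a small table, then join that table against the lookup data, can be illustrated with an in-memory SQLite database. Table and column names here are hypothetical:

```python
import sqlite3

def stage_and_join(first_lines, lookup_rows):
    """Stage one row per file (its first line) into a small table,
    then join with a lookup table, mirroring the two-flow approach."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE headers (path TEXT, seq INTEGER)")
    con.execute("CREATE TABLE lookup (seq INTEGER, label TEXT)")
    con.executemany("INSERT INTO headers VALUES (?, ?)", first_lines)
    con.executemany("INSERT INTO lookup VALUES (?, ?)", lookup_rows)
    # The join now touches one staged row per file, not every
    # record of every large input file.
    return con.execute(
        "SELECT h.path, h.seq, l.label "
        "FROM headers h JOIN lookup l ON l.seq = h.seq "
        "ORDER BY h.seq"
    ).fetchall()
```

The point of the design is that the expensive join runs against the tiny staging table rather than the full fixed-length files.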
WebFocus 8206M, iWay DataMigrator, Windows, DB2 Windows V10.5, MS SQL Server, Azure SQL, Hyperstage, ReportCaster
@Martin, thanks. I agree, it looks like that is the only option when you want to stick with data flows. It seems a bit silly though to do it like that.
The way our organization has structured its (software) processes will result in lots of small flows and small jobs.
So we have to weigh the stored-procedure alternative Clif described against what you just described. I will first test whether the performance of the full join is acceptable; if not, I will discuss with my colleagues which option to choose.