Let's say I have a source table named "Product" with over 10k records that I load into another table, "Product_update". The next day, another 1,250 records are added to "Product", so "Product" now contains 11,250 records in total, and I want to append ONLY those 1,250 new records. How do I keep track of where the last load ended (in this case, up to record no. 10,000, so the next load should begin appending from no. 10,001)? Or does iWay DataMigrator already keep track of this information?
There's an add-on to DataMigrator for most of the major relational databases called "Change Data Capture" that reads database logs, archives, or journals in order to process just the changes. It automatically keeps track of the last record read from the database logs, so that the next time it runs it can pick up where it left off.
Without that, it depends on how your source table is designed. If the key to the table is an automatically incrementing number, and no records are ever updated or deleted, then you can keep track of the highest key value after running the flow and use it in a WHERE condition the next time.
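For example, a minimal Dialogue Manager sketch of that idea, assuming a hypothetical key column named PRODUCT_ID and that the highest key from the previous run was saved somewhere it can be read back (hard-coded here only for illustration):

-* Sketch only: in practice &LAST_ID would be read back from wherever it
-* was saved at the end of the previous run, not hard-coded.
-SET &LAST_ID = 10000 ;
TABLE FILE PRODUCT
PRINT *
WHERE PRODUCT_ID GT &LAST_ID ;
ON TABLE HOLD AS NEWROWS
END
-RUN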
quote:
Without that, it depends on how your source table is designed. If the key to the table is an automatically incrementing number, and no records are ever updated or deleted, then you can keep track of the highest key value after running the flow and use it in a WHERE condition the next time.
This is similar to the idea I have in mind, but how to keep track of that updated 'value' is what's hurting my brain. Any pointers on getting started?
It sounds like Clif's scenario, where the key to the table is automatically incremented. Right? If so, you can use a DM stored procedure prior to the data flow to retrieve the HIGHEST value of the key, put that in a HOLD file, and do a -READFILE to put it in a variable that you use in the WHERE statement of the flow to start at that record. Something like below:
TABLE FILE source
BY HIGHEST 1 keyfield
ON TABLE HOLD
END
-RUN
-READFILE HOLD
The &variable name will be the same as the keyfield name in the request.
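And a hedged sketch of how that variable might then be used in the flow itself, assuming the HIGHEST request above was run against the target table (so &keyfield holds the highest key already loaded) and the source is the "Product" table:

TABLE FILE PRODUCT
PRINT *
WHERE keyfield GT &keyfield ;
ON TABLE HOLD AS APPENDME
END
-RUN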
Thank you for using Focal Point!
Chuck Wolff - Focal Point Moderator WebFOCUS 7x and 8x, Windows, Linux All output Formats
Thanks for the advice. I ended up solving this another way: a join between the source table and a counter table (which keeps track of how many records have been loaded), appending into both the target table and the counter table (after deleting all counter rows prior to the load), with a WHERE condition selecting only source records greater than the number stored in the counter table.
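In sketch form, with hypothetical names (a one-row COUNTER table holding LAST_ID, and a source PRODUCT table keyed on PRODUCT_ID), and with a -READFILE standing in for the join the actual flow performs:

-* Read the highest key loaded so far from the one-row counter table;
-* -READFILE sets &LAST_ID from the LAST_ID column.
TABLE FILE COUNTER
PRINT LAST_ID
ON TABLE HOLD
END
-RUN
-READFILE HOLD
-* Select only the source rows beyond it. The flow appends these to the
-* target and rewrites the counter row with the new highest key.
TABLE FILE PRODUCT
PRINT *
WHERE PRODUCT_ID GT &LAST_ID ;
ON TABLE HOLD AS NEWROWS
END
-RUN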