Let's say I have a source table named "Product" with over 10,000 records, which I load into another table, "Product_update". The next day, another 1,250 records are added to "Product", so the "Product" table now contains 11,250 records, and I want to append ONLY those 1,250 new records. How do I keep track of where I stopped loading last time (in this case record no. 10,000, so the append starts at no. 10,001)? Or does iWay DataMigrator keep track of this information already?
There's an add-on to DataMigrator for most of the major relational databases called "Change Data Capture" that reads database logs, archives, or journals in order to process just the changes. It automatically keeps track of the last record read from the database logs, so that the next time it runs it can pick up where it left off.
Without that, it depends on how your source table is designed. If the key to the table is an automatically incrementing number, and no records are ever updated or deleted, then you could keep track of that number after running the flow and use it in a WHERE condition the next time.
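The high-water-mark pattern described above can be sketched generically. The snippet below is a minimal illustration in Python with SQLite standing in for the source and target tables; the table and column names (product, product_update, prod_id) are assumptions for the example, not DataMigrator objects.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE product (prod_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE product_update (prod_id INTEGER PRIMARY KEY, name TEXT)")

# Initial source rows
cur.executemany("INSERT INTO product VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])

def incremental_load(cur):
    """Append only source rows with a key above the highest key already loaded."""
    # High-water mark: the largest key present in the target (0 if empty)
    (hwm,) = cur.execute(
        "SELECT COALESCE(MAX(prod_id), 0) FROM product_update").fetchone()
    cur.execute(
        "INSERT INTO product_update "
        "SELECT * FROM product WHERE prod_id > ?", (hwm,))
    return cur.rowcount  # number of rows appended this run

first = incremental_load(cur)   # loads all 3 rows
cur.executemany("INSERT INTO product VALUES (?, ?)", [(4, "d"), (5, "e")])
second = incremental_load(cur)  # loads only the 2 new rows
```

Note that this only works under Clif's stated conditions: the key must be monotonically increasing and existing rows must never be updated or deleted.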
N/A
Posts: 397 | Location: New York City | Registered: May 03, 2007
quote: Without that it depends on how your source table is designed. If the key to the table is an automatically incrementing number, and no records are ever updated or deleted, then you could keep track of that number after running the flow and use it in a WHERE condition the next time.
This is similar to the idea I have in mind, but how to keep track of the updated 'value' is what's hurting my brain. Any pointers to get started?
It sounds like Clif's scenario, where the key to the table is automatically incremented. Right? If so, you can use a DM stored procedure prior to the data flow to retrieve the HIGHEST value of the key, put that in a HOLD file, and do a -READFILE to put it in a variable that you use in the WHERE statement of the flow to start at that record. Something like below:
TABLE FILE source
BY HIGHEST 1 keyfield
ON TABLE HOLD
END
-RUN
-READFILE HOLD
The &variable name will be the same as the keyfield name in the request.
Thank you for using Focal Point!
Chuck Wolff - Focal Point Moderator WebFOCUS 7x and 8x, Windows, Linux All output Formats
Posts: 2128 | Location: Customer Support | Registered: April 12, 2005
Thanks for the advice. I ended up solving this another way: a join between the source table and a counter table (which keeps track of loaded records), appending into both the target table and the counter table (deleting all counter rows prior to the load), with a WHERE condition selecting source records greater than the previous counter table's number.
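The counter-table approach above can be sketched the same way. This is a minimal Python/SQLite illustration of the pattern, not the actual DataMigrator flow; the names (product, target, counter, prod_id, last_id) are assumptions for the example. A one-row counter table records the highest key loaded; each run appends source rows above that value, deletes all counter rows, and inserts the new maximum.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE product (prod_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE target (prod_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE counter (last_id INTEGER)")
cur.execute("INSERT INTO counter VALUES (0)")  # nothing loaded yet

def run_flow(cur):
    """One load run: append new source rows, then rewrite the counter table."""
    (last_id,) = cur.execute("SELECT last_id FROM counter").fetchone()
    cur.execute("INSERT INTO target SELECT * FROM product WHERE prod_id > ?",
                (last_id,))
    appended = cur.rowcount
    # Delete all rows prior to load, then store the new high-water mark
    cur.execute("DELETE FROM counter")
    cur.execute("INSERT INTO counter SELECT COALESCE(MAX(prod_id), 0) FROM target")
    return appended

cur.executemany("INSERT INTO product VALUES (?, ?)", [(1, "a"), (2, "b")])
day1 = run_flow(cur)  # first run appends both rows
cur.executemany("INSERT INTO product VALUES (?, ?)", [(3, "c")])
day2 = run_flow(cur)  # second run appends only the new row
```

Keeping the mark in its own counter table (rather than deriving it from the target each time) also works when the target is loaded by several flows or truncated independently of the source.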