So I have a need to gather 5 years of data for a data feed with a vendor. I started this query yesterday at 5:45p and as I write this it's 2:15p (so ~20.5 hours).
My basic question is, how do I tell if the query is still running/doing anything?
- I submitted it as deferred.
- I am monitoring the agent on the Server console; it is still active, not crashed, and 'LASTERRNO' = 0.
- I can see the 'Last Masterfile Name' is the hold file from the second step.
- The output is FILEDEF'd to a netshare. As of yet, it has not written out.
- I cannot see any of my hold files in '\app\HeldFiles\'.
- The fex is simple: 'gather data, hold, join one table, gather data, hold', repeat x 3.
It feels like it's hung, but I'm not sure what I can do to verify that short of killing/restarting it.
- ABT
Just curious: do you have the -RUN command between each "gather data, hold, join one table, gather data, hold" step? You *may* not see your hold files until the end of the program if you don't use -RUN.
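A minimal sketch of what I mean; the file and field names (ENCOUNTERS, PATIENT_DETAILS, ENCNTR_NO, AMOUNT, PATIENT_NAME) are hypothetical stand-ins, not names from this thread:

TABLE FILE ENCOUNTERS
SUM AMOUNT
BY ENCNTR_NO
ON TABLE HOLD AS HOLD1
END
-RUN
-* Without the -RUN above, the stacked request may not execute (and
-* HOLD1.mas/.ftm may not hit disk) until the end of the fex.
JOIN ENCNTR_NO IN HOLD1 TO ENCNTR_NO IN PATIENT_DETAILS AS J1
TABLE FILE HOLD1
PRINT PATIENT_NAME
BY ENCNTR_NO
ON TABLE HOLD AS HOLD2
END
-RUN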
Francis
Give me code, or give me retirement. In FOCUS since 1991
Production: WF 7.7.05M, Dev Studio, BID, MRE, WebSphere, DB2 / Test: WF 8.1.05M, App Studio, BI Portal, Report Caster, jQuery, HighCharts, Apache Tomcat, MS SQL Server
What's your data source? Are you using TABLEF (rather than TABLE) to "gather" the data? What's the format of your HOLD (unspecified, alpha, focus, ...)? Explain the joins. Have you gotten and reviewed traces of the generated SQL (using a severe RECORDLIMIT)?
I have also broken the program out into 5 files (by year), added a -RUN after each END statement, and run them all deferred. My DBAs are gonna loooooove me. - ABT
@Waz, they've all gone home for the day; I'll have to ask tomorrow. The IBI/WebFocus server (web) console basically shows nothing. I'm not sure what else they have available, but I think I've seen them use the same basic thing, only on a green screen.
1. If the hold files are (going to be) big, it is advantageous to use TABLEF rather than TABLE. TABLE puts the answer set returned by the database server through a sort, even when it's already in the final sort order; TABLEF avoids the overhead of (re)sorting, so it's appropriate when the database server will take care of sorting, or when the sort order doesn't matter. The SQL trace will indicate whether or not the sort order is passed through in the generated SQL (see the sketch after point 3).
2. In the run that you suspect is hung, the first TABLE (with or without the -RUN) should almost immediately generate a .mas for holdfile1, and start populating the .ftm as soon as a row is returned by the database server. If the .mas has not turned up in the agent's work directory, it's time to talk to the DBA.
3. Before you go whole hog, always test with no retrieval (just to get the SQL statement trace) and then with a severe retrieval limit (e.g. IF READLIMIT EQ 10000 and IF RECORDLIMIT EQ 10) at each stage. But you knew that.
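To pull points 1 and 3 together, here's a minimal sketch; ENCOUNTERS, ENCNTR_NO, and AMOUNT are hypothetical stand-ins:

SET TRACEOFF = ALL
SET TRACEON = STMTRACE//CLIENT
SET TRACEUSER = ON
-* TABLEF skips the agent-side sort that TABLE would perform:
TABLEF FILE ENCOUNTERS
PRINT ENCNTR_NO AMOUNT
IF READLIMIT EQ 10000
IF RECORDLIMIT EQ 10
ON TABLE HOLD AS HOLD1
END
-RUN
-* The trace shows the generated SQL; the severe limits keep this
-* test pass from dragging through the full five years of data.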
Sometimes the data formats defined in the MFDs won't match all of the data retrieved from the DB2 tables, i.e., character lengths, date formats, etc.; validate the data formats.
Logically design the requirement and split it into smaller modules, by time period, country or zone, product, range, etc., to generate a more meaningful report.
I'd suggest building a data mart, then generating the reports from it.
If you have console access on the iSeries, you can use WRKACTJOB to see whether your job is still running. Note that you'll be able to see all the TSCOM3 jobs and possibly also QSQSRVR jobs that are generated to run the SQL. I always run WRKACTJOB and then sort by highest CPU; if a WebFOCUS job appears near the top, we've got a problem!
Regarding your joins:
- Join (unspecified) HoldFile1 to the Patient Details table on Encounter Number (primary key, IDX) to get name info; save to HoldFile2.
- Join (unspecified) HoldFile2 to an Extension table to grab one field; save to HoldFile3.
- Join (left outer) HoldFile3 to the Payments table to sum min/max payment dates and amounts using some WHERE criteria; save to HoldFile4.
Note that if Patient Details, Extension and Payments are DB2 tables on iSeries, any join from a WebFOCUS FOC file into an iSeries DB2 table may perform badly as it may drag down ALL the DB2 records before it can do the join.
If you have to work this way (i.e. you haven't got decent keys to be able to join everything in one go on iSeries tables) then process the biggest table first!
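If you do have usable keys, here's a hedged sketch of the single-pass approach, with the Payments table as the (biggest) host; the field names (ENCNTR_NO, PAY_DATE, PAY_AMOUNT, PATIENT_NAME) are hypothetical stand-ins:

JOIN ENCNTR_NO IN PAYMENTS TO ENCNTR_NO IN PATIENT_DETAILS AS J1
JOIN ENCNTR_NO IN PAYMENTS TO ENCNTR_NO IN EXTENSION AS J2
TABLE FILE PAYMENTS
SUM MIN.PAY_DATE MAX.PAY_DATE PAY_AMOUNT
BY ENCNTR_NO
BY PATIENT_NAME
WHERE PAY_AMOUNT GT 0
ON TABLE HOLD AS HOLDALL
END
-RUN
-* With all three cross-referenced files on DB2, the adapter has a
-* chance to translate the joins into one SQL statement; the trace
-* settings below will confirm whether that actually happens.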
If you set:
SET TRACEOFF = ALL
SET TRACEON = SQLAGGR//CLIENT
SET TRACEON = STMTRACE//CLIENT
SET TRACEUSER = ON
then you'll see the generated SQL.
Hope this helps
Bob Jackson
WebFocus 7.6.11 IBM iSeries
The XRETRIEVAL parameter previews the format of a report without actually accessing any data. This parameter enables you to perform TABLE, TABLEF, or MATCH requests and produce HOLD Master Files without processing the report.
The syntax is:
SET XRETRIEVAL = {ON|OFF}
where:
ON - Performs retrieval when previewing a report. ON is the default value.
OFF - Specifies that no retrieval is to be performed.
Set to OFF, it will generate the SQL, which you can see in the trace, but it won't take 20 zillion minutes to do it.
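Putting XRETRIEVAL together with the trace settings above, a sketch (again, ENCOUNTERS and its fields are hypothetical stand-ins):

SET XRETRIEVAL = OFF
SET TRACEOFF = ALL
SET TRACEON = STMTRACE//CLIENT
SET TRACEUSER = ON
TABLE FILE ENCOUNTERS
SUM AMOUNT
BY ENCNTR_NO
ON TABLE HOLD AS HOLD1
END
-RUN
-* With XRETRIEVAL OFF the request is parsed and the SQL appears in
-* the trace, but no rows are fetched, so it comes back in seconds.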
Ironically, I was trying to speed things up by doing the grab -> hold -> join (repeat) steps. I thought it would be horribly inefficient to pre-join all the tables together and then apply the WHERE filter (I thought that was how it worked), instead of grabbing details for *just* the records I wanted.
I do have the keys present in all three and have restructured as a test to see how it goes. Thanks everyone for the feedback!
-ABT
quote:
Originally posted by bobjackson: ... Regarding your joins:
- Join (unspecified) HoldFile1 to the Patient Details table on Encounter Number (primary key, IDX) to get name info; save to HoldFile2.
- Join (unspecified) HoldFile2 to an Extension table to grab one field; save to HoldFile3.
- Join (left outer) HoldFile3 to the Payments table to sum min/max payment dates and amounts using some WHERE criteria; save to HoldFile4.
Note that if Patient Details, Extension and Payments are DB2 tables on iSeries, any join from a WebFOCUS FOC file into an iSeries DB2 table may perform badly as it may drag down ALL the DB2 records before it can do the join.
If you have to work this way (i.e. you haven't got decent keys to be able to join everything in one go on iSeries tables) then process the biggest table first!
Very interesting project. All good feedback here. I suppose the keys for me have been the following:
- There is no correlation between the clock on the WebFocus server the job resides on (the time of Agent submission, specifically) and the DB2 server's console (via the connector). The dates/times won't match, period.
- DBAs can do nefarious things like give your query a 999 priority (which gives it minimal CPU time).
- Joins in DB2 can, operationally speaking, work very inefficiently: select a single ID, go fetch the results from a table, and start over (a sequential read).
- Sometimes operations that seem like they'd take a long time (lots of joins at the top) can be faster on large data than other ways of organizing the work.
Just a few tidbits. Thanks to all who offered feedback!