As of December 1, 2020, Focal Point is retired and repurposed as a reference repository. We value the wealth of knowledge that's been shared here over the years. You'll continue to have access to this treasure trove of knowledge, for search purposes only.
We have tables that contain up to 4 years of ledger information. They are very large, and WebFOCUS seems to have difficulty returning results. My ad hoc community comes from an InfoMaker background, and these large reports seem to work fine in InfoMaker. I'm curious how other companies handle tables that are this large... that, and I'm tired of hearing "I can do it in InfoMaker"...
Posts: 132 | Location: Kansas | Registered: November 12, 2003
I've found that the same SQL runs equally fast whether it's submitted through WebFOCUS or any other tool. Have you checked the SQL you are generating? (Assuming you are using a SQL-based database.)
Posts: 391 | Location: California | Registered: April 14, 2003
As clarification to what N.Selph noted above: aside from checking the SQL code WebFOCUS is sending, you can run SQL pass-through, wherein you point to the SQL engine (SQL DB2, SQL ORA) and submit your own SELECT A, B, C, D FROM TABLEA, TABLEB, yada yada yada, directly, then hold the answer set with ON TABLE HOLD AS HOLDA.
Make sure that you review the SQL code with your DBA and discuss index columns, SQL coding best practices, EXPLAIN (least-cost determination), etc. Also, run SET XRETRIEVAL = OFF in the focexec to run the code "dry" before submitting it.
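As a rough sketch of the pass-through approach described above (the engine name, tables, and columns here are hypothetical, not taken from this thread, so verify them against your own environment and adapter documentation):

```
-* Hypothetical SQL pass-through sketch: check the engine name
-* (SQLORA, SQLDB2, etc.) and connection details with your DBA.
SQL SQLORA
SELECT A, B, C, D
FROM TABLEA, TABLEB
WHERE TABLEA.KEYCOL = TABLEB.KEYCOL
;
TABLE
ON TABLE HOLD AS HOLDA
END
```

The point is that the SELECT goes to the RDBMS exactly as written, and only the (hopefully small) answer set lands in the HOLDA file for FOCUS to report against.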
Best wishes for the holidays. FOCUS rules!
Posts: 238 | Location: Atlanta, GA/Rehovot, Israel | Registered: May 06, 2003
You can do it in FOCUS!! You just need efficient coding. If you're using FOCUS to generate the request vs. writing the SQL yourself, make sure you don't use DEFINEs and other things that don't get passed to SQL. TABLEF is also helpful to turn off the sorting. Turn on the trace, with XRETRIEVAL OFF, to see what's being passed. There's a really old post in Focal Point called WebFOCUS/SQL Efficiencies.
Good Luck!
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
Carol, what if you have a defined field that needs to be used? For example, we have a field that is 50 characters long in the table, but we want to select on a 5-character substring out of the middle of it.
I'm going to go look at the information you mentioned.
Posts: 132 | Location: Kansas | Registered: November 12, 2003
Can anyone tell me why the first query runs in 5 minutes... and the second in 50 minutes...?
The second one uses a defined field, which lets me accept input in a more user-friendly format than YYYYMM. So I'd really like to use the DEFINE and allow users to input the month and the year separately...
Any advice would be appreciated so that I can better understand how to retrieve information from our LARGE tables.
TABLE FILE MNTHBILL
SUM A_REVMNTH_YEAR AS 'Revenue, Year'
    A_REVMNTH_MONTH AS 'Revenue,Month'
    SUBSIDIARY_CODE AS 'Subsidiary,Code'
    REVENUE_CLASS AS 'Revenue,Class'
    RATE_CODE AS 'Rate,Code'
    TOTAL_ENERGY AS 'Total,Energy'
    BILLING_DEMAND AS 'Billing,Demand'
    CUSTOMER_CHARGE AS 'Customer,Charge'
    ENERGY_CHARGE AS 'Energy,Charge'
    DEMAND_CHARGE AS 'Demand,Charge'
    ECON_DEV_CREDIT AS 'Econ Dev,Charge'
    FUEL_CHARGE AS 'Fuel,Charge'
    LMR_CHARGE AS 'LMR,Charge'
    LMR_CREDIT_KVA AS 'LMR Credit,KVA'
    LMR_CREDIT_KWH AS 'LMR Credit,KWH'
BY SUBSIDIARY_CODE NOPRINT AS 'Subsidiary,Code'
BY REVENUE_CLASS NOPRINT AS 'Revenue,Class'
BY RATE_CODE NOPRINT AS 'Rate,Code'
BY REVENUE_MONTH NOPRINT AS 'Revenue,Month'
HEADING
WHERE ( REVENUE_MONTH EQ 200410);
WHERE SERVICE_TYPE EQ 'E';
END
Second Example
DEFINE FILE MNTHBILL
T_YEAR/A4='&YEARIS.Year.';
T_MONTH/A2='&MONTHIS.Month.';
T_A_REVENUE_MONTH/A6=T_YEAR | T_MONTH;
T_I_TEST/I6= EDIT(T_A_REVENUE_MONTH);
END
TABLE FILE MNTHBILL
SUM A_REVMNTH_YEAR AS 'Revenue, Year'
    A_REVMNTH_MONTH AS 'Revenue,Month'
    SUBSIDIARY_CODE AS 'Subsidiary,Code'
    REVENUE_CLASS AS 'Revenue,Class'
    RATE_CODE AS 'Rate,Code'
    TOTAL_ENERGY AS 'Total,Energy'
    BILLING_DEMAND AS 'Billing,Demand'
    CUSTOMER_CHARGE AS 'Customer,Charge'
    ENERGY_CHARGE AS 'Energy,Charge'
    DEMAND_CHARGE AS 'Demand,Charge'
    ECON_DEV_CREDIT AS 'Econ Dev,Charge'
    FUEL_CHARGE AS 'Fuel,Charge'
    LMR_CHARGE AS 'LMR,Charge'
    LMR_CREDIT_KVA AS 'LMR Credit,KVA'
    LMR_CREDIT_KWH AS 'LMR Credit,KWH'
BY SUBSIDIARY_CODE NOPRINT AS 'Subsidiary,Code'
BY REVENUE_CLASS NOPRINT AS 'Revenue,Class'
BY RATE_CODE NOPRINT AS 'Rate,Code'
BY REVENUE_MONTH NOPRINT AS 'Revenue,Month'
HEADING
WHERE REVENUE_MONTH EQ T_I_TEST ;
WHERE SERVICE_TYPE EQ 'E';
END
Posts: 132 | Location: Kansas | Registered: November 12, 2003
I would put the TRACE on and see if SQL is understanding what you need; Look at this example below. Anything in the request that SQL doesn't understand will greatly slow you down. It's best to get what you need from SQL and get out. Do the rest in FOCUS. It might mean another HOLD file, but sometimes more code is better. Then, remember to turn your retrieval back on!!!
-* put the trace on and turn retrieval off
SET MSG=ON
-SET &ECHO=ALL;
SET TRACEOFF = ALL
SET TRACEON = STMTRACE//CLIENT
SET TRACEON = STMTRACE/2/CLIENT
SET TRACEUSER = ON
SET XRETRIEVAL = OFF
-* just get the data out of SQL (no defines)
TABLEF FILE VBROKERCONTACTDETAILBYDESK
PRINT CONTACTNAME DESK
BY FIRMNAME
IF CONTACTNAME CONTAINS 'MIKE'
END

**** will generate ****

SELECT T1."FirmName",T1."ContactName",T1."Desk"
FROM "TradeMgmt".dbo."vBrokerContactDetailByDesk" T1
WHERE (T1."ContactName" LIKE '%MIKE%')
ORDER BY T1."FirmName";
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
If you want to pass the date test to SQL, you will have to modify the user's input in Dialog Manager, not DEFINE. Create a &T_I_TEST after doing the substring from the user's input, and use it in your WHERE statement.
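A minimal sketch of that Dialogue Manager approach, reusing the &YEARIS and &MONTHIS variable names from the earlier DEFINE (names and the assumption that the month arrives zero-padded as two digits are not verified here):

```
-* Build the YYYYMM value in Dialogue Manager BEFORE the TABLE
-* request, so the WHERE tests a real column and can be passed
-* through to the SQL engine. Assumes &MONTHIS is already 'MM'.
-SET &T_I_TEST = &YEARIS | &MONTHIS ;
TABLE FILE MNTHBILL
SUM TOTAL_ENERGY
BY REVENUE_MONTH
WHERE REVENUE_MONTH EQ &T_I_TEST ;
WHERE SERVICE_TYPE EQ 'E';
END
```

Because &T_I_TEST is resolved to a literal before the request is parsed, the generated SQL gets a plain predicate on REVENUE_MONTH instead of a test against a DEFINEd field.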
Posts: 391 | Location: California | Registered: April 14, 2003
Testing against a defined field is extremely inefficient.
You may be better off doing 2 passes at the data: first hold the DEFINEd field value, then test against it in a pass of the HOLD file data, thus now testing against a REAL field.
For example (remember I've not looked at your MASTER FILE!): the first TABLE must retrieve data at the lowest level of the data. DEFINE works on a record-by-record basis, so representing a DEFINEd field at anything other than the data's lowest level will give spurious results. But you get the idea. You may even utilise TABLEF in the first pass of the data; you should be happy with the results...
TABLEF FILE MNTHBILL
SUM T_I_TEST
    A_REVMNTH_YEAR
    A_REVMNTH_MONTH
    SUBSIDIARY_CODE
    REVENUE_CLASS
    RATE_CODE
    TOTAL_ENERGY
    BILLING_DEMAND
    CUSTOMER_CHARGE
    ENERGY_CHARGE
    DEMAND_CHARGE
    ECON_DEV_CREDIT
    FUEL_CHARGE
    LMR_CHARGE
    LMR_CREDIT_KVA
    LMR_CREDIT_KWH
BY SUBSIDIARY_CODE
BY REVENUE_CLASS
BY RATE_CODE
BY REVENUE_MONTH N
WHERE SERVICE_TYPE EQ 'E';
ON TABLE HOLD AS FIRSTPASS
END
-RUN
TABLE FILE FIRSTPASS
SUM A_REVMNTH_YEAR AS 'Revenue, Year'
    A_REVMNTH_MONTH AS 'Revenue,Month'
    SUBSIDIARY_CODE AS 'Subsidiary,Code'
    REVENUE_CLASS AS 'Revenue,Class'
    RATE_CODE AS 'Rate,Code'
    TOTAL_ENERGY AS 'Total,Energy'
    BILLING_DEMAND AS 'Billing,Demand'
    CUSTOMER_CHARGE AS 'Customer,Charge'
    ENERGY_CHARGE AS 'Energy,Charge'
    DEMAND_CHARGE AS 'Demand,Charge'
    ECON_DEV_CREDIT AS 'Econ Dev,Charge'
    FUEL_CHARGE AS 'Fuel,Charge'
    LMR_CHARGE AS 'LMR,Charge'
    LMR_CREDIT_KVA AS 'LMR Credit,KVA'
    LMR_CREDIT_KWH AS 'LMR Credit,KWH'
BY SUBSIDIARY_CODE NOPRINT AS 'Subsidiary,Code'
BY REVENUE_CLASS NOPRINT AS 'Revenue,Class'
BY RATE_CODE NOPRINT AS 'Rate,Code'
BY REVENUE_MONTH NOPRINT AS 'Revenue,Month'
WHERE REVENUE_MONTH EQ T_I_TEST ;
END
I will try your suggestion, ejaf. It also worked much better using the Dialogue Manager approach N.Selph recommended. I knew there had to be better ways to do this than what I was coming up with. Thanks, everyone.
Posts: 132 | Location: Kansas | Registered: November 12, 2003
I still think you need to pull the time period out of the large table in the first pass of the data. Talk to your DBA; I'm sure they can suggest a way to get the data you need. If you can't get FOCUS to generate the correct statement to be passed, you can always use pass-through. Then your request will fly. Maybe you've already solved your problem?
Happy Holidays! Carol
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
Sorry for the lengthy post, but I am having a serious performance issue with my report...
I am working on a large report with several calculated columns (around 20). It does a lot of gross margin, volume, and mix calculations with current- and last-year cost, sales, and quantity data.
We report on a Business Group level, drill down to a Division level, and then to the Part level. We have our mainframe DB2 table already summed by Group and Division, so that query is fast. When we drill down to the Part level, some of our Divisions are quite large, and the query times out and doesn't come back.
Since this report is for the Presidents of each group, they might frown upon this!
I changed the report to first load to a HOLD file, and when I traced it, the results came back very fast with how many records were in the hold file, and I received the AGGREGATION DONE ... message. Good so far.
The part of the report that takes this file and processes down to the Part level is crawling for our larger divisions. We summed the Part data in our DB2 table, and a large division may have 50,000 different parts summed for the month... still not enough records, in my opinion, to make WebFOCUS processing THIS slow.
I noticed in the above posts that using DEFINEs really affects performance, and I have some of those.
DEFINE FILE HOLD
rpt_sect/A30 = IF E05 EQ '1' THEN 'Part # Specific Invoices'
               ELSE IF E05 EQ '2' THEN 'Non-Part # Sell Price Impact'
               ELSE IF E05 EQ '3' THEN 'IntraCompany Transfer'
               ELSE 'Total Div Net Sales';
Sales_V/D12 = E06 - E09;
cy_units/D11 = E08;
END
The report separates the parts into three types. The first DEFINE reads a Section Code field to determine which part of the report each part belongs to, and I assign it a title for reporting purposes. The other two DEFINEs are just repeated columns with different styling options, so I can get rid of those.
Then my program has several (around 20) calculated fields for processing for the report.
Then I have a lot of styling for the report.
Is it the DEFINEs, all the COMPUTEs, or the styling that is affecting my report the most?
ANY help would be greatly appreciated!
If you need any more information, PLEASE let me know!
Thanks, Tim
You might need to post your code or more information. You're saying you can get everything out of DB2 efficiently, but FOCUS is choking? When you drill down, are you going against DB2? Are you using MATCH? Are you sorting on long alpha fields that you don't need to sort on? Just a couple of things to look at. Also, it's a "no-no" to join DB2 to FOCUS. Just a few thoughts on what could be slowing you down.
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
The general rule for dealing with any RDBMS is: make sure the database is doing all the aggregation etc. and returns as small an answer set as possible, which you hold in a temp file. Then let FOCUS do the rest of the work on that temp file. Just before leaving work today, I managed to tune a query by following the rule above, getting execution time from 9+ minutes down to 10 seconds.
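That rule can be sketched as a two-pass focexec. All names here (DB2TABLE, ACCOUNT, AMOUNT, PERIOD) are hypothetical placeholders, not from this thread:

```
-* Pass 1: let the RDBMS do selection and aggregation, so only
-* a small answer set crosses the wire into the temp file.
TABLE FILE DB2TABLE
SUM AMOUNT
BY ACCOUNT
WHERE PERIOD EQ 200410 ;
ON TABLE HOLD AS SMALLSET
END
-RUN
-* Pass 2: let FOCUS do the formatting work locally.
TABLE FILE SMALLSET
SUM AMOUNT AS 'Amount'
BY ACCOUNT AS 'Account'
END
```

The SUM/BY/WHERE in pass 1 should all translate to SQL (check with the trace), so the database returns one row per account for the requested period rather than the raw detail.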
/Håkan
Posts: 319 | Location: Stockholm, Sweden | Registered: February 04, 2004
The problem is the last report in my drilldown flow that gives me the part level detail. The Group and Division and Part level records are already aggregated in one DB2 table. I go after what level I want by specifying a Summary Code: G - Group, D - Division, and P - Part. I can query directly against the DB2 table to get the Group and Divison summaries because the amount of records are limited. The Part summaries have the most data, but I changed my report to pull those to a hold file first by using the Group and Division parameters. The trace on the hold file comes back in less than 10 seconds stating it has 35,000+ records in the hold file. I am using this hold file to do my calculations and create my WF Report. This is where it seems to shut down for the larger Groups and Divisions. Even a small division takes close to a minute to process 350 records. That is definitely not efficient!
Carol: I will send you some code to see if you see anything that may be the culprit.
As Håkan has implied, tuning your query is of prime importance. Don't forget the FSTRACE files, particularly FSTRACE4, although I would limit the query (especially if it is a large record set) as you only want the trace file.
Hand the FSTRACE4 output to your DBA to perform a DB2 EXPLAIN on it (unless you have the DB2 authority yourself).
I once inherited (in 1994, yes, I feel old) a long-running proc that took three hours (yes, hours) to run and, like Håkan, shortened it to 3 seconds just by changing the FOCUS code after a DB2 EXPLAIN. The code? Well, basically:
TABLE FILE db2file SUM CNT.column_name END
Yes, it was just to get a record count, but changing it to
TABLE FILE db2file WRITE CNT.column_name END
completely altered the SQL emanating from FOCUS to a more efficient SELECT COUNT(*) FROM etc., instead of a tablespace scan of several million rows.
Posts: 5694 | Location: United Kingdom | Registered: April 08, 2004
So did I back then. I just changed the code using several different combinations in the hope that it would improve the SQL parsed; eventually it did. I sure could have used XRETRIEVAL back then as well!!!
The improvement changed the view that the DBAs held of FOCUS slightly, although it always was a resource hungry language on mainframe (much the same as DB2 itself was).
I couldn't say whether the same effect still applies, as I haven't used mainframe FOCUS for a few years and the parser may have changed significantly, but it would be nice to see the SQL from either method to confirm.
Don't ask what version it was either, because the old grey cells won't tell me.
Posts: 5694 | Location: United Kingdom | Registered: April 08, 2004
Another thing: you must NOT try to join a relational table to a non-relational table. The DBMS can't handle that, so it returns ALL rows to FOCUS, which then does the join. Check the messages you get from DB2; it will most likely say something like NON-RELATIONAL SEGMENT FOUND IN JOIN. Either hold the table in DB2 format (though we're having a problem with that) or extract the DB2 tables to FOCUS files and join them.
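A sketch of that extract-then-join approach. The file and field names (DB2_ORDERS, DB2_CUSTOMERS, CUST_ID, etc.) are hypothetical, purely for illustration:

```
-* Extract each DB2 table to its own local hold file first...
TABLE FILE DB2_ORDERS
PRINT ORDER_AMT
BY CUST_ID
ON TABLE HOLD AS H_ORD
END
-RUN
TABLE FILE DB2_CUSTOMERS
PRINT CUST_NAME
BY CUST_ID
ON TABLE HOLD AS H_CUST
END
-RUN
-* ...then join hold file to hold file, so no relational-to-
-* non-relational join is ever sent to the DBMS.
JOIN CUST_ID IN H_ORD TO CUST_ID IN H_CUST AS J1
TABLE FILE H_ORD
PRINT ORDER_AMT CUST_NAME
BY CUST_ID
END
```

Each extract should carry a WHERE clause in practice, so the hold files stay small; the join then happens entirely on the FOCUS side.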
Posts: 319 | Location: Stockholm, Sweden | Registered: February 04, 2004
Hi Tim, thanks for sending the code. Can you please send me the gnjp8cfg.fex? From the sound of it, that's where you're choking. Please correct me if I'm wrong.
Thanks, Carol
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
Tim, I haven't seen stylesheets cause long response times. And I see you're joining a FOCUS hold file to a DB2 table. You said it comes back in seconds. Are you positive? Can you eliminate the styling in the drilldown report and see if it still clocks?
After this code, add a GOTO:
ON CSP_CUSTOMER_NUM
-GOTO NOSTYLE
and after ENDSTYLE, add this label:
-NOSTYLE
This should produce an ugly HTML version, but see if it still clocks. Then you can eliminate the thought that the stylesheet is taking the time. I've not had lots of DEFINEs or COMPUTEs kill response time. Using the Report Painter sometimes builds extraneous code.
Good Luck!
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
I will use your suggestions to try to pinpoint where the report is being affected. The one drilldown from a Part to a Customer file does come back quickly, even though it is a DB2 table. The customer file is aggregated by the parameters being sent, so the retrieval is fast and usually returns under 20 records.
Even when I take this drilldown out, the report still clocks for the larger divisions.
I will try your suggestions!
Thank you everyone for the time spent on this! If I can close in on anything, I will post my results.
Tim, one last thought, now that I hear the first part of the report can take time. If your code TABLE FILE MKT_COS_SUM_GRP is against a HOLD file or FOCUS file, then ignore this message. If it's against a DB2 table, then create another pass against the DB2 table using TABLEF (which turns off FOCSORT) and just PRINT all your fields to the hold file. NO BY fields. Then say ON TABLE HOLD and do your entire request against that HOLD file. It's more code, but the results will amaze you.
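A sketch of that two-step pattern against the MKT_COS_SUM_GRP table mentioned above. The field names and the &GROUP parameter are hypothetical stand-ins, and the real request would PRINT every needed column:

```
-* Pass 1: TABLEF (no FOCSORT) and PRINT with NO BY fields, so
-* the extract from DB2 is a straight, unsorted pull.
TABLEF FILE MKT_COS_SUM_GRP
PRINT FIELD1 FIELD2 FIELD3
WHERE GROUP_CODE EQ '&GROUP';
ON TABLE HOLD AS PARTHOLD
END
-RUN
-* Pass 2: do the sorting, COMPUTEs and styling against the
-* local hold file instead of the DB2 table.
TABLE FILE PARTHOLD
SUM FIELD2 FIELD3
BY FIELD1
END
```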
Have a good weekend!
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
I have tried almost everything (and "everything" would include the thing I need to do to fix this!).
I think I have exhausted my efforts, but here is my case. Drill-down flow:
Group Level (DB2 query, fast)
-> Division Level (DB2 query, fast)
--> Part Level (DB2 to Hold File, processing Hold File, slow/no response for large Divs)
---> Customer Level (DB2 query, fast)
I did some response-time analysis on my Part Level report (aka The Culprit). I took a mid-sized division (6,725 records in the Hold File, not a lot of data!) and ran the report using different variations. My response times are included below. I am starting to wonder if this is server or system related?
Report Variation                          Response Times (MM:SS)
Report As Is                              02:10, 02:17
No Drill to Cust Rpt                      02:00, 02:05
Styling on Drill only                     01:15, 01:10
No Drill, No Styling                      01:00, 00:58
As Is, 1/2 the Computes                   01:32, 01:25
No Drill, No Styling, 1/2 the Computes    00:36, 00:33
My best response time was cutting half my COMPUTE statements out (half the columns reported) and eliminating styling.
Does this look like it is simply reporting-server related, in generating the HTML output? Is there not enough allocated space on the server to handle the processing? In all these variations, the number of records being processed out of the Hold File did not change! I am not server savvy, so I am grasping at straws.
Any feedback?
Thanks, Tim
Haven't done drilldowns, but I have faced issues with DB2 and lots of rows. Rules to live by: select on real fields wherever possible; use COMPUTEs and WHERE TOTAL to reduce the set if need be. As mentioned, use TABLEF if you don't need the internal sort of BY fields. Pull the two DB2 tables' data to hold files and then join the hold files. Make sure your app server has lots of memory.
Recent experience:
NUMBER OF RECORDS IN TABLE= 49184  LINES= 49184
NUMBER OF RECORDS IN TABLE= 4240   LINES= 4240
NUMBER OF RECORDS IN TABLE= 32159  LINES= 32159
NUMBER OF RECORDS IN TABLE= 659    LINES= 659
NUMBER OF RECORDS IN TABLE= 2721   LINES= 2721
It took about 10 seconds to bring back the data and then create a compound Excel 2000 report. The largest data set is all of a certain type of test taken; the smaller files were applicant, high school, and enrollment information for a college. Our test score table has 333,522 records in it, which is not millions, but poor selection can really slow us down.
Posts: 1317 | Location: Council Bluffs, IA | Registered: May 24, 2004
Hi Tim, I agree with Leah's response. I've never had response-time problems with drilldowns or stylesheets. I always use TABLEF, and I always pull my data from DB2 or SQL and put it in a hold file. Then I do the join. If you're joining a HOLD file or FOCUS file to DB2 or SQL, response time will kill you in most cases: you could be doing a table scan on the DB2 table. I also never use the GUI tool to create code; it's not efficient. My belief is it's not server or system related, it's just not efficient coding.
As a last try, try pulling the DB2 data and putting it in a hold file. My guess is you'll be a happy camper!
Posts: 428 | Location: Springfield, MA | Registered: May 07, 2003
I am reading my DB2 table into a Hold File, using a set of parms to get only the records I need. I eliminated the joins, and I am still having response issues.
I made a call to meet with our IBI rep to see if he can help me out. It will probably turn out to be something simple, but I will post whatever fix I find!