Master File performance
 
<Navin>
posted
Hi All

I have joined a set of Master Files using the Synonym Editor. Those synonyms cover close to 20 million records, and performance is very slow. Is there a way to improve the performance? I have searched a lot for this, but I can't find a single document or any related articles. If anyone knows of a way, a document, an article, or an idea for improving the performance, please pass it along.

I can't achieve that with SQL passthru; we are looking for ways to improve performance with Master File joins.
 
Virtuoso
posted
Naveen,

I suspect you mean that you have one (1) master file with many segments, each pointing to a relational table. And that the underlying tables have all in all 20M records. And that you would want to read 20M records in a few seconds.
Not very feasible. But:
You could build a sort of data warehouse (see the sketch below).
You could change the structure to a hierarchical FOCUS file so that, when you use screening conditions, you address only a small subset of the data.
Or you could invest in a supercomputer.
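A minimal sketch of the data-warehouse idea, using hypothetical names (BIGVIEW for the existing multi-segment synonym, CUST_ID and AMOUNT for fields of interest): extract the joined data once into an indexed FOCUS HOLD file, then point reports at that extract so each request reads only a small subset instead of re-joining 20M rows.

TABLE FILE BIGVIEW
PRINT CUST_ID AMOUNT
ON TABLE HOLD AS SALESFOC FORMAT FOCUS INDEX CUST_ID
END
-RUN
-* Later reports read the small indexed extract, not the 20M-row join.
TABLE FILE SALESFOC
SUM AMOUNT
BY CUST_ID
WHERE CUST_ID EQ '123456';
END

The extract step could run off-hours as a scheduled procedure, so the cost of the big join is paid once rather than on every report.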


Daniel
In Focus since 1982
wf 8.202M/Win10/IIS/SSA - WrapApp Front End for WF

 
Posts: 1980 | Location: Tel Aviv, Israel | Registered: March 23, 2006
Gold member
posted
Hi Naveen and Daniel,
A technique I used successfully when processing over a billion records was to simulate an index search by creating a higher-level segment composed of part of the key. For example, if the search field is SSN, build the higher-level segment from the last two digits of the SSN. Since the last four digits of an SSN are assigned sequentially, in a large enough sample the data should be evenly distributed across all one hundred values (00 - 99).

In another case, this technique was also used to partition data into ten partitions (0 - 9). The data maintenance and reporting processes would stage their activity to take advantage of this setup: they would first set off ten jobs, then reassemble the extracted data into one file as the last step in the process.
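A minimal sketch of the bucket-key lookup in a report request, with assumed names (PERSONDB for the FOCUS file, SSN_LAST2 for the two-character higher-level key field, SSN and FULLNAME for fields in the lower segment): screen on the derived bucket first so FOCUS descends only one of the hundred branches, then match the full key within it.

-SET &LOOKUP = '123456789';
-* The EDIT mask keeps only positions 8-9 of the nine-character SSN.
-SET &BUCKET = EDIT(&LOOKUP, '$$$$$$$99');
TABLE FILE PERSONDB
PRINT FULLNAME
WHERE SSN_LAST2 EQ '&BUCKET';
WHERE SSN EQ '&LOOKUP';
END

A one-character mask ('$$$$$$$$9') would give the ten partition values (0 - 9) used for the partitioning described above.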

Hope this helps.
Nick


WebFOCUS 7.7.03 & 8.0.7
Windows
HTML, Excel, PDF, etc.
Also, using Maintain, etc.
 
Posts: 83 | Registered: February 26, 2009
Master
posted
You might try building an Oracle view and building your synonym on that.


Pat
WF 7.6.8, AIX, AS400, NT
AS400 FOCUS, AIX FOCUS,
Oracle, DB2, JDE, Lotus Notes
 
Posts: 755 | Location: TX | Registered: September 25, 2007