As of December 1, 2020, Focal Point is retired and repurposed as a reference repository. We value the wealth of knowledge that's been shared here over the years. You'll continue to have access to this treasure trove of knowledge, for search purposes only.
jb, did you ever discover the max size of a .ftm file? Mine have begun to exceed 2 gig and i'm getting a read error now when trying to read the whole file. i looked at the doc that Francesco found, and it listed only FOC or XFOC files... i'm on a solaris/unix/oracle thing as well... and having agita
In Focus since 1979///7706m/5 ;wintel 2008/64;OAM security; Oracle db, ///MRE/BID
Posts: 3811 | Location: Manhattan | Registered: October 28, 2003
Originally posted by susannah: ...Mine have begun to exceed 2gig and i'm getting a read error now...when trying to read the whole file.
If you mean read via TABLE: I believe Focus still has a 2g limit on the internal matrix -- a limitation related to the size limit for simple Focus files. So PRINT * against a flat file > 2gb will produce an internal matrix of similar size.
But you should still be able to do some reporting w/o bumping against the 2g ceiling -- summarization (SUM) and selection (IF/WHERE) reduce the 'height' of the answer set, and eliminating columns from the request reduces the 'width'. And of course if you can make do with TABLEF, there's no limit (the internal matrix just holds a single row).
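The ideas above might look something like this in a focexec (a sketch only -- the file and field names here are invented, so substitute your own):

```focus
-* Summarize and filter instead of PRINTing every row,
-* so the internal matrix stays well under the 2g ceiling.
TABLE FILE BIGFTM
SUM AMOUNT
BY REGION
WHERE YEAR EQ 2005
END

-* Or stream the file with TABLEF, which keeps only one row
-* in the matrix at a time (no sorting of unsorted data).
TABLEF FILE BIGFTM
PRINT REGION AMOUNT
ON TABLE HOLD AS SLICE
END
```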
Or did you mean something else?
Posts: 1925 | Location: NYC | In FOCUS since 1983 | Registered: January 11, 2005
hmmm... thanks for offering to help... i'm extracting a 2.5 gig flat file (.ftm). 1) when i TABLE it to build my XFOCUS files, the bottom bunch of records don't get read... i get an ERROR READING FILE filename, but the fex runs and the XFOCUS files get created... just missing data. 2) so today, i extracted to a 2.5 gig ON TABLE HOLD FORMAT XFOCUS instead... and the sucker worked... when i TABLE it to build my XFOCUS files, all the records get read. ... so for the moment, my agita is gone, thanks Dave... but soon to come back, since i don't know exactly what's going on... and the files are getting bigger, and will be 5 gig by year end.
In Focus since 1979///7706m/5 ;wintel 2008/64;OAM security; Oracle db, ///MRE/BID
Posts: 3811 | Location: Manhattan | Registered: October 28, 2003
1. You refer to these as .ftm files. Do they originate with a WF server?
2. I suppose there could be some integrity issue at the file-system level. To test that hypothesis beforehand, insert a quick scan to verify WF can read all the lines:
TABLEF FILE xxx   (note TABLEF!)
WRITE CNT.FLD1 MAX.FLD1 MAX.FLD2 etc.
ON TABLE HOLD
END
(and check the line count against your source)
3. Divide and conquer: partition the incoming files to a reasonable size (at the source, or as a prelude step)
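Step 3 could be sketched roughly as below (again a sketch, not your actual fex -- the file name and the key field used for the WHERE ranges are made up):

```focus
-* Carve the big flat extract into slices that each fit
-* comfortably under the 2g internal-matrix limit.
TABLEF FILE BIGFTM
PRINT *
WHERE CUSTID LE 499999
ON TABLE HOLD AS PART1
END

TABLEF FILE BIGFTM
PRINT *
WHERE CUSTID GE 500000
ON TABLE HOLD AS PART2
END

-* Then TABLE each PARTn separately to load the XFOCUS files.
```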
Posts: 1925 | Location: NYC | In FOCUS since 1983 | Registered: January 11, 2005
yes, they originate with a WFRS... it's a big extract from Oracle. the extract is 2.1 gig. TABLE FILE reads that extract and creates XFOCUS databases. The problem is, it's too big to read. so now, i have to create my extract not as ON TABLE HOLD {format alpha/binary} but rather as ON TABLE HOLD FORMAT XFOCUS. it works... i can read and process that file. but i don't like it. On a windows server, no issues. On a unix server, issues. We now hypothesize that it's a unix opsys setting that limits file sizes... but you know what happens when you present that idea to anyone in a red hat...
In Focus since 1979///7706m/5 ;wintel 2008/64;OAM security; Oracle db, ///MRE/BID
Posts: 3811 | Location: Manhattan | Registered: October 28, 2003
Chances are your resultant XFOCUS file is composed of innately hierarchical data. Design the master accordingly, as a multiple-segment structure, and load it up one segment at a time, using a series of HOLD files pulled from the 2.1GB monster flat extract. That tends to cut down on the width (fields) and/or depth (rows) of each HOLD, so you avoid flirting with the 2GB internal-matrix limitation of TABLE.
If you want to avoid cataloging the master, you can create it on the fly -- do a HOLD FORMAT XFOCUS with multiple verbs to create the master (with IFs to ensure just a few records), then use CREATE FILE to wash away that data -- and proceed as above.
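Jack's on-the-fly trick might look roughly like this (a sketch under assumptions: the segment keys and field names are invented, and RECORDLIMIT is just one way to keep the sample tiny):

```focus
-* 1. A multi-verb request against a tiny sample generates a
-*    multi-segment XFOCUS structure -- one segment per verb.
TABLE FILE BIGFTM
SUM REGION_NAME BY REGION
SUM AMOUNT BY REGION BY CUSTID
IF RECORDLIMIT EQ 10
ON TABLE HOLD AS MYXF FORMAT XFOCUS
END

-* 2. Wash away the sample data, keeping the structure:
CREATE FILE MYXF

-* 3. Now load MYXF segment by segment from the per-segment
-*    HOLD files described above.
```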
I'll be happy to discuss particulars offline.
-Jack
Posts: 1925 | Location: NYC | In FOCUS since 1983 | Registered: January 11, 2005
What I remember of my unix days - long ago and far away ... - : limiting file sizes on unix is done via the 'ulimit -f' setting. To see the current value, use 'ulimit -f' for the user (soft) setting, and 'ulimit -Hf' for the OS-enforced (hard) setting.
GamP
- Using AS 8.2.01 on Windows 10 - IE11.
in Focus since 1988
Posts: 1961 | Location: Netherlands | Registered: September 25, 2007
i asked Ginny and she says, similar to what you said, GamP, that there is an "Enhanced JFS", where JFS is Journal File System.
i just tried what you suggested, GamP, and got 'unlimited' as the result... i tried ulimit -fH and got 'invalid number', so the H is probably different now. but the ulimit -f gave a very interesting answer... so now i'm thinking it's not unix...
In Focus since 1979///7706m/5 ;wintel 2008/64;OAM security; Oracle db, ///MRE/BID
Posts: 3811 | Location: Manhattan | Registered: October 28, 2003