Greetings all, After consulting manuals and searching Focal Point, I can't seem to find an answer to the question I'm about to put out; however, I don't think it's a very difficult problem. I am trying to do an initial load of a FOCUS database using MAINTAIN. Here is the Master of the file I want to load:

FILE=PAYFEED1, SUFFIX=FOC, DATASET='C:\IBI\APPS\BASEAPP\PAYROLL_DATA\PAYFEED1.foc'
SEGNAME=POSEMP, SEGTYPE=S6, $
  FIELD=FY, ALIAS=FY, FORMAT=A4, FIELDTYPE=I, $
  FIELD=GOVT_UNIT, ALIAS=GU, FORMAT=A5, $
  FIELD=DES_POS_TP, ALIAS=POSTYPE, FORMAT=A2, $
  FIELD=POSITION_NUM, ALIAS=POSNO, FORMAT=A6, $
  FIELD=EMPLOYEE_NUM, ALIAS=EMPNO, FORMAT=A5, $
  FIELD=TITLE_CODE, ALIAS=TCODE, FORMAT=A6, $
  FIELD=EMPL_NAME, ALIAS=NAME, FORMAT=A30, $
  FIELD=TITLE, ALIAS=TITLE, FORMAT=A40, $
  FIELD=POS_DT_PAID, ALIAS=PAID_DT, FORMAT=MDYY, $
  FIELD=STEP_DATE, ALIAS=STEPDATE, FORMAT=MDYY, $
  FIELD=STEP, ALIAS=PAYSTEP, FORMAT=A2, $
  FIELD=HOURLY_RATE, ALIAS=RATE, FORMAT=P7.2, $
  FIELD=PROJ_RATE, ALIAS=NEXTRATE, FORMAT=P6.2, $
SEGNAME=EXPENSES, SEGTYPE=S10, PARENT=POSEMP, $
  FIELD=DEPT_ID, ALIAS=DEPTID, FORMAT=A10, $
  FIELD=PAYPERIOD, ALIAS=CHECKDATE, FORMAT=MDYY, $
  FIELD=EARNING_CODE, ALIAS=EC, FORMAT=A3, $
  FIELD=FUND_CH, ALIAS=FUND_CODE, FORMAT=A5, $
  FIELD=FISCAL_YEAR, ALIAS=FY2, FORMAT=A2, $
  FIELD=MONTH, ALIAS=MO, FORMAT=M, $
  FIELD=ACCT_CD, ALIAS=VISION_LINE_ITEM, FORMAT=A6, $
  FIELD=EXPENSE_CDS, ALIAS=OPERATING, FORMAT=A1, $
  FIELD=PRJ_GRNT_CH, ALIAS=PROJECT, FORMAT=A15, MISSING=ON, $
  FIELD=PROGRAM_CH, ALIAS=PROGRAM, FORMAT=A5, MISSING=ON, $
  FIELD=PAYPER_HRS, ALIAS=HOURS, FORMAT=P10.2, $
  FIELD=PAYPER_AMT, ALIAS=PAID, FORMAT=P10.2M, CURRENCY=USD, $
DEFINE KEYCODE/A1 = 'C';$
----
Here is the code I am using to load:

FILEDEF PAYR081 DISK C:\IBI\APPS\PRCOSTS\PAYR081.FTM
SET ALL=PASS
-SET &ECHO=ALL;
CREATE FILE PAYFEED1
END
MAINTAIN FILE PAYFEED1 AND PAYR081
FOR ALL NEXT PAYR081.FY INTO PAYSTACK
COMPUTE I/I5 = 1;
REPEAT PAYSTACK.FOCCOUNT
FOR ALL INCLUDE PAYFEED1.FY PAYFEED1.DEPT_ID FROM PAYSTACK(I)
COMPUTE I=I+1;
ENDREPEAT
END
-----
The pertinent issues: PAYR081, the source file, has a Master with fields of exactly the same names, formats, and order. Both Masters pass checks, and the procedure above runs with no errors.
The problem is with segment instances, and I've been experimenting for some time to solve it. PAYR081 has 226,631 records. I assumed my initial load (this being a FOR ALL NEXT with a subsequent INCLUDE on both segments) would include all records, but what I actually get varies depending on the key fields. For the child (target) segment (EXPENSES), if I set it up as SEGTYPE=S4 it yields 174,739 segment instances. The way I have it now (S10) results in 226,669 instances, with 1,555 in POSEMP and 225,114 in EXPENSES. Incidentally, yesterday I had the field order different, with PRJ_GRNT_CH and PROGRAM_CH further up in the segment and fewer key fields; I am getting the same results now, with those fields treated as keys, as I did then. One of my concerns, though, is that the great majority of record instances for both of these fields contain no data, so I started using MISSING=ON, but with no favorable results.
I did a further check by running a TABLE request on PAYR081 to sum the two numeric fields, PAYPER_HRS and PAYPER_AMT, and comparing the totals with a similar request on PAYFEED1. I am losing data in the FOCUS file: the PAYPER_AMT comparison is $40,975,002.05 (source) vs. $40,891,076.80 (loaded).
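For readers outside FOCUS, the arithmetic behind this kind of loss can be sketched in Python: when the target segment has SEGTYPE=S (unique keys), only one instance per key tuple survives an INCLUDE, so duplicate-keyed source rows and their amounts disappear from the totals. The field names below mirror the Master, but the three-field key and the sample rows are invented for illustration:

```python
# Sketch of how a unique-keyed (SEGTYPE=S) segment drops duplicate-keyed rows.
# Field names mirror the PAYFEED1 Master; key choice and data are invented.
rows = [
    {"DEPT_ID": "D1", "PAYPERIOD": "010105", "EARNING_CODE": "REG", "PAYPER_AMT": 100.00},
    {"DEPT_ID": "D1", "PAYPERIOD": "010105", "EARNING_CODE": "REG", "PAYPER_AMT": 50.00},
    {"DEPT_ID": "D2", "PAYPERIOD": "010105", "EARNING_CODE": "OT",  "PAYPER_AMT": 25.00},
]

def load_unique_key(rows, key_fields):
    """Mimic INCLUDE into a unique-keyed segment: the first row wins,
    later rows with the same key are rejected and their amounts are lost."""
    kept = {}
    rejected = 0
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key in kept:
            rejected += 1          # in MAINTAIN this is where FOCERROR goes nonzero
        else:
            kept[key] = row
    return kept, rejected

kept, rejected = load_unique_key(rows, ["DEPT_ID", "PAYPERIOD", "EARNING_CODE"])
source_total = sum(r["PAYPER_AMT"] for r in rows)           # 175.00
loaded_total = sum(r["PAYPER_AMT"] for r in kept.values())  # 125.00: data lost
print(len(rows), len(kept), rejected, source_total, loaded_total)
```

The same mechanism explains why changing the number of key fields (S4 vs. S10) changes the instance counts: more key fields means more distinct key tuples, so fewer source rows collide and fewer are dropped.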
I have wondered if I need to do a MATCH as opposed to a NEXT, or if I should be using MODIFY instead of MAINTAIN. As you can surely tell by now, I cannot brag of any database-building expertise. I'd appreciate any hints! I am doing all this in WF 5.3.2 installed on Windows XP. Thank you very much.
Posts: 22 | Location: Waterbury VT USA | Registered: November 15, 2005
Firstly I would INFER a stack for PAYFEED1 and copy the data into that.
Second, the INCLUDE should not be inside a REPEAT when using FOR ALL; you are loading ALL the records multiple times.
I would use
.
Infer PAYFEED1.FY PAYFEED1.DEPT_ID into incStack;
.
For all copy from payStack(1) to incStack(1);
For all include PAYFEED1.FY PAYFEED1.DEPT_ID from incStack(1);
.
.
Alan. WF 7.705/8.007
Posts: 1451 | Location: Portugal | Registered: February 07, 2007
If the Masters truly are exactly the same, you can get away with something called loose referencing. For grins and giggles, include a key field from each segment in the NEXT and ALL the keys in the INCLUDE statement, and try this:
MAINTAIN FILE PAYFEED1 AND PAYR081
FOR ALL NEXT PAYR081.FY PAYR081.DEPT_ID INTO PAYSTACK
FOR ALL INCLUDE PAYFEED1.FY PAYFEED1.GOVT_UNIT PAYFEED1.DES_POS_TP PAYFEED1.POSITION_NUM
  PAYFEED1.EMPLOYEE_NUM PAYFEED1.TITLE_CODE PAYFEED1.DEPT_ID PAYFEED1.PAYPERIOD
  PAYFEED1.EARNING_CODE PAYFEED1.FUND_CH PAYFEED1.FISCAL_YEAR PAYFEED1.MONTH
  PAYFEED1.ACCT_CD PAYFEED1.EXPENSE_CDS PAYFEED1.PRJ_GRNT_CH PAYFEED1.PROGRAM_CH
  FROM PAYSTACK
END
Mark
Posts: 663 | Location: New York | Registered: May 08, 2003
Hi Mark, This is totally wild. I tried your code exactly and got only 4,028 results (the specifics given earlier in the day give this some significance). I then restored the COMPUTE, REPEAT, ENDREPEAT, etc., and after doing so I'm exactly back where I was this morning in terms of the number of records (and segment instances) as well as the total hours and dollars. Mind you, that is fairly close, and I wish I could say "close enough for government work," even though that is literally what it's for.
Also, I tried Alan's solution earlier. It made sense all right (in my head) but returned neither an error message nor results! That is why, even though he advised against it, I ended up restoring the loop as a try.
Is it possible that this is something better suited to Modify? Are there such cases?
Thank You!! Bill
Posts: 22 | Location: Waterbury VT USA | Registered: November 15, 2005
Either Maintain or Modify can work. Neither is more suitable; it is a question of personal choice.
I think that you will have to do some debugging, and limit the process to, say, 20 records:

FOR 20 NEXT .... INTO stack
Type out FocCount
Copy from stack to inferred stack
Type out FocCount
Then maybe a sort on the stack into key order.
Then try a repeat to load
REPEAT inferredStack.FocCount cntr=1;
INCLUDE ....... from inferredStack(cntr)
Check FocError
ENDREPEAT cntr=cntr+1;
I wonder if the fields in the stack truly match.
Then check the output from the input data.
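The sort-into-key-order step is the classic way to make duplicates visible: once the stack is sorted by its key, rows with equal keys sit next to each other. A hypothetical Python sketch of that scan (field names borrowed from the Master, data invented):

```python
# Sort-then-scan duplicate check: after sorting by the key fields,
# rows with equal keys become adjacent and a single pass finds them.
def find_adjacent_duplicates(rows, key_fields):
    keyed = sorted(rows, key=lambda r: tuple(r[f] for f in key_fields))
    dups = []
    for prev, cur in zip(keyed, keyed[1:]):
        if all(prev[f] == cur[f] for f in key_fields):
            dups.append(tuple(cur[f] for f in key_fields))
    return dups

rows = [
    {"FY": "2005", "POSITION_NUM": "000123"},
    {"FY": "2005", "POSITION_NUM": "000045"},
    {"FY": "2005", "POSITION_NUM": "000123"},  # same key as the first row
]
dups = find_adjacent_duplicates(rows, ["FY", "POSITION_NUM"])
print(dups)  # [('2005', '000123')]
```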
Alan. WF 7.705/8.007
Posts: 1451 | Location: Portugal | Registered: February 07, 2007
Try Alan's idea of doing 100 records at a time. Is it possible that somewhere in the data there are records with matching keys? My technique would stop at record 4,029 if there was a matching key there.
While this would be slower, you could try:
MAINTAIN FILE PAYFEED1 AND PAYR081
INFER PAYFEED1.FY PAYFEED1.DEPT_ID INTO FEEDSTACK
FOR ALL NEXT PAYR081.FY PAYR081.DEPT_ID INTO PAYSTACK
COMPUTE I/I7=1;
REPEAT PAYSTACK.FOCCOUNT
STACK CLEAR FEEDSTACK
COMPUTE FEEDSTACK.FY = PAYSTACK(I).FY;
COMPUTE FEEDSTACK.GOVT_UNIT = PAYSTACK(I).GOVT_UNIT;
-* YOU NEED A COMPUTE LINE FOR EVERY FIELD IN THE STACK
INCLUDE PAYFEED1.FY PAYFEED1.GOVT_UNIT PAYFEED1.DES_POS_TP PAYFEED1.POSITION_NUM
  PAYFEED1.EMPLOYEE_NUM PAYFEED1.TITLE_CODE PAYFEED1.DEPT_ID PAYFEED1.PAYPERIOD
  PAYFEED1.EARNING_CODE PAYFEED1.FUND_CH PAYFEED1.FISCAL_YEAR PAYFEED1.MONTH
  PAYFEED1.ACCT_CD PAYFEED1.EXPENSE_CDS PAYFEED1.PRJ_GRNT_CH PAYFEED1.PROGRAM_CH
  FROM FEEDSTACK
IF FOCERROR NE 0 THEN BEGIN
TYPE "RECORD NUMBER <I IS A DUPLICATE";
ENDBEGIN
COMPUTE I=I+1;
ENDREPEAT
END
This loads ALL the records into PAYSTACK and then copies them over, one at a time, into FEEDSTACK. You cannot use a STACK COPY, as that only works when the stacks are created from the same Master. Create a COMPUTE line for each of the 25 fields. After the INCLUDE, type out FOCERROR if the INCLUDE fails. I will bet that a couple of the rows of data are duplicates.
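The record-at-a-time REPEAT can be mimicked outside FOCUS to show how checking after each INCLUDE pinpoints the offending rows. A hedged Python analogue (key fields and sample rows invented; in the real procedure, FOCERROR plays the role of the collision test):

```python
# Analogue of REPEAT ... INCLUDE ... IF FOCERROR NE 0: include one staged row
# at a time and report the 1-based row number of every key collision.
def include_one_at_a_time(paystack, key_fields):
    target = {}
    duplicate_rows = []
    for i, row in enumerate(paystack, start=1):
        key = tuple(row[f] for f in key_fields)
        if key in target:
            duplicate_rows.append(i)   # FOCERROR would be nonzero for this row
        else:
            target[key] = row
    return target, duplicate_rows

stack = [
    {"FY": "2005", "EMPLOYEE_NUM": "00001"},
    {"FY": "2005", "EMPLOYEE_NUM": "00002"},
    {"FY": "2005", "EMPLOYEE_NUM": "00001"},  # duplicate of row 1
]
target, dup_rows = include_one_at_a_time(stack, ["FY", "EMPLOYEE_NUM"])
print(dup_rows)  # [3]
```

Reporting the row number, rather than just the count, is what lets you go back to the source file and see which records share a key.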
Mark
Posts: 663 | Location: New York | Registered: May 08, 2003
Mark is, of course, correct. You cannot COPY between non-identical stacks. (It really would be useful to be able to do that, Mark, if the layouts are the same.)
Anyway, my error. Senility is here.
I knew I did something similar, so I hooked out the code and found a different approach than I remembered. What I do is INFER the stack to be used to load the data from the file to be loaded, run a fex to populate it from the .FTM, then load:
MAINTAIN FILE CAR
Case Top
Infer CAR.COUNTRY CAR.CAR CAR.MODEL into loadStack;
EXEC fex into loadStack;
loadData();
EndCase
Case loadData
Logic to load data
EndCase
END
The fex
APP FI INPUT app/INPUT.FTM
TABLE FILE INPUT
PRINT COUNTRY CAR MODEL
ON TABLE PCHOLD
END
I have this approach working for very large data sources.
Alan. WF 7.705/8.007
Posts: 1451 | Location: Portugal | Registered: February 07, 2007
I am greatly appreciative of your help. Through this I am finding duplicates, which of course has me asking basic questions (what, where, when, and why) about the data being supplied to me. What I found is that there is in fact an additional key field in the original source data that is not in the data I received (not present in the data being loaded), hence the apparent duplicates.
This has been extremely enlightening and is leading in the right direction. I guess this raises questions to be answered before I proceed further. Again, thank you!
Bill
Posts: 22 | Location: Waterbury VT USA | Registered: November 15, 2005