My query is regarding the maximum amount of data a mainframe FOCUS dataset can hold. This FOCUS dataset is NOT defined with multi-volumes.
The problem is that once a certain number of segments have been inserted into this dataset, we get a fatal error while writing to the file:
(FOC198) FATAL ERROR IN DATABASE I/O. FOCUS TERMINATING ERROR WRITING
BCCBSPL2, PAGE 188971, CODE 0xaa000003
Then we perform a rebuild process to split the data from the abended dataset into a new FOCUS dataset on the key field, and take the new FOCUS dataset into use.
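For readers who hit this later, the split is roughly the following, assuming a single-segment file; OLDFOC, NEWFOC, SPLITDAT, KEYFLD and the 'M' cutoff are placeholder names, not our production ones:

TABLE FILE OLDFOC
PRINT *
IF KEYFLD LT 'M'
ON TABLE HOLD AS SPLITDAT FORMAT ALPHA
END
-* NEWFOC is created empty beforehand with CREATE FILE NEWFOC
MODIFY FILE NEWFOC
FIXFORM FROM SPLITDAT
MATCH KEYFLD
 ON NOMATCH INCLUDE
 ON MATCH REJECT
DATA ON SPLITDAT
END
-* Repeat with IF KEYFLD GE 'M' to load the other half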
The cylinder (CYLS) allocation for these datasets is as follows:
Record format . . . : F
Record length . . . : 16384
Block size . . . . : 16384
1st extent cylinders: 628
Secondary cylinders: 350
Allocated cylinders: 4,200
Allocated extents. : 5
Used cylinders . . : 4,200
Used extents . . . : 5
I found that the multi-volume concept can reduce the occurrence of this space abend.
Is there any method that can remove this space ABEND permanently?
I haven't been on a mainframe in quite a while, but I guess that you have hit the size limit of the FOCUS database.
How big is the FOCUS file in bytes? Is it about 2 GB?
Can you post the master? There may be ways to re-segment the file to use storage more efficiently.
Using XFOCUS rather than FOCUS might also help.
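If memory serves, the switch is declared in the Master File, and the data then has to be reloaded since the internal page formats differ. A minimal sketch, where only the SUFFIX matters and the segment and field names are illustrative:

FILENAME=BCCBSPL2, SUFFIX=XFOCUS,$
 SEGNAME=SEG1, SEGTYPE=S1,$
  FIELDNAME=KEYFLD, ALIAS=KEY, USAGE=A10, $
  FIELDNAME=AMOUNT, ALIAS=AMT, USAGE=P12.2, $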
I'm getting confused in my old age; XFOCUS is a licensed product, isn't it?
Thanks all for your replies.
Sorry to say, but the picture of my problem did not come through clearly, so I will describe it again in more detail.
What actually happens is this: we have a mainframe batch job (runs daily) that takes its input from a sequential file holding 10 billion records. This FOCUS batch job processes all the records and loads them into a FOCUS DB dataset. After a month or two it gives the 'FATAL ERROR IN DATABASE I/O. FOCUS TERMINATING ERROR WRITING'. When this error comes, we create a new FOCUS DB dataset with the allocation mentioned in my first post and process the input data against the new dataset. But the error occurs again after a month or two, and we repeat the step of creating a new dataset.
Previously we were not using the multi-volume concept with our FOCUS datasets. Now I have implemented multi-volume allocation for all new FOCUS datasets. Surely it will help to reduce the 'ERROR WRITING' problem.
So my queries are:
What is the optimal primary and secondary cylinder allocation for a FOCUS dataset so that we can perform the rebuild process without any problem?
Or is there any method that can be used to remove this problem permanently?
It may depend on what "processing" you are doing. If the MODIFY (or MAINTAIN) program deletes segment instances ("records") and adds others, the space occupied by the deleted instances generally will not be reclaimed, so it becomes dead space and the file expands in size, even when the net number of segments remains relatively steady. -- If that is the case, a periodic REBUILD might solve the problem.
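If driving the interactive REBUILD prompts in a batch job is awkward at your site, the same reorganization can be scripted with TABLE and MODIFY. A minimal sketch, assuming a single-segment file keyed on KEYFLD (MYFOCDB, UNLOAD, and KEYFLD are illustrative names):

-* Unload all live records to a sequential HOLD file
TABLE FILE MYFOCDB
PRINT *
ON TABLE HOLD AS UNLOAD FORMAT ALPHA
END
-* CREATE FILE re-initializes the FOCUS file empty, dropping the dead space
CREATE FILE MYFOCDB
-* Reload the unloaded records into the fresh file
MODIFY FILE MYFOCDB
FIXFORM FROM UNLOAD
MATCH KEYFLD
 ON NOMATCH INCLUDE
 ON MATCH REJECT
DATA ON UNLOAD
END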
A lot also depends on your Focus file design. You may be able to achieve significant improvement in storage efficiency by redesigning the segment structure.
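For example, if account-level fields are being repeated on every transaction instance, moving them up into a parent segment can shrink the file considerably. An illustrative two-segment Master File (all names hypothetical, not taken from your file):

FILENAME=BCCBSPL2, SUFFIX=FOC,$
 SEGNAME=ACCOUNT, SEGTYPE=S1,$
  FIELDNAME=ACCT_ID, ALIAS=KEY, USAGE=A10, $
 SEGNAME=TRANS, SEGTYPE=S1, PARENT=ACCOUNT,$
  FIELDNAME=TRANS_DATE, ALIAS=TDATE, USAGE=I8YYMD, $
  FIELDNAME=AMOUNT, ALIAS=AMT, USAGE=P12.2C, $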
I've seen the situation you find yourself in before.
It sounds like you need to build a purge routine that runs periodically, replacing what you are now doing manually.
Build/discover a business rule that goes something like: Delete all database records older than one year.
Once you do that, you can then build your FOCUS purge routine.
TABLE FILE name
PRINT ..
IF DATEOFDATA LT date
ON TABLE HOLD..
END
MODIFY FILE name
FIXFORM..
MATCH..
ON MATCH DELETE ..
DATA ON HOLD..
END
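If it helps, here is one way that sketch might be filled in. Purely illustrative: MYFOCDB, KEYFLD, PURGEREC, and a DATEOFDATA field held as I8YYMD with a 20240101 cutoff are all assumptions, so substitute your own names and formats:

-* Extract the keys of records older than the cutoff
TABLE FILE MYFOCDB
PRINT KEYFLD
IF DATEOFDATA LT 20240101
ON TABLE HOLD AS PURGEREC FORMAT ALPHA
END
-* Delete the matching records from the FOCUS file
MODIFY FILE MYFOCDB
FIXFORM FROM PURGEREC
MATCH KEYFLD
 ON MATCH DELETE
 ON NOMATCH REJECT
DATA ON PURGEREC
END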
You would add/build database backups, rebuilds, and audit trails, as needed.