Focal Point Banner


As of December 1, 2020, Focal Point is retired and repurposed as a reference repository. We value the wealth of knowledge that's been shared here over the years. You'll continue to have access to this treasure trove of knowledge, for search purposes only.



Focal Point » Focal Point Forums » iWay Software Product Forum » (Solved) Unable to retrieve log files on some processes.

(Solved) Unable to retrieve log files on some processes.
 
Member
posted
I get the following message when I try to retrieve log files on certain processes. It does not happen on all processes. Any help or suggestions would be appreciated. Thanks.

03/30/2009 08:26:32 EDASERVE (FOC541) SU. CENTRAL DATABASE MACHINE ERROR: S_REQ_NAME
03/30/2009 08:26:32 EDASERVE (FOC236) LINKED FILE DOES NOT HAVE A MATCHING KEY FIELD OR SEGMENT: S_REQ_NAME
03/30/2009 08:26:32 EDASERVE BYPASSING TO END OF COMMAND
03/30/2009 08:26:32 EDASERVE No log information for this request.
03/30/2009 08:26:32 EDASERVE DataMigrator Report "allocation_staging_table_load1" successfully retrieved from server.

This message has been edited. Last edited by: Dawn.
 
Posts: 13 | Registered: April 18, 2008
Guru
posted Hide Post
Here are two suggestions assuming you are running DataMigrator 7.6.

It's possible that the FDS server isn't running. On the Web Console, go to Workspace > Configuration > Special Services, right-click FDS, and if Start is an option, select it.

It's also possible that ETLLOG has grown too large. On the Web Console, go to Procedures > DataMigrator Utilities > Manage Log and Statistics > Recreate.


N/A
 
Posts: 397 | Location: New York City | Registered: May 03, 2007
Member
posted Hide Post
I started the FDS server and recreated the log file. After rerunning the process I was able to get a log. Thank you!

We regularly recreate our log so I suspect it was the FDS server. But is there any way to lower the log level so the log file does not fill up so fast?
 
Posts: 13 | Registered: April 18, 2008
Guru
posted Hide Post
Well, that would depend on what the log is filling up with. Here's an article I recently wrote on the subject.

DataMigrator writes log records from the scheduler and from the flows that are run. Each record of ETLLOG is 250 bytes, and a typical flow that loads a small number of rows and doesn't get any errors adds about 30 records to the log file.

At that rate you should be able to run over 200,000 DataMigrator flows before the log fills up.
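As a quick sanity check of that estimate, here is a minimal Python sketch. The 2 GB ceiling is an assumption for illustration only; it is not a figure given in this post.

```python
# Back-of-the-envelope check of the "over 200,000 flows" estimate.
# ASSUMPTION: the log file tops out around 2 GB; that limit is
# illustrative, not taken from the post above.
RECORD_BYTES = 250              # size of one ETLLOG record
RECORDS_PER_FLOW = 30           # typical records for a small, error-free flow
LOG_LIMIT_BYTES = 2 * 1024**3   # assumed 2 GB cap on the log file

bytes_per_flow = RECORD_BYTES * RECORDS_PER_FLOW   # 7,500 bytes per flow
flows_before_full = LOG_LIMIT_BYTES // bytes_per_flow

print(flows_before_full)  # well over 200,000
```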


So what does fill up the log?

  • Scheduler messages. By default the scheduler wakes up every sixty seconds, and each time it does, it writes a line to the log file. If you've changed this default to (say) every six seconds, then you are getting ten times as many records from the scheduler.


  • Commit and check messages. By default DataMigrator issues a commit every 1,000 rows. Every time this happens, two lines like these are added to the log:


    REFERENCE...AT TRANS 1000
    Commit forced at: 1000 for 1000 row(s)

    If a table is loading a million rows, that adds 2,000 lines to the log. But for a table that large, committing every 1,000 rows may be more often than is needed. Increasing the commit size (Target Properties ► Commit every ___ rows) reduces the number of lines written to the log and may improve throughput.

    But what if you want to keep the commit size small but just want to get rid of the messages? You can do that with the following two commands in a pre-extract stored procedure.


    SET MESSAGE=OFF
    SET EMGSRV=OFF

    But a warning: some error messages are also disabled, so if there are problems with the flow, you may need to remove these settings to diagnose them.


  • Rejected row messages. When the "key matching logic" option "include duplicates" is used with a relational database target, duplicate rows still can't be inserted into a table if there is a unique constraint. Instead, the relational database rejects the rows, and when it does, it returns error messages which the server writes to the log. A single row that gets a constraint violation adds about five lines to the log. For example, from MS SQL Server:


    (FOC1400) SQLCODE IS 2601 (HEX: 00000A29) XOPEN: 23000
    : Microsoft OLE DB Provider for SQL Server: [23000] Cannot insert
    : duplicate key row in object 'dmrpts' with unique index 'dmrpts'.
    : [01000] The statement has been terminated.
    (FOC1416) EXECUTE ERROR : DMRPTS

    If a log is filling up with these types of messages, then something is wrong. Did you forget to truncate the target table first? Do you have enough key columns specified to uniquely identify each row? Do you need to use key matching logic to update the target table?

    However, if there are thousands of rows being rejected, you probably don't need to see an error message for each and every one of them. In fact, you can make the job fail after a certain number of rows are rejected. To do so, go to Flow Properties ► Execution ► Stop processing after ____ DBMS errors.

    It's possible, I suppose, that rejected rows are an expected part of the load and you don't need to see them all. In that case it's possible to let the job continue and just suppress the messages. To do so, include in a profile or stored procedure the line:


    SET DBMSMSGLIMIT=nnnn
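The commit-message arithmetic from the bullets above (two log lines per commit) can be checked with a short Python sketch; the function name here is mine, not anything in DataMigrator:

```python
import math

def commit_log_lines(total_rows: int, commit_every: int,
                     lines_per_commit: int = 2) -> int:
    """Log lines produced by commit messages during one load
    (two lines per commit, per the post above)."""
    return math.ceil(total_rows / commit_every) * lines_per_commit

# A million-row load committing every 1,000 rows adds 2,000 log lines,
# while committing every 50,000 rows adds only 40.
print(commit_log_lines(1_000_000, 1_000))    # 2000
print(commit_log_lines(1_000_000, 50_000))   # 40
```

Raising the commit size therefore cuts log volume roughly in proportion, which is why the advice above suggests it for large loads.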


N/A
 
Posts: 397 | Location: New York City | Registered: May 03, 2007
Member
posted Hide Post
Thank you for your help. I will share this info with my associates.
 
Posts: 13 | Registered: April 18, 2008
Platinum Member
posted Hide Post
We are having the same issues and errors as Dawn experienced above. We have recreated ETLLOG & ETLSTATS and restarted FDS, but it keeps saying 'failed to start' as soon as we run a flow. Also, the event viewer on the server says 'Faulting application hlisnk.exe, version 0.0.0.0, faulting module ntdll.dll, version 5.2.3790.4455, fault address 0x00011952.'
Any suggestions?


_______________________
*** WebFOCUS 8.1.05M ***
 
Posts: 196 | Location: London, UK | Registered: December 06, 2005
Platinum Member
posted Hide Post
Refer to the case that I opened:
Case: 61892543
Summary: FDS Keeps Stopping

We think too many APP MAP commands, and quote marks around some directories didn't help, plus the order of users in admin.cfg.

We have got this working now but are none the wiser as to what did the trick! Scary stuff...


_______________________
*** WebFOCUS 8.1.05M ***
 
Posts: 196 | Location: London, UK | Registered: December 06, 2005
Guru
posted Hide Post
The edaprint that was uploaded to the hottrack case has the message "ETLLOG field DATE not found." Could you check that there is no etllog.mas in your app path, and then recreate the log and statistics tables?

The edaprint also shows that the server app path has 144 application directories that the scheduler is scanning. While some are empty, a few have over a hundred flows. A long app path will slow processing somewhat. It might help to reduce the number of application directories in the server's application path.


N/A
 
Posts: 397 | Location: New York City | Registered: May 03, 2007
Platinum Member
posted Hide Post
Thanks Cliff for the suggestions. We have checked that there are no rogue masters (there weren't) and have reduced the list of APP MAP commands, and voila, all OK. Nothing is documented on this, though. We will watch and see what happens; hopefully all is now OK.


_______________________
*** WebFOCUS 8.1.05M ***
 
Posts: 196 | Location: London, UK | Registered: December 06, 2005
Virtuoso
posted Hide Post
I need to bookmark this thread. I come back to it about once every six months.



 
Posts: 1012 | Location: At the Mast | Registered: May 17, 2007
Virtuoso
posted Hide Post
Alright, a follow-up a decade later. I want MORE messages in my log. In particular, I want the final commit message -- "Commit forced at: 34880 for 34880 row(s)" -- for every table in a big, long run so that I can see the final tally. For big tables that gets cropped. So I created a procedure before the flow that did this --

SET DBMSMSGLIMIT=1000

. . . and that didn't seem to change the default from 20. Presumably there's a setting somewhere now that we're up to version 8.2?

That's all I can find in the manual and these forums, and a global search for "=20" (the default value) in the ibi subdirectory didn't turn up anything either.



 
Posts: 1012 | Location: At the Mast | Registered: May 17, 2007
Virtuoso
posted Hide Post
Found it -- sched_log_lines in the edaserve.cfg file sets the number of lines written to the log; it requires a service restart. I have my lost lines!
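For reference, a sketch of what that entry might look like in edaserve.cfg. The value 500 is an arbitrary example, and the exact syntax may vary by release, so check your server's configuration documentation before editing:

```
sched_log_lines = 500
```

Remember that the setting only takes effect after the service restart mentioned above.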



 
Posts: 1012 | Location: At the Mast | Registered: May 17, 2007



Copyright © 1996-2020 Information Builders