Focal Point
Actual to Usage Translation HYYMDS works in Oracle but not SQL Server

This topic can be found at:
https://forums.informationbuilders.com/eve/forums/a/tpc/f/7971057331/m/3901014542

September 05, 2007, 06:05 AM
hammo1j
Actual to Usage Translation HYYMDS works in Oracle but not SQL Server
Can anyone explain this?

SQL Server
  FIELD=HWLASTSCANDATE, ALIAS=HWLastScanDate, USAGE=HYYMDS,ACTUAL=HYYMDS, MISSING=ON, TITLE='HW,Last,Scan,Date',DESCRIPTION='HW Last Scan Date', $


CHECK FILE works fine, but when it comes to printing the data I get:

(FOC1383) UNSUPPORTED DATETIME FORMAT FOR FIELD  : HWLASTSCANDATE



Oracle - runs fine
  FIELD=ACT_EARLY_FINISHH, ALIAS=EARLY_FINISH, USAGE=HYYMDS, ACTUAL=HYYMDS, MISSING=ON, TITLE='Planned,Early,finish', DESCRIPTION='Planned Early finish', $




Server: WF 7.6.2 ( BID/Rcaster) Platform: W2003Server/IIS6/Tomcat/SQL Server repository Adapters: SQL Server 2000/Oracle 9.2
Desktop: Dev Studio 765/XP/Office 2003 Applications: IFS/Jobscope/Maximo
September 05, 2007, 06:56 AM
Tony A
John,

Try the format without the microseconds; that should work OK:
USAGE=HYYMDs,ACTUAL=HYYMDs
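
Applied to John's SQL Server field, that would presumably look something like this (illustrative only; it is just the original declaration with the lowercase 's'):

  FIELD=HWLASTSCANDATE, ALIAS=HWLastScanDate, USAGE=HYYMDs, ACTUAL=HYYMDs, MISSING=ON, TITLE='HW,Last,Scan,Date', DESCRIPTION='HW Last Scan Date', $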

T



In FOCUS
since 1986
WebFOCUS Server 8.2.01M, thru 8.2.07 on Windows Svr 2008 R2  
WebFOCUS App Studio 8.2.06 standalone on Windows 10 
September 06, 2007, 12:12 PM
hammo1j
Tony, the 'S' is time down to the nearest second, 's' is milliseconds, and 'm' is microseconds.

I know the HYYMDs for USAGE and ACTUAL works because that is the default provided by the synonym generator.


This gets more peculiar when you consider the internal formats the big two databases use.

The datetime in SQL Server is stored in a BINARY(8). The first 4 bytes hold the number of days elapsed since SQL Server's base date of 1900-01-01. The second 4 bytes hold the time of day as the number of clock ticks since midnight, where each clock tick is equivalent to 3.33 milliseconds. That's also the reason the DATETIME datatype has an accuracy of one three-hundredth of a second.
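
As a rough sketch of that layout (not from the thread; the little-endian unpacking and the helper name decode_mssql_datetime are my own assumptions), decoding the two 4-byte integers by hand gives the timestamp back:

  import struct
  from datetime import datetime, timedelta

  def decode_mssql_datetime(raw8: bytes) -> datetime:
      """Decode the 8-byte internal form of a SQL Server DATETIME.
      First 4 bytes: days since the base date 1900-01-01.
      Last 4 bytes: clock ticks since midnight, 300 ticks per second
      (one tick = 1/300 s, roughly 3.33 ms)."""
      days, ticks = struct.unpack("<ii", raw8)  # byte order is an assumption
      return datetime(1900, 1, 1) + timedelta(days=days, seconds=ticks / 300.0)

  # Example: 2007-09-05 06:05:00 is 39328 days after the base date and
  # 21900 seconds (= 6,570,000 ticks) after midnight.
  raw = struct.pack("<ii", 39328, 21900 * 300)
  print(decode_mssql_datetime(raw))  # 2007-09-05 06:05:00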

Oracle has a DATE format which is actually a date-time recorded to the nearest second in 7 bytes. The Oracle TIMESTAMP is ?? bytes and accurate to the microsecond.
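
For comparison, here is a minimal sketch of that documented 7-byte DATE layout (excess-100 century and year, excess-1 time parts; the helper name decode_oracle_date is hypothetical):

  from datetime import datetime

  def decode_oracle_date(raw7: bytes) -> datetime:
      """Decode the 7-byte internal form of an Oracle DATE:
      century+100, year-of-century+100, month, day,
      hour+1, minute+1, second+1; precise only to the second."""
      century, year, month, day, hour, minute, second = raw7
      return datetime((century - 100) * 100 + (year - 100),
                      month, day, hour - 1, minute - 1, second - 1)

  # DUMP() of the DATE value 2007-09-05 06:05:00 would show bytes 120,107,9,5,7,6,1
  print(decode_oracle_date(bytes([120, 107, 9, 5, 7, 6, 1])))  # 2007-09-05 06:05:00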

Thus SQL Server's DATETIME is actually more precise.

You would expect the default for Oracle to be
USAGE=HYYMDS, ACTUAL=HYYMDS (seconds, so less precise).

You would expect the default for SQL Server to be
USAGE=HYYMDs, ACTUAL=HYYMDs (milliseconds, so more precise).

In fact it's vice versa - has someone coded them the wrong way round?

Can someone explain how the IBI timestamps work internally?

There are 8-, 10- and 12-byte versions of IBI's offerings...




Server: WF 7.6.2 ( BID/Rcaster) Platform: W2003Server/IIS6/Tomcat/SQL Server repository Adapters: SQL Server 2000/Oracle 9.2
Desktop: Dev Studio 765/XP/Office 2003 Applications: IFS/Jobscope/Maximo