Last Update: July 14, 2009

What's New

  • Added answer on reference frame of attitude rates
  • Toolkit 5.2 problems and fixes are available
  • Moved the questions fixed in Toolkit 5.2 to Toolkit 5.1.1 FAQ. These problems only appear in the November 1996 Toolkit 5.1.1 and are all fixed in the April 1997 Toolkit 5.2.
  • Moved the questions fixed in Toolkit 5.1.1 to Toolkit 5.1 FAQ. These problems only appear in the May 1996 Toolkit 5.1 and are all fixed in the November 1996 Toolkit 5.1.1.
  • Moved the questions fixed in the Release A SCF Toolkit to Toolkit 5.0 FAQ. These problems only appear in the July 1995 Toolkit 5.0 and are all fixed in the May 1996 Toolkit 5.1.

Please send questions to sdps-support@earthdata.nasa.gov.

1.  How do I link my source code with the Toolkit?

In C, use a line of the form

$CC $CFLAGS -I$PGSINC -L$PGSLIB main.c -lPGSTK

In FORTRAN 77 or 90,

$F77 $F77_C_CFH main.f $PGSLIB/libPGSTK.a

(You may need to add additional libraries such as your standard math library.)

It is recommended that you use the same compiler flags as were used to test the Toolkit, as given in the Toolkit test driver makefiles. In C, these flags enforce ANSI compliance, and turn on optimization; in FORTRAN, they enable the linker to find Toolkit functions correctly (see below).
Toolkit compiler flag environment variables are set up by sourcing file $PGSBIN/pgs-dev-env.csh, as explained in the installation readme file $PGSHOME/README.

It is not required that the same compiler flags be used in order to use the Toolkit; however, you may want to use the Toolkit flags for reference in constructing your own flags, particularly if you have an unsupported platform.
Exception: Compiler option $F77_C_CFH must be used by FORTRAN users on the following platforms:

  • HP -- $F77_C_CFH="+ppu"
  • IBM -- $F77_C_CFH="-qextname"

If these flags are not used on these platforms, the linker will be unable to find functions in the Toolkit library.
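
For reference, the C link line shown earlier can be folded into a minimal makefile sketch. This assumes $PGSBIN/pgs-dev-env.csh has been sourced so that CC, CFLAGS, PGSINC and PGSLIB are set; the target and source names are illustrative, not part of the Toolkit delivery.

```make
# Minimal sketch, not a supported makefile: compile and link one C source
# against the Toolkit, adding the standard math library.
mypge: main.c
	$(CC) $(CFLAGS) -I$(PGSINC) -L$(PGSLIB) main.c -lPGSTK -lm -o mypge
```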

This information also appears in the Toolkit Users Guide, sec. 5.3, "Link Instructions."

2. How do I use the Toolkit to do I/O in my program?

First, you must enable the Toolkit to find your file.
You do this by first putting an entry in your Process Control file (PCF), as explained in the PC section of the Toolkit Primer.
(Temporary files are the exception; entries for these are created and placed in the PCF automatically when you use Toolkit functions PGS_IO_Gen_Temp_Open and _OpenF).
All files listed in the PCF have a logical ID for use with Toolkit functions.
That is, your code will access files by a logical name (handle). This logical handle corresponds to one or more physical file handles. At your facility, the physical handle is your choice. At the DAAC production facility, the system will determine the physical handle.

Next you open the file.
If the file is an HDF file, you then use function PGS_IO_PC_GetReference to retrieve the physical filename, which you pass to HDF function Hopen.
If the file is not HDF, you use the logical ID of your PCF entry as input to Toolkit functions PGS_IO_Gen_Open and _OpenF for permanent files, or _Temp_Open and _Temp_OpenF for temporary and intermediate files. You get back a file handle for use with native I/O functions.

To do the actual I/O, you may use HDF read/write functions, and native C or FORTRAN read/write functions, as appropriate.
There are no Toolkit functions that do the I/O directly.

HDF files are closed using HDF function Hclose; other files are closed using PGS_IO_Gen_Close and _CloseF.
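
Putting the non-HDF steps together, here is a C-style sketch (not compilable outside a Toolkit installation; the logical ID 10101 is a made-up example, and the calls are the ones named above, with signatures as given in the Toolkit Users Guide):

```
#include <PGS_IO.h>

#define MY_FILE 10101                  /* hypothetical logical ID from the PCF */

PGSt_IO_Gen_FileHandle *fh;            /* a native FILE * underneath */
PGSt_integer            version = 1;
char                    line[256];

/* open by logical ID, read with native C I/O, then close */
if (PGS_IO_Gen_Open(MY_FILE, PGSd_IO_Gen_Read, &fh, version) == PGS_S_SUCCESS)
{
    fgets(line, sizeof line, fh);
    PGS_IO_Gen_Close(fh);
}
```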

3.  Can we have access to a database from our source code?

This access has been requested by three instrument groups. We are currently studying the implications for production system design and performance. We are also considering data production scenarios as part of a trade study, e.g., on a case-by-case basis, will science processes be better served by relational databases or by flat indexed files? We will develop a response to the community.

Note: It has been determined that ASTER will have access to a database at EDC.

4.  Can we expect Level 0 data in granulized form?

We currently don't have an answer to this. Level 0 granulization could be done by EDOS or by the SDPS ingest function. (By granulization, we mean a single file containing the range of data required by a Level 1 process.) The issue is being taken up with EDOS.

In the current scenario, the Toolkit could get access to physical files which are not scientific granules, i.e. many physical files per requested range of data. We have designed the Toolkit interface to do as much of the work of granulization as possible.

The Toolkit will provide software which will allow opens of files in a loop and software for breaking out data packets from the files. Granules can be assembled using these tools. Code fragments will be supplied in the Toolkit which perform a prototype granulization and which can be modified by developers to construct files of the requested spatial or temporal range. If we get a single granule per file, the same tools can be used. The loop then goes away, since only a single open is required.

Currently the Toolkit provides access by time span.
Access by orbit number is being considered.

5.  Will the Toolkit be certified on my operating system?

Platforms on which the July 2009 TK5.2.16 delivery is certified are listed here

For the platforms on which the April 97 TK5.2 delivery is certified, click here.

These are the platforms on which the November 96 TK5.1.1 delivery is certified:

                                  ***** C O M P I L E R S *****
PLATFORM              O/S             C        F77      F90
--------              ---          --------  -------  --------
SUN Sparc          Solaris 2.4     3.0.1     3.0.1      --    
SUN Sparc          SunOS 4.1.4     3.0.1     3.0.1    NAG 2.1
HP 9000/735        HP-UX  9.05     9.75      9.16       --
SGI Indigo/R4000   IRIX    5.3     3.19      4.0.2    NAG 2.1
DEC 3000/300       OSF/1   3.2     3.0       3.6        --
IBM RS6000         AIX   3.2.5     2.1.4.0   3.2.2.0    --

SGI Power          IRIX64 6.2      6.2       6.2      6.2
Challenge

If your platform is not included on this list, then it is not officially supported for the Toolkit. If you have problems installing on a non-supported platform, please send the information to sdps-support@earthdata.nasa.gov; we will investigate as time allows.

6.  Does the Toolkit provide shell level access to process control functionality?

Yes, this is now available, through the PGS_PC_*Com tools.

7.  How do I access the Toolkit primer (a simplified version of the users guide) online? Can I get a hardcopy?

Click here to access the online version of this document.

There is also a Postscript version.

8.  Will the arithmetic trap tool, deleted from Toolkit 5, ever be delivered?

The purpose of this tool (PGS_SMF_SetArithmeticTrap()) is to allow a graceful exit from an arithmetic fault and to allow the user to examine the conditions. After much research, we found that we could not implement this tool in a POSIX compliant manner across all platforms in our development suite. There are too many differences in vendor Unix implementations of signal traps. This POSIX compliance is critical to the Toolkit mandate of portability.

We did find, however, that all the systems (except the IBM) enable their own exception handling under POSIX.1. For example, upon divide-by-zero, INF (or some other default value) is inserted. The user can check for the presence of this value. Each system does it differently (i.e., different switches need to be set). It is also possible to turn off this handling and force a core dump upon an arithmetic exception.

We will continue to investigate the situation in light of the POSIX.4 implementation; however, at the present time we cannot find a uniform way to implement the signal trap tool. For details on the problems encountered, please read the signal handling investigation summary.


9.  Will you support certain options of the FORTRAN 90 OPEN statement that are not supported in ANSI FORTRAN 77?

In addition to full ANSI FORTRAN 77 support, the following FORTRAN 90 functionality is supported in TK5:
(1) Support for APPEND mode (POSITION keyword);
(2) Support for RECL keyword for sequential files.

At this time no other non-F77 features of F90 are supported.

10.  What's the story with Temporary files? Aren't they deleted automatically?

Temporary files exist for the duration of one PGE only; Intermediate files may have a longer duration. This question is concerned with Temporary files.

The Toolkit now has a shell function for optionally wrapping your PGE at the SCF, named PGS_PC_Shell.sh.
If you are using PGS_PC_Shell.sh to wrap your PGE, then your Temporary files are automatically deleted at PGE termination.

At the SCF, if you are not wrapping your PGE with the shell, you should delete your Temporary files manually before each of your test runs. A suggested way to do this is to put all your temporary files in a single directory, then delete all the contents of this directory in your test script before each run.
In addition, you need to start with a fresh Process Control file, i.e., the PCF at the beginning of any run should have no entries in the TEMPORARY I/O section.
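
The suggested cleanup can be sketched as follows (the directory name is illustrative):

```shell
# Keep all Temporary files for the PGE in one directory and empty it
# before each test run; with -f, rm succeeds even when the directory
# is already empty.
TMP_PGE_DIR=./pge_tmp
mkdir -p "$TMP_PGE_DIR"
rm -f "$TMP_PGE_DIR"/*
```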

In the production environment, Temporary files are always deleted automatically at the end of your PGE (since PGS_PC_Shell.sh is used there).

11.  Will the SDP Toolkit work on 64-bit architectures? If so, on what platforms?

Toolkit 5.2 was developed and tested on IRIX 6.2 with version 7.0 of the SGI compilers.

12.  What are the issues with compiling and linking on the SGI Power Challenge?

(This information still applies to IRIX 6.2.) Recently SGI introduced a new operating system, IRIX 6.1, for their Challenge Series platforms. IRIX 6.1 is a new product aimed at supporting 64 bit code on an SMP architecture. An additional goal is backward compatibility with the IRIX 5.3 32 bit operating system. The complexity of IRIX 6.1 may create some problems for users, particularly in that SGI documentation can be slightly misleading. The comments below are meant to clarify some configuration issues, particularly those involving object library types and compiler compatibility.

SGI object library types

IRIX 6.1 supports three types of object libraries:

Object Type   SGI Lib Location        SGI Compiler Option   Compatibility
-----------   ---------------------   -------------------   ----------------
old 32 bit    /lib and /usr/lib       -32                   IRIX 5.3 and 6.1
new 32 bit    /lib32 and /usr/lib32   -n32                  IRIX 6.1
64 bit        /lib64 and /usr/lib64   -64                   IRIX 6.1

Object types can NOT be mixed during linking (this restriction applies to both the new and old linker), but executables created from any of the three object types can be executed by IRIX 6.1. Care should be taken to both set proper flags for compilation and link with the appropriate libraries.

Compiler options and environmental variables required by the linker

In general, compiler flags -32, -n32, and -64 correspond to old 32, new 32, and 64 bit object types; however, not all compilers support all options. (ATTENTION: SGI documentation suggests that the SGI F90 compiler supports old style 32 bit object types. This is incorrect; currently SGI F90 supports ONLY 64 and new 32 bit objects.)

Compiler       Usual Name   Old 32           New 32   64    setenv        linker
--------       ----------   ------           ------   ----  ------        ------
CC             CC           -32              -n32     -64   -             native
KCC            KCC          -32              --       -64   -             native
F77            F77          -32              -n32     -64   -             native
F90            F90          --               -n32     -64   -             native
NAG F90        F90          ccargs "-32"     --       --    SGI_ABI -32   native
                            ldargs "-32"
GNU Ada95      -            default          --       --    SGI_ABI -32   gnatbl
                            (use -option)
Verdix Ada83   -            default          --       --    SGI_ABI -32   native

Note: In all cases, care should be taken to link with HDF and Toolkit libraries of the appropriate format.


13.  Are more earth models available than appear in the Geolocation ATBD?

The file "$PGSDAT/CSC/earthfigure.dat" has been greatly extended beyond what is described in the ATBD. It is available here for inspection and criticism. It is relatively easy for us to enhance this file with more models. It is FAR BETTER to centralize changes here than to have them made differently at different processing centers. If done here and delivered, all users will have access to each other's ellipsoids.

Therefore, we urge you to look over the models and comment. One minor issue is the names. Since we find nearly identical spellings (such as WGS84 and WGS-84) used for the same model, we have included some duplicates - the same model under two names. Otherwise, the system might fail to find the model and revert to the default (WGS84).

If you are used to designating a model with a slightly different syntax, please advise us so that we can include the alternate name. We generally list the most common models first and the rarer last, to the best of our knowledge, to speed the search process for the majority of users.

14.  When PGS_IO_L0_SetStart is run after December 27, 1997, it returns error code 11815 "Unable to find requested packet". What's the problem?

THE PROBLEM:

The cause is that simulated L0 files with times after December 27, 1997 are too far in the future. The L0 tool PGS_IO_L0_SeekPacket (called by PGS_IO_L0_SetStart) is currently set up to reject input times not matched with leap second files. This will not be a problem in actual production since the data will be in sync with leap second files. The PGSIO_E_L0_SEEK_PACKET returns were not seen when using PGS_IO_L0_GetPacket, since GetPacket does not do any time conversions (either itself or through lower level calls).

This is a "feature" not a bug. The tool in question goes through a file and extracts packet headers, checks their time stamp and moves on if necessary until the requested packet (or one as close as possible--but after--the input time) is found. If the input time is not a valid time, the search will fail and the file pointer will be left pointing to the first packet in the file. The problem is that only times for which ACTUAL leap seconds have been defined are considered valid. This is ONLY true for the set start time tool, since the others don't require time to accomplish their respective tasks.

The predicted leap seconds feature was allowed early on in the toolkit for testing purposes but as actual processing draws near it becomes less and less desirable to leave the predicted leap seconds feature in the toolkit. The L0 tools, which were designed later than the time tools, do not allow predicted leap seconds since in actual processing this should NEVER occur. That the L0 simulator will allow times in the predicted leap second range is perhaps an oversight on our part.

THE SOLUTION:

Of course it is not our desire to prevent anyone from running tests with data simulated for times occurring during the actual mission, it is just that we can't leave this simulation ability in the tools that have no other purpose than to process mission specific data files (e.g. L0 tools as opposed to TD or CSC tools which anyone could use for any generic geolocation purpose). So, two solutions are extant:

1) make runs with data simulated using times before 12-27-1997. (Recreate your input simulated L0 files so that the data times are earlier than the last 'ACTUAL' leap second entry in your $PGSDAT/TD/leapsec.dat file.)

2) edit the leap second file (the ASCII file leapsec.dat in $PGSDAT/TD) and replace the word "PREDICTED" with the word "ACTUAL" up to the time you would like to test for (if you check the file, the course of action will be obvious if this instruction is not currently clear). But remember, this altered file is ONLY for testing and should NOT be altered in any real production environment!

It is always safer (from our standpoint) if you alter a private copy that you point to with the PCF. There's a header line that you could annotate to say "GUESSES-NOT FOR REAL DATA PROCESSING" or something similar - to prevent slip-ups.
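
The edit in solution 2 can be sketched on a private test copy (the file name leapsec_test.dat is illustrative; the entry format matches the excerpts shown under DETAILS below):

```shell
# Promote PREDICTED entries to ACTUAL in a private test copy only;
# never alter the delivered $PGSDAT/TD/leapsec.dat for real production.
cat > leapsec_test.dat <<'EOF'
1997 JAN 1 =JD 2450449.5 TAI-UTC= 31.0000000 S + (MJD - 41317.) X 0.0000000 S PREDICTED
1998 JAN 1 =JD 2450814.5 TAI-UTC= 32.0000000 S + (MJD - 41317.) X 0.0000000 S PREDICTED
EOF
sed 's/PREDICTED$/ACTUAL/' leapsec_test.dat > leapsec_test.new
mv leapsec_test.new leapsec_test.dat
```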

Actually, our Toolkit ephemeris simulator is not accurate enough for serious mission planning, so the most realistic tests are with past dates anyway. If you have a better orbit generator and are pretty sure that it will meet launch specs and really need to simulate future dates - you can alter the files, etc.

When it comes down to real data processing, remember, leap seconds are announced at least 5.5 months ahead (the nominal is 6 but there could be delays in posting), and our UT1 data (for GHA) will be updated often enough (weekly) that only the most stringent users would need to reprocess. So there's no problem anticipated in real time.

If, however, you are experiencing this problem with data using times before 12-27-1997, something is seriously awry, please let us know. Also please be aware that the second solution given above will only work for local testing since it requires you to alter a file which is an integral part of the toolkit.

DETAILS:

For those who are interested, a detailed analysis of just where the problem is occurring:

Using a debugger, I used the test driver PGS_IO_L0_Driver_c, from which PGS_IO_L0_SetStart is called. PGS_IO_L0_SetStart in turn makes a call to PGS_IO_L0_SeekPacket. The returnStatus from a call to PGS_TD_EOSAMtoTAI (from within PGS_IO_L0_SeekPacket) was 11815, which translates to the message you're all too familiar with:

PGS_1:11815,PGSIO_E_L0_SEEK_PACKET,NULL,Unable to find requested packet

I then stepped through PGS_TD_EOSAMtoTAI and then below to PGS_TD_UTCjdtoTAIjd and found that the lowest level offending returnStatus was 27652 which translates to

PGS_3:27652,PGSTD_W_PRED_LEAPS,NULL,predicted value of TAI-UTC used (actual value unavailable)

What's causing the problem is that a warning message is set when you try to use a time for which PREDICTED, not ACTUAL, fields exist in your $PGSDAT/TD/leapsec.dat file. If you do not edit your SCF version of leapsec.dat, you can use simulated L0 times up to the next predicted leap second, but not beyond. This corresponds to the following in the $PGSDAT/TD/leapsec.dat file:

[...]

1993 JUL 1 =JD 2449169.5 TAI-UTC= 28.0000000 S + (MJD - 41317.) X 0.0000000 S ACTUAL

1994 JUL 1 =JD 2449534.5 TAI-UTC= 29.0000000 S + (MJD - 41317.) X 0.0000000 S ACTUAL

1996 JAN 1 =JD 2450083.5 TAI-UTC= 30.0000000 S + (MJD - 41317.) X 0.0000000 S ACTUAL

1997 JAN 1 =JD 2450449.5 TAI-UTC= 31.0000000 S + (MJD - 41317.) X 0.0000000 S PREDICTED

1998 JAN 1 =JD 2450814.5 TAI-UTC= 32.0000000 S + (MJD - 41317.) X 0.0000000 S PREDICTED

[...]

15.  I'm running more than one PGE and my PCF and Log Files are corrupted. What's wrong?

The Toolkit was designed for one PGE to run at a time. The solution to this problem requires separating the PCFs and Log Files for the PGEs.

One way to do this is to create completely separate directory structures for each PGE where all the dynamic Toolkit files would reside (which is how operations will take place in the DAAC environment).

A more minimal solution is to put each PCF in a different directory and set the PGS_PC_INFO_FILE environment variable to point to the correct PCF for each PGE. To prevent the Log Files from interfering, the entries in the PCFs can simply point to different file names for LogStatus, LogReport and LogUser for each PGE.
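
A sketch of the minimal solution (directory and file names are hypothetical):

```shell
# One directory per PGE, each holding its own PCF; the LogStatus,
# LogReport and LogUser entries in each PCF should name per-PGE log files.
mkdir -p ./pge1 ./pge2

export PGS_PC_INFO_FILE=./pge1/PCF.pge1   # before running PGE 1
# ... run PGE 1 ...

export PGS_PC_INFO_FILE=./pge2/PCF.pge2   # before running PGE 2
# ... run PGE 2 ...
```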

In Toolkit 5.2, two PCFs in the same directory will not interfere with each other. But you still need to set the PGS_PC_INFO_FILE environment variable to point to the correct PCF for each PGE and create separate log files.

16.  What is the reference frame for attitude rates?

The attitude rates provided by the SDP Toolkit are stated to be angular rates in the body frame, listed in the order x-axis (nominal roll), y-axis (nominal pitch), and z-axis (nominal yaw), in radians per second. But it was not specified in our documentation what the reference frame is relative to which the angular velocity is defined, which is then projected on the given axes.

At the recommendation of Dan Marinelli (ESDIS), Larry Klein, Toolkit lead, with the concurrence of the staff, has decided that we shall henceforth use the INERTIAL FRAME as the reference for the angular velocity. For AM1, and so far as is known, later spacecraft, that frame is J2000.

For TRMM, we actually deliver the rates representing the angular velocity in the orbital frame. (This is sometimes called using rates with the orbital rate "stripped".)

That was an early decision, conditioned somewhat by what we received. We are now committed to the inertial frame for AM1 and later spacecraft, and we regret any inconvenience occasioned by the somewhat different interface for TRMM. The relevant documentation will be changed to reflect these facts.

Peter D. Noerdlinger
