
How To Configure LANCE:

To keep public NRT granules in the DataPool for a maximum number of days (X) so they are available for direct download over HTTPS:

  1. Make sure that only NRT ESDTs are installed into the LANCE system by querying AmCollection.collectiondatatype (see the verification query after this list)
    1. If necessary, use the ESDT Maintenance GUI to modify the Descriptor file 
      1. Change the CollectionDataType object to 'NEAR_REAL_TIME'
      2. Re-install the collection
  2. After the ESDT is installed, use the DataPool Maintenance GUI to configure the "Granule Retention in Days" for the ESDT
    1. This configuration should be a maximum of 14 days.
      1. The Granule Retention in Days is saved as AmCollection.granretentiondays
    2. Only relevant for NRT collections 
      1. NRT collections are those with AmCollection.CollectionDataType == 'NEAR_REAL_TIME'
    3. Currently the Level 0 datatypes are set to 10 days and the Level 1 and 2 are set to 14 days
  3. Bulk delete has a -expirednrt option to delete all granules whose age exceeds AmCollection.granretentiondays
    1. Configure this in the cron to run once per day
  4. Configure MAX_ORPHAN_AGE in EcDlCleanupFilesOnDisk.CFG to be the minimum allowed number of days (3).  This will clean up any files in the datapool date directories that are not being tracked in the database. 
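
To verify both settings, you can run something like the following against the AIM database (a sketch; a fuller example of its output appears under Useful Queries below). Rows with a collectiondatatype other than 'NEAR_REAL_TIME', or with no granretentiondays set, should be reviewed:

NRT Configuration Check
select shortname, collectiondatatype, granretentiondays from AmCollection order by shortname;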

Allow granules to stay on order for an additional number of days (Y):

  • Configure PULL_GRAN_DPL_TIME in the OMS GUI → Configuration → Media → HTTPS
  • Also configure MAX_ORDER_AGE in EcDlCleanupFilesOnDisk.CFG to be the same as Y+1
  • Configure OmGUI→ Configuration→ All → ARCHIVE REQUEST AGE
    • This number should be larger than PULL_GRAN_DPL_TIME
    • Suggested value is 30
  • Configure OmGUI→ Configuration→ All → ARCHIVE INCOMPLETE REQUEST AGE
    • This number should be larger than PULL_GRAN_DPL_TIME
    • Suggested value is 60

If you are unable to find these values in the OmGUI you can use the following SQL to update them:


Updating OMS Cleanup Configuration with SQL
update OmConfig set value=10 where name = 'PULL_GRAN_DPL_TIME';
update OmConfig set value=30 where name = 'ARCHIVE_REQUEST_AGE';
update OmConfig set value=60 where name = 'ARCHIVE_INCOMPLETE_REQUEST_AGE';
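
To confirm the values took effect, the same OmConfig table can be queried (a quick check):

OmConfig Verification Query
select name, value from OmConfig where name in ('PULL_GRAN_DPL_TIME', 'ARCHIVE_REQUEST_AGE', 'ARCHIVE_INCOMPLETE_REQUEST_AGE');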

What scripts to run on LANCE to ensure data is cleaned up:

The primary script that runs to clean up data is EcDsBulkDelete.pl.

As an operator, you also have to archive old OMS requests by logging in to the l6dbl50 machine and then running:

  • /custom/ecs/OPS/CUSTOM/dbms/postgres/OMS/scripts/EcOmCleanupOldData.pl OPS

  • You can also run this with the <FORCE> option.  This will archive requests older than 'Archive Incomplete Request Age' even if they are not in a terminal state (a cron sketch for the daily run follows)
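
If the DAAC prefers to automate this rather than run it by hand, a crontab entry along these lines could be used (the schedule is an assumption, not part of the documented baseline):

EcOmCleanupOldData Cron Sketch
# Sketch: archive old OMS requests daily (time of day assumed)
30 4 * * * /custom/ecs/OPS/CUSTOM/dbms/postgres/OMS/scripts/EcOmCleanupOldData.pl OPS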

Discussion on what numbers to choose for X (datapool retention) and Y (order retention)


  • Why would I order and wait 10 days to get real time data?

The amount of disk space required for LANCE is dependent on these two numbers.  

14 days of retention is the maximum LANCE retention requirement.  

Reducing the 14 days may be possible because the standard granules become available in less than that time (depending on whether processing is Level 1 or Level 2, which takes longer)

Is level 0 data distribution in LANCE something we need to incorporate into disk space requirements?

LANCE itself needs session-based Level 0 data, so that is already sized in

Is there another set of Level 0 data?

We need to know whether data was added beyond what LANCE was sized for


What is the threshold of free space that you want?

20%, 50%?

What percentage of the granules are ordered? 

Should we do our calculation based on 100% orders at the max datapool retention date?
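
A rough sizing rule implied by these questions (an estimate, not a stated requirement): required space ≈ D × (X + f × Y), where D is the average daily ingest volume, X is the DataPool retention, Y is the order retention, and f is the fraction of granules that get ordered (f = 1 is the worst case, as the comment at the bottom of this page also assumes). D can be estimated from the inventory (a sketch; assumes registrationtime approximates ingest time):

Daily Ingest Volume Query
select date_trunc('day', registrationtime) as ingest_day, trunc(sum(sizembecsdatagranule)) as mb_ingested
from AmGranule
where registrationtime > NOW() - interval '30 days'
group by 1
order by 1;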


  • If you want to include non-NRT collections there may be a need for code changes to SDPS
    • Could change the descriptor 

Useful Queries

Database configuration for AmCollection at LaRC to verify collectiondatatype and the granule retention days (X):

AmCollection Query
select shortname, collectiondatatype, granretentiondays, publishbydefaultflag from AmCollection
 
     shortname      | collectiondatatype | granretentiondays | publishbydefaultflag 
---------------------+--------------------+-------------------+----------------------
 MI1B2_TERRAIN_NRT   | NEAR_REAL_TIME     |                14 | Y
 MI2TC_CMV_BFR_NRT   | NEAR_REAL_TIME     |                14 | Y
 MISL0AFS            | NEAR_REAL_TIME     |                10 | N
 MISL0AAS            | NEAR_REAL_TIME     |                10 | N
 MISL0ANS            | NEAR_REAL_TIME     |                10 | N
 MISL0BAS            | NEAR_REAL_TIME     |                10 | N
 MISL0CAS            | NEAR_REAL_TIME     |                10 | N
 MISL0CFS            | NEAR_REAL_TIME     |                10 | N
 MI1B2_ELLIPSOID_NRT | NEAR_REAL_TIME     |                14 | Y
 MIB2GPR             | NEAR_REAL_TIME     |                10 | N
 MIBCORRR            | NEAR_REAL_TIME     |                10 | N
 AM1ANCS             | NEAR_REAL_TIME     |                10 | N
 MISL0BFS            | NEAR_REAL_TIME     |                10 | N
 MISL0DAS            | NEAR_REAL_TIME     |                10 | N
 AM1ATTNR            | NEAR_REAL_TIME     |                10 | N
 AM1EPHNR            | NEAR_REAL_TIME     |                10 | N
 ActSched            | SCIENCE_QUALITY    |                   | N
 MISL0DFS            | NEAR_REAL_TIME     |                10 | N
 MIB2TRPT            | SCIENCE_QUALITY    |                   | N
 MITRPTPR            | NEAR_REAL_TIME     |                10 | N
 MI2CMVDR            | NEAR_REAL_TIME     |                14 | N
 MI2TC_CMV_HDF_NRT   | NEAR_REAL_TIME     |                14 | Y
 MIBTRPER            | NEAR_REAL_TIME     |                10 | N
 MIBTRPTR            | NEAR_REAL_TIME     |                10 | N
 MITRPEPR            | NEAR_REAL_TIME     |                10 | N
 Browse              | SCIENCE_QUALITY    |                   | N
  

Find the amount of time that granules will be kept around after being put on order (Y). This is configured in the OMS GUI.

Pull Gran Dpl Time Query
select name, value from OmConfig where name = 'PULL_GRAN_DPL_TIME';
        name        | value 
--------------------+-------
 PULL_GRAN_DPL_TIME | 10


Find the oldest granules that are still in the system:

Oldest granule query
select granuleid, shortname, registrationtime, isorderonly from amgranule order by registrationtime
 

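Granules older than the combined retention (X + Y; 24 days with the example values of 14 and 10 used on this page) warrant investigation, so a filtered variant of the query above may be more practical (a sketch):

Overdue Granule Query
select granuleid, shortname, registrationtime, isorderonly from amgranule
where registrationtime < NOW() - interval '24 days'
order by registrationtime;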

To find granules that are currently in an invalid state because the system thinks they are on order but they have no actions in OmActionQueue:

Granules that need a new orderonly flag
select granuleid, isorderonly, sizembecsdatagranule, shortname, registrationtime from amgranule g
where isorderonly = 'Y'
and not exists (select 1 from omactionqueue where requestid in (select requestid from OmRequestGranule rg where rg.ecs_granuleid = g.granuleid));




The results from the oldest granule query should all be more recent than the datapool retention time (X) plus the pull retention time (Y) added together.  If you find older records, you may have to figure out why they were not cleaned up, using the ecs_granuleid in OmRequestGranule:

OmRequestGranule Query
select * from OmRequestGranule where ecs_granuleid = 352928;

-[ RECORD 1 ]---------+---------------------------
requestid             | 0000062413
granid                | 62502
requestgrstatus       | 40
explanationcode       | 161
lastupdate            | 2016-10-29 15:07:12.975414
previousstatus        | 
billinginfo           | 
ecs_granuleid         | 352928
granulesize           | 0.206298828125
esdttype              | AM1ANCS.001
grantype              | SC
insertdatetime        | 2016-10-29 14:39:47.801997
archiveid             | 
gransource            | D
priorgrstatus         | 
childgranid           | 
externalcatalogitemid | 
userstring            | 
intendedusage         | 
iscleanedup           | 


With the RequestId you can determine whether there are any outstanding actions, including cleanup (ActionType = 'PL') actions.  
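
If you only want the pending cleanup actions, you can filter on the action type (a sketch; the actiontype column appears in the OmActionQueue output later in this page):

Cleanup Action Query
select * from OmActionQueue where requestid = '0000062413' and actiontype = 'PL';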


Finding all the OMS actions for a request
select * from OmActionQueue where requestid = '0000062413';

For this particular example there were no actions in the OmActionQueue for this request, so it is a candidate for OMS cleanup.


Another important thing to check is the isorderonly flag:

AmGranule Query
select isorderonly from AmGranule where granuleid = 352928;
isorderonly 
-------------
Y


The only thing to do here is to manually change the isOrderOnly flag: if it is 'Y', change it to 'H'; if it is 'B', change it to null.

AmGranule Update
update AmGranule set isOrderOnly = 'H' where isOrderOnly = 'Y' and granuleid = 352928;
update AmGranule set isOrderOnly = null where isOrderOnly = 'B' and granuleid = 352928;



After doing so, the next time you run EcDsBulkDelete it should be cleaned up. 
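
If you don't want to wait for the nightly cron, the same invocation the cron uses (see the cron listing below) can be run by hand:

Manual Bulk Delete Run
export ECS_HOME=/usr/ecs
/usr/ecs/OPS/CUSTOM/utilities/EcDsBulkDelete.pl -physical -mode OPS -log /usr/ecs/OPS/CUSTOM/logs/EcDsBulkDelete_LANCEDaily.log -expirednrt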





Another issue that could cause a granule to not be cleaned up is a lock in DlOMSGranules:


Check to make sure there aren't "Locks" on the granules
select * from dlomsgranules;
 ecsid  | owner | granuletype | lockid |         lastupdate         
--------+-------+-------------+--------+----------------------------
 815868 | U     | SC          |        | 2017-08-14 03:00:07.015344


This lock should be manually removed because it is so old and no longer relevant.
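
No dedicated utility is referenced here for clearing such a lock; a direct delete is one option (a sketch only; confirm the ecsid and that no active OMS work references it before running):

Remove Stale Lock
delete from dlomsgranules where ecsid = 815868;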



Configure Cron Jobs

Cron on LANCE
#
#  07/17/2012 GAC Remove old *.std.out files daily
#
40 2 * * * /bin/csh -c "find /usr/ecs/OPS/CUSTOM/logs/ -name '*.std.out.2*' -exec rm -f {} \;"
41 2 * * * /bin/csh -c "find /usr/ecs/TS1/CUSTOM/logs/ -name '*.std.out.2*' -exec rm -f {} \;"
#42 2 * * * /bin/csh -c "find /usr/ecs/TS2/CUSTOM/logs/ -name '*.std.out.2*' -exec rm -f {} \;"
 
# move logs to maintain 9G /custom file system
0 1 * * * /bin/find /custom/ecs/OPS/CUSTOM/logs -type f -name "*.20*" -mtime +5 -exec mv {} /workingdata/emd/OPS/logs/l6dpl50/ \;
10 1 * * * /bin/find /custom/ecs/OPS/CUSTOM/logs -type f -name "*.log20*" -mtime +5 -exec mv {} /workingdata/emd/OPS/logs/l6dpl50/ \;
20 1 * * * /bin/find /custom/ecs/OPS/CUSTOM/logs -type f -name "EcDlUnpublish.log.*" -mtime +5 -exec mv {} /workingdata/emd/OPS/logs/l6dpl50/ \;
30 1 * * * /bin/find /custom/ecs/OPS/CUSTOM/logs -type f -name "*LOG.20*" -mtime +5 -exec mv {} /workingdata/emd/OPS/logs/l6dpl50/ \;
40 1 * * * /bin/find /custom/ecs/OPS/CUSTOM/logs -type f -name "*.log.t*" -mtime +5 -exec mv {} /workingdata/emd/OPS/logs/l6dpl50/ \;
 
0 2 * * * /bin/find /custom/ecs/TS1/CUSTOM/logs -type f -name "*.20*" -mtime +5 -exec mv {} /workingdata/emd/TS1/logs/l6dpl50/ \;
10 2 * * * /bin/find /custom/ecs/TS1/CUSTOM/logs -type f -name "*.log20*" -mtime +5 -exec mv {} /workingdata/emd/TS1/logs/l6dpl50/ \;
20 2 * * * /bin/find /custom/ecs/TS1/CUSTOM/logs -type f -name "EcDlUnpublish.log.*" -mtime +5 -exec mv {} /workingdata/emd/TS1/logs/l6dpl50/ \;
30 2 * * * /bin/find /custom/ecs/TS1/CUSTOM/logs -type f -name "*LOG.20*" -mtime +5 -exec mv {} /workingdata/emd/TS1/logs/l6dpl50/ \;
40 2 * * * /bin/find /custom/ecs/TS1/CUSTOM/logs -type f -name "*.log.t*" -mtime +5 -exec mv {} /workingdata/emd/TS1/logs/l6dpl50/ \;
 
0 3 * * * /bin/find /workingdata/emd/OPS/logs/l6dpl50 -type f -mtime +30 -delete
0 4 * * * /bin/find /workingdata/emd/TS1/logs/l6dpl50 -type f -mtime +30 -delete
 
#####   OPS cron jobs
###
### ARCHIVE DELETION - every day
### Bulk Delete     - 2am - identify granules
### Unpublish       - 3am - unpublish identified granules for public data types
### DeletionCleanup - 4am - physical deletion
 
00 2 * * * (export ECS_HOME=/usr/ecs; /usr/ecs/OPS/CUSTOM/utilities/EcDsBulkDelete.pl -physical -mode OPS -log /usr/ecs/OPS/CUSTOM/logs/EcDsBulkDelete_LANCEDaily.log -expirednrt)
 
# unpublish when DFA N goes to H or Y; EcDlUnpublishStart.pl -mode OPS -aim -offset 24
00 3 * * * /bin/csh -c "/tools/bin/perl -I/usr/ecs/OPS/CUSTOM/utilities -I/tools/perl/lib/5.16.2 -I/tools/perl/lib/site_perl/5.16.2 /usr/ecs/OPS/CUSTOM/utilities/EcDlUnpublishStart.pl -mode OPS -aim -offset 24"
 
00 4 * * * (export ECS_HOME=/usr/ecs; /usr/ecs/OPS/CUSTOM/utilities/EcDsDeletionCleanup.pl -mode OPS -log EcDsDeletionCleanup.log -batch 10000 -grbatch 100 -xmlbatch 1000 -databatch 10000 -logbatch 100 < /home/cmshared/EcDsDeletionCleanup.input)
 
## AIM CleanUp
15 3 * * * /bin/csh -c "/tools/bin/perl -I/usr/ecs/OPS/CUSTOM/utilities -I/tools/perl/lib/5.16.2 -I/tools/perl/lib/site_perl/5.16.2 /usr/ecs/OPS/CUSTOM/utilities/EcDsAmCleanEventHistory.pl OPS -days 14"
 
##      EMS CRON JOB
50 01 * * * (export ECS_HOME=/usr/ecs;/usr/ecs/OPS/CUSTOM/utilities/EcDbEMSdataExtractor.pl -mode OPS)
 
##      Remove EDOS PAN files older than 14 days
00 1 * * * find /usr/ecs/OPS/CUSTOM/data/DPL/2/2 -name '*\.PAN' -mtime +14 -exec rm -f {} \;
00 1 * * * find /usr/ecs/OPS/CUSTOM/data/DPL/2/2 -name '*\.EAN' -mtime +14 -exec rm -f {} \;
 
##      Remove DPL temp files older than 14 days
25 0 * * * find /usr/ecs/OPS/CUSTOM/temp/DPL -daystart -mtime +14 -exec rm -f {} \;
 
 
### orphan/phantom/temp/ cleanup of DPL with EcDlCleanupFilesOnDisk.pl
### Mon
00 3 * * 1 /bin/csh -c "/tools/bin/perl -I/usr/ecs/OPS/CUSTOM/utilities -I/tools/perl/lib/5.16.2 -I/tools/perl/lib/site_perl/5.16.2 /usr/ecs/OPS/CUSTOM/utilities/EcDlCleanupFilesOnDisk.pl OPS -fix"
00 22 * * 0 /home/cmshared/cron/dpl_rotate_log.ksh > /dev/null 2>&1
#
 
#####   OPS cron jobs
 
###     TS1 cron jobs
## AIM Cleanup
10 3 * * * /bin/csh -c "/tools/bin/perl -I/usr/ecs/TS1/CUSTOM/utilities -I/tools/perl/lib/5.16.2 -I/tools/perl/lib/site_perl/5.16.2 /usr/ecs/TS1/CUSTOM/utilities/EcDsAmCleanEventHistory.pl TS1 -days 15"
 
## DPL - EcDlCleanupFilesOnDisk - ALL collection groups, first day of each month
00 5 1 * * /bin/csh -c "/tools/bin/perl -I/usr/ecs/TS1/CUSTOM/utilities -I/tools/perl/lib/5.16.2 -I/tools/perl/lib/site_perl/5.16.2 /usr/ecs/TS1/CUSTOM/utilities/EcDlCleanupFilesOnDisk.pl TS1 -outputDir `date +\%b\%d`"
 


Check which subscriptions are in the database:

EcNbSubscription Query
select * from ecnbsubscription;
 subscriptionid | eventtype | esdt_id  | versionid | userid  | nummatchexps |  status  |      startdate      |   expirationdate    | logicalop | bundlingorderid | oldexpirationdate | themename |     updatetime      | numnonspatial 
----------------+-----------+----------+-----------+---------+--------------+----------+---------------------+---------------------+-----------+-----------------+-------------------+-----------+---------------------+---------------
              6 | INSERT    | AM1ANCS  |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:01:00 |             0
              7 | INSERT    | MISL0AAS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:02:00 |             0
              9 | INSERT    | MISL0AFS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:08:00 |             0
             11 | INSERT    | MISL0ANS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:08:00 |             0
             12 | INSERT    | MISL0BAS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:09:00 |             0
             14 | INSERT    | MISL0CAS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:09:00 |             0
             13 | INSERT    | MISL0BFS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:10:00 |             0
             15 | INSERT    | MISL0CFS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:11:00 |             0
             16 | INSERT    | MISL0DAS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:12:00 |             0
             17 | INSERT    | MISL0DFS |         1 | misrnrt |            0 | Inactive | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-01-10 10:12:00 |             0
             32 | INSERT    | AM1ANCS  |         1 | misrnrt |            0 | Active   | 2018-01-31 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:21:00 |             0
             33 | INSERT    | MISL0AAS |         1 | misrnrt |            0 | Active   | 2018-01-31 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-01 13:59:00 |             0
             34 | INSERT    | MISL0AFS |         1 | misrnrt |            0 | Active   | 2018-01-31 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:18:00 |             0
             35 | INSERT    | MISL0ANS |         1 | misrnrt |            0 | Active   | 2018-01-31 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:18:00 |             0
             36 | INSERT    | MISL0BAS |         1 | misrnrt |            0 | Active   | 2018-01-31 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:19:00 |             0
             37 | INSERT    | MISL0BFS |         1 | misrnrt |            0 | Active   | 2018-02-01 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:19:00 |             0
             38 | INSERT    | MISL0CAS |         1 | misrnrt |            0 | Active   | 2018-02-01 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:19:00 |             0
             39 | INSERT    | MISL0CFS |         1 | misrnrt |            0 | Active   | 2018-02-01 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:20:00 |             0
             40 | INSERT    | MISL0DAS |         1 | misrnrt |            0 | Active   | 2018-02-01 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:20:00 |             0
             41 | INSERT    | MISL0DFS |         1 | misrnrt |            0 | Active   | 2018-02-01 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-02 11:20:00 |             0
             18 | INSERT    | ActSched |         1 | misrnrt |            0 | Active   | 2016-01-26 00:00:00 | 2030-12-30 00:00:00 | AND       |                 |                   |           | 2018-02-06 09:39:00 |             0

Potential NCRs:

Granules have isOrderOnly set, but there are no cleanup actions for any OMS request that uses the granule.

  • ECS-19
  • Consider allowing the operator to cancel orders in a terminal state to reset the enqueue time for their cleanup action.

  • Need a way to "clean" the isOrderOnly flag.
    • Found FixOrderOnlyB.ksh 
      • Still uses Sybase; needs to be ported to PostgreSQL
      • It may be somewhere else.  An attempt to grep -i orderonly in the OMS and AIM procs didn't uncover it. 
      • Workaround: reset isOrderOnly for granules that are older than 60 days

        Reset isOrderOnly for granules older than 60 days
        --Check if there are any actions for these old granules
        select * from OmActionQueue where requestid in 
                (select requestid from OmRequestGranule where ecs_granuleid in 
                        (select granuleid from AmGranule where deleteeffectivedate < date_trunc('day', NOW() - interval '60 days')))
        actionqueueid | actiontype | requestid  | notificationid | enqueuetime                | lastupdate                 | completiontime | retrycount | actionstatus | dispatchinfo | pid | actioninfo | granid | pickedtime
        --------------+------------+------------+----------------+----------------------------+----------------------------+----------------+------------+--------------+--------------+-----+------------+--------+----------------------------
        903820        | D          | 0000180770 |                | 2017-12-16 10:54:55.206859 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903821        | D          | 0000180771 |                | 2017-12-16 10:54:55.214133 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903833        | D          | 0000180777 |                | 2017-12-16 10:55:28.276286 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903839        | D          | 0000180780 |                | 2017-12-16 10:55:56.316824 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903827        | D          | 0000180774 |                | 2017-12-16 10:55:16.245511 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903831        | D          | 0000180776 |                | 2017-12-16 10:55:26.26704  | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903823        | D          | 0000180772 |                | 2017-12-16 10:54:56.224362 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903810        | D          | 0000180764 |                | 2017-12-16 10:54:23.145129 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        903811        | D          | 0000180765 |                | 2017-12-16 10:54:23.153295 | 2018-03-07 07:40:31.764917 |                | 0          | 190          |              | 0   |            |        | 2018-03-07 07:40:31.764917
        
        
        This implies that OMS may still be working on these granules. 
        
        
        --Check the request status and last update of requests whose granules were deleted more than 60 days ago:
        
        
        select requestid, requeststatus, lastupdate from OmRequest where requestid in 
                (select requestid from OmRequestGranule where ecs_granuleid in 
                        (select granuleid from AmGranule where deleteeffectivedate < date_trunc('day', NOW() - interval '60 days')))
        ;
         requestid  |     requeststatus     |         lastupdate         
        ------------+-----------------------+----------------------------
         0000180769 | Operator Intervention | 2018-01-10 11:43:14.6856
         0000180770 | Transferring          | 2018-03-07 08:00:33.837401
         0000182580 | Operator Intervention | 2018-01-09 11:49:38.451781
         0000180855 | Operator Intervention | 2018-01-09 13:36:37.815094
         0000180771 | Transferring          | 2018-03-07 07:45:33.821057
         0000180777 | Transferring          | 2018-03-07 08:15:33.845771
         0000180780 | Transferring          | 2018-03-07 08:20:33.845853
         0000180766 | Operator Intervention | 2018-01-10 11:43:14.67882
         0000180774 | Transferring          | 2018-03-07 07:55:33.824171
         0000180778 | Operator Intervention | 2018-01-09 12:06:26.50953
         0000180776 | Transferring          | 2018-03-07 08:25:33.862368
         0000182583 | Operator Intervention | 2018-01-09 11:51:37.438639
         0000180772 | Transferring          | 2018-03-07 08:05:33.838776
         0000180767 | Operator Intervention | 2018-01-10 11:43:14.688263
         0000180764 | Transferring          | 2018-03-07 07:50:33.823596
         0000182582 | Operator Intervention | 2018-01-09 11:51:37.665336
         0000180765 | Transferring          | 2018-03-07 08:10:33.845731
         0000180773 | Operator Intervention | 2018-01-10 11:42:39.087804
         0000180768 | Operator Intervention | 2018-01-10 11:43:14.710901
         0000182584 | Operator Intervention | 2018-01-09 11:51:37.445075
         0000182581 | Operator Intervention | 2018-01-09 11:51:37.453587
        
        
        The operator should resolve the requests in Operator Intervention through the OMS GUI: close the intervention and fail the request, which should cause the request to clean up.  For those stuck in Transferring, check the OMS logs to see why the transfers are not completing.
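
        A quick way to list just the stuck transfers (a sketch using the status values shown above):

        select requestid, requeststatus, lastupdate from OmRequest
        where requeststatus = 'Transferring'
        order by lastupdate;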
        
        Once the stuck requests are resolved, reset the flags:
        
        update AmGranule set isOrderOnly='H' where isOrderOnly = 'Y' and 
        deleteEffectiveDate < date_trunc('day', NOW()-interval '60 days');
        
        update AmGranule set isOrderOnly=null where isOrderOnly = 'B' and 
        deleteEffectiveDate < date_trunc('day', NOW()-interval '60 days');
  • NCR8054364  - Release 9.0 OPS LANCE: Config Parameters missing from OrderManager Gui HTTPS section


Emergency measures to reduce disk utilization

ASDC decided they only need to keep orders around for 10 days rather than 30. 
These steps will reset the cleanup schedule for orders that were previously placed. 

1) Stop the OrderManager Server

2) Run the following SQL


update OmActionQueue set enqueuetime = NOW() where enqueuetime > date_trunc('day', NOW() + interval '12 days') and completiontime is null;


 
3) Restart OMS 

4) Check to see when the actions run with the following SQL:


select count(*) from OmActionQueue where enqueuetime > date_trunc('day', NOW() + interval '12 days') and completiontime is null; 


As soon as the number returned by this query stops going down, run bulk delete. 

This will remove granules that were put on order more than 12 days ago and are older than 14 days.






See what is using space
select trunc(sum(sizembecsdatagranule)) as total_disk_usage from AmGranule;
 total_disk_usage 
------------------
          1712858

select trunc(sum(sizembecsdatagranule)) as size from AmGranule where lastupdate < date_trunc('day', NOW() - interval '25 days');
       size       
------------------
 6266

Determine how much we can save if we delete data older than 25 days:


select (select trunc(sum(sizembecsdatagranule)) as size from AmGranule where archivetime < date_trunc('day', NOW() - interval '25 days'))/(select trunc(sum(sizembecsdatagranule)) as size from AmGranule);
     ?column?      
-------------------
 0.103585850333785





This is about 10% of the disk, so we can make a substantial dent in our disk utilization emergency by making sure that orders are cleaned up after 10 days, even though they were configured to stay around for 30 days when they were placed. 

We will use a value of 26 days, just to be safe.


select (select trunc(sum(sizembecsdatagranule)) as size from AmGranule where archivetime < date_trunc('day', NOW() - interval '26 days'))/(select trunc(sum(sizembecsdatagranule)) as size from AmGranule);
      ?column?      
--------------------
 0.0824617198076654

This should reclaim 8.2% of the disk.


The strategy here is to move up the time at which the system will clean up OMS requests.  This is done by setting the enqueue time of the pending cleanup actions in OmActionQueue to the current time, so that they fire as soon as OMS is brought back up.


update OmActionQueue set enqueuetime = NOW() where enqueuetime > date_trunc('day', NOW() + interval '12 days') and completiontime is null;

If the disk utilization for orders older than 25 days had been insignificant, the next step would be to look at what is using up the space:


select shortname, sum(sizembecsdatagranule) from AmGranule group by shortname order by sum desc;
      shortname      |       sum        
---------------------+------------------
 MI1B2_ELLIPSOID_NRT | 376486.256391525
 MISL0ANS            | 333862.988452911
 MI1B2_TERRAIN_NRT   | 191277.642167091
 MISL0BAS            | 100896.826342583
 MISL0AAS            | 100508.515230179
 MISL0CFS            | 100380.402791023
 MISL0DAS            | 100365.632698059
 MISL0AFS            | 100342.755948067
 MISL0DFS            | 100316.540130615
 MISL0CAS            | 100221.300213814
 MISL0BFS            | 100110.135480881
 MIB2GPR             | 1761.14462184906
 MIBTRPER            | 795.395680427551
 MITRPEPR            | 778.432396888733
 MIBTRPTR            | 745.603711128235
 MITRPTPR            | 727.343948364258
 AM1ANCS             | 293.661956787109
 MI2CMVDR            | 278.613508224487
 AM1EPHNR            | 217.884552001953
 AM1ATTNR            | 99.2681884765625
 MI2TC_CMV_HDF_NRT   | 97.5362920761108
 ActSched            | 41.3908891677856
 MIBCORRR            | 31.4012632369995
 MI2TC_CMV_BFR_NRT   | 15.5505867004395


select date_trunc('day', lastupdate), sum(sizembecsdatagranule) as size from AmGranule  group by date_trunc having sum(sizembecsdatagranule) > 1  order by date_trunc
;
     date_trunc      |       size       
---------------------+------------------
 2016-11-09 00:00:00 | 1068.53969573975
 2017-11-09 00:00:00 | 821.630773544312
 2017-11-10 00:00:00 | 673.018880844116
 2017-12-27 00:00:00 | 55.9319076538086
 2018-01-03 00:00:00 | 980.926141738892
 2018-01-10 00:00:00 | 456.427528381348
 2018-02-02 00:00:00 | 24.5822019577026
 2018-02-03 00:00:00 | 1079.34123325348
 2018-02-06 00:00:00 | 1086.81524276733
 2018-02-12 00:00:00 | 17.3320064544678
 2018-02-13 00:00:00 | 377.825834274292
 2018-02-14 00:00:00 | 2233.12373733521
 2018-02-18 00:00:00 | 22994.9519605637
 2018-02-19 00:00:00 | 36613.0728378296
 2018-02-20 00:00:00 | 37700.4035015106
 2018-02-21 00:00:00 | 37026.0485563278
 2018-02-22 00:00:00 | 38106.3540945053
 2018-02-23 00:00:00 | 72196.0024833679
 2018-02-24 00:00:00 | 74587.4116649628
 2018-02-25 00:00:00 | 80161.3121757507
 2018-02-26 00:00:00 | 78451.8429994583
 2018-02-27 00:00:00 |  111220.64949131
 2018-02-28 00:00:00 |  102766.72535038
 2018-03-01 00:00:00 | 103777.788504601
 2018-03-02 00:00:00 | 142113.871623993
 2018-03-03 00:00:00 | 53887.7836256027
 2018-03-04 00:00:00 | 73223.4372873306
 2018-03-05 00:00:00 | 211389.224298477
 2018-03-06 00:00:00 | 117399.831981659
 2018-03-07 00:00:00 | 135186.494995117
 2018-03-08 00:00:00 | 112392.542368889
 2018-03-09 00:00:00 | 60688.8373594284

select date_trunc('day', lastupdate), sum(sizembecsdatagranule) as size from AmGranule where lastupdate < date_trunc('day', NOW() - interval '25 days') group by date_trunc having sum(sizembecsdatagranule) > 1  order by date_trunc
;
     date_trunc      |       size       
---------------------+------------------
 2016-11-09 00:00:00 | 1068.53969573975
 2017-11-09 00:00:00 | 821.630773544312
 2017-11-10 00:00:00 | 673.018880844116
 2017-12-27 00:00:00 | 55.9319076538086
 2018-01-03 00:00:00 | 980.926141738892
 2018-01-10 00:00:00 | 456.427528381348
 2018-02-02 00:00:00 | 24.5822019577026
 2018-02-03 00:00:00 | 1079.34123325348
 2018-02-06 00:00:00 | 1086.81524276733




1 Comment

  1. user-4812b

    If the disk usage goes above 87% we need to do some more triage.  This is the maximum disk usage if all granules are ordered on day 14.