
Batch Scripting

The OData and OpenSearch URIs described above can be combined into complex queries and executed from non-interactive scripts using programs like cURL and Wget.

Query via cURL

Using cURL, it is possible to create a script that logs in to the Data Hub and runs a query via the following command line:

curl -u {USERNAME}:{PASSWORD} <URI_QUERY>

where:

  • -u : specifies the user and password to use when fetching
  • <URI_QUERY> is a valid OData URI or OpenSearch URI.
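
As a sketch, assuming the same Data Hub endpoint used in the Wget examples below, the following command runs an OpenSearch query with cURL and saves the first 25 results to a file; the URL is quoted so the shell does not interpret the '&' characters:

# Query the first 25 products of the archive and save the response to a file.
# Replace {USERNAME} and {PASSWORD} with your Data Hub credentials;
# add -k if the server certificate is not trusted locally.
curl -u {USERNAME}:{PASSWORD} -o query_results.xml \
  "https://data.sentinel.zamg.ac.at/search?q=*&rows=25"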

 

Query via Wget

It is possible to use the wget command to create batch scripts:

wget --no-check-certificate --user={USERNAME} --password={PASSWORD} --output-document={FILE} <URI_QUERY>

where {USERNAME} is a valid account username, {PASSWORD} is the corresponding authentication password and {FILE} is the name of the file to which the output of the query is written. If '-' is used as {FILE}, the output is printed to standard output.
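
For instance, the following sketch uses '-' as {FILE} to print the query result to standard output and pipe it to another tool; the grep pattern is purely illustrative and simply picks out the <title> elements of the response:

# Print the query response to standard output and list the <title> elements.
wget --no-check-certificate --user={USERNAME} --password={PASSWORD} \
     --output-document=- \
     "https://data.sentinel.zamg.ac.at/search?q=*&rows=25" | grep "<title>"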

The following example shows how to make an OpenSearch query using Wget. The query searches for all the products in the Data Hub archive. The first 25 results are printed in a file named query_results.txt:

  • wget --no-check-certificate --user={USERNAME} --password={PASSWORD} --output-document=query_results.txt "https://data.sentinel.zamg.ac.at/search?q=*&rows=25"

The following example shows how to make an OpenSearch query using Wget to search for products filtered by product type and ingestion date:

  • wget --no-check-certificate --user={USERNAME} --password={PASSWORD} --output-document=query_results.txt "https://data.sentinel.zamg.ac.at/search?q=ingestiondate:[NOW-1DAY TO NOW] AND producttype:SLC&rows=1000&start=0&format=json"
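
A single request returns at most one page of results, so larger result sets can be collected by incrementing the start parameter. The following sketch, which reuses the query parameters shown above, retrieves the first four pages of 100 results each into separate files:

# Fetch pages 0..3 of 100 results each (start = 0, 100, 200, 300).
for PAGE in 0 1 2 3; do
  START=$((PAGE * 100))
  wget --no-check-certificate --user={USERNAME} --password={PASSWORD} \
       --output-document=query_page_${PAGE}.xml \
       "https://data.sentinel.zamg.ac.at/search?q=producttype:SLC&rows=100&start=${START}"
done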

 

Download via Wget

It is also possible to download products from the Data Hub archive using wget.

The following example shows how to download a single product, identified by its Data Hub universally unique identifier {UUID}, using an OData URI:

  • wget --no-check-certificate --continue --user={USERNAME} --password={PASSWORD} "https://data.sentinel.zamg.ac.at/odata/v1/Products('{UUID}')/\$value"

The option --continue is very useful when downloads do not complete due to network problems. Wget will automatically try to continue the download from where it left off, and repeat this until the whole file has been retrieved.
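
If the connection is so unstable that Wget's own retries give up, the command can be wrapped in a simple shell loop, as in the sketch below. Here {FILE} is the local output filename, and the $ in $value is escaped so the shell does not expand it inside the double quotes:

# Resume the download until wget reports success (exit code 0).
until wget --no-check-certificate --continue \
           --user={USERNAME} --password={PASSWORD} \
           --output-document={FILE} \
           "https://data.sentinel.zamg.ac.at/odata/v1/Products('{UUID}')/\$value"; do
    echo "Download interrupted, retrying in 30 seconds..." >&2
    sleep 30
done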

The following example shows how to download only the manifest file of a Sentinel-1 product, identified by its universally unique identifier {UUID}, using an OData URI with Wget:

  • wget --no-check-certificate --user={USERNAME} --password={PASSWORD} "https://data.sentinel.zamg.ac.at/odata/v1/Products('{UUID}')/Nodes('{PRODUCT_FILENAME}')/Nodes('manifest.safe')/\$value"

where {UUID} is the universally unique identifier of the product and {PRODUCT_FILENAME} is the filename of the product.
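
The two values do not have to be looked up by hand: an OpenSearch query can supply them. The following sketch assumes that each <entry> of the query response carries the product UUID in its <id> element and the product name (without the .SAFE extension) in its <title> element, as in standard DHuS feeds, and that xmlstarlet is installed (it is also needed for the odata-demo script below); check a saved query result before relying on it:

# Query one Sentinel-1 SLC product and save the response.
wget --no-check-certificate --user={USERNAME} --password={PASSWORD} \
     --output-document=result.xml \
     "https://data.sentinel.zamg.ac.at/search?q=producttype:SLC&rows=1&start=0"

# Extract the UUID and product name of the first entry (Atom namespace assumed).
NS="http://www.w3.org/2005/Atom"
UUID=$(xmlstarlet sel -N a="$NS" -t -v "//a:entry[1]/a:id" result.xml)
NAME=$(xmlstarlet sel -N a="$NS" -t -v "//a:entry[1]/a:title" result.xml)

# Download the manifest; for SAFE products the node name is assumed to be
# the product name with a .SAFE extension appended.
wget --no-check-certificate --user={USERNAME} --password={PASSWORD} \
     --output-document=manifest.safe \
     "https://data.sentinel.zamg.ac.at/odata/v1/Products('${UUID}')/Nodes('${NAME}.SAFE')/Nodes('manifest.safe')/\$value"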

 

Script Examples

dhusget script

dhusget.sh is a simple demo script illustrating how to use the OData and OpenSearch APIs to query and download products from any Data Hub Service. It allows you to:

  1. Search products over a pre-defined AOI
  2. Filter the products by ingestion time and sensing time
  3. Filter the products by mission (Sentinel-1, Sentinel-2, Sentinel-3), instrument and product type
  4. Save the list of results in CSV and XML files
  5. Download the products
  6. Download the manifest files only
  7. Perform the MD5 integrity check of the downloaded products

 

You can download the script here

It requires the installation of wget.


USAGE:
# dhusget.sh [LOGIN OPTIONS]... [SEARCH QUERY OPTIONS]... [SEARCH RESULT OPTIONS]... [DOWNLOAD OPTIONS]...

LOGIN OPTIONS:

  • -d <DHuS URL> : URL of the Data Hub Service to be polled;
  • -u <username> : data hub username;
  • -p <password> : data hub password provided after registration;


SEARCH QUERY OPTIONS:

  • -m <mission name> : Sentinel mission name
  • -i <instrument name> : instrument name
  • -t <time in hours> : search for products ingested in the last <time in hours> (integer), counted from the time the script is executed (e.g. '-t 24' to search for products ingested in the last 24 hours)
  • -f <file> : search for products ingested after the date and time provided through the input <file>. The file is updated at the end of the script execution with the ingestion date of the last successfully downloaded product.
  • -c <coordinates i.e.: lon1,lat1:lon2,lat2> : coordinates of two opposite vertices of the rectangular area of interest
  • -T <product type> : product type of the products to search for (available values are: SLC, GRD, OCN, RAW, S2MSI1C);


SEARCH RESULT OPTIONS:

  • -l <results> : maximum number of results per page [1,2,3,4,..]; default value = 25
  • -P <page> : page number [1,2,3,4,..]; default value = 1
  • -q <XMLfile> : write the OpenSearch query results in a specified XML file. Default file is './OSquery-result.xml'
  • -C <CSVfile> : write the list of product results in a specified CSV file. Default file is './products-list.csv'

DOWNLOAD OPTIONS:

  • -o <option> : what to download; possible options are:
    • 'manifest' to download the manifests of all products returned from the search
    • 'product' to download all products returned from the search
    • 'all' to download both
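
Putting the options together, a typical invocation could look like the sketch below; the mission, product type, time window and area coordinates are example values to adapt to your own search, and the Data Hub URL is the one used in the examples above:

# Search the last 24 hours of Sentinel-1 GRD products over an example AOI,
# save the result list as XML and CSV, and download the matching products.
./dhusget.sh -d https://data.sentinel.zamg.ac.at \
             -u {USERNAME} -p {PASSWORD} \
             -m Sentinel-1 -T GRD -t 24 \
             -c 9.5,46.4:17.2,49.0 \
             -l 25 -P 1 \
             -q ./OSquery-result.xml -C ./products-list.csv \
             -o product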

odata-demo script

odata-demo.sh is a demo script that performs the following actions, selectable when it is run:

  1. List the collections
  2. List <n> products from a specified collection
  3. List first 10 products matching part of product name
  4. List first 10 products matching a specific ingestion date
  5. List first 10 products matching a specific acquisition date
  6. List first 10 products since last <n> days, by product type and intersecting an AOI
  7. Get product id from product name
  8. Get polarisation from a product id
  9. Get relative orbit from a product id
  10. Download Manifest file from a product id
  11. Download quick-look from a product id
  12. Download full product from its id

 

You can download the script here

It requires the installation of xmlstarlet.

Usage:
# odata-demo.sh [OPTIONS]

The options are:

  • -h, --help displays a help message
  • -j, --json use json output format for OData (default is xml)
  • -p, --passwd=PASSWORD use PASSWORD as password for the Data Hub Server
  • -s, --server=SERVER use SERVER as URL of the Data Hub Server
  • -u, --user=NAME use NAME as username for the Data Hub Server
  • -v, --verbose display curl command lines and results
  • -V, --version display the current version of the script
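
For example, the following sketch runs the script against the Data Hub used throughout this page, requesting JSON output and verbose logging of the underlying cURL command lines:

# Run the demo script with JSON output and verbose cURL logging.
./odata-demo.sh --server=https://data.sentinel.zamg.ac.at \
                --user={USERNAME} --passwd={PASSWORD} \
                --json --verbose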