Antelope Tutorial

Written by Jake Walter and Zhigang Peng


This website contains a brief tutorial on archiving data collected in the field from an RT-130: converting the raw data to miniseed, forming SEED volumes for archival at IRIS, and then setting up a preliminary database for identifying and locating local earthquakes. This is a very good resource to consult in setting up the database.

1.1 The Antelope software

  • First obtain the software through a license request. Outside users also need to install the PASSCAL utilities. Consult that documentation for setting up your environment, etc.
  • In the Georgia Tech Geophysics lab, add the following lines to your ~/.bashrc file to get Antelope and necessary PASSCAL utilities:

     source /opt/antelope/4.11/setup.sh   # or: source /usr/local/geophysics/antelope/4.11/setup.sh
     export PASSCAL=/usr/local/geophysics/passcal
     export PASSOFT=${PASSCAL}
     export PATH=${PATH}:${PASSCAL}/bin:${PASSCAL}/other/bin
    

1.2 Good data organization practices

  • If you are collecting data in the field, it is always good practice to make multiple backup copies of the raw data, straight from the datalogger. For example, I will sometimes keep copies on multiple external hard drives and also keep the data on the original Compact Flash cards, if possible.
  • With that in mind, we typically create a new directory where we store all the Antelope database material. Create a new directory and, within that directory, create the following subdirectories. Note that we will always run commands from this point in the file structure.

     mkdir raw_data mseed logs day_volumes
    
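    One simple way to make the duplicate copies recommended above is with cp -a from the working directory. This is only a sketch: backup1 and backup2 are hypothetical names standing in for mounted external drives.

    ```shell
    # Duplicate the raw datalogger files to two backup locations
    # (backup1/ and backup2/ are example names for mounted external drives).
    mkdir -p raw_data backup1 backup2
    cp -a raw_data backup1/   # -a preserves timestamps and permissions
    cp -a raw_data backup2/
    ```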

1.3 Network batch file and parameter file

  • Next, we create a batch file, which describes the metadata of our network, including station names, associated dataloggers, station location, and sensor information. THIS IS ONE OF THE MOST IMPORTANT STEPS! A single entry for a station looks like this:

    net CR Costa Rica
    
    sta ACHA 9.8280 -85.2476 0.1350 Alvaro Chavez, CR
    time 1/1/2009 00:00:00
    datalogger rt130 A024
    sensor cmg3esp_100sec 0 t3757
    axis Z 0 0 - 1 1
    axis N 0 90 - 2 1
    axis E 90 90 - 3 1
    samplerate 50sps
    channel Z BHZ
    channel N BHN
    channel E BHE
    add
    
    close ACHA 12/31/2014
    

  • For this tutorial, download this file and then run the following command to generate a parameter file from the information contained in the batch file.

     batch2par batchexample > example.par
    

    • In the resulting example.par, you need to replace the column that has text like rs50spsprs; with 1;.

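    That replacement can also be scripted with sed. A minimal sketch on a stand-in file (demo.par is hypothetical, since the exact contents of example.par vary); in practice, run the same sed line on your example.par:

    ```shell
    # Stand-in line with the kind of token batch2par emits (demo.par is a
    # hypothetical file; in practice run the sed command on example.par).
    printf 'ACHA  1  rs50spsprs;  0x9c\n' > demo.par
    # Replace any rs<rate>spsprs; token with 1; (-i.bak keeps a backup copy).
    sed -i.bak 's/rs[0-9a-z]*spsprs;/1;/g' demo.par
    cat demo.par
    ```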
  • Next, download this file and place it in the raw_data directory. These are files that we extracted from the Reftek dataloggers and tarred together using the PASSCAL utility Neo. First, untar the files in that directory; then, moving back up to the top level, make a list of the raw files.

     ls */*tar > listfiles
    
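    The untarring can be done in one loop. A sketch that assumes, following the ls */*tar pattern above, that the tarballs sit directly inside raw_data; adjust the glob to match your layout:

    ```shell
    # Extract each Reftek tarball alongside the archive, then list the raw files.
    for t in raw_data/*.tar; do
      [ -e "$t" ] || continue          # skip cleanly if no tarballs are present
      tar -xf "$t" -C raw_data         # extract into raw_data/
    done
    ls */*tar > listfiles 2>/dev/null || true   # rebuild the list of raw files
    ```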

1.4 Convert Reftek files to miniseed

     rt2ms -F listfiles -p example.par -R -L -o mseed
    

    • Note that you now have miniseed files in the mseed directory. You can view these with PQL.


  • To convert the Reftek log files as well, copy the log2miniseed parameter file into the working directory and set PFPATH so Antelope finds it:

     cp $ANTELOPE/data/pf/log2miniseed.pf .
     setenv PFPATH $ANTELOPE/data/pf:./   # for tcsh users
     export PFPATH=$ANTELOPE/data/pf:./   # for bash users
    

    • Before moving on, open the log2miniseed.pf file and edit it so that the line reads
      wfname day_volumes/%{sta}/%{sta}.%{net}.%{loc}.%{chan}.%Y.%j

  • Copy all the log files from mseed to logs and for each log file, run this command (example is for datalogger A03D):

     log2miniseed -n YZ -s A03D logs/2012.249.00.00:00.A03D.log
    
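    With many dataloggers this can be looped. A dry-run sketch that derives the serial number from each log file name (the 2012.249.00.00:00.A03D.log naming layout above is assumed) and prints the command it would run; remove the echo to actually execute it:

    ```shell
    # Print one log2miniseed command per log file (dry run; drop echo to execute).
    for logfile in logs/*.log; do
      [ -e "$logfile" ] || continue        # nothing to do if logs/ is empty
      base=$(basename "$logfile" .log)     # e.g. 2012.249.00.00:00.A03D
      serial=${base##*.}                   # keep text after the last dot -> A03D
      echo log2miniseed -n YZ -s "$serial" "$logfile"
    done
    ```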

1.5 Build the database

  • Now we are ready to build our database, which is the backbone for associating all the continuous data files, storing arrival times of earthquakes, etc. The following command does this:

     dbbuild -b exampledb batchexample
    


  • Now let's do some exploring by typing the command below and clicking on the various buttons that appear. Explore!
     dbe exampledb
    

    • Try going to Site > Graphics > Map and look at the network
  • Now let's create day-long miniseed files from the shorter-length files
     miniseed2days -v -U -w "day_volumes/%{sta}/%{sta}.%{net}.%{loc}.%{chan}.%Y.%j" mseed/
     miniseed2db -v day_volumes/* exampledb
     

  • At this point, you should be able to look at waveforms, after typing dbe exampledb, go to Wfdisc > Graphics > Waveforms.

1.6 Creating a SEED volume for archival at the IRIS DMC

  • Next is a series of commands to check everything and create a SEED volume for archival at IRIS. Enter them line by line and check that each one worked. Consult the documentation provided by IRIS.

     dbfix_calib exampledb
     dbversdwf -tu exampledb
     dbverify -tj exampledb >& dbverify.out
     mk_dataless_seed -v -o CR.12.YYYYDDDHHMM.dataless exampledb           
     seed2db -v CR.12.YYYYDDDHHMM.dataless          
    
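    The YYYYDDDHHMM placeholder in the dataless file name is a timestamp; assuming the usual year / day-of-year / hour / minute convention, it can be generated with date:

    ```shell
    # Build the dataless SEED file name from the current UTC time:
    # %Y = 4-digit year, %j = day of year, %H%M = hour and minute.
    ts=$(date -u +%Y%j%H%M)
    echo "CR.12.${ts}.dataless"
    ```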

1.7 Building a database from data downloaded from the IRIS DMC

  • You can accomplish the same thing as the above steps for data from IRIS in two simple steps, after submitting an IRIS BREQ_FAST request for both a dataless SEED volume (metadata) and miniseed data (the miniseed files do not contain station metadata, which is the reason for the dataless SEED volume).
     miniseed2days -v -U -w "day_volumes/%{sta}/%{sta}.%{net}.%{loc}.%{chan}.%Y.%j" raw_data_ftp/*
     seed2db exampledb.dataless exampledb
    

2.1 Automatic detection and location of local events

  • We will now use the database we set up to detect and locate local events. Within the same working directory, enter the following commands:

     mkdir pf
     cp $ANTELOPE/data/pf/dbdetect.pf pf
     cp $ANTELOPE/data/pf/dbgrassoc.pf pf
     cp $ANTELOPE/data/pf/ttgrid.pf pf
     cp $ANTELOPE/data/pf/dbevproc.pf pf
    

  • Next, we need to edit the pf/ttgrid.pf parameter file to create a local grid for the gridsearch algorithm centered on our network. Remove the sections dealing with teleseismic and regional events, as we are only interested in local events. After you have edited it to your satisfaction, enter:

     ttgrid -pf pf/ttgrid.pf -time all exampledb > pf/ttgrid
    

    • Have a look at the grid you just created. Is it of sufficient resolution? Enter this to view a map:

     displayttgrid pf/ttgrid local
    

  • Next, edit the file pf/dbdetect.pf, which is the automatic detection parameter file.

     dbdetect -pf pf/dbdetect.pf exampledb exampledb
    

  • Now we will take the detections on the different components and associate them, distinguishing P from S and grouping them into individual events. First, edit the pf/dbgrassoc.pf file and, at a minimum, change the minimum number of stations to 4 (or a value appropriate for your network). When you are ready, run this command:

     dbgrassoc -pf pf/dbgrassoc.pf exampledb exampledb pf/ttgrid
    

  • Finally, we can compute the magnitude of events with this:

     dbevproc -p pf/dbevproc.pf exampledb exampledb
    

2.2 User revision of arrival times and relocating

  • We can edit the automatic picks, as they can sometimes be imprecise.

     dbloc2 exampledb
    
  • This will open various windows, including a new command window and a control panel that will show a preliminary location, including residuals of the actual picks versus the travel times predicted by the model used. Click on the "View Waveforms" button to examine picks for the first event. (If you have issues opening dbloc2, try deleting any temp files and/or the .last_id file, etc.)
  • The first step is to select the "Vertical channels" and then, in the new command window, type:
     ph P
    
  • This will allow us to pick P arrivals on the vertical channels.