The individual steps of the pipeline can be divided into two main parts:

  - creation of calibration files (called by cirsi_flat.csh, cirsi_dark.csh
    and cirsi_nlin.csh)
  - data reduction (called by cirsi.csh)

--------------------------------------------------------------
---- CREATION OF CALIBRATION FILES
--------------------------------------------------------------

1) nlinmap.pl -- compute non-linearity correction coefficients

   usage:   nlinmap.pl saturation exptime_ref [run_begin run_end]
   example: nlinmap.pl 40000 4 4789 4834
   input:   loop-combined domeflats, eg, irx_04789_c1.fits (with the
            exptime_ref exposures assumed to be interleaved between the
            exposures at the other calibration exposure times)
   output:  non-linearity correction coefficients file, eg, nlincoeff.dat
            (one line per chip, containing the a1 and a2 coefficients)
            data points used for the regression (including points beyond the
            saturation limit), eg, feOfm.c1.dat (format: (fm, feOfm) with fm
            the measured flux and feOfm the expected flux over the measured
            flux)
   method:  computes for each chip the coefficients a1 and a2 such that
            Fcorr = Fm*(1 + a1*Fm + a2*Fm^2), with Fm the measured flux and
            Fcorr the flux corrected for non-linearity. For this, it computes
            the mean flux of each image (fm), then for an exposure time
            different from exptime_ref, computes the expected flux (fe) by
            exposure time scaling of the reference exptime_ref flux. Linear
            regression is then used to compute a1 and a2 from the data points
            (fm, feOfm), using only measured fluxes below the saturation
            value.

2) dark.pl -- create dark map

   usage:   dark.pl runbeg runend
   example: dark.pl 5001 5003
   input:   raw data, eg, irx_05001_c1_001.fits
   output:  dark map per chip, eg, dark.c1.fits
   method:  median combine the dark images

3) domeflat.pl -- create gain map from domeflat observations

   usage:   domeflat.pl runbeg_on runend_on runbeg_off runend_off [nsig]
   example: domeflat.pl 5875 5878 5879 5882
   input:   dome flat observations with lamp on + off, eg,
            irx_05875_c1_002.fits
   output:  flatfield per chip, eg, flat.c1.fits
            gain / bad pixel map per chip, eg, gain.c1.fits
   method:  median combine the dome lamp on loops, and those with lamp off,
            and take the difference to form the flatfield per chip. Loop 1 is
            not used because of the reset anomaly. Normalize the flatfield to
            create gain maps, with bad pixels in the gain map set to 0.0
            based on their deviation from the local background in the gain
            map (the parameter nsig sets the deviation required for a pixel
            to be considered bad).
   notes:   use skyflat.pl if you don't have good dome flats:
            skyflat.pl 5792 5827

4) normalize_gain.pl -- normalize the gain maps wrt chip 1

   usage:   normalize_gain.pl
   example: normalize_gain.pl
   input:   flatfields and gain maps, flat.c[1-4].fits, gain.c[1-4].fits
   output:  normalized gain maps, gain.c[1-4].fits (overwritten)
   method:  normalize_gain.pl scales the gain maps wrt chip 1 using the
            signal levels (modes) of the flatfield per chip.

--------------------------------------------------------------
---- DATA REDUCTION
--------------------------------------------------------------

1) dithers.pl -- identify dither sets in the data set

   usage:   dithers.pl run_begin run_end
   example: dithers.pl 5792 5827
   input:   reads RA/DEC keywords from FITS headers in raw data (loop 1 only)
   output:  writes ASCII list of dither sets to file "dithers"
   method:  a dither set is defined as consecutive frames with position
            offsets smaller than 1/4 of the chip size.
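   As an aside, a minimal C sketch of the grouping rule used by dithers.pl is
   given below. The function name, pixel scale and chip size are illustrative
   assumptions only (the script itself reads these quantities from the FITS
   headers):

      #include <math.h>

      /* Return 1 if the frame at (ra, dec) starts a new dither set relative
       * to the first frame of the current set at (ra0, dec0).  RA/DEC are in
       * degrees; the offset is compared against 1/4 of the chip size.  The
       * scale and chip size below are assumed values for illustration. */
      int new_dither_set(double ra0, double dec0, double ra, double dec)
      {
          const double deg2rad = 3.14159265358979323846 / 180.0;
          const double scale   = 0.45;      /* arcsec per pixel (assumed)   */
          const double chipnx  = 1024.0;    /* chip size in pixels (assumed) */
          double dx = (ra  - ra0) * cos(dec0 * deg2rad) * 3600.0 / scale;
          double dy = (dec - dec0) * 3600.0 / scale;
          return (fabs(dx) > chipnx / 4.0 || fabs(dy) > chipnx / 4.0);
      }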
2) lindarkflatten.pl -- calibrate the data with non-linearity, dark, flatfield

   usage:   lindarkflatten.pl run_begin run_end
   example: lindarkflatten.pl 5792 5827
   input:   raw data, eg, irx_05792_c1_001.fits
   output:  calibrated data, eg, irx_05792_c1_001.fits
   method:  lindarkflatten.pl calls lindarkflat.c to correct for
            non-linearity using nlincoeff.dat, subtract the dark, and
            multiply the data by the inverse of the gain map. Each loop is
            processed separately. The data are read from IRDR_DATADIR and the
            flattened data are written to the current directory (overwriting
            if these are the same directory). The idea is to have only one
            copy on disk of the data read from tape, in some default location
            like /data/cirsi/19991023, even if multiple people are running
            multiple pipelines on it.
   notes:   if you don't want to apply the non-linearity correction and/or
            dark subtraction, you can use darkflatten.pl, linflatten.pl or
            flatten.pl instead.

3) combineloops.pl -- combine loops (first pass)

   usage:   combineloops.pl firstpass|secondpass [run_begin run_end]
   example: combineloops.pl firstpass 5792 5827
   input:   flattened data, eg, irx_05792_c1_00[1-3].fits
   output:  loop-averaged data, eg, irx_05792_c1.fits
   method:  coadd the loops at each run. The images are zero offset to the
            average of their modes before coaddition.
   notes:   the FITS keyword NCOMBINE is set. If there is only 1 loop at a
            given run and chip (eg, some loops were dropped during data
            acquisition), an output file is still created with NCOMBINE = 1.
            If < 5 loops are available then photon noise clipping is used;
            if >= 5 loops, sigma clipping (using the median absolute
            deviation).

4) sky.pl -- first pass sky subtraction and reset correction

   usage:   sky.pl firstpass|secondpass half-width [run_begin run_end]
   example: sky.pl firstpass 2 5792 5827
   input:   loop-combined flattened data files, eg, irx_05792_c1.fits
   output:  sky subtracted data files, eg, irx_05792_c1.fits.skysub
   method:  sky.pl calls cube_median.c to create a sky frame for each image
            by median combining the nearest 2*half-width frames in the
            observation sequence. sky.pl then calls sky.c to calculate the
            sky level (image mode), subtract the sky frame, add back in the
            sky level (a constant), and use reset.c to do the reset
            correction. It is not necessary to fine tune the half-width
            because the first pass images are only used to locate objects for
            the second pass analysis (and are then discarded).

            Reset ramp correction: subtract the row mode from each row, then
            subtract the column mode from each column, both working quadrant
            by quadrant. For chips 2 and 4, the column corrections are done
            first (the reset ramp is along rows in chips 1,3 and along
            columns in chips 2,4). The background level (image mode before
            reset correction) is added back in. A sketch of this correction
            is given below.
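   To illustrate the reset ramp correction, here is a minimal C sketch that
   works on a single quadrant. The median is used as a simple stand-in for
   the mode estimator, and the function and buffer names are hypothetical;
   reset.c may differ in detail. The background level (image mode measured
   before the correction) would then be added back to the whole quadrant, as
   described above.

      #include <stdlib.h>
      #include <string.h>

      static int cmpfloat(const void *a, const void *b)
      {
          float x = *(const float *)a, y = *(const float *)b;
          return (x > y) - (x < y);
      }

      /* median of n values (used here as a stand-in for the mode) */
      static float median(float *buf, int n)
      {
          qsort(buf, n, sizeof(float), cmpfloat);
          return (n % 2) ? buf[n/2] : 0.5f * (buf[n/2 - 1] + buf[n/2]);
      }

      /* Remove the reset ramp from an nx-by-ny quadrant stored row-major in
       * img: subtract the row level from each row, then the column level
       * from each column (the order is swapped for chips 2 and 4). */
      void reset_correct_quadrant(float *img, int nx, int ny)
      {
          float *buf = malloc((nx > ny ? nx : ny) * sizeof(float));

          for (int j = 0; j < ny; j++) {                /* row correction */
              memcpy(buf, img + j * nx, nx * sizeof(float));
              float level = median(buf, nx);
              for (int i = 0; i < nx; i++)
                  img[j * nx + i] -= level;
          }
          for (int i = 0; i < nx; i++) {                /* column correction */
              for (int j = 0; j < ny; j++)
                  buf[j] = img[j * nx + i];
              float level = median(buf, ny);
              for (int j = 0; j < ny; j++)
                  img[j * nx + i] -= level;
          }
          free(buf);
      }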
5) offsets.pl -- determine the offsets between dither frames in a dither set

   usage:   offsets.pl run_begin run_end [minarea thresh] [position_err]
   example: offsets.pl 5792 5800 15 3.0
            runcmd.pl offsets.pl 15 3.0
   input:   Sky-subtracted data, eg, irx_05792_c1.fits.skysub
   output:  Thresholded data files, eg, irx_05792_c1.fits.skysub.objs
            ASCII files with dither offsets, eg, offsets.r5792to5800.c1
   method:  offsets.pl uses SExtractor (the command sex should be in your
            path) to create images with non-object pixels set to 0, then uses
            offsets.c to cross-correlate the object pixels and output the
            translation offset between each dither frame in the set and the
            first frame in the set (the minarea and thresh arguments to
            offsets.pl are the SExtractor DETECT_MINAREA and DETECT_THRESH
            parameters).

            The RA, DEC, and SCALE keywords in the FITS headers are used to
            obtain a first guess of the offsets. The dither offset guess from
            the header keywords is expected to be accurate to within
            POSITION_ERR (default 65 pixels) set in offsets.c; if that fails,
            offsets.c will assume a dither offset of zero and search in a
            region of MAXDITHER (default 150 pixels) set in correlate.h.
   notes:   If there is a high surface density of objects, use a high
            detection threshold (> 10 sigma) and a large object area (eg, 25
            pixels) to make it faster. This will also avoid the error
            "increase MAXNLIST", which is caused by a larger surface density
            of detected objects than expected (MAXNLIST is a memory
            allocation parameter).

            The 4th column of the output is the match fraction (the fraction
            of object pixels that overlap at the predicted offset). A
            fraction larger than ~ 0.4 probably means the cross-correlation
            worked. You can also compare the offsets from different chips to
            make sure it worked (they should agree to better than half a
            pixel or so). Edit offsets.r* by hand to fix failed
            cross-correlations (you can use the offset measured from a
            different chip).

6) coadd.pl -- first pass coaddition of each dither set

   usage:   coadd.pl ditherset_runbegin ditherset_runend
   example: coadd.pl 5792 5800
            runcmd.pl coadd.pl
   input:   Gain and bad pixel map per chip, eg, gain.c1.fits
            Offsets list per chip, eg, offsets.r5792to5800.c1
            Sky subtracted data files per run per chip,
            irx_05792_c1.fits.skysub
   output:  Weight map per run per chip, eg, irx_05792_c1.weight.fits
            Coadded dither set, eg, irx.r5792to5800.c1.fits
            Coadded weight map, eg, irx.r5792to5800.c1.weight.fits
   method:  coadd.pl calls wmap.c to make the weight map for each image (the
            weight map is NLOOP * INT_TIME * gainmap / ImageVariance).
            coadd.pl then calls shift_cube_mean.c to register the images and
            weight maps with the first frame in the dither set (borders are
            added, of width given by the smallest multiple of 64 that fits
            all the offsets) and average them using a clipped, weighted mean.

            Each pixel of the shifted image is given by the weighted average
            of the four contributing pixels of the input image, with weights
            given by the fractional area of overlap. Bad pixels are not
            included, hence the output pixel will have the same mean signal
            level even if it overlaps bad pixels. The weights account for the
            bad pixels because the weight maps are shifted by a weighted sum.
            Deviant pixels are clipped before coaddition. The clipping
            algorithm used depends on the number of valid pixels at that
            position in the stack of images (see the description of the
            clipping in the combine loops step, 3) above).
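   For orientation, here is a minimal C sketch of a clipped, weighted mean
   for a single pixel position through a stack of already-registered images.
   It is a simplified stand-in for shift_cube_mean.c, assuming a plain sigma
   clip; the real code additionally handles fractional-pixel shifts and
   chooses the clipping algorithm from the number of valid pixels:

      #include <math.h>

      /* Clipped, weighted mean of one pixel position through nplanes
       * registered images.  data[k] and weight[k] are the value and weight
       * in plane k; a weight of 0 marks a bad pixel.  nsig is the rejection
       * threshold in standard deviations. */
      float clipped_weighted_mean(const float *data, const float *weight,
                                  int nplanes, float nsig)
      {
          double sw = 0.0, swx = 0.0, swxx = 0.0;

          for (int k = 0; k < nplanes; k++) {     /* pass 1: mean and rms */
              if (weight[k] <= 0.0f) continue;
              sw   += weight[k];
              swx  += weight[k] * data[k];
              swxx += weight[k] * data[k] * data[k];
          }
          if (sw <= 0.0) return 0.0f;             /* no valid pixels */
          double mean  = swx / sw;
          double var   = swxx / sw - mean * mean;
          double sigma = (var > 0.0) ? sqrt(var) : 0.0;

          double sw2 = 0.0, swx2 = 0.0;           /* pass 2: reject outliers */
          for (int k = 0; k < nplanes; k++) {
              if (weight[k] <= 0.0f) continue;
              if (sigma > 0.0 && fabs(data[k] - mean) > nsig * sigma) continue;
              sw2  += weight[k];
              swx2 += weight[k] * data[k];
          }
          return (sw2 > 0.0) ? (float)(swx2 / sw2) : (float)mean;
      }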
7) mask.pl / shiftmask.pl -- produce master object masks and unregister them
   to make frame object masks

   usage:   mask.pl ditherset_runbegin ditherset_runend grow|nogrow
            [minarea thresh]
            shiftmask.pl ditherset_runbegin ditherset_runend
   example: mask.pl 5792 5800 grow 25 1.5
            runcmd.pl mask.pl grow 25 1.5
            shiftmask.pl 5792 5800
            runcmd.pl shiftmask.pl
   input:   coadded dither sets and weight maps, eg, irx.r5792to5800.c1.fits
            and irx.r5792to5800.c1.weight.fits
            offsets lists, eg, offsets.r5792to5800.c1
            gain maps, eg, gain.c1.fits
   output:  master object masks, eg, irx.r5792to5800.c1.fits.objs
            individual object masks, eg, irx_05792_c1.fits.skysub.mask
   method:  mask.pl runs SExtractor with -CHECKIMAGE_TYPE OBJECTS to produce
            an object mask (an image with non-object pixels set to 0), then
            calls dilate_mask.c (if "grow" is specified on the command line)
            to expand the object mask regions by a multiplicative factor
            (default 50%). This produces the master object mask for each
            coadded dither set (extension .objs). shiftmask.pl runs mask.c to
            create an object mask for each frame in the dither set by
            unregistering the master object mask.
   notes:   Try to set a low detection threshold to detect the extended faint
            wings of galaxy profiles, even if you need to require a large
            object area to avoid many false detections (eg, 25 pixels, or
            maybe even larger for du Pont data). Look at the output master
            object mask (*.objs) to make sure the object detection didn't
            produce garbage or fill the image with object detections (raise
            the threshold a bit if necessary).

            For high surface density fields, you probably shouldn't grow the
            object mask regions (this is more important for galaxies anyway).
            Specify "nogrow" instead of "grow" when running mask.pl.

8) sky.pl -- second pass sky subtraction (and reset correction)

   usage:   sky.pl firstpass|secondpass half-width [run_begin run_end]
   example: sky.pl secondpass 3 5792 5827
   input:   Loop-combined flattened data files, eg, irx_05792_c1.fits
   output:  Sky subtracted data files, eg, irx_05792_c1.fits.skysub
   method:  Running sky frame subtraction is done as described in 4) above,
            but object masking is applied when creating the sky frames and
            the loops are processed separately. Sky frames are created by a
            weighted, masked, clipped mean of the neighboring 2*half-width
            frames (see the sketch after step 11 below). Reset correction is
            done as described in 4), but also with object masking.

            A reasonable half-width is 3 frames. If there is significant sky
            fringing or other ugly background structure, reducing the
            half-width might help. If the background looks good, you could
            try increasing the half-width to 4 or maybe even 5 frames.
   notes:   This step is slow; a faster version will be made eventually.

9) combineloops.pl -- combine loops (second pass)

   usage:   combineloops.pl secondpass [run_begin run_end]
   example: combineloops.pl secondpass 5792 5827
   input:   flattened, sky-subtracted data, eg, irx_05792_c1_001.fits.skysub
   output:  loop-averaged data, eg, irx_05792_c1.fits.skysub
   method:  See 3) above.

10) deshadow.pl -- remove shadows associated with extended objects

   usage:   deshadow.pl run_begin run_end
   example: deshadow.pl 5792 5827
   input:   flattened, sky-subtracted data, eg, irx_05792_c1.fits.skysub
            Bad pixel / object masks, eg, irx_05792_c1.fits.skysub.mask
   output:  Overwrite the data with shadow-removed data, eg,
            irx_05792_c1.fits.skysub
   method:  Apply reset_correction.c to the input loop-combined data files
            and overwrite the corrected data to disk. Applying the reset
            correction to loop-combined data removes the shadowing problem
            (horizontal and vertical tracks centered on extended objects).
            If your data doesn't contain many galaxies and you don't notice a
            shadowing problem, then you can skip this step.

11) coadd.pl -- second pass coaddition of each dither set

   usage:   coadd.pl run_begin run_end
   example: coadd.pl 5792 5800
            runcmd.pl coadd.pl
   input:   Gain and bad pixel map per chip, eg, gain.c1.fits
            Offsets list per chip, eg, offsets.r5792to5800.c1
            Sky subtracted data files per run per chip,
            irx_05792_c1.fits.skysub
   output:  Weight map per run per chip, eg, irx_05792_c1.weight.fits
            Coadded dither set, eg, irx.r5792to5800.c1.fits
            Coadded weight map, eg, irx.r5792to5800.c1.weight.fits
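   As referenced in step 8 above, here is a minimal C sketch of the masked
   sky estimate for one pixel, assuming the values from the 2*half-width
   neighbouring frames and their object masks have already been gathered into
   arrays. The names are illustrative; clipping of deviant values, omitted
   here, is applied as in the coaddition sketch above.

      /* Masked, weighted sky estimate for one pixel from the neighbouring
       * frames.  frames[k] is the pixel value in neighbour k, wt[k] its
       * weight (0 marks a bad pixel), and objmask[k] is nonzero where an
       * object was detected (those values are excluded). */
      float masked_sky_pixel(const float *frames, const float *wt,
                             const unsigned char *objmask, int nframes)
      {
          double sw = 0.0, swx = 0.0;
          for (int k = 0; k < nframes; k++) {
              if (objmask[k]) continue;        /* skip object pixels */
              if (wt[k] <= 0.0f) continue;     /* skip bad pixels */
              sw  += wt[k];
              swx += wt[k] * frames[k];
          }
          return (sw > 0.0) ? (float)(swx / sw) : 0.0f;
      }

   The sky frame built from these estimates is subtracted from the image and
   the image sky level (mode) is added back as a constant, as in the first
   pass.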
12) astrometry.pl -- astrometry

   Attach accurate FITS header WCS information to the combined dither frames:

      astrometry.pl catalog irx.r*c?.fits

   where catalog is either "apm", "usno" or "2mass".

   If you select apm, then UKST will be used if your field is at DEC < 0, and
   POSS will be used if your field is at DEC > 0. If you get the error
   message "Empty sky catalog", then it probably means that there is no APM
   data for your field (select usno instead) or that the APM online catalogue
   is misbehaving, in which case Mike Irwin can probably help
   (mike@ast.cam.ac.uk).

   If you select 2mass, you must have previously downloaded the 2MASS
   catalogue corresponding to your field from Gator
   (http://irsa.ipac.caltech.edu/applications/Gator/), using the output
   column selection so that the first columns are:

      ra dec j_m j_msig h_m h_msig k_m k_msig

   astrometry.pl will first call initwcs.c to initialize the WCS header info
   in each combined dither set (read the telescope RA, DEC and use the known
   chip offsets and rotations to create an initial, approximate WCS). If the
   built-in chip offsets are wrong (eg, a different chip numbering was used),
   then you can create a file called "chipoffsets" containing chip_number,
   ra_offset, dec_offset, and rotation (deg), eg, for INT data:

      1 -0.128  0.128 0.0
      2 -0.128 -0.128 0.0
      3  0.128 -0.128 0.0
      4  0.128  0.128 0.0

   Run coadd.pl again if necessary to get irx.r*c?.fits files that haven't
   been corrupted by astrometry.pl using incorrect chip offsets.

   Next, astrometry.pl runs SExtractor to detect objects, and the x,y
   coordinates are matched to the online APM catalog using WCSTools to
   produce a precise coordinate transformation and WCS header. The dither set
   with the initial WCS is saved with the extension initwcs (eg,
   irx.r*fits.initwcs).

   If astrometry.pl fails, a backup procedure is to use skycatgaia to get an
   x,y,ra,dec list of matched pairs and then use IRAF ccmap to update the
   FITS WCS header info. See $IRDR_BASEDIR/extern/apmcat for using the APM
   catalog in skycatgaia (from www.eso.org/skycat). See
   $IRDR_BASEDIR/idl/fixskycat.pro to convert the skycatgaia x,y,ra,dec
   output to the input format used by ccmap.

   Overwrite the WCS CD matrix keywords with the averages (the average scale
   factors are calculated using all images, and the average chip rotations
   are calculated per chip):

      avgwcs.pl irx.r*c?.fits

   A second iteration can be done from this astrometry using
   astrometry_refine.pl.
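   For reference, the per-image scale and rotation that avgwcs.pl averages
   can be derived from the CD matrix keywords roughly as in the sketch below.
   These are the standard WCS relations assuming negligible skew; the exact
   sign conventions used by avgwcs.pl are not shown here and the function
   name is illustrative:

      #include <math.h>

      /* Pixel scale (arcsec/pixel) and rotation (degrees) from the WCS CD
       * matrix keywords, assuming negligible skew.  The scale is the
       * geometric mean of the two axis scales. */
      void cd_to_scale_rot(double cd1_1, double cd1_2,
                           double cd2_1, double cd2_2,
                           double *scale_arcsec, double *rot_deg)
      {
          const double rad2deg = 180.0 / 3.14159265358979323846;
          *scale_arcsec = 3600.0 * sqrt(fabs(cd1_1 * cd2_2 - cd1_2 * cd2_1));
          *rot_deg = atan2(cd2_1, cd2_2) * rad2deg;
      }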
13) Drizzle the combined dither sets onto a tile

   Normalize the images to the median brightness level:

      normalize.pl irx.r*c?.fits

   Use ESO Drizzle
   (http://www.eso.org/science/eis/eis_soft/soft_index.html) to make the tile
   (mosaic) image and the tile weight map. Or obtain drizzle.tar.gz by
   anonymous ftp from ftp.ast.cam.ac.uk, which has already been compiled for
   Solaris. Then add the following (using your path) to login.cl:

      reset eis = "/data/cass60d/sabbey/drizzle/"
      task eis.pkg = eis$eis.cl

   Run parapredriz to prepare a drizzle script. You need to know the RA,DEC
   of the center of the tile. You can get this with:

      tile.pl irx.r*c?.fits

   The default tile size in pixels should be about 4350 by 4350. Below is a
   parapredriz example:

      ls irx.r*c?.fits > coaddfiles

      PACKAGE = eis
         TASK = parapredriz

      image   =    @coaddfiles   Input image name template
      outmode =        bestfit   Mode - tiles,bestfit or single
      (npara  =              1)  Number of parallel output streams
      (stem   =            XMM)  Stem for names of output images and cl files
      (output =               )  Name for output image (single mode)
      (outra  =         163.35)  RA of centre of output (single mode)
      (outdec =          57.55)  Dec of centre of output (single mode)
      (proj   =            TAN)  Projection for output superimage
      (context=             no)  Also create context images?
      (direct =               )  Directory containing input images (if not pwd
      (weight =        default)  Name of weighting image associated with data
      (raref  =         163.35)  Reference RA of centre of super-image (degree
      (decref =          57.55)  Reference Declination of centre of super-imag
      (outscl =          0.457)  Scale for output image in arcsecs/pixel
      (outnx  =           4350)  X dimension of output coadded image tiles
      (outny  =           4350)  Y dimension of output coadded image tiles
      (xcen   =              0)  X position number of section containing centr
      (ycen   =              0)  Y position number of section containing centr
      (mode   =             ql)

      cl> eis
      cl> epar parapredriz
      ei> parapredriz
      Mode - tiles,bestfit or single (tiles|bestfit|single) (bestfit):
      + PARAPREDRIZ Version 0.4 (6th August 1999)
      Input image name template (@coaddfiles):
      # 16 input images found
      # Writing drizzle commands to script file XMM--001.cl
      # Number of input images is 16
      # Number of drizzle operations is 16
      # Number of parallel streams is 1
      # Number of output sections is 1
      # Total space required for output sections is 136.125 (Mb)

   If it takes more than 16 operations, then you probably need to increase
   the outnx and outny parameters.

   Edit the script:

      drizzle.wt_scl = "1.0"
      globally replace hhh with fits

   Then run it:

      cl> eis
      cl> cl < XMM--001.cl

   See AXAF.cl in /data/cass60d/sabbey/AXAF/ as an example.
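   If you want a rough sanity check of suitable outnx/outny values before
   running parapredriz, the calculation below is one way to do it. This is a
   purely illustrative sketch; the function name and the frame footprint size
   are assumptions, and tile.pl / parapredriz remain the authoritative tools:

      #include <math.h>

      /* Rough estimate of the output tile size in pixels.  ra_span and
       * dec_span are the full ranges of the dither-set centres in degrees,
       * dec0 a representative declination, outscl the output scale in
       * arcsec/pixel, and footprint_pix the size of one coadded frame in
       * output pixels (an assumed value depending on the mosaic geometry). */
      int estimate_tile_size(double ra_span, double dec_span, double dec0,
                             double outscl, int footprint_pix)
      {
          const double deg2rad = 3.14159265358979323846 / 180.0;
          double dx = ra_span * cos(dec0 * deg2rad) * 3600.0 / outscl;
          double dy = dec_span * 3600.0 / outscl;
          double span = (dx > dy ? dx : dy) + footprint_pix;
          return (int)ceil(span);
      }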