Notes on Dan Kelson's Spectroscopy Software

System Version
You need to be logged on to a network PC. Type
source /loc/hst9/dgg/kelson/PythonSetup
to initialise the routines. This adds lots of routines to your path (look in /loc/hst9/dgg/kelson/Basics/). Try copying the data in /loc/hst9/dgg/kelson/Training to one of your disks and running the script ./do.csh therein. This should run the commands listed below...
Example of running the software on the LDSS2 training set.

Comments below in red are Dan Kelson's; blue are my comments (many just list which new files are created, to try to figure out what the code is doing). Bold black commands are Dan's commands, taken from the do.csh script above, and bold blue commands are additional commands which I have found to be useful.
(1) Overscan subtract, bias subtract. I have scripts to do it but they are sort of instrument/detector specific. Feel free to use/modify "overscan", "lris2bias", or "lris4bias".

Note: the dispersion direction should run blue on the left (flip with -Fx [-Fy] if necessary, otherwise waverect will fail):

overscan ccd019.fits flata1.fits -t LDSS2 -s "(0,1700)"

[-s takes a subregion -- can take 2 or 4 positions]
-f for Fourier filtering of the median
-m for median filtering

"i recommend trimming ldss2 images --- it helps with the y-distortion quite a bit (and may help a small amount with the rectifications as well)."
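For what it's worth, the core of any overscan script is tiny. Here is a numpy sketch (my illustration, not the "overscan" script itself; the 2048-data/32-overscan column layout is invented, since the real script knows the detector geometry via -t):

    import numpy as np
    from astropy.io import fits

    img = fits.getdata("ccd019.fits").astype(float)
    data, oscan = img[:, :2048], img[:, 2048:2080]   # hypothetical layout

    # Subtract a per-row overscan level (median of the strip), then trim.
    fits.writeto("flata1.fits",
                 data - np.median(oscan, axis=1, keepdims=True),
                 overwrite=True)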
(2) Optional: clean bad columns.

cleanbad flata1.fits badcol.dat
produces flata1b.fits; similarly:
cleanbad heneara.fits badcol.dat
cleanbad speca1.fits badcol.dat

efits flata1b.fits "VTKLaplace(i1,numret=1)" laplace.fits
produces laplace.fits (the input image with sharpened edges)
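The efits call above goes through Kelson's VTK bindings; if you just want to see what the sharpening does, an equivalent step can be sketched with scipy (same file names, different Laplacian implementation):

    from astropy.io import fits
    from scipy.ndimage import laplace

    # A Laplacian turns the flat's smooth slits into sharp edge features,
    # which is exactly what getrect needs to trace in the next step.
    flat = fits.getdata("flata1b.fits").astype(float)
    fits.writeto("laplace.fits", laplace(flat), overwrite=True)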
(3) Get the y-distortion from a flat-field. In the training example, the slit edges are not wonderfully spaced, so I do a quick Laplacian on the image using the omnipotent "efits" script.
Use the script "getrect". This is the big engine for rectifications and y-distortions. In fact, the y-distortion measuring routine is just the transpose of the rectification measurement routine; that is why "x" and "y" act as swapped when measuring the y-distortions. "nx" specifies the number of sections perpendicular to the features being traced and "ny" specifies the number of sections along the features being traced. Alternatively, "dy" specifies the bin size along the features if you would rather work with fixed/known S/N in the cross-correlations than S/N ratios that vary from slit to slit. The "x" and "y" parameters specify the order of the bivariate polynomial used in the fit to the results of the cross-correlations of the image sections. The optional "dump" argument dumps the cross-correlation results to a FITS file, the polynomial fit to another FITS file, and the weights to a third. Then you can use the omnipotent "dfits" script to display them, blink, plot, and generally see if the fit was good or the traces were crap, and you can *see* what happens when you play with the arguments. You can also use "unrect" to see the results as they affect the actual data (see below); a toy sketch of the measuring scheme follows.
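To make that concrete, here is a sketch of the scheme as I understand it (random data, arbitrary section sizes, and a hypothetical helper xcorr_shift -- this is not the getrect code): each section's spatial profile is cross-correlated against a reference profile, and a bivariate polynomial is then fit to the measured shifts.

    import numpy as np
    from scipy.signal import correlate
    from numpy.polynomial.polynomial import polyvander2d

    def xcorr_shift(ref, prof):
        """Integer shift of prof relative to ref, from the x-corr peak."""
        cc = correlate(prof - prof.mean(), ref - ref.mean(), mode="full")
        return cc.argmax() - (len(ref) - 1)

    img = np.random.rand(512, 512)            # stands in for laplace.fits
    xs, ys, shifts = [], [], []
    for y0 in range(0, 512, 64):              # bins of rows ("dy"-like)
        ref = img[y0:y0 + 64, 240:272].sum(axis=1)   # reference section
        for x0 in range(0, 512, 32):          # sections along the features
            prof = img[y0:y0 + 64, x0:x0 + 32].sum(axis=1)
            xs.append(x0); ys.append(y0)
            shifts.append(xcorr_shift(ref, prof))

    # Bivariate polynomial fit to shift(x, y), orders as in "-x 2 -y 4".
    V = polyvander2d(np.asarray(xs, float), np.asarray(ys, float), [2, 4])
    coef, *_ = np.linalg.lstsq(V, np.asarray(shifts, float), rcond=None)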
getrect -ydist laplace.fits -nx 12 -dy 11 -x 2 -y 4 -peakwin 3 -dump

dumps refdat.fits; produces: yweight0.fits, yfit0.fits, ydist0.fits, laplace-y-dist.fits; updates the laplace.fits header
[unrect -ydist laplace.fits produces laplacet.fits, undistorted -- a useful check]
[better to plot the fit with:
dfits -nj y*t0.fits
and press '1' & '2' to alternate between data and fit]
"if you run getrect with "-dump"
you get the measured and fitted distortion
maps out with the names ydist0.fits/yfit0.fits/yweight0.fits
or
xdistN.fits/xfitN.fits/xweightN.fits (where N=0...Nslits-1).
i tend to
do:
dfits -nj y*t0.fits
and then i use the keys s/d/f/b (sharper,dimmer,fainter,brighter)
to adjust
the grayscale. the keys > and < change the window size
and that's handy...
use the number keys to switch between the images being displayed
(1,2,3,...). and you can use "p" or "P" to start up "pfits" on
the
rows or columns of the images being displayed (at the position
where
the cursor is). and once in pfits you can use "u" or "d" to move
up or
down through the image(s). you can also use the number keys to
switch
between the images in pfits (they are color coded and the white
one
is the one currently being "analyzed"). in pfits you can use
x/X/y/Y/z/Z
to change the stretch in the X,Y axes. also "R" resets the window,
"f"
toggles fixing the axes so that when you move up/down the box
remains
fixed.
it is also worth running dfits (without -nj) on the data frames
to see
where the slit boundaries ended up...
pfits is handy for looking at data frames as well: you can combine
multiple
rows or columns with "a" and "A" (these increase and decrease
the number
of rows/columns being combined) and you can use "m" to cycle
through the
4 modes of combining: add (default), average, median, biweight,
add,
average, median, biweight... and "M" cycles backwards..."
Note: do not delete the files used to record the x and y distortions (e.g., xuse-x-rect.fits and laplace-y-dist.fits)!!!! You will need them for any later re-reductions (e.g. different fits to the sky background, or extracting spectra).
(4) Copy the y-distortion transformation information to the headers of the other images.

copyrect -ydist laplace.fits flata1b.fits henearab.fits speca1b.fits
(5) Run the automated slitlet finder. A little temperamental, but it generally works well if you find a decent threshold (the "th" parameter). This script writes the slitlet boundaries in the undistorted coordinate system to the image header. You will want to use the omnipotent image display script "dfits" to see what it came up with!

findslits -ydist flata1b.fits -th 0.1 -sw 3
produces: spatial.fits, p.fits, found.dat

dfits flata1b.fits
displays the results (overplots the slit edges in blue and red). Make notes of any problems. This should also exclude fiducial stars (visible as much higher counts in the slitlet).

editslits flata1b.fits
displays the list of slits in an editor (first, setenv EDITOR <your favourite editor>). Manually edit the list of coords of the slit edges. (Don't worry about renumbering; this is done automatically each time you save.)

Adding "-fwhm 2" can help. "-sw N" is a very useful parameter -- it trims N pixels from either side of each slit. Taking off a good few pixels should improve the rectifications (sometimes you get weird curvature at the slit edges). I like to use a good -sw 10. (A sketch of the thresholding idea follows below.)
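My guess at the gist of the slit finding (a sketch, not the findslits source): collapse the y-rectified flat along the dispersion, normalize, and take threshold crossings of the profile as slit edges.

    import numpy as np
    from astropy.io import fits

    flat = fits.getdata("flata1b.fits").astype(float)
    profile = np.median(flat, axis=1)      # cross-dispersion (slit) profile
    profile /= profile.max()

    lit = profile > 0.1                    # cf. "-th 0.1"
    edges = np.flatnonzero(np.diff(lit.astype(int)))
    sw = 3                                 # cf. "-sw 3": trim each edge
    for lo, hi in zip(edges[::2], edges[1::2]):
        print(f"slit rows {lo + sw}..{hi - sw}")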
(6) Copy the slitlet information from one image to the rest.

copyslit flata1b.fits henearab.fits speca1b.fits -fft -ydist
(7) Normalize the flat fields. Pull out the slit function, the blaze, and the pixel-to-pixel+fringing flat.

makeflat -ydist flata1b.fits
produces: flata1bslt.fits (slit function), flata1bflt.fits (2d flatfield), flata1bblz.fits (2d blaze)
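Conceptually the decomposition is something like this crude per-frame numpy sketch (my illustration; the real makeflat works per slitlet and fits rather than takes medians):

    import numpy as np
    from astropy.io import fits

    flat = fits.getdata("flata1b.fits").astype(float)
    slit = np.median(flat, axis=1)                     # along-slit shape
    slit /= np.median(slit)
    blaze = np.median(flat / slit[:, None], axis=0)    # spectral (blaze) shape
    blaze /= np.median(blaze)
    pixflat = flat / (slit[:, None] * blaze[None, :])  # pixel-to-pixel+fringing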
(8) Divide the frames by the slit function, where FFTs are used to find the appropriate spatial shift to compensate for flexure shifts.

flat1d flata1bslt.fits henearab.fits -fft -ydist
produces: RT.fits, CT.fits, henearabs.fits
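The "-fft" idea can be sketched in a few lines: the peak of the FFT-based cross-correlation of two slit profiles gives the flexure shift to apply before dividing (integer pixels here; I assume the real code refines this to sub-pixel):

    import numpy as np

    def fft_shift(a, b):
        """Integer s such that np.roll(b, s) best matches a."""
        cc = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=len(a))
        k = int(cc.argmax())
        return k if k <= len(a) // 2 else k - len(a)

    a = np.zeros(64); a[20] = 1.0
    b = np.roll(a, -3)
    assert fft_shift(a, b) == 3    # roll b right by 3 to recover a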
(9) Divide the frames by the pixel-to-pixel flat.

flat2d flata1bflt.fits henearabs.fits speca1bs.fits
produces: henearabsf.fits, speca1bsf.fits (flatfielded, 2d)
(10) Optionally divide the frames by the blaze. If you want to flux your data this is a good thing to do when you are reducing your flux standard, and then you can put the blaze back into your sensitivity function. This way you don't have to wreck the noise statistics in your science frames just right now...

flat2d flata1bblz.fits henearabsf.fits speca1bsf.fits
produces: henearabsff.fits, speca1bsff.fits (flatfielded with blaze)

[note that the science image produced by this operation (speca1bsff.fits) is not used further...]

efits flata1bblz.fits henearabsff.fits "i2*greater(i1,0.1)" xuse.fits
multiplies the second image (the arc) by a 0/1 mask of where the first (the blaze) exceeds 0.1, producing xuse.fits (see the numpy sketch below)
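In numpy terms that efits expression is simply:

    from astropy.io import fits

    blaze = fits.getdata("flata1bblz.fits").astype(float)
    arc = fits.getdata("henearabsff.fits").astype(float)
    # Keep the arc only where the blaze exceeds 0.1; zero it elsewhere,
    # so the rectification ignores unilluminated slit ends.
    fits.writeto("xuse.fits", arc * (blaze > 0.1), overwrite=True)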
(11) Get the rectifications slit-by-slit. LDSS2 data are a pain in the arse because the distortions go a bit funny, and they are not quite like regular grism data where there is no line curvature. The distortions are bad enough that you do get curvature in the lines, but it doesn't behave very well, so sometimes the rectification arguments have to be played with for the top and bottom slits.

getrect -xdist -ydist xuse.fits -nx 16 -dy 3 -b 3 -x 2 -y 2 -dump -normwt 1e6
produces: xweight0.fits, xfit0.fits, xdist0.fits (and the same, numbered, for each slit)... plus xuse-x-rect.fits (and updates xuse.fits)

[unrect -xdist -ydist xuse.fits should rectify the image to give xuser.fits; again, you can plot the fit with: dfits -nj x*t0.fits]
(12) Copy the rectification transformations to the other frames.

copyrect -xdist xuse.fits henearabsff.fits speca1bsf.fits henearabsf.fits
copies the x-distortion correction to the above files
(13) Wavelength calibrate the lamp frame.

waverect -xdist -ydist henearabsf.fits -o 5 -w1 4000 -w2 9000 -d0 5.3 \
    -ll ldss2henear.dat -arcamp -fwhm 3

[-slit <num> gives the option of doing a specific slit -- note: you cannot choose the dispersion direction -- it assumes blue to red, left to right]
"wavelength calibration has always been
more unstable than i would like,
and this is partly due to the desire to make this thing work
for multiple
instruments, where the solutions may be fairly linear to fairly
non-linear.
so the program sometimes does better when one plays with (1)
fwhm (oddly
enough), or (2) the d0 argument (the guess for the 1st order
dispersion
per pixel). since you are probably running with "-arcamp", it
can
sometimes help to adjust the amplitudes of the lines in the linelist
file.
another possibility is that findslits missed a slit edge and
that the
spectra from two slits are somehow combined into one for the
wavelength
calibration and that would sure cause problems!
also: sometimes a single set of d0/fwhm (and even order "-o")
do not
work for all the slits --- so going back to do a slit or two
(with -slit XX) is not uncommon (though with ldss2 usually i
find
a set of parameters that works for all he slits)."
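A toy illustration of why d0 and fwhm matter (my sketch of the general idea, not waverect's algorithm): build a synthetic arc from the linelist at the guessed dispersion and cross-correlate it with the observed arc row; a wrong d0 or fwhm washes out the correlation peak, which is why tweaking them helps.

    import numpy as np
    from scipy.signal import correlate

    def synth_arc(lines, amps, w1, d0, npix, fwhm):
        """Gaussian lines on a trial wavelength grid w = w1 + d0*pixel."""
        w = w1 + d0 * np.arange(npix)
        sig = fwhm * d0 / 2.355
        return sum(a * np.exp(-0.5 * ((w - wl) / sig) ** 2)
                   for wl, a in zip(lines, amps))

    lines = np.array([4046.6, 4358.3, 5460.7, 5875.6, 6678.2])  # toy list
    amps = np.ones_like(lines)
    obs = synth_arc(lines, amps, 4010.0, 5.3, 1024, 3.0)   # "observed" row
    model = synth_arc(lines, amps, 4000.0, 5.3, 1024, 3.0)
    lag = correlate(obs, model, mode="full").argmax() - (len(obs) - 1)
    # lag is the zero-point offset in pixels between model and data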
You can check by rectifying with:

unrect -wdist -xdist -ydist henearabsf.fits -w1 3500 -w2 10500 -dw 5

which will produce henearabsfr.fits -- all the arc lines should be aligned in wavelength and the slit edges straightened.
(14) Copy the wavelength calibration to the science frame(s).
copyrect -wdist henearabsf.fits speca1bsf.fits
(15) Rerun wavelength calibration on the science frame, but only allowing it to adjust the zero-points of the wavelength solutions.

waverect -wdist -ydist speca1bsf.fits -ll shortsky.dat -zero 1
Now, there are several options on how to proceed. If your data are not undersampled, then there's not much point in using the clever technique described in Dan's paper (modelling the sky lines in the distorted coordinate system), as it's very CPU intensive and requires quite a bit of parameter tweaking. I recommend rectifying the data and running easysky to do a straight fit to the sky along the rectified columns. These are similar to the IRAF tasks transform and background.
1. Rectify data first and perform fit to sky
(16a) unrect -wdist -xdist -ydist speca1bsf.fits -w1 3500 -w2 10500 -dw 5

will produce speca1bsfr.fits (which should have the sky lines all aligned for each slit and perfectly vertical). w1 and w2 are the starting and ending wavelengths of the output image, and dw is the dispersion.
(17a) easysky speca1bsfr.fits

will perform a 1D fit to the sky lines and produce a background-subtracted image (speca1bsfro.fits), with the model of the sky lines in the first row of each slit. Adding "-i 3" performs 3 iterations of the fit, which I find works quite well. Other options include -h for a fit using harmonic functions and -u for a fit with a uniform function.
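A rough equivalent of what easysky does to the rectified data (my sketch, treating the whole frame as one slit for brevity, where the real thing works per slit): fit the sky along each column and iterate with rejection so object flux doesn't bias the fit. Here the "fit" is just a clipped median.

    import numpy as np
    from astropy.io import fits

    spec = fits.getdata("speca1bsfr.fits").astype(float)
    good = np.ones_like(spec, bool)
    for _ in range(3):                        # cf. "-i 3"
        masked = np.where(good, spec, np.nan)
        sky = np.nanmedian(masked, axis=0)    # one sky value per column
        resid = spec - sky
        good = np.abs(resid) < 3 * np.nanstd(np.where(good, resid, np.nan),
                                             axis=0)
    fits.writeto("speca1bsfro.fits", spec - sky, overwrite=True)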
ZZZ write about using object masks ZZZ
2. Fit sky in curved coordinate system and rectify later (only necessary/recommended for well-undersampled data)
(16) Run "skyrect" to get the bivariate B-spline fits for the sky background.

You can miss out defining sky apertures (omit -sky sky.aps); otherwise sky.aps contains pairs of coords in a list (the start and end positions of good sky regions away from the galaxy continuum). You can use
marksky speca1bsf.fits sky.aps
to define these regions (pressing 1 at the start of an aperture and 2 at the end) and write them to the file sky.aps.

"i do sometimes/often run skyrect or easysky without sky apertures if i just want some quick redshifts.... but that can sometimes take a bit of tweaking to ensure that the objects aren't oversubtracted much. just depends what you want to do..."
skyrect -ydist speca1bsf.fits -sky sky.aps -skyint 512 \
    -kx 3 -ky 1 -b 1 -dkx 0.90 -skycut 8 -check 5 -pct 50 -g 3 -rn 5

Tweak the fits for individual slits:

skyrect -ydist speca1bsf.fits -sky sky.aps -skyint 512 -slit 3 \
    -kx 3 -ky 0 -b 1 -dkx 1.0 -skycut 8 -spass 3 -g 3 -rn 5
skyrect -ydist speca1bsf.fits -sky sky.aps -skyint 2048 -slit 13 \
    -kx 3 -ky 3 -b 1 -dkx 0.75 -skycut 8 -pct 50 -check 5 -g 3 -rn 5

makes: speca1bsfm.fits (model sky frame)
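For intuition, here is what a bivariate B-spline sky model looks like with scipy on toy data (the real skyrect fits in the distorted frame with proper weights and rejection, which is the whole point for undersampled data; kx=3, ky=1 mirror the -kx/-ky arguments above):

    import numpy as np
    from scipy.interpolate import LSQBivariateSpline

    ny, nx = 100, 512
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    truth = 100 + 50 * np.sin(xx / 10.0)               # toy sky "lines"
    data = truth + np.random.normal(0, 5, truth.shape)

    tx = np.linspace(32, nx - 32, 24)                  # knots along wavelength
    ty = np.array([ny / 2.0])                          # one knot along the slit
    fit = LSQBivariateSpline(xx.ravel(), yy.ravel(), data.ravel(),
                             tx, ty, kx=3, ky=1)
    model = fit(np.arange(nx), np.arange(ny)).T        # (ny, nx) sky model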
(17) Subtract the resulting sky frame.

efits speca1bsf.fits speca1bsfm.fits "i1-i2" speca1bsfo.fits
subtracts the model sky frame from the data frame to produce speca1bsfo.fits
(18) Optional: clean out the cosmic rays.

pyrclean speca1bsfo.fits -bigsky -rn 5 -g 3 -th 40 -mx 3 -my 1
cleans cosmic rays to produce speca1bsfoc.fits
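The standard trick behind such cleaners, sketched below (I have not read pyrclean, so the kernel and threshold scaling are guesses): flag pixels sitting far above a locally smoothed model relative to the noise expected from the gain and read noise.

    import numpy as np
    from astropy.io import fits
    from scipy.ndimage import median_filter

    img = fits.getdata("speca1bsfo.fits").astype(float)
    gain, rn = 3.0, 5.0                        # cf. "-g 3 -rn 5"
    smooth = median_filter(img, size=(1, 3))   # local model; kernel is a guess
    noise = np.sqrt(np.maximum(smooth, 0.0) / gain + (rn / gain) ** 2)
    crmask = (img - smooth) > 5.0 * noise      # how "-th 40" scales: a guess
    fits.writeto("speca1bsfoc.fits", np.where(crmask, smooth, img),
                 overwrite=True)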
(19) Optional: rectify the data, if you want to work with 2D spectra.

unrect -xdist -wdist -ydist speca1bsf{oc,m}.fits -w1 3500 -w2 10500 -dw 5.0
rectifies to produce speca1bsfocr.fits, ... Note that the residuals from bright sky lines are all aligned, confirming that the wavelength calibration has worked.

addkeys speca1bsfocr.fits skyim speca1bsfmr.fits
(adds the header keyword "skyim")
(20) Optional: extract 1D spectra from the unrectified frame using the "extrect" script. ("multiextrect" is still unfinished; it fits a single B-spline to an object using the pixel values in multiple exposures.) The default for "extrect" is to fit a 2D B-spline to the object pixels and then integrate the 2D B-spline over the output pixel boundaries (a 1-D sketch of that integration step follows the command below).

extrect speca1bsfoc.fits -obj obj.pos -rn 5 -g 3 -ap 5 -kx 3 -ky 1 \
    -xdist -ydist -wdist -w1 3500 -w2 10500 -dw 5.0
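Here is that integration step in 1-D on toy data (extrect does the fit in 2-D; the knot spacing and object profile below are invented):

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    w = np.sort(np.random.uniform(3500, 10500, 4000))  # sample wavelengths
    f = np.exp(-0.5 * ((w - 6563.0) / 200.0) ** 2)     # toy object flux
    spl = LSQUnivariateSpline(w, f, np.linspace(3600, 10400, 200), k=3)

    edges = np.arange(3500.0, 10500.0 + 5.0, 5.0)      # cf. -w1/-w2/-dw
    # Mean flux in each output pixel = integral over the bin / bin width.
    flux = np.array([spl.integral(a, b)
                     for a, b in zip(edges[:-1], edges[1:])]) / 5.0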
Examples

FORS MOS

My example script for running on FORS data. The inputs are three exposures of the same mask -- spec1b.fits, spec2b.fits and spec3b.fits, the flatfield -- flat.fits, and the arc -- lamp.fits. It also assumes a linelist for the arc called forsll.dat. The description below should help explain what the script is doing. Note: this script flatfields the data, but then uses the non-flatfielded data later. It's a simple matter to change the name of the data operated on later in the script...