\documentclass[12pt]{article}
\textheight 9.0in
\textwidth 6.0in
\voffset=-0.5in
%\hoffset=-0.125in
\oddsidemargin=-0.125in
\evensidemargin=-0.125in
\parindent 0em
\usepackage{psfig}
\def\nopagenumbers{\pagestyle{empty}}
\begin{document}
%\nopagenumbers
\begin{large}
\centerline{{\bf A brief note on non-linearity correction}}
\vskip 0.5cm
\end{large}
\centerline{Document Number: VDF-TRE-IOA-00008-0003 \ Version 1}
\vskip 0.5cm
\centerline{Mike Irwin, CASU}
\vskip 1.0cm
In which some options for non-linearity correction and its measurement
are explored.
\vskip 1.0cm
{\bf Correcting for non-linearity}
\vskip 0.5cm
In default DCS reset-read-read (RRR) mode, the output that we see downstream
of the data acquisition system (DAS) is
$$\Delta I' \ = \ I'_2 - I'_1 \ = \ f(I_2) -f(I_1) $$
where $I'_1$ and $I'_2$ denote the non-linear first (ie. the reset-frame)
and second readouts respectively, and $I_1$ and $I_2$ the desired linear
quantities. The non-linear function $f(I)$ maps the desired linear counts
$I$ to the distorted counts of the non-linear system $I'$. If we define
the inverse transform $g() = f^{-1}()$, mapping measured counts $I'$ to
linearised counts $I$, then
$$ I = g(I') \ \ {\rm and} \ \ I_1 = g(I'_1) \ \ I_2 = g(I'_2) $$
If $I'_1$ and $I'_2$ are directly available this is a one-to-one mapping
and can be done efficiently and accurately using Look-Up Tables (LUTs).
This is the conventional way of implementing the correction prior to other
image manipulation operations.
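As an illustrative sketch of the LUT approach (not the actual DAS
implementation), the following Python/NumPy fragment builds the inverse table
$g = f^{-1}$ for a hypothetical quadratic non-linearity; the coefficient is
invented purely for the example:

```python
import numpy as np

# Hypothetical non-linearity f(I) = I + c2*I^2 (coefficient invented for illustration).
c2 = 1.0e-6
levels = np.arange(65536, dtype=np.float64)   # all possible linear counts I
distorted = levels + c2 * levels**2           # f(I): what the detector reports

# Build the inverse LUT g = f^{-1} by interpolation: for each possible measured
# value I' (0..65535), find the linear counts I that would have produced it.
lut = np.interp(np.arange(65536.0), distorted, levels)

# Correcting a raw readout is then a single table lookup per pixel.
raw = np.array([[1000, 30000], [60000, 65535]])
linearised = lut[raw]
```

The same one-to-one lookup is applied to $I'_1$ and $I'_2$ separately before
differencing, which is why it only works when both readouts are available.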
\vskip 0.2cm
However, if $I'_1$ and $I'_2$ are not separately available and all we have
to work from is the difference $\Delta I'$ then a simple LUT transformation
is not possible.
\vskip 0.2cm
For example, take the simplest case where the illumination level across
the detector does not change during the course of the RRR and no on-board
co-addition is happening. Then, in principle, given only $\Delta I$ and
knowledge of the timing of the RRR operations, we can deduce $I_1$ and
$I_2$ by using the effective integration time of each read to set its
scaling relative to the difference $\Delta I$, such that
$$ I_1 = k \Delta I \ \ {\rm and} \ \ I_2 = (1+k) \Delta I $$
Unfortunately, the ratio $k$ will not be constant for the non-linear
quantities $I'_1$ and $I'_2$, forcing us to adopt a scheme along the
following lines.
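For concreteness: under illumination at a constant rate $s$, with $t_1$ and
$t_2$ the effective integration times of the two reads, the linear quantities
satisfy
$$ I_1 \ = \ s \, t_1 \ \ {\rm and} \ \ I_2 \ = \ s \, t_2
\ \ \Rightarrow \ \ k \ = \ \frac{I_1}{\Delta I} \ = \ \frac{t_1}{t_2 - t_1} $$
so $k$ follows directly from the timing of the RRR operations, independent
of $s$.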
\vskip 0.2cm
Given $\Delta I'$ and defining the non-linear operator $f()$ as a
polynomial with coefficients $a_m$ (typically up to quartic) we have
$$ \Delta I' = \sum_m a_m (I_2^m - I_1^m) = \sum_m a_m [(1+k)^m \Delta I^m -
k^m \Delta I^m] $$
The quantity we want, $\Delta I$, is buried in the non-linearity of the RHS
and we have to solve an equation like this for every pixel. This is possible,
and relatively simple to program using something like a Gauss-Seidel
iterative scheme, but is less efficient than a direct mapping.%
\footnote {A simulation on a twin processor 3GHz Xeon PC indicates,
for typical values, a time of $\approx$1/8th second per 2k $\times$ 2k
detector.}
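A minimal per-pixel inversion along these lines can be sketched as follows
(plain fixed-point iteration rather than a full Gauss-Seidel solver; the
quartic coefficients $a_m$ and the ratio $k$ are invented for illustration):

```python
import numpy as np

# Invented example: a_1 = 1 by convention, small higher-order terms, ratio k.
a = {1: 1.0, 2: 2.0e-6, 3: 1.0e-11, 4: 1.0e-16}
k = 0.1

# Effective coefficients c_m so that Delta I' = sum_m c_m (Delta I)^m.
c = {m: a_m * ((1 + k)**m - k**m) for m, a_m in a.items()}

def linearise(dIp, n_iter=30):
    """Solve Delta I' = sum_m c_m (Delta I)^m for Delta I, per pixel,
    starting the iteration from the measured difference itself."""
    dI = np.asarray(dIp, dtype=np.float64).copy()
    for _ in range(n_iter):
        higher = sum(c[m] * dI**m for m in (2, 3, 4))
        dI = (dIp - higher) / c[1]
    return dI

raw_diff = np.array([1000.0, 20000.0, 50000.0])   # reset-corrected pixels
lin_diff = linearise(raw_diff)                     # recovered Delta I
```

Because the higher-order terms are small compared with the linear one, the
iteration is strongly contracting for these values and converges in a
handful of passes.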
\vskip 0.2cm
If we wanted to use a completely general LUT approach we would require a 2D
LUT for all possible values of $I_1$ and $I_2$ ie. 65k $\times$ 65k in size,
or $4.3\times 10^9$ entries $=$ 8.6 Gbytes at 2 bytes per entry. Most likely
we would need a different correction for each ``channel'', making a total of
128 (WFCAM) or 256 (VISTA) $\times$ 8.6 Gbytes $=$ 1.1 or 2.2 Tbytes of LUT
for the cameras! Of course, if the range of values of $k$ is limited via
exposure-time quantisation, the total number of LUTs required decreases
considerably for the constant-illumination case, but this would be an ugly
and possibly impractical solution.
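A quick sanity check of the sizes quoted above (assuming 2-byte LUT entries):

```python
# Back-of-envelope check of the LUT sizes (2 bytes per entry assumed).
entries = 65536 * 65536                      # all (I_1, I_2) pairs for 16-bit data
gbytes = 2 * entries / 1e9                   # one 2D LUT, in Gbytes
print(round(gbytes, 1))                      # 8.6
print(round(128 * gbytes / 1e3, 1))          # WFCAM: 1.1 Tbytes
print(round(256 * gbytes / 1e3, 1))          # VISTA: 2.2 Tbytes
```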
\vskip 0.5cm
Practical considerations (eg. data volume) suggest two alternative solutions
for non-linearity correction: either correct the individual frames directly
in the DAS, by measuring the non-linearity and downloading the appropriate
LUTs or polynomial coefficients to it; or use a non-linear inversion on the
reset-corrected frames as outlined here. This methodology is not generally
applicable, eg. to multi-NDR/gradient-fitting readouts, but is directly
applicable to coadded (or coaveraged) frames of the same exposure time,
assuming constant illumination over the series.
\vskip 0.2cm
For reset-corrected data, the non-linear inversion is
competitive with complex operations on LUTs and much simpler to implement.
It also has the added advantage of removing all aspects of the non-linearity
correction from the DAS. The main disadvantages are that the method is
restricted to DCS RRR mode, and that if the illumination level is varying
rapidly (eg. twilight) the effective scale factors $k_i$ may be hard to
compute accurately, although for all realistic practical situations the
knock-on effect is likely to be negligible.
\vskip 0.5cm
{\bf Measuring the non-linearity}
\vskip 0.5cm
If all that is available is reset-corrected data from, say, a time series of
dome flats, it is still feasible to compute the non-linearity coefficients
directly.
\vskip 0.2cm
Given a series of measurements \{$i$\} of $\Delta I'_i$ and using the
previous notation and polynomial model
$$ \Delta I_i' = \sum_m a_m (I_2^m - I_1^m) = \sum_m a_m \Delta I^m_i \ [(1+k_i)^m - k_i^m] $$
where the $k_i$ are the exposure ratios under constant illumination.
In general $\Delta I_i = s \ t_i$ where $t_i$ is the exposure time of the
$i$th reset-corrected frame in the series and $s$ is a fixed (for the series)
unknown scale factor. The $k_i$ are computable from a knowledge of the
exposure times and the reset-read overhead; $t_i$ and $\Delta I_i'$ are
measured quantities, leaving the polynomial coefficients $a_m$ and the
scaling $s$ to be determined.
\vskip 0.2cm
Thus the model is defined by
$$ \Delta I_i' = \sum_m a_m (I_2^m - I_1^m) = \sum_m a_m \ s^m \ t_i^m \ [(1+k_i)^m - k_i^m] $$
and can be readily solved by standard linear least-squares methods using
the following sleight-of-hand. Since the scaling $s$ and the polynomial
solution $a_m$ are coupled, simply (and logically) requiring $a_1 = 1$
in the final solution avoids computing $s$ altogether.
\vskip 0.2cm
Rewriting the previous equation in the following form makes this more apparent
$$ \Delta I_i' = \sum_m (a_m \ s^m) \ t_i^m \ [(1+k_i)^m - k_i^m] =
\sum_m b_m \ t_i^m \ [(1+k_i)^m - k_i^m] $$
where now $b_m$ are the coefficients to be solved for. The final step is
to note that
$$ a_m \ = \ b_m / s^m = \ b_m / b_1^{\ m} $$
since by definition $a_1 = 1$.
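The fit itself is then a standard linear least-squares problem. A sketch in
Python/NumPy, with the truth values ($a_2$, $s$, the timings) invented to
simulate a dome-flat series:

```python
import numpy as np

# Invented truth for a simulated dome-flat series: a_1 = 1, a_2 = 2e-6
# (higher orders zero), unknown scale s, exposure times t_i, and reset-read
# overhead t_r giving exposure ratios k_i = t_r / t_i.
a_true = {1: 1.0, 2: 2.0e-6, 3: 0.0, 4: 0.0}
s_true, t_r = 800.0, 0.5
t = np.linspace(1.0, 40.0, 25)
kk = t_r / t

# Forward model: Delta I'_i = sum_m a_m (s t_i)^m [(1+k_i)^m - k_i^m].
dIp = sum(a_true[m] * (s_true * t)**m * ((1 + kk)**m - kk**m)
          for m in (1, 2, 3, 4))

# Least squares for b_m = a_m s^m: columns are t_i^m [(1+k_i)^m - k_i^m].
X = np.column_stack([t**m * ((1 + kk)**m - kk**m) for m in (1, 2, 3, 4)])
b, *_ = np.linalg.lstsq(X, dIp, rcond=None)

# Recover a_m = b_m / b_1^m, enforcing a_1 = 1 without ever computing s.
a_fit = b / b[0] ** np.arange(1, 5)
```

With real data one would accumulate the sums over many pixels (or fit per
channel) and add measurement weights, but the algebra is unchanged.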
\vskip 0.5cm
{\bf Individual frames available}
\vskip 0.5cm
If both reset and data frames are available then the problem is much simpler
since it can be phrased in the following way
$$I_1 \ = \ \sum_m a_m \ I'_1^{\ m} \ = \ s \ t_1 $$
$$I_2 \ = \ \sum_m a_m \ I'_2^{\ m} \ = \ s \ t_2 $$
where $t_1$ and $t_2$ are the known effective exposure times. Given a
series of measurements \{$i$\} these can
either be solved as a coupled pair of linear least squares minimisations
or by forming
$$\Delta t \ = \ \sum_m (a_m/s) \ (I'_2^{\ m} - I'_1^{\ m}) $$
as a standard linear least-squares problem. Again the
requirement that $a_1 = 1$ automatically defines $s$. These coefficients can
then be directly applied to the observed flux to give the desired linearised
units.
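A corresponding sketch for the $\Delta t$ formulation, again with invented
truth values ($a_2$, $s$, the timings) to simulate the series; the design
matrix columns are normalised before the solve purely for numerical
conditioning:

```python
import numpy as np

# Invented truth: linearising polynomial I = I' + a2 I'^2 with a2 = 2e-6,
# illumination scale s, reset-read time t1 and a series of read times t2_i.
a2, s_true = 2.0e-6, 500.0
t1 = 0.5
t2 = np.linspace(2.0, 30.0, 20)

def distort(I_lin):
    # Invert I = I' + a2 I'^2 for the measured counts I' (positive root).
    return (-1.0 + np.sqrt(1.0 + 4.0 * a2 * I_lin)) / (2.0 * a2)

I1p = distort(s_true * t1)          # reset frames (same across the series)
I2p = distort(s_true * t2)          # data frames

# Fit Delta t = sum_m d_m (I'_2^m - I'_1^m), d_m = a_m / s, by least squares.
X = np.column_stack([I2p**m - I1p**m for m in (1, 2, 3, 4)])
norms = np.linalg.norm(X, axis=0)
d, *_ = np.linalg.lstsq(X / norms, t2 - t1, rcond=None)
d /= norms

# The requirement a_1 = 1 fixes the scale: s = 1/d_1 and a_m = d_m / d_1.
s_fit = 1.0 / d[0]
a_fit = d / d[0]
```

The fitted $a_m$ can then be applied directly to the observed counts, as in
the LUT discussion above.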
\newpage
{\bf Summary}
\vskip 0.5cm
There are two general options for non-linearity correction. The first
is to compute and apply the non-linearity correction on individual readouts,
preferably in the detector controller system, for data volume considerations,
prior to computing the reset-corrected frames. The second method forgoes
direct access to the individual frames and instead computes and applies the
non-linearity correction at the front-end of the VDFS pipeline.
\vskip 0.2cm
Each option has certain advantages and disadvantages. The former method
is more complex from an operational point-of-view since the non-linearity
measures will need to be computed using a different readout mode to that
in normal use and then fed back to the controlling system for application.
Leaving the non-linearity operation for the pipeline simplifies the
operational aspects but makes various (not unreasonable) assumptions about the
timing of reset and readout operations and the stability of the illumination.
\vskip 0.2cm
The only clear drawback of correcting post facto is that it would probably
be very difficult, if not impossible, to correct multiple-NDR gradient-fitting
output that way. It is not obvious that this is a problem for VISTA,
or WFCAM, since an adequate alternative for narrow-band imaging is simply to
integrate for long enough to get a signal dominated by sky background
and stick with the DCS RRR mode.
\end{document}