Stationary process:
Definition
Formally, let $\{X_t\}$ be a stochastic process and let $F_X(x_{t_1+\tau},\ldots,x_{t_n+\tau})$ represent the cumulative distribution function of the joint distribution of $\{X_t\}$ at times $t_1+\tau,\ldots,t_n+\tau$. Then, $\{X_t\}$ is said to be stationary if, for all $n$, for all $\tau$, and for all $t_1,\ldots,t_n$,
$$F_X(x_{t_1+\tau},\ldots,x_{t_n+\tau}) = F_X(x_{t_1},\ldots,x_{t_n}).$$
Since $\tau$ does not affect $F_X$, $F_X$ is not a function of time.
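A small numerical illustration of the definition (a sketch with synthetic data; the processes and sample sizes are my own choices, not from the text): white noise has the same distribution at every time step, while a random walk does not, since its variance grows with time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# White noise: identically distributed at every time step (stationary).
white = rng.normal(0.0, 1.0, n)

# Random walk: cumulative sum of white noise; Var(X_t) grows like t,
# so its joint distribution depends on time (not stationary).
walk = np.cumsum(rng.normal(0.0, 1.0, n))

def half_variances(x):
    """Sample variance over the first and second halves of the series."""
    m = len(x) // 2
    return np.var(x[:m]), np.var(x[m:])

v1, v2 = half_variances(white)   # both close to 1
```

For the white noise, both halves estimate the same variance, consistent with a time-invariant distribution; the random walk's excursions grow roughly like $\sqrt{t}$, so its range dwarfs that of the noise driving it.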
Regression:
In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function which can be described by a probability distribution.
The performance of regression analysis methods in practice depends on the form of the data-generating process and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a sufficient quantity of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods can give misleading results.
Matched filtering:
Adaptive subtraction: Standard adaptive subtraction methods use the well-known minimum-energy criterion, stating that the total energy after optimal multiple attenuation should be minimal.
Adaptive subtraction
The goal of adaptive subtraction is to estimate the nonstationary filters $\mathbf{f}$ that minimize the objective function

$$J(\mathbf{f}) = \left\| \mathbf{d} - \mathbf{M}\mathbf{f} \right\|_2^2, \qquad (71)$$

where $\mathbf{M}$ represents the nonstationary convolution with the multiple model obtained with SRMP (i.e., Chapter ) and $\mathbf{d}$ are the input data. These filters are estimated in a least-squares sense for one shot gather at a time. Note that in practice, a regularization term is usually added in equation (71) to enforce smoothness between filters. This strategy is similar to the one used in Chapter . The residual vector $\mathbf{r} = \mathbf{d} - \mathbf{M}\mathbf{f}$ contains the estimated primaries.
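A minimal single-window, single-filter sketch of this least-squares estimation (the data, filter length, and damping value are illustrative assumptions; real implementations use nonstationary filters per shot gather and a smoothness regularization as noted above). The convolution matrix $\mathbf{M}$ is built from the multiple model, and the residual after subtraction stands in for the estimated primaries.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D example: data = primaries + (true wavelet * multiple model).
nt, nf = 400, 11
multiple_model = rng.normal(size=nt)                  # predicted multiples
true_filter = np.array([0.1, -0.4, 0.8, 0.3, -0.2])   # mismatch to be estimated
multiples = np.convolve(multiple_model, true_filter, mode="same")
primaries = np.zeros(nt)
primaries[::50] = 1.0                                 # sparse spikes as "primaries"
data = primaries + multiples

# Build M so that M @ f reproduces np.convolve(multiple_model, f, mode="same").
half = nf // 2
M = np.zeros((nt, nf))
for j in range(nf):
    idx = np.arange(nt) + half - j
    valid = (idx >= 0) & (idx < nt)
    M[valid, j] = multiple_model[idx[valid]]

# Least-squares filter with a small damping term (normal equations).
eps = 1e-6
f = np.linalg.solve(M.T @ M + eps * np.eye(nf), M.T @ data)

# Residual vector: data minus matched multiples = estimated primaries.
primaries_est = data - M @ f
```

Because the primaries are uncorrelated with the shifted multiple model, the filter estimate absorbs the multiples while leaving the primary spikes largely intact in the residual.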
Multiple subtraction
In this section, the multiple model computed in the preceding section is subtracted from the data with two techniques. The model is obtained after shot interpolation with the sparseness constraint. The first technique is a pattern-based method introduced in Chapter  that separates primaries from multiples according to their multivariate spectra. These spectra are approximated with prediction-error filters. The second technique adaptively subtracts the multiple model from the data by estimating nonstationary matching filters (see Chapter ).
Shaping regularization: http://www.reproducibility.org/RSF/book/jsg/shape/paper_html/
Least squares sense:
Least Squares Fitting
A mathematical procedure for finding the best-fitting curve to a given set of points by minimizing the sum of the squares of the offsets ("the residuals") of the points from the curve. The sum of the squares of the offsets is used instead of the offset absolute values because this allows the residuals to be treated as a continuous differentiable quantity. However, because squares of the offsets are used, outlying points can have a disproportionate effect on the fit, a property which may or may not be desirable depending on the problem at hand.
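The outlier sensitivity mentioned above is easy to demonstrate (a sketch with made-up points): fitting a straight line by minimizing the sum of squared vertical offsets, a single gross outlier drags the slope far from the true value.

```python
import numpy as np

# Points exactly on the line y = 2x + 1, plus a copy with one gross outlier.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y_out = y.copy()
y_out[9] = 100.0   # outlier (true value would be 19)

def fit_line(x, y):
    """Least-squares line: minimize the sum of squared residuals."""
    A = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    return slope, intercept

s_clean, b_clean = fit_line(x, y)      # recovers slope 2, intercept 1 exactly
s_out, b_out = fit_line(x, y_out)      # one squared 81-unit offset skews the fit
```

Because the offsets are squared, the single outlier contributes $81^2$ to the objective, so the fitted slope moves substantially to reduce it; with absolute offsets the pull would be far weaker.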
Matched filter:
In signal processing, a matched filter (originally known as a North filter[1]) is obtained by correlating a known signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated, time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise. Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the outgoing signal. Pulse compression is an example of matched filtering; it is so called because the impulse response is matched to the input pulse signal. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray images. Matched filtering is a demodulation technique with LTI filters to maximize SNR.
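A short detection sketch of the idea (the template, noise level, and position are illustrative choices): for white noise, matched filtering reduces to cross-correlating the received signal with the known template, and the correlation peaks where the template is buried.

```python
import numpy as np

rng = np.random.default_rng(3)

# Known real-valued template buried in white noise at a known location.
template = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
signal = rng.normal(0.0, 0.3, 200)
true_pos = 120
signal[true_pos : true_pos + len(template)] += template

# For white noise the matched filter is the (conjugated) time-reversed
# template, so applying it is cross-correlation with the template itself.
output = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(output))
```

The peak of `output` sits at (or within a sample of) `true_pos`, because the correlation is largest when the filter vector is parallel to the embedded signal, which is the geometric intuition developed in the derivation below.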
Derivation of the matched filter impulse response
The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals.
The matched filter is the linear filter, $h$, that maximizes the output signal-to-noise ratio,
$$y[n] = \sum_{k=-\infty}^{\infty} h[n-k]\, x[k].$$
Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly.
We can derive the linear filter that maximizes output signaltonoise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise.
Let us formally define the problem. We seek a filter, $h$, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal $x$.
Our observed signal consists of the desirable signal $s$ and additive noise $v$:
$$x = s + v.$$
Let us define the covariance matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation:
$$R_v = E\{v v^H\},$$
where $v^H$ denotes the conjugate transpose of $v$, and $E$ denotes expectation. Let us call our output, $y$, the inner product of our filter and the observed signal such that
$$y = \sum_k h^*[k]\, x[k] = h^H x = h^H s + h^H v = y_s + y_v.$$
We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise:
$$\mathrm{SNR} = \frac{|y_s|^2}{E\{|y_v|^2\}}.$$
We rewrite the above:
$$\mathrm{SNR} = \frac{|h^H s|^2}{E\{|h^H v|^2\}}.$$
We wish to maximize this quantity by choosing $h$. Expanding the denominator of our objective function, we have
$$E\{|h^H v|^2\} = E\{(h^H v)(h^H v)^H\} = h^H E\{v v^H\}\, h = h^H R_v h.$$
Now, our $\mathrm{SNR}$ becomes
$$\mathrm{SNR} = \frac{|h^H s|^2}{h^H R_v h}.$$
We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the covariance matrix $R_v$, we can write
$$\mathrm{SNR} = \frac{\left|\left(R_v^{1/2} h\right)^H \left(R_v^{-1/2} s\right)\right|^2}{\left(R_v^{1/2} h\right)^H \left(R_v^{1/2} h\right)}.$$
We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy–Schwarz inequality:
$$\left|a^H b\right|^2 \le \left(a^H a\right)\left(b^H b\right),$$
which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectors $a$ and $b$ are parallel. We resume our derivation by expressing the upper bound on our $\mathrm{SNR}$ in light of the geometric inequality above:
$$\mathrm{SNR} = \frac{\left|\left(R_v^{1/2} h\right)^H \left(R_v^{-1/2} s\right)\right|^2}{\left(R_v^{1/2} h\right)^H \left(R_v^{1/2} h\right)} \le \frac{\left[\left(R_v^{1/2} h\right)^H \left(R_v^{1/2} h\right)\right] \left[\left(R_v^{-1/2} s\right)^H \left(R_v^{-1/2} s\right)\right]}{\left(R_v^{1/2} h\right)^H \left(R_v^{1/2} h\right)}.$$
Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified:
$$\mathrm{SNR} \le s^H R_v^{-1} s.$$
We can achieve this upper bound if we choose
$$R_v^{1/2} h = \alpha\, R_v^{-1/2} s,$$
where $\alpha$ is an arbitrary real number. To verify this, we plug into our expression for the output $\mathrm{SNR}$:
$$\mathrm{SNR} = \frac{\left|\left(R_v^{1/2} h\right)^H \left(R_v^{-1/2} s\right)\right|^2}{\left(R_v^{1/2} h\right)^H \left(R_v^{1/2} h\right)} = \frac{\alpha^2 \left|\left(R_v^{-1/2} s\right)^H \left(R_v^{-1/2} s\right)\right|^2}{\alpha^2 \left(R_v^{-1/2} s\right)^H \left(R_v^{-1/2} s\right)} = s^H R_v^{-1} s.$$
Thus, our optimal matched filter is
$$h = \alpha\, R_v^{-1} s.$$
We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain
$$E\{|y_v|^2\} = 1.$$
This constraint implies a value of $\alpha$, for which we can solve:
$$E\{|y_v|^2\} = \alpha^2 s^H R_v^{-1} s = 1,$$
yielding
$$\alpha = \frac{1}{\sqrt{s^H R_v^{-1} s}},$$
giving us our normalized filter,
$$h = \frac{1}{\sqrt{s^H R_v^{-1} s}}\, R_v^{-1} s.$$
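The whole derivation can be checked numerically (a small sketch; the signal and the synthetic positive-definite covariance are made up for illustration): the filter $h \propto R_v^{-1} s$ attains the bound $s^H R_v^{-1} s$, has unit output noise power, and no other filter exceeds the bound.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5
s = rng.normal(size=n)                   # known (real) signal
A = rng.normal(size=(n, n))
R = A @ A.T + 5.0 * np.eye(n)            # synthetic SPD noise covariance R_v

Rinv_s = np.linalg.solve(R, s)
bound = float(s @ Rinv_s)                # upper bound s^H R_v^{-1} s

h = Rinv_s / np.sqrt(bound)              # normalized matched filter

def snr(g):
    """Output SNR of filter g: |g^H s|^2 / (g^H R_v g)."""
    return (g @ s) ** 2 / (g @ R @ g)

noise_power = float(h @ R @ h)           # should be 1 after normalization
```

`snr(h)` equals `bound` to floating-point precision, `noise_power` is 1, and random filters drawn for comparison stay at or below the Cauchy–Schwarz bound.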
If we care to write the impulse response of the filter for the convolution system, it is simply the complex-conjugate time reversal of $h$.
Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace $R_v$ with the continuous-time autocorrelation function of the noise, assuming a continuous signal $s(t)$, continuous noise $v(t)$, and a continuous filter $h(t)$.