A function plugin is an application which modifies, combines or massages data in a specific way. There are a number of function definitions built into the UDFAnalysis framework which are invoked from the Function Definition Menu. Each function has its own unique setup menu. This defines the input variables to the function, supplies any options used to control how the function operates, and specifies the variables to hold the results. These will be described in detail later.
All function setup menus use a common layout format and share some common options. The differences are in the function specific options. Rather than repeat the usage of the common options under each function description we will describe them here. Figure 15 shows an example function setup menu, in this case for the MASK function.
The menu consists of three sections common to all function menus. At the top is the comment field. Below it is a central text window with a set of buttons that allow interaction with the information in the window. At the bottom is the work area where the function-specific inputs, outputs, and any option settings are defined.
The first two sections of the menu are identical in all function setup menus. The only difference would be the contents of the text window which holds the function definitions. It should be noted that any function definition can be set up to be executed multiple times using different input variables and options. This could equally well be accomplished by selecting the function multiple times from the Function Definition Menu.
The operations and options in these common menu setup sections are presented below.
A short free form comment field. This should describe the purpose of the function. It will be added to the function description line in the text window in the FUNCTION DEFINITION menu when you next update it.
The text window contains the function definition. Each definition (and there may be multiple) occupies one or more lines. These contain the function option settings. When there are multiple definitions the function is run multiple times, starting with the first definition in the text window and continuing to the last. You can change this order by highlighting a function definition and then using the up and down arrows to move it up or down in the list. The buttons below the window allow new definitions to be added and current ones to be modified or deleted.
Creates a new function definition line in the text box. The settings are taken from the current set of information in the work area.
Takes the options in the currently highlighted line and copies them down into the work area, where they can be modified. Clicking the Accept button copies them back into the highlighted line. Edit can also be used to clone a function definition: edit a definition, unset the highlight, and then click Add to create a copy of the definition. Edit the new definition to make whatever changes you want.
Copies the work area definitions into the currently highlighted line. It is used after editing the settings in a function definition to copy the changes back into the appropriate line in the text window. You can also clone the current work area settings into an already defined function by highlighting it and then clicking the Accept button.
Unhighlights the currently highlighted line.
Resets the work area definitions to their default settings.
Deletes the function definition associated with the currently highlighted line. Use with care: there is no recovery of the options associated with this definition and no warning that you are about to delete the line.
The Bin function takes a set of data associated with an arbitrary order variable and bins it according to value. Both the range over which the data is binned and the number of bins within the range are selectable. The routine is often used in setting up probability distribution functions. Components of the input variable are processed individually and produce separate outputs.
The binning is done by setting up a grid of N cells with the first cell beginning at Vb and the last cell ending at Ve. The cell width is given by (Ve − Vb) ∕ N. The routine counts the number of values in the variable which fall into each bin cell and returns both a Y bin array (occurrences per bin) and an X bin array which contains the bin centers.
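The binning scheme above can be sketched in a few lines of Python. The `bin_values` helper and its argument names are illustrative only; the framework itself is driven from the setup menu:

```python
def bin_values(data, v_b, v_e, n, normalize=False):
    """Count occurrences of data values in n equal cells spanning [v_b, v_e)."""
    width = (v_e - v_b) / n
    x_bin = [v_b + width * (i + 0.5) for i in range(n)]  # cell centers
    y_bin = [0.0] * n
    for v in data:
        i = int((v - v_b) // width)      # cell index for this value
        if 0 <= i < n:                   # values outside the grid are dropped
            y_bin[i] += 1.0
    if normalize:                        # scale so the peak bin equals 1
        peak = max(y_bin)
        if peak > 0:
            y_bin = [y / peak for y in y_bin]
    return x_bin, y_bin
```

For example, `bin_values([0.1, 0.2, 0.9], 0.0, 1.0, 2)` returns centers `[0.25, 0.75]` with counts `[2.0, 1.0]`.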
Figures 16 and 17 show examples of an unpopulated and a populated setup menu for the bin function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable(s) to be binned.
The variable which will hold the returned bin centers. If running multiple instances of the function you can store the bin centers into the same variable for any definitions which bin over the same range and use the same number of bins, since the bin centers will be identical in each execution.
This variable contains three separate fields of information. The array elements 0 through N-1, where N is the number of defined bins, hold the number of events in each bin. The array element MaxP contains the center value of the bin containing the maximum number of events, and the array element MaxV contains that maximum count. If the grid is normalized, the value in MaxV is what each cell was normalized to. This variable must be of the same order as the input variable.
The value of the lower edge of the bin grid.
The value of the upper edge of the bin grid.
The number of cells in the bin grid. The width of the bins is given by
BinWidth = (UpperLimit - LowerLimit) / Bins
If set to YES the bin array is normalized to the bin with the largest number of occurrences.
The Collapse function collapses a 2D grid over a specified range into a 1D grid. The collapse can be done over either the X or Y direction. The returned grid will have the same size as that of the uncollapsed axis.
Figures 18 and 19 show examples of an unpopulated and a populated setup menu for the collapse function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The input variable representing the grid to be collapsed. The variable was presumably defined in a call to the GridFill function.
The variable which will contain the collapsed grid.
The minimum value at which to start the collapse.
The maximum value at which to end the collapse.
The direction in which to collapse the grid. This will be either X or Y.
The format used to collapse the data. This will be either AVG or SUM. If the format is SUM then the data within the collapsed range is added together, otherwise it is added together and then divided by the number of bins within the collapsed range.
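The collapse operation can be sketched in Python. The `collapse` helper and its argument names are illustrative only, not part of the framework:

```python
def collapse(grid, axis_vals, lo, hi, direction="Y", fmt="AVG"):
    """Collapse a 2D grid (grid[iy][ix]) into a 1D array.

    axis_vals gives the coordinate of each row (direction="Y") or
    column ("X"); only rows/columns whose coordinate lies in [lo, hi]
    are combined.
    """
    if direction == "X":   # collapse over columns; result has one entry per row
        keep = [j for j, v in enumerate(axis_vals) if lo <= v <= hi]
        sums = [sum(row[j] for j in keep) for row in grid]
    else:                  # collapse over rows; result has one entry per column
        keep = [i for i, v in enumerate(axis_vals) if lo <= v <= hi]
        sums = [sum(grid[i][j] for i in keep) for j in range(len(grid[0]))]
    if fmt == "SUM":
        return sums
    return [s / len(keep) for s in sums]   # AVG: divide by bins combined
```

The returned list has the length of the uncollapsed axis, matching the description above.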
The Conversion function performs basic unit conversions as well as some general conversions. The function works with arbitrary order variables.
Figures 20 and 21 show an unpopulated and a populated setup menu for the conversion function, respectively.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The input variable. These are the variable(s) to which the conversion will be applied.
The conversion to apply. These are shown in the table below.
| Conversion | Action |
| Phi360->Phi180 | Converts angles from 0∘ → 360∘ to −180∘ → 180∘ |
| Theta180->Theta90 | Converts angles from 0∘ → 180∘ to −90∘ → 90∘ |
| cm->m | Converts centimeters to meters |
| cm->km | Converts centimeters to kilometers |
| m->km | Converts meters to kilometers |
| cm2->m2 | Converts centimeters2 to meters2 |
| cm2->km2 | Converts centimeters2 to kilometers2 |
| m2->km2 | Converts meters2 to kilometers2 |
| cm3->m3 | Converts centimeters3 to meters3 |
| cm3->km3 | Converts centimeters3 to kilometers3 |
| m3->km3 | Converts meters3 to kilometers3 |
| deg->rad | Converts degrees to radians |
Set to YES to reverse the specified conversion. As an example, the conversion cm->m would be applied as m->cm.
Set to YES to apply the conversion to reciprocal (1 over) units. As an example, the conversion cm3->m3 would be applied as cm−3 → m−3.
The output variable(s) holding the results of the conversion. This must be of the same order as the input variable. The results can be stored back into the input variable.
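Aside from the two angle wraps, the conversions reduce to multiplicative factors. A Python sketch, with a hypothetical `convert` helper whose flags mirror the Reverse and Invert options described above:

```python
import math

# Multiplicative factors for the unit conversions (angle wraps handled separately).
FACTORS = {
    "cm->m": 1e-2, "cm->km": 1e-5, "m->km": 1e-3,
    "cm2->m2": 1e-4, "cm2->km2": 1e-10, "m2->km2": 1e-6,
    "cm3->m3": 1e-6, "cm3->km3": 1e-15, "m3->km3": 1e-9,
    "deg->rad": math.pi / 180.0,
}

def convert(values, name, reverse=False, invert=False):
    f = FACTORS[name]
    if reverse:          # apply the conversion in the opposite direction
        f = 1.0 / f
    if invert:           # convert reciprocal units, e.g. cm^-3 -> m^-3
        f = 1.0 / f
    return [v * f for v in values]
```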
The Cross Cal(ibration) function aligns variables to a common base variable. This is done by adjusting the average value of each variable to match the average of the base variable. The adjustment is done by shifting each variable by a constant additive or multiplicative value. If aA is the average of variable A and aB the average of the base variable, then the function will modify each element in A as:

Ai′ = Ai + (aB − aA)

when using the additive approach or as

Ai′ = Ai × (aB ∕ aA)

when using the multiplicative approach. In both approaches if aA = aB there is no change in A. Vectors can be cross calibrated component by component or as a whole by matching and shifting the vector magnitudes. In the latter case the cross calibrated vectors are constructed by multiplying the cross calibrated magnitudes by the unit vectors of the input vectors.
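The two adjustments can be sketched in Python (the `cross_calibrate` helper is illustrative and handles scalar variables only):

```python
def cross_calibrate(var, base, method="Add"):
    """Shift var so its average matches the average of base.

    Additive: each element gets mean(base) - mean(var) added;
    multiplicative: each element is scaled by mean(base) / mean(var).
    """
    a = sum(var) / len(var)
    b = sum(base) / len(base)
    if method == "Add":
        return [x + (b - a) for x in var]
    return [x * (b / a) for x in var]
```

After either adjustment the returned variable has the same average as the base, and if the averages already match the data is unchanged.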
Figures 22 and 23 show an unpopulated and a populated setup menu for the cross calibration function, respectively.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
A linked variable containing as components all of the measurements to be cross calibrated including the base measurement. The linked variables can be either vectors or scalars but not both.
The type of variables which make up the input linked variable. This is either Scalar or Vector.
The cross calibrated variables. This should be a linked variable of the same order as the input variable. The variables will be returned in the same order they are input.
Determines how the base variable is set. The options are Avg and Mean. Use Avg if you want to calibrate to the average of all the input data rather than to a specific variable. Use Mean if you want to select the base variable from those in the Input variable.
Do the cross calibration by either adding an offset to the component variables or by multiplying them by a constant value.
Select the base variable to use in the cross calibration if Method is set to Mean. For scalars this is an offset into the linked input variable to the variable to use as the base. For vectors this is a set of three offsets, comma separated without spaces, to the vectors whose x, y, and z variables are used as bases. Each component can come from a different vector.
Applicable only to vector variables. Set to Comp if the cross calibration is to be done component by component and set to Mag if only the vector magnitudes are to be used in the cross calibration.
The Cross Helicity function computes the vector Elsässer variables from the plasma density (cm−3), bulk plasma velocity (km/s) and the local magnetic field (nT). The equations used are:

Z+ = (V − aV) + (21.8 ∗ B∕√N − aB)

Z− = (V − aV) − (21.8 ∗ B∕√N − aB)

where B is the magnetic field, N the plasma density and V is the bulk velocity. The term 21.8 ∗ B∕√N is the local Alfvén velocity and aV and aB are the mean plasma bulk and Alfvén velocities computed over the time period covered by the data. The removal of the mean in the equations is optional.
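A Python sketch of the computation, assuming the convention Z± = (V − aV) ± (VA − aB) with the Alfvén velocity VA = 21.8 ∗ B∕√N (B in nT, N in cm−3, velocities in km/s); the `elsasser` helper and its argument layout are illustrative:

```python
import math

def elsasser(n, v, b, remove_mean=True):
    """Elsässer variables from density n (cm^-3) and lists of 3-component
    velocity v (km/s) and magnetic field b (nT) samples."""
    # Local Alfven velocity per sample: 21.8 * B / sqrt(N), component-wise
    va = [[21.8 * c / math.sqrt(ni) for c in bi] for ni, bi in zip(n, b)]
    if remove_mean:
        aV = [sum(vi[k] for vi in v) / len(v) for k in range(3)]
        aB = [sum(ai[k] for ai in va) / len(va) for k in range(3)]
    else:
        aV = aB = [0.0, 0.0, 0.0]
    z_plus = [[vi[k] - aV[k] + (ai[k] - aB[k]) for k in range(3)]
              for vi, ai in zip(v, va)]
    z_minus = [[vi[k] - aV[k] - (ai[k] - aB[k]) for k in range(3)]
               for vi, ai in zip(v, va)]
    return z_plus, z_minus
```

With `remove_mean=False` both aV and aB are zero, matching the Remove Average option described below.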
Figures 24 and 25 show an unpopulated and a populated setup menu for the cross-helicity function, respectively.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
A scalar variable containing the plasma density in units of cm−3.
A vector variable containing the plasma bulk velocity components in units of km/sec.
A vector variable containing the magnetic field components in units of nT.
A vector variable which will contain the results of the first equation above (the outward propagating component of the Elsässer computation).
A vector variable which will contain the results of the second equation above (the inward propagating component of the Elsässer computation).
Set to YES to subtract the average bulk plasma and Alfvén velocities in the Elsässer computations. If set to NO, both aV and aB are set to 0 in the equations.
The Def(ine)Grid function sets up a grid information structure which may be used later to define data grids into which data can be stored. A data grid is a two dimensional set of cells extending over a fixed range in both the X and Y direction. Both the ranges and the number of cells in each direction can be different. The grid can be cyclic in one, both, or neither direction. The data which is being stored in the grid can be filtered to lie within a defined intensity range. If the measurements being stored have width in either X or Y they can be stored to cover multiple cells in the grid by setting the storage method to be BAND. A 1-D grid definition is possible by setting the number of cells along one of the two directions to 1. If the X range of the grid is left unset, the X-axis of the grid can be set to the same range as the internal time grid set up in the Time Definition Menu.
Figures 26 and 27 below show an unpopulated and a populated setup menu for the defgrid function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins. Note: A single function definition entry occupies two lines in the text window.
The variable name under which the grid definition will be stored.
This is the minimum value of the range covered by the X axis of the grid. If this value is left unset (blank) then it is internally set to the start time in the Time Definition Menu. If this field is left blank then the X Maximum value field should also be left blank.
This is the maximum value of the range covered by the X axis of the grid. If this value is left unset (blank) then it is internally set to the end time in the Time Definition Menu. This field should be left blank if the X Minimum value field has been left blank.
If the X Minimum and Maximum values are defined then this is the number of grid cells along the X axis of the grid otherwise it is the time duration in seconds spanned by a single cell.
The method to use when storing data along the X direction in the grid. This can be either POINT or BAND. The two methods are discussed in Section 2.1.3 with respect to the system-wide time grid but can be generalized to any measurement. The same caveats mentioned for time storage are valid for general storage.
Set to YES if the X-axis of the grid is cyclic, NO otherwise.
This is the minimum value of the range covered by the Y axis of the grid.
This is the maximum value of the range covered by the Y axis of the grid.
The number of grid cells along the Y axis of the grid.
The method to use when storing data along the Y direction in the grid. This can be either POINT or BAND. The two methods are discussed in Section 2.1.3 with respect to the system-wide time grid but can be generalized to any measurement. The same caveats mentioned for time storage are valid for general storage.
Set to YES if the Y-axis of the grid is cyclic, NO otherwise.
Any data with values less than or equal to this value are excluded from the grid.
Any data with values greater than or equal to this value are excluded from the grid.
The Dynamic Power Spectra function computes the dynamic power spectrum associated with a scalar variable. The variable must have been stored in a time-based grid, either the system time based grid or one created within the UDFAnalysis framework. The dynamic power spectrum is returned as a two dimensional grid of spectra with time along the X axis and frequency along the Y axis. There is no averaging of the output spectra. The number of cells along the X axis depends on the length of the input variable, the spectral window size and the size of the window advance. An estimate of the number of cells can be obtained from the formula:
Nc ≈ (Np − Wp) ∕ Ap + 1
where Np is the total number of elements in the input variable, Wp is the number of points used per spectrum, and Ap is the number of points to advance after computing a spectrum.
The number of cells along Y in the grid as well as the frequency range covered depends on the method used to compute the spectra. When the spectra are computed using an FFT the number of cells is half the window length and the frequency range runs from 0 to the Nyquist Frequency. When using an MEM (Maximum Entropy Method) the number of cells is set to the number of frequency steps requested and the range will run from the specified beginning to ending frequency step.
Using an FFT requires that the window length be a power of 2. If it is not, the data window is zero padded up to the nearest power of 2.
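The window bookkeeping above can be sketched in Python, assuming the estimate Nc = (Np − Wp)∕Ap + 1 for the number of spectra; the `spectra_layout` helper is illustrative:

```python
def spectra_layout(n_points, window, advance):
    """Return (estimated number of spectra, zero-padded FFT length).

    The padded length is the smallest power of 2 >= window.
    """
    n_spectra = (n_points - window) // advance + 1
    padded = 1
    while padded < window:
        padded *= 2
    return n_spectra, padded
```

For example, 1000 input points with a 100-point window advanced 50 points at a time yields 19 spectra, each zero padded to 128 points for the FFT.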
You can select to store multiple power spectra runs within a single power spectra grid. This makes sense only when using an MEM to produce the power spectra and when the input data has been frequency filtered into 2 or more frequency bands. When storing multiple dynamic spectra in a common grid, the settings of the first definition are used to compute the spectra, with the exception, in the case of MEM, of the number of coefficients to use.
Figures 28 and 29 below show an unpopulated and a populated setup menu for the dynamic power spectra function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable name of the measurement from which the dynamic power spectra will be constructed.
The variable name of the dynamic power spectra. Leave this field blank if you want the dynamic spectra to be added to the grid associated with the last defined output variable.
The method used to compute the power spectra. This will be either MEM (Maximum Entropy Method) or FFT (Fast Fourier Transform).
The window length which is the number of points to include in the computation of each spectra.
The number of points to advance the window after computing each spectra.
The number of coefficients to use to compute an MEM based power spectra. This only has relevance if the selected method is MEM.
The number of frequencies at which to compute the power spectra when using the MEM method. The frequencies are equally spaced, either logarithmically or linearly between the specified start and stop frequencies. This value also sets the number of cells along the Y axis in the dynamic power spectra grid. This option only has relevance if the selected method is MEM.
The starting frequency in Hertz at which to compute the power spectra when using the MEM method.
The ending frequency in Hertz at which to compute the power spectra when using the MEM method.
The scaling to use when dividing the frequency range to determine the locations at which to compute the power. This can be either LINEAR or LOG.
The Equation function solves arbitrary algebraic expressions. These can include constants and arbitrary order variables. All variables of order greater than 1 used in an equation must have the same length. Scalar variables can always be used with higher order variables. The expressions can include any of the normal C math functions such as sqrt, cos, atan2, log, exp, etc.
When variables of order greater than 1 are included in the equation, the expression is solved individually for each component.
Figures 30 and 31 below show an unpopulated and populated setup menu for the equation function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable which holds the solutions to the expression. This must be of the same order as the highest order variable included in the equation.
The expression to be solved. Variables are specified as defined within the UDFAnalysis framework except that they are preceded by an underscore (_). The variable S would then be specified as _S and the vector variable vec,V as vec,_V. All variable names must be delimited by spaces unless they are the first or last item in the expression in which case the leading or trailing space can be dropped respectively.
Equations can contain constant values and any of the normal high level C functions such as atan, exp, log, sin, etc.
You can use variables of any order within an expression; however, with the exception of scalars, you cannot mix variables of different orders within an expression. When using linked variables the expression is evaluated for each component. As an example, the equation
2.0 * vec,_V + _S + 1.0
expands to the three equations
2.0 * _Vx + _S + 1.0
2.0 * _Vy + _S + 1.0
2.0 * _Vz + _S + 1.0
and so requires that the output variable be of order 3.
The example below shows the inclusion of a math function in an equation.
2.0 * exp( sqrt( _Vx * _Vx + _Vy * _Vy + _Vz * _Vz )/ _T )
Note: Only the variables need to be space delimited, so that, for instance, 2.0*exp( can be specified without spaces if desired.
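The expansion of a vector variable into component equations can be illustrated with a short Python sketch. The `expand_equation` helper is hypothetical and handles only the simple space-delimited case:

```python
def expand_equation(expr, components=("x", "y", "z")):
    """Expand vector variables written as vec,_V into one equation per
    component (_Vx, _Vy, _Vz), leaving scalars such as _S untouched."""
    eqs = []
    for c in components:
        out = []
        for tok in expr.split():
            if tok.startswith("vec,_"):
                # drop the vec, prefix and append the component letter
                tok = tok.replace("vec,", "") + c
            out.append(tok)
        eqs.append(" ".join(out))
    return eqs
```

Applied to `"2.0 * vec,_V + _S + 1.0"` this reproduces the three component equations shown above.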
The Filter function uses sets of low pass Savitzky-Golay filters to frequency filter data. The function can return data within, above, and below a defined frequency band. The pass band can be set to have zero width, in which case there is no data within the pass band. The function is often used in this mode to remove the mean (i.e., very low frequency data) from a data set. For variable inputs of order greater than 1, the function is run individually for each component in the variable.
Figures 32 and 33 show examples of an unpopulated and populated setup menu for the filter function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable name of the data to be filtered. The function accepts variables of any order.
The variables which hold the filtered data. When computing a pass band filter, there can be a maximum of three variables returned: the data within, above, and below the pass band. A maximum of two variables can be returned when the pass band has zero width. What is returned, and the order it is returned in, is determined by the Return option described below.
When using a linked input variable you need to make sure that there are enough output variables defined to hold the number of returns for each variable. Running a filter with a non-zero pass band and returning all three regions on a vector input requires an output variable of order 9 (3 variables for each component of the input variable). The data is returned in the order specified in the Return option, component by component.
The lower edge of the band pass filter in Hz. If it is set to the Upper Band Edge then the band pass has zero width and only the filtered data above and below this frequency can be returned.
The upper edge of the band pass filter in Hz. If it is set to the Lower Band Edge then the band pass has zero width and only the filtered data above and below this frequency can be returned.
The order of the fit polynomial used in the Savitzky-Golay computation. The default value of 2 is a good choice. Higher values tend to increase smoothing of narrow features in the data at the expense of broader ones.
The order of the derivative to use in the Savitzky-Golay computation. For data smoothing you want to leave this at the default value of 0. If you are looking to compute the numerical derivative of the data then you want to set this value to 1 or higher.
You can mask off the ends of the filtered data arrays. These are suspect due to the finite filter lengths used in the filtering. The amount of data masked off is half the used filter length which varies depending on the specified filter frequencies. If the option is set to YES the suspect cells in the data grids are set as unfilled.
A string of at most 3 characters specifying what quantities are to be returned. H indicates that the filtered data above the upper frequency cutoff is to be returned, L that the filtered data below the lower frequency cutoff is to be returned, and B that the filtered data within the pass band is to be returned. The latter only has meaning with a non-zero band width. The data is returned in the order specified in the option; hence, LBH would return the filtered data below the band in the first component of the output data, the filtered data within the band in the second component, and the filtered data above the band in the third component. A return setting of just L would return only the filtered data below the lower frequency cutoff.
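The band decomposition can be sketched in Python. Here a simple moving average stands in for the Savitzky-Golay low pass filters the function actually uses (whose lengths are derived from the requested frequencies); the `smooth` and `band_split` helpers are illustrative:

```python
def smooth(data, half):
    """Stand-in low pass filter: a centered moving average of
    half-width `half` (a longer filter passes lower frequencies)."""
    out = []
    n = len(data)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(data[lo:hi]) / (hi - lo))
    return out

def band_split(data, low_half, high_half):
    """Split data into the L, B, H parts described above.

    low_half is the (longer) filter for the lower band edge and
    high_half the (shorter) filter for the upper band edge.
    """
    low = smooth(data, low_half)        # below the lower band edge
    mid = smooth(data, high_half)
    band = [m - l for m, l in zip(mid, low)]    # within the band
    high = [d - m for d, m in zip(data, mid)]   # above the upper edge
    return low, band, high
```

By construction the three returned parts sum back to the input data point by point.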
The Fit function performs least squares and non-linear 1D fits to arbitrary data models, and 2D and 3D least squares fits to arbitrary order polynomial models. The function returns the fit coefficients as well as appropriate goodness of fit values (χ2, standard deviation, covariance values). The routine can also return the fit expanded and stored within a data grid identical in size to that of the input data. This is needed when the fit is to be used as input to other function calls, but not if it is just to be plotted. The plotting routines will automatically expand a fit from the coefficients prior to output.
The fitting routine accepts data of the form (X,Y,Z,V) and fits it to a model F such that V = F(X,Y,Z). Fitting is carried out using one of three different algorithms. Which algorithm is used depends both on the type of fit requested and on the dimensionality of the data. The model characteristics must be supplied to the routine as a Tcl procedure. There are a number of models currently available within the UDFAnalysis framework; however, if a suitable model is not found it is up to the user to provide the necessary procedure describing the model characteristics. The procedure specifics depend on the type of fit being performed and are described below.
1D Least Squares Fit. This fits a set of (X,V) data to a generic linear model. The characteristics of the model are supplied to the fit routine through an external Tcl procedure. The procedure computes and returns the model basis function values at any X. The interface to the procedure has the form:
proc FNAME X cM nC
where FNAME is the procedure name, X is the input value at which the model basis functions are computed, cM is the returned array of computed basis functions, and nC is the number of coefficients used in the fit. An example procedure TUpolyFunc is shown below. The basis functions for a polynomial are easy to compute, being just X^N for N running from 0 to nC - 1.
proc TUpolyFunc { X cM nC } {
upvar $cM A
# FIRST BASIS function is 1.0
set A(0) 1.0
set J 0
# COMPUTE succeeding basis functions by multiplying X by
# the previous basis function.
for { set I 1 } { $I < $nC } { incr I ; incr J } {
set A($I) [expr $A($J) * $X]
}
}
Routines similar to that shown above can be written for any model which can be expressed by a recursive algorithm, such as those based on Bessel functions or Legendre polynomials. If there is no recursive relationship for the function you need to limit the number of coefficients to some reasonable value to prevent the procedure from becoming monolithic in size.
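As an example of such a recursive model, the Legendre polynomial basis can be generated with Bonnet's recursion, (n+1) P(n+1) = (2n+1) x P(n) − n P(n−1). The framework itself expects a Tcl procedure of the form shown above; this Python sketch is for illustration only:

```python
def legendre_basis(x, n_c):
    """Return the basis values P_0(x) .. P_{n_c-1}(x) built recursively."""
    p = [1.0]                 # P_0(x) = 1
    if n_c > 1:
        p.append(x)           # P_1(x) = x
    for n in range(1, n_c - 1):
        # Bonnet recursion gives each polynomial from the previous two
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p
```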
1D Non-Linear Fit This fits a set of (X,V) data to a generic non-linear model using the Levenberg-Marquardt method of solution. The characteristics of the model are supplied to the fit routine through an external Tcl procedure. The procedure computes both the value of the model fit and the values of the derivatives of the model with respect to the fit coefficients at any X. The interface to the procedure has the form:
proc FNAME nA dFdC InFo
where nA is the number of coefficients being solved for, dFdC are the values of the derivative of the model with respect to each coefficient, and InFo is an array provided by the fitting routine containing values needed in the computation of both the value of the function and dFdC. The value of the function is returned through the procedure.
The procedure needs to have the array of X variables being fit as a global variable. This variable is always _X_. The InFo array contains the index of the X variable to use in forming the returned solutions in InFo(4) and the current estimates of the fit coefficients in InFo(5) through InFo(5+nA-1).
An example of the procedure used in the fit of a set of data to the model

F(X) = (A*X^2 + B*X - C) * exp(-D*X^2)

where A, B, C and D are the coefficients being solved for is shown below.
proc FunC { nA dFdC InFo } {
global _X_
upvar $dFdC dA
upvar $InFo iN
# THIS is the X value to solve the equations for
set xV $_X_($iN(4))
# THESE are the current values of the coefficients
set A $iN(5)
set B $iN(6)
set C $iN(7)
set D $iN(8)
# COMPUTE X**2, the value of the exponent, and the terms
# multiplying the exponent.
set xS [expr $xV * $xV]
set ExP [expr exp(-$D * $xS)]
set PolY [expr $A * $xS + $B * $xV - $C]
# THE value of the function which is returned through
# the procedure is
set yV [expr $PolY * $ExP]
# THE derivative with respect to A [X^2 * exp(-D * X^2)]
set dA(0) [expr $xS * $ExP]
# THE derivative with respect to B [X * exp(-D * X^2)]
set dA(1) [expr $xV * $ExP]
# THE derivative with respect to C [-exp(-D * X^2)]
set dA(2) [expr -$ExP]
# THE derivative with respect to D [-X^2 * yV]
set dA(3) [expr -$xS * $PolY * $ExP]
return $yV
}
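A quick way to validate a procedure like FunC is to compare its analytic derivatives against central finite differences. A Python sketch using the same model (the helper names are illustrative):

```python
import math

def model(x, a, b, c, d):
    # The model from the FunC example: (A*X^2 + B*X - C) * exp(-D*X^2)
    return (a * x * x + b * x - c) * math.exp(-d * x * x)

def derivs(x, a, b, c, d):
    # Analytic dF/dA, dF/dB, dF/dC, dF/dD, as computed in FunC
    xs = x * x
    ex = math.exp(-d * xs)
    poly = a * xs + b * x - c
    return [xs * ex, x * ex, -ex, -xs * poly * ex]

def check(x, coeffs, h=1e-6):
    """Return |analytic - central difference| for each coefficient."""
    errs = []
    for i, analytic in enumerate(derivs(x, *coeffs)):
        up = list(coeffs); up[i] += h
        dn = list(coeffs); dn[i] -= h
        numeric = (model(x, *up) - model(x, *dn)) / (2 * h)
        errs.append(abs(numeric - analytic))
    return errs
```

Small differences confirm that the derivative expressions in the procedure match the model; a mismatch here is the usual cause of a Levenberg-Marquardt fit that fails to converge.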
In addition to the external routine which supplies the characteristics of the model, the non-linear fit also requires an initial guess of the model coefficients. The better the guess, the faster convergence is reached in the solution. If the guess is too far off, convergence is sometimes never reached.
2 and 3D Least Squares Fits These fits work on 2D and 3D data sets. Because they fit only to a model polynomial, neither requires an external user supplied function.
Figures 34 and 35 show an unpopulated and a populated setup menu for the fit function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
Set to YES if the fit is to be made using the least-squares fitting algorithm. Set to NO if the Non-Linear fitting algorithm is to be used. The Least Squares algorithm works with any dimension data while the Non-Linear algorithm works only with 1D data. Both Least Squares and Non-Linear fits to 1D data sets require an external Tcl routine to describe the model characteristics.
The input data as a linked variable which is generally set up using the VMAP function. The number of components depends on the dimension of the function being fit and is generally 1 larger than the dimension. In their component order in the input variable, 1D fits should contain the X then V variables, 2D fits the X, Y, and V variables, and 3D fits the X, Y, Z, and V variables. If there is a Weighting function associated with the V variable it should be included as the last component in the input variable.
A 4 character string specifying the scaling to use for the input data to the fit. The first character in the string is associated with the X variable, the second the Y, the third the Z and the fourth the V. If the variable is not included in the fit then its scaling setting is ignored. Variables which have their scaling set to L (linear) are used in the fit as input. If it is set to any other character then the scaling is assumed to be logarithmic and the log10 of the variable is used in the fit. In the latter case, values which are <= 0 are not included in the fit.
For 1D fits, either least-squares or non-linear, this is the number of coefficients used in describing the model. For 1D fits to a polynomial this is 1 larger than the order of the polynomial to be fit to. For 2D and 3D fits this is the order of the polynomial being fit to (the function will determine the actual number of coefficients).
When performing a 1D Non-Linear fit the user must supply a set of initial guesses for each of the model coefficients. These are set in the variable given here. Use the SETV function to set up the values.
The name of the external Tcl procedure describing the model being fit to. This field is only used in 1D linear and non-linear fits. It is not used for either 2D or 3D least squares fits. The UDFAnalysis framework contains several built-in procedures to describe various models. These are shown in the table below.
| Procedure | Notes | Expression |
| TUpolyFunc | 1D Linear | (expression not reproduced) |
| TUlgp0Func | 1D Linear | (expression not reproduced) |
| TUlgpFunc | 1D Linear, −1.0 ≤ X ≤ 1.0 | (expression not reproduced) |
| TUsphFunc | 2D Linear, Y = cosθ, X = ϕ (deg) | (expression not reproduced) |
| APgaussFunc | 1D Non-Linear | (expression not reproduced) |
| APtgaussFunc | 1D Non-Linear | (expression not reproduced) |
| APlognormalFunc | 1D Non-Linear | (expression not reproduced) |
| APkappaFunc | 1D Non-Linear | (expression not reproduced) |
The output variable which may be of order 1 or 2. The first variable holds both the returned fit coefficients as well as other information returned by the fitting routine. The second variable when present will contain the expanded model fit.
The returned data from the fitting routine has its information stored in the indices indicated in the table below.
| Index | Contents |
| aType | Identifies the variable type. It is set to FIT. |
| fDEF | Identifies the function definition used in the fit. |
| fDim | The dimension of the fit |
| fFunc | The external function used in 1D Least-Squares and Non-Linear fits |
| LSq | YES if a Least-Squares fit was performed |
| xSca | X variable scaling (Linear or Log) |
| ySca | Y variable scaling (Linear or Log) |
| zSca | Z variable scaling (Linear or Log) |
| vSca | V variable scaling (Linear or Log) |
| nC | Number of coefficients returned |
| gFit | A goodness of fit value. There is no value returned with non-linear fits. With a 1D least squares fit this is χ2 and with 2D and 3D least squares fits this is the standard deviation of the data from the model. |
| CV# | The covariance values for each of the coefficients in the model. These are returned only for the 1D least squares and non-linear fits. # runs from 0 to nC - 1 |
| # | The model coefficients. # runs from 0 to nC - 1 |
The Gridfill function either creates and fills a new grid from the input variables or adds them into an already existing grid. Empty cells in a grid can be filled using a linear interpolation scheme.
Figures 36 and 37 show examples of an unpopulated and populated setup menu for the grid fill function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable representing the grid(s) to be created or added to. The routine accepts variables of arbitrary order. If this entry is left blank it reverts to the last defined instance. This allows multiple variables to be placed within a single grid definition.
If a grid is to be created from the input X, Y, and I variables then this is the variable holding the grid information structure which was presumably created through the DefGrid function. Leave this field blank if you want just to fill an existing grid. In this case the Grid variable should already exist and will be appended to according to the grid information under which it was created.
The X variable to use when creating the grid. This should be a variable of order 2 if the measurements have width in the X direction (i.e., a start and stop value). If the variable has order 1 then the stop values are set to the start values which makes this a point measurement. When constructing multiple Grids, the same set of X values is used in each construction.
The Y variable to use when creating the grid. This should be a variable of order 2 if the measurements have width in the Y direction (i.e., a start and stop value). If the variable has order 1 then the stop values are set to the start values which makes this a point measurement. When constructing multiple Grids, the same set of Y values is used in each construction.
The Intensity variable(s) to use when creating the grid. There should be one defined intensity variable for each grid being constructed.
The fill option. This is OFF if no fill is to be performed, X to fill just along the X direction, Y to fill just along the Y direction, XY to fill first along the X direction and then along the Y direction and YX to fill first along the Y direction and then along the X direction.
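As a rough sketch (not the framework's code) of what a linear fill pass along one direction does, with empty cells represented by None:

```python
# Fill empty cells (None) in one row of a grid by linear interpolation
# between the nearest filled neighbors.  Leading and trailing empty cells
# have no bracketing values and are left unfilled.
def fill_row(row):
    out = list(row)
    filled = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(filled, filled[1:]):
        for i in range(a + 1, b):
            frac = (i - a) / (b - a)
            out[i] = out[a] + frac * (out[b] - out[a])
    return out
```

An XY fill would apply such a pass along every row first and then along every column.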
The Index function constructs an index array for a scalar variable. The index array has the same length as the parent variable and runs sequentially from 0.5 to N − 0.5 where N is the array length.
Figures 38 and 39 show an unpopulated and a populated setup menu for the index function, respectively.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The scalar variable for which the index array is generated.
The index array.
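The index construction is simple enough to state exactly (a sketch, not the framework's code):

```python
# An index array for a variable of length N: 0.5, 1.5, ..., N - 0.5.
def index_array(n):
    return [i + 0.5 for i in range(n)]
```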
The Mask function constructs a data mask from a scalar variable which can then be applied to other variables to mask out specific elements in their array. A mask is a binary grid (0 or 1) of the same length as the variable from which it is constructed. Elements which are 0 in the mask correspond to the cells in the variable from which the mask was built which meet a defined condition. When the mask is applied to other variables, all cells in the variable for which the mask cell is 0 are set to empty (unfilled). The masked variable can be returned either in the input variable or in a new variable (preserving the input variable).
The mask function allows for continued function definitions. To continue a definition leave the source option blank. The function appends the option definitions on this line to those of the last function definition in which a source was specified. The current Condition and Value options in the continued definition line are ignored. This improves execution speed for function definitions which would use the same mask.
Figures 40 and 41 show an unpopulated and a populated setup menu for the mask function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The name of the scalar variable from which the mask is created. Leave this option blank if you want to use the mask generated from the last execution of the function.
The conditional statement to use when constructing the mask. This is ignored if no source is defined. The conditions available and their usage are shown in the following table where D is the data being tested and V1 and V2 are the conditional values defined in the next entry. The mask entry is set to 0 if the condition is met.
| Condition | Definition |
| > | D > V1 |
| >= | D >= V1 |
| == | D == V1 |
| != | D != V1 |
| <= | D <= V1 |
| >< | (D > V1) && (D < V2) |
| >=<= | (D >= V1) && (D <= V2) |
| >=< | (D >= V1) && (D < V2) |
| ><= | (D > V1) && (D <= V2) |
The constants used in the condition tests according to the table above. All conditional tests make use of the first constant while only the last four conditions listed in the table make use of the second constant value. This option is ignored if no source is defined.
The variables to apply the mask to. The routine accepts variables of arbitrary order. The variable components are individually processed.
The variable holding the masking results. The output variable must be of the same order as the target variable. The results of the masking may be stored back into the target variable which would overwrite its current contents.
Set this option to YES if you want any masked out cells in the masked variable to be filled using a linear interpolation. Leave it at NO otherwise.
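The mask construction and application can be sketched as follows (illustration only, not the framework's code; empty cells are represented by None):

```python
# Build a binary mask from a source variable: cells meeting the condition
# get mask value 0, all others 1.
def build_mask(source, condition, v1, v2=None):
    tests = {
        ">":    lambda d: d > v1,
        ">=":   lambda d: d >= v1,
        "==":   lambda d: d == v1,
        "!=":   lambda d: d != v1,
        "<=":   lambda d: d <= v1,
        "><":   lambda d: v1 < d < v2,
        ">=<=": lambda d: v1 <= d <= v2,
        ">=<":  lambda d: v1 <= d < v2,
        "><=":  lambda d: v1 < d <= v2,
    }
    return [0 if tests[condition](d) else 1 for d in source]

# Apply the mask to a target: cells where the mask is 0 are emptied.
def apply_mask(mask, target):
    return [t if m else None for m, t in zip(mask, target)]
```

A continued function definition corresponds to reusing the mask returned by build_mask on additional targets without rebuilding it.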
The math function provides constructs of the form

C = A op B

where A and B are variables defined within the UDFAnalysis framework, op is a mathematical operation to perform and C is the variable in which the result is returned. You can substitute a constant for the B variable, but not for A. The function is designed to work with variables of arbitrary order. How the expression is evaluated depends on the operation.
Figures 42 and 43 show an unpopulated and a populated setup menu for the math function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The A variable in the math expression. This may be a linked variable. It cannot be a constant value.
The math operation. These are summarized below. It should be understood that in the definitions A refers to Input A, B refers to Input B and C refers to the Output variable. If Input B is not indicated as being used in the definition then it is not required in the expression and should be left blank.
| Operator | Definition |
| +, -, *, / | The basic addition, subtraction, multiplication and division math operators. They require both A and B. If A and B are linked variables they must be of the same order. The expression is evaluated once for each pair of components in the A and B variables. If B is a scalar or constant then the expression is evaluated for each A component using the same B value in the evaluation. C must be of the same order as A. |
| ABS | The absolute value executed as C = abs(A). The expression executes once for each component. C must be of the same order as A. |
| ATAN | The arc tangent executed as C = atan2(A,B), with C returned in radians. C must be the same order as A. B must be of the same order as A or a scalar. The expression executes once for each component. |
| ATAND | The arc tangent executed as C = atan2(A,B), with C returned in degrees. C must be the same order as A. B must be of the same order as A or a scalar. The expression executes once for each component. |
| AVG | The average operator executed as C = ⟨A⟩, the mean of the values in A. The expression executes once for each component. |
| DETREND | The detrend operator executed as C = A − ⟨A⟩, where ⟨A⟩ is the mean of the values in A. The expression executes once for each component. |
| EXP | Base e exponentiation executed as C = exp(A). The expression executes once for each component. C must be the same order as A. |
| FMOD | The floating point modulus executed as C = fmod(A,B). C must be of the same order as A. B must be of the same order as A or a scalar. The expression executes once for each component. |
| LOG10 | Base 10 logarithm executed as C = log10(A). Grid values in C are set to undefined if the corresponding grid value in A is <= 0.0. The expression executes once for each component. C must be the same order as A. |
| LOGE | Base e logarithm executed as C = ln(A). Grid values in C are set to undefined if the corresponding grid value in A is <= 0.0. The expression executes once for each component. C must be the same order as A. |
| NORM | Normalization operator executed as C = A / Max(|A|) where Max(|A|) is the maximum absolute value in the A grid. The values in C will all lie between -1.0 and 1.0. C must be the same order as A. |
| POW | Exponentiation to an arbitrary base executed as C = pow(A,B) where A is the base and B the exponent. C must be of the same order as A. B must be of the same order as A or a scalar. The expression executes once for each component. |
| SQRT | Square root executed as C = sqrt(A). Grid values in C are set to undefined if the corresponding grid value in A is < 0.0. The expression executes once for each component. C must be the same order as A. |
| SUM | The summation operator executed as C = ∑ Ai + ∑ Bj. The sum is over all i components of A and j components of B. Input B is not required but is used if defined. C is a scalar variable. If A is not a linked set of variables and no B variable is defined then this operator will return a copy of A in C. |
The B variable in the math expression. This may be a linked variable or a constant value.
The variable which holds the result of the operation. The order of the output variable is determined by the operation used in the expression. See the table under the Operation option above.
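A few of the operators in the table above can be sketched as follows (illustration only, not the framework's implementation):

```python
def op_avg(a):                      # AVG: C is the mean of the values in A
    return sum(a) / len(a)

def op_detrend(a):                  # DETREND: C = A - <A>
    m = sum(a) / len(a)
    return [x - m for x in a]

def op_norm(a):                     # NORM: C = A / max(|A|), values in [-1, 1]
    peak = max(abs(x) for x in a)
    return [x / peak for x in a]
```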
The MEM function uses the Maximum Entropy Method to compute the power spectrum of a variable with evenly sampled data. Variables of arbitrary order are accepted with their components processed individually. The output of the MEM is a non-time based grid of length Number of Steps. It can only be used in Functions with variables of like length.
Figures 44 and 45 show an unpopulated and a populated setup menu for the MEM function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable that the MEM operates on. The routine accepts variables of arbitrary order. The components are individually processed to produce a power spectrum of each.
The variable containing the output power spectral density. The variable must be of the same order as the Input variable.
A scalar variable in which the frequency steps at which the power spectral densities are computed are returned. The function computes the power at the same frequency steps for each processed component in the input variable.
The number of linear prediction coefficients to use in computing the power spectral density. There is not a straightforward method to determine how many coefficients to use. Too small a value will omit or smooth high frequency peaks and sometimes will not split closely related frequencies while too large a value tends to create noise in the spectra. A good value to start out with is something on the order of 3 to 6% of the total number of data points.
The beginning frequency in Hz of the power spectrum. This will be the first value in the Output Frequency array.
The ending frequency in Hz of the power spectrum. This will be the last value in the Output Frequency array.
The number of frequencies at which to compute the power spectral densities. The spectral densities are computed starting at the beginning frequency and ending at the ending frequency in steps of

(eF − bF) / (nF − 1)

where eF is End Frequency, bF is Begin Frequency and nF is Number of Steps.
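The resulting frequency grid can be sketched as (illustration only, assuming the steps run inclusively from the begin to the end frequency):

```python
# Build the nF output frequencies: the first is bF, the last is eF,
# spaced by (eF - bF) / (nF - 1).
def frequency_steps(bf, ef, nf):
    step = (ef - bf) / (nf - 1)
    return [bf + i * step for i in range(nf)]
```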
The Print function creates an ASCII dump file of selected variables. The file consists of a short header which lists the dumped variables followed by the data itself. The routine handles variables of arbitrary order. You can print both time and non-time based variables but they cannot be mixed. All of the variables being output must have the same grid size; if not, the routine fails silently.
The print function allows for continued function definitions. To continue a definition leave the output file option blank. The function appends the option definitions on this line to those of the last function definition in which an output file was specified. This allows multiple variables to be included in the same file.
Figures 46 and 47 show an unpopulated and a populated setup menu for the Print function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The name of the ASCII output file. If you want the file in a directory other than the current working directory you need to include the full path name. If the file already exists it will be overwritten without warning. If you leave this option blank the variables are output together with the variables in the last function definition which includes a defined output file.
If the data being dumped is time based then select YES here. This will add a start and stop time to each line of data in the file. You can say NO even if the data is time based, in which case the start and stop times will be replaced by a sequential counter beginning at 0. You should not mix time and non-time based data in the same dump file. The option is ignored if the File Name option is left blank.
The C format you want the data to be output with. The same format is applied to all of the dumped variables in this definition. Different definitions can be output using different formats.
The variable to be dumped. This can be a variable of arbitrary order in which case all its components will be printed.
The Random function constructs a variable of random numbers. The length of the variable is set to that of the input variable, whose sole purpose is to provide the length. The random values can be of either sign and will range up to a preselected magnitude. The seed to the random number generator is selectable.
Figures 48 and 49 show an unpopulated and a populated setup menu for the Random function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The scalar value used to provide the length of the random number variable.
The variable containing the random numbers.
The sign range over which to generate the random values. This can be PN for positive and negative values, P for positive only values and N for negative values only.
The maximum magnitude of the random values. If the range is set to PN then values will be generated between -(Max Value) and (Max Value), if the range is set to N then values will be generated between -(Max Value) and 0.0, and if the range is set to P then values will be generated between 0.0 and (Max Value).
The initial seed value to use when computing the random number. The value can be any number between 1 and 65536. If left blank, the current seed value will be used.
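The range options can be sketched as follows (a Python illustration, not the framework's generator; the actual distribution and seeding scheme are not specified here):

```python
import random

# Map uniform deviates onto the requested sign range and magnitude.
def random_values(n, sign_range, max_value, seed=1):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()                 # uniform in [0, 1)
        if sign_range == "P":            # positive only: [0, max)
            out.append(u * max_value)
        elif sign_range == "N":          # negative only: (-max, 0]
            out.append(-u * max_value)
        else:                            # "PN": both signs, (-max, max)
            out.append((2.0 * u - 1.0) * max_value)
    return out
```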
The Spatial Derivative function computes the curl (∇×) and divergence (∇⋅) of a vector field or the gradient (∇) and Laplacian (∇2) of a scalar field. A vector or scalar field is established by specifying the values of a vector or scalar value at multiple positions within a spatial volume and then fitting the data to a 3D polynomial model. The order of the polynomial is selectable, however, it must be low enough that the number of coefficients used in the fit is less than or equal to the number of defined spatial positions in the volume. The minimum number of positions for a first order fit is 4, for a second order fit 10, and for a third order fit 20. The positions used in the fit cannot be coplanar. The derivatives are computed at the arithmetic average position of all points making up the volume. The function can also return the vector or scalar value at the same position.
In most instances the data are fit to a first order 3D polynomial. This has the form:

Vi = A0,i + A1,i x + A2,i y + A3,i z

where i represents the x, y, or z component for vector variables and can be ignored in the case of scalar variables. The A's are the fit coefficients.
Computation of the divergence and curl as well as the gradient and Laplacian is straightforward. The divergence is given by:

∇⋅V = A1,x + A2,y + A3,z

the curl is given by:

∇×V = (A2,z − A3,y, A3,x − A1,z, A1,y − A2,x)

the gradient of a scalar field is given by

∇V = (A1, A2, A3)

and the Laplacian is 0. Here the second index on each coefficient denotes the vector component it belongs to. None of the derivatives have a spatial dependence in a first order fit. Fits to higher order polynomial models introduce a spatial dependence for all derivatives and allow for the possibility of a non-zero Laplacian.
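A minimal sketch of the first order case (not the framework's code), assuming exactly 4 non-coplanar positions so the 4 coefficients per component are determined exactly rather than by least squares:

```python
# Solve a 4x4 linear system by Gauss-Jordan elimination with partial pivoting.
def solve4(m, b):
    n = 4
    a = [row[:] + [b[i]] for i, row in enumerate(m)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(n):
            if r != c:
                f = a[r][c] / a[c][c]
                a[r] = [x - f * y for x, y in zip(a[r], a[c])]
    return [a[r][4] / a[r][r] for r in range(n)]

# Fit V_i = A0_i + A1_i*x + A2_i*y + A3_i*z for each component i, then
# read the divergence and curl directly off the coefficients.
def fit_and_derive(positions, vectors):
    rows = [[1.0, x, y, z] for x, y, z in positions]
    A = [solve4(rows, [v[i] for v in vectors]) for i in range(3)]
    div = A[0][1] + A[1][2] + A[2][3]           # dVx/dx + dVy/dy + dVz/dz
    curl = (A[2][2] - A[1][3],                  # dVz/dy - dVy/dz
            A[0][3] - A[2][1],                  # dVx/dz - dVz/dx
            A[1][1] - A[0][2])                  # dVy/dx - dVx/dy
    return div, curl
```

For the linear field V = (y, z, x) this recovers a divergence of 0 and a curl of (−1, −1, −1).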
At present this function only works for first and second order fits as it is lacking the code to compute the derivatives for higher order polynomials.
When working with space plasma data it is possible in certain situations to artificially extend the number of input positions. This can be done when a) the bulk plasma velocity is known and b) the Taylor frozen-in field theorem is valid. In its default mode the function computes one set of spatial derivatives at each grid point in the input data. Using the frozen-in theorem this can be expanded to include 1 or more neighboring cells. This is done by using the measured plasma velocity together with the time difference between the base and extended cells to shift their positions to where the measured vector or scalar plasma moments would have existed at the time of the base measurement. This adds more measurement positions to the volume (as well as increasing its size), which allows fits to higher order polynomial functions. It is one way, when there is insufficient data to fit to a second order polynomial, to allow such fits to be done and to obtain non-zero estimates of the Laplacian. If, for example, you have measurements at 4 positions, you need to extend the volume to include the next two cells (8 extra points) to fit to a general second order polynomial.
Invocation of the frozen-in theorem allows manipulation of the data in another manner. When the measurement points are coplanar or close to coplanar, a time delay can be added to one or more of the measurements to take them out of coplanarity. This does not increase the number of data points in the volume but changes their locations.
The spatial derivative function allows for continued function definitions. To continue a definition leave the position variable blank. The function appends the option definitions on this line to those of the last function definition in which a position variable was specified. The Delay, Fit Order, and Extend options on this line will be ignored. This allows a number of measurements to be specified at the defined positions and also improves execution speed as the routine does not have to repeat computations which deal with the positions.
Figures 50 and 51 show an unpopulated and a populated setup menu for the Spatial Derivative function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
A linked variable containing the position vectors at which the input measurements are taken. Each position vector in the variable is given in the order X, Y, and Z respectively. There is a 1 to 1 correspondence between the position vectors and the input measurements. Leave the positions blank if the input information defined on this line is to be grouped with the information from the last line for which a position variable was defined. The position variable is generally linked using the VMAP function.
An array of time offsets, one per position vector, generally defined through the SETV function. These are used when working with plasma data to artificially move one or more of the measurement positions according to the Taylor frozen-in field theorem. To disable this option leave it blank. It is also ignored if the Positions option has been left blank, in which case the delays from the definition in which the positions were specified are used.
The computation of the new positions is made using the vector plasma bulk velocity, which must be the first defined input variable in the function definition. Delays are given in units of seconds and are applied as

P′ = P₀ + V T

where P₀ is the position at a delay of 0 seconds, V is the bulk plasma velocity, T is the delay time, and P′ is the new position. Delays can be either positive or negative.
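The repositioning can be sketched as (illustration only):

```python
# New position = P0 + V * T under the frozen-in approximation.
def shift_position(p0, v, t):
    return tuple(p + vi * t for p, vi in zip(p0, v))
```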
The input variable type. This can be set to either VECTOR or SCALAR.
A linked variable containing the set of vector or scalar measurements taken at each defined position. The type of the measurements must match the IType setting.
The number of grid cells to use to form the volume. The default is 1. If you are fitting plasma data in which the Taylor frozen in condition is valid you can set this to higher values to increase the number of points defining the volume used in the computation. When doing this the first input variable in the function definition must be the set of plasma velocity vectors being used to reposition the added data points. The option is ignored if the Positions option has been left blank.
The order of the polynomial to fit to. It is defaulted to 1.
The results of the derivative computations. If the input variable is a vector this is the curl (a vector quantity) and divergence (a scalar quantity), otherwise it is the gradient (a vector quantity) and Laplacian (a scalar quantity). The Laplacian is only returned if the fit order is greater than 1.
The number of components in the derivative variable can be used to limit what is returned. If the variable is a scalar only the scalar derivative will be returned, if it is a vector then only the vector derivative will be returned, and if it is a fourth order variable both the vector and scalar derivatives are returned. In the latter case the three vector components are returned in the first three variable components and the scalar is returned in the fourth component.
The value of the scalar or vector input variable at the arithmetic average position of all points making up the volume. This is the location at which the derivatives are computed. The variable must be the same order as the input variable.
The Setv function allows constant values to be assigned to locations in a variable array. Up to 4 constants can be assigned at once beginning at a selectable array index and incrementing from there.
The setv function allows for continued function definitions. To continue a definition leave the variable option blank. The function appends the values on this line to the last defined variable, continuing at the next index if no index position has been defined, or beginning at the defined index.
Figures 52 and 53 show an unpopulated and a populated setup menu for the Set Value function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable to assign the constants to. If this field is left blank the routine will use the variable assigned in the last function definition where the field is defined.
The array index at which the first defined value is to be stored. If both this and the Variable fields are blank then the routine will continue filling the last defined variable beginning at the location at which it left off.
The first constant value to be stored in the variable.
The second constant value to be stored in the variable. Leave the field blank if there is no second value.
The third constant value to be stored in the variable. Leave the field blank if there is no third value.
The fourth constant value to be stored in the variable. Leave the field blank if there is no fourth value.
The Statistics function computes the mean value, variance, standard deviation, average deviation, skewness and kurtosis of a variable. The information is returned in the output variable in the order listed above (array element 0 is the mean, element 1 the variance, etc). Needless to say this is a non-plottable variable.
The various quantities are defined below. In all the algorithms X is the input variable, xᵢ its elements, x̄ its mean, and N the number of elements in X (the number of cells in the data grid). All sums run over i = 0 to N − 1.
| Quantity | Algorithm |
| Mean | x̄ = (1/N) ∑ xᵢ |
| Variance | V = (1/(N−1)) ∑ (xᵢ − x̄)² |
| Standard Deviation | σ = √V |
| Average Deviation | D = (1/N) ∑ |xᵢ − x̄| |
| Skew | S = (1/N) ∑ ((xᵢ − x̄)/σ)³ |
| Kurtosis | K = (1/N) ∑ ((xᵢ − x̄)/σ)⁴ − 3 |
Figures 54 and 55 show an unpopulated and a populated setup menu for the Statistics function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
Input variable to the statistics algorithms. This can be an arbitrary order variable. The statistics computations are performed independently on each component.
The variable holding the results of the statistical computations. This must be of the same order as the input variable. Each component of the variable contains 6 elements. These are, in order beginning at index 0 and ending at index 5, the mean value, the variance, the standard deviation, the average deviation, the skewness and the kurtosis.
Data below this value are excluded in the statistics summations.
Data above this value are excluded in the statistics summations.
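A sketch of the computation (not the framework's code), assuming the conventional normalizations (sample variance by N − 1, the moment sums by N) and applying the low/high limits before the summations:

```python
import math

def statistics(x, low=None, high=None):
    # Drop values outside the [low, high] limits before summing.
    d = [v for v in x
         if (low is None or v >= low) and (high is None or v <= high)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    sigma = math.sqrt(var)
    adev = sum(abs(v - mean) for v in d) / n
    skew = sum(((v - mean) / sigma) ** 3 for v in d) / n
    kurt = sum(((v - mean) / sigma) ** 4 for v in d) / n - 3.0
    return [mean, var, sigma, adev, skew, kurt]
```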
The Unset function frees all memory associated with specified variables. This function is not meant to be used with variables defined through the Variable Definition menu but with variables defined in various function calls, especially temporary variables.
Figures 56 and 57 show an unpopulated and a populated setup menu for the Unset function, respectively.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The variable to be freed.
The Vector function provides operations of the form

C = A op B

where A and B are vector variables defined within the UDFAnalysis framework, op is a vector operation such as the dot or cross product, and C is the resultant scalar or vector.
Figures 58 and 59 show an unpopulated and a populated setup menu for the Vector function.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The primary vector used in the algorithm.
The operation performed by the function. The available operations are described in the table below. It should be understood that in all formula A represents Input A, B represents Input B and C represents the output. If B is not shown in the formula then it is not used in the definition. The Output Type column indicates the order of the returned variable.
| Operation | Description | Output Type |
| ANGLE | C = cos⁻¹(A⋅B / (|A||B|)) (degrees) | Scalar |
| CROSS | C = A × B | Vector |
| DISTANCE | C = |A − B| | Scalar |
| DOT | C = A ⋅ B | Scalar |
| MAGNITUDE | C = |A| | Scalar |
| RECTOSPH | (Ax,Ay,Az) → (Cr,Cϕ,Cθ) | Vector |
| SPHTOREC | (Ar,Aϕ,Aθ) → (Cx,Cy,Cz) | Vector |
| UNIT | C = A / |A| | Vector |
In the RECTOSPH operation the returned azimuth angle runs from −180∘ to 180∘ and the returned polar angle from 0∘ to 180∘. The angles are returned in degrees. The SPHTOREC operation expects the angle inputs to be in degrees and over the same ranges.
The secondary vector set used in the algorithm. It is not needed for all vector operations (see the above table).
The variable holding the results of the vector operations. The output variable is either a scalar or a vector depending on the vector operation.
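The two coordinate conversions can be sketched as follows (illustration only; the polar angle θ is assumed here to be measured from the +Z axis, which the document does not state explicitly):

```python
import math

# RECTOSPH: azimuth phi in (-180, 180], polar theta in [0, 180], degrees.
def rectosph(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    phi = math.degrees(math.atan2(y, x))
    theta = math.degrees(math.acos(z / r))
    return r, phi, theta

# SPHTOREC: the inverse conversion, angles expected in degrees.
def sphtorec(r, phi, theta):
    p, t = math.radians(phi), math.radians(theta)
    return (r * math.sin(t) * math.cos(p),
            r * math.sin(t) * math.sin(p),
            r * math.cos(t))
```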
The Vmap function maps one variable name into another. This is not a renaming of the variable but the creation of an alias. Both the original and new name can be used interchangeably. The main use of the function is in creating sets of linked variables. The function works with arbitrary order variables.
Figures 60 and 61 show an unpopulated and a populated setup menu for the Variable Map function, respectively.
The work area options associated with this menu are described below. The other options have already been described in the introduction to Function Plugins.
The input variable name. This can itself be a linked set of variables.
The output variable name. This is the alias being created for the input variable. It must be of the same order as the input variable.