
Belief construction and sensitivity analysis

We have defined - partly functionally and partly explicitly - the model, the quantities of interest, and the belief specifications needed to carry out the adjustment. In particular, we have arranged matters so that variances and covariances pertaining to the underlying mean components are stored in variance-covariance store 2, and those pertaining to the mean-plus-residual components in store 1. It remains only to define a range of values for the stability parameter $\rho$ (we initially choose three values), to construct the corresponding belief specifications for each value, and then to perform the desired belief adjustments. The fragment of code in Figure 8 contains three parts: calls to the main analysis subroutine with different values of $\rho$, the subroutine itself, and the data. We concentrate on describing the subroutine.

Figure 8: Code fragment for the analysis: calls to the main analysis subroutine, the subroutine itself, and the data.

For each value of $\rho$, the INDEX: and COBUILD: commands are used to construct thirteen $X_i$ and thirteen $Y_i$ quantities. These have expectations, variances, and underlying mean-component variances computed from their definitions: the former functionally, and the latter as linear combinations of the intercept, slope, and error quantities. (Substantial computation is involved at this point, so generating the quantities may take some time, depending on the computer platform.)
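To illustrate the kind of computation involved, the following sketch (in Python, not [B/D]) shows how an expectation and a variance are obtained for a quantity defined as a linear combination of intercept, slope, and error quantities. The model form $Y = a + bx + e$ and all numerical values here are illustrative assumptions, not the specifications of Figure 8.

```python
import numpy as np

# Hypothetical illustration: Y = a + b*x + e is the linear combination
# with coefficient vector c = (1, x, 1) over the quantities (a, b, e).
x = 2.0
c = np.array([1.0, x, 1.0])

# Assumed prior expectations and covariance for (a, b, e);
# the numbers are illustrative only.
mu = np.array([1.4, 0.1, 0.0])          # E[a], E[b], E[e]
Sigma = np.diag([0.25, 0.01, 0.09])     # Var(a), Var(b), Var(e)

E_Y = c @ mu               # expectation follows functionally
Var_Y = c @ Sigma @ c      # variance of the linear combination

print(E_Y, Var_Y)
```

The same calculation, repeated over thirteen such quantities and over both variance stores, accounts for the computational effort noted above.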

Once constructed, the regression parameters $a, b_1, \ldots, b_{13}$ are collected into one named base, and the observables $Y_1, \ldots, Y_{13}$ into another. Observations from three experiments are then attached; these are shown at the foot of the listing.

For the analysis, we exploit the notion of Bayes linear sufficiency via exchangeability: the vector of averages of the observables over the three experiments, $\bar{Y} = (\bar{Y}_1, \ldots, \bar{Y}_{13})$, is Bayes linear sufficient for the regression parameters. To perform the Bayes linear adjustment of the regression parameters by the observables, it is necessary to set various belief source controls, which indicate to [B/D] the belief stores holding the overall and the underlying mean-component variance matrices.

The exchangeable and usedata controls specify that [B/D] should take into account data on the observables, and should use the internal routines which exploit exchangeability.
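The adjustment that these controls set up can be sketched generically. The following Python fragment applies the standard Bayes linear update, $E_D(B) = E(B) + \mathrm{Cov}(B,D)\,\mathrm{Var}(D)^{-1}(d - E(D))$, with $D$ taken as the vector of sample means over $n$ exchangeable experiments, so that $\mathrm{Var}(D)$ combines the mean-component variance with the residual variance divided by $n$. All dimensions and numbers are illustrative, not the paper's specifications.

```python
import numpy as np

# Generic Bayes linear adjustment of a collection B by data D:
#   E_D(B) = E(B) + Cov(B, D) Var(D)^{-1} (d - E(D)).
def bayes_linear_adjust(EB, ED, cov_BD, var_D, d):
    """Adjusted expectation of B given the observed data vector d."""
    return EB + cov_BD @ np.linalg.solve(var_D, d - ED)

# Toy two-dimensional example (hypothetical numbers throughout).
EB = np.array([1.4, 0.1])                  # prior E for (a, b)
ED = np.array([1.6, 1.8])                  # prior E for the observables
cov_BD = np.array([[0.20, 0.05],
                   [0.05, 0.10]])
n = 3                                      # three experiments
var_mean = np.array([[0.30, 0.08],
                     [0.08, 0.25]])        # Var of the mean components
var_resid = np.diag([0.09, 0.09])          # residual variance
var_D = var_mean + var_resid / n           # variance of the sample means

d = np.array([1.7, 1.9])                   # observed sample means
print(bayes_linear_adjust(EB, ED, cov_BD, var_D, d))
```

Because the sample means are Bayes linear sufficient for the parameters, this single update over the means reproduces the adjustment by the full data.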

 

                  Prior          Adjusted expectation
Parameter     expectation   $\rho=\rho_1$  $\rho=\rho_2$  $\rho=1$
$a$               1.4          1.5206        1.5256        1.5096
$b_1$             0.1          0.0996        0.0877        0.0950
$b_2$             0.1          0.0944        0.0858        0.0949
$b_3$             0.1          0.0847        0.0827        0.0947
$b_4$             0.1          0.0856        0.0823        0.0947
$b_5$             0.1          0.0708        0.0767        0.0942
$b_6$             0.1          0.0653        0.0746        0.0941
$b_7$             0.1          0.0770        0.0806        0.0949
$b_8$             0.1          0.0815        0.0846        0.0956
$b_9$             0.1          0.0996        0.0955        0.0971
$b_{10}$          0.1          0.0894        0.0918        0.0971
$b_{11}$          0.1          0.1187        0.1105        0.0999
$b_{12}$          0.1          0.1108        0.1062        0.0995
$b_{13}$          0.1          0.0979        0.0972        0.0980
Table 3:  Estimates of the regression parameters

We next make two kinds of adjustment. Firstly, we adjust the regression parameters by the data (three observations on each of the $Y_i$) and display their adjusted expectations using the SHOW: command. (As the observables are Bayes linear sufficient for the regression parameters, [B/D] automatically obtains the general adjustment via the sample means alone.) The adjusted expectations for the regression parameters are shown in Table 3 for the three values of the stability parameter $\rho$. This output indicates that although the model and observations are consistent with changes in expectation for these parameters (the adjusted expectation for the intercept rises slightly, and so on), changing the stability parameter makes little difference: the model is not particularly sensitive to the choice of $\rho$ for predicting individual $Y$ values.

Our second adjustment assesses variance sensitivities in relation to changes in sample size. It is a property of such second-order exchangeable adjustments that the analysis for a sample of size $n$ can be deduced, with almost no additional computation, from the same analysis performed for a sample of size one. The analysis is particularly simple for adjustments in which the observables are Bayes linear sufficient for the collection to be adjusted, as is the case here. Consequently, we tend to make an initial analysis assuming a sample of size one, from which the analysis for a general sample size $n$ follows easily. We therefore use the usedata and obs controls to indicate that the analysis should assume an initial sample size of one and should ignore the actual observations available, and we then perform the adjustment of the underlying mean component vector $\mathcal{M}(Y)$ by the observables, exploiting Bayes linear sufficiency automatically. [B/D] deduces from the settings of the belief source controls described above that by $Y$ we mean $\mathcal{M}(Y)$.
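A sketch of the size-$n$ deduction: under second-order exchangeability the canonical directions of the adjustment do not change with $n$, and a canonical resolution $\lambda$ obtained for a sample of size one scales to $n\lambda/(1+(n-1)\lambda)$ for a sample of size $n$ (standard Bayes linear exchangeability theory). The resolutions below are illustrative values, not the paper's.

```python
def resolution_at_n(lam, n):
    """Canonical resolution at sample size n, given the canonical
    resolution lam computed for a sample of size one."""
    return n * lam / (1.0 + (n - 1) * lam)

# Illustrative size-one resolutions: a dominant direction and a weak one.
for lam in (0.60, 0.01):
    print([round(resolution_at_n(lam, n), 3) for n in (1, 3, 10, 100)])
```

A dominant direction is resolved almost completely by a handful of observations, while a weak direction still has resolution near one half even at $n = 100$; this is why the size-one analysis suffices for the sensitivity study.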

Much of the output from an adjustment is also available interactively as further input to the program: adjusted expectations, variances, and covariances; the resolution transform and its canonical structure; and so forth. Various other functions are available to exploit exchangeability as appropriate. For the sensitivity study here we output, for each value of $\rho$, the following:

 

the two sample sizes needed to halve uncertainty (over the collection as a whole, and in every direction separately), and the thirteen canonical resolutions.


Table 4:  Measures for assessing sensitivity over the model

For the analysis of sensitivity over the model we examine both the canonical resolutions and some implications of changing the sample size. For the three values of $\rho$, Table 4 shows two sample sizes and the thirteen canonical resolutions. The first sample size is that needed to achieve a 50% reduction in uncertainty over the collection as a whole, as measured by the trace of the resolution transform. The second is the sample size needed to guarantee a variance reduction of at least 50% in every linear combination of the quantities. The canonical resolutions indicate the effective dimension of the model and the speed of variance reduction as the sample size increases.
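Under the same scaling assumption as before (a size-one canonical resolution $\lambda$ becomes $n\lambda/(1+(n-1)\lambda)$ at size $n$), both sample sizes can be computed directly from the size-one canonical resolutions. The resolutions used below are illustrative, not those of Table 4.

```python
import math

def res_n(lam, n):
    """Canonical resolution at sample size n, from the size-one value lam."""
    return n * lam / (1.0 + (n - 1) * lam)

def n_for_overall(lams, target=0.5):
    """Smallest n at which the average resolution (trace of the
    resolution transform over its dimension) reaches `target`."""
    n = 1
    while sum(res_n(lam, n) for lam in lams) < target * len(lams):
        n += 1
    return n

def n_for_guarantee(lams, target=0.5):
    """Smallest n guaranteeing resolution >= target in every direction:
    solve res_n(lam_min, n) >= target for n, which is driven by the
    smallest canonical resolution."""
    lam_min = min(lams)
    return math.ceil(target * (1.0 - lam_min) / ((1.0 - target) * lam_min))

# Illustrative size-one canonical resolutions (not those of Table 4):
lams = [0.60, 0.31, 0.05, 0.015]
print(n_for_overall(lams), n_for_guarantee(lams))
```

Note how the guarantee-based sample size is controlled entirely by the smallest canonical resolution, which is why it can be orders of magnitude larger than the trace-based one.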

We discover that these values are highly sensitive to the choice of the stability parameter $\rho$. For one of the three values the smallest canonical resolution is roughly 0.001, so that for a sample of size $n=1$ we can guarantee a reduction in uncertainty of only 0.1% in every linear combination of the mean components. This guaranteed reduction rises to 50% only if we take a sample of size $n=1001$, whereas $n=79$ achieves the same reduction for another of the values. For uncertainty in the collection overall the picture is similar. Examining the canonical resolutions for $\rho=1$, the model is dominated by two canonical quantities with resolutions of 0.60 and 0.31 respectively. The remaining canonical quantities have small resolutions, so that large sample sizes are needed to reduce their variances appreciably. The learning process for the model with $\rho=1$ is therefore essentially two-dimensional. (This should not surprise us: taking $\rho=1$ forces (4) to become a simple regression model with two parameters, an intercept and a common slope.)

As we reduce the stability parameter, the dynamics of the adjustment change: more canonical quantities show appreciable variance reductions at small sample sizes, and we learn more quickly about the collection overall, so that for the smallest value of $\rho$ we can learn about almost all combinations of the $Y$ values. The model is therefore very sensitive to the choice of $\rho$ for learning about changes in $Y$ over time.



David Wooff
Thu Oct 15 11:27:04 BST 1998