Mining Data on Traumatic Brain Injury with Reconstructability Analysis

Martin Zwick
Systems Science Program, Portland State University, Portland, Oregon, USA
[email protected]

Nancy Carney
Dept. of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
[email protected]

Rosemary Nettleton
Dept. of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
[email protected]

Abstract—This paper reports the analysis of data on traumatic brain injury using a probabilistic graphical modeling technique known as reconstructability analysis (RA). The analysis shows the flexibility, power, and comprehensibility of RA modeling, which is well suited for mining biomedical data. One finding of the analysis is that education is a confounding variable for the Digit Symbol Test in discriminating the severity of concussion; another, anomalous, finding is that previous head injury predicts improved performance on the Reaction Time test. This analysis was exploratory, so its findings require follow-on confirmatory tests of their generalizability.

Keywords—machine learning; reconstructability analysis; OCCAM; information theory; data mining; traumatic brain injury; concussion; health care analytics

I. INTRODUCTION

The analysis of health care data is usually done in a confirmatory mode, with analysis typically restricted to hypotheses generated in advance of the study. Secondary analysis can be useful when the clinical population exhibits unexplained variability in outcomes that is not resolved by the primary analysis. Also, the long time and considerable expense needed to complete a study make additional examination of the data desirable. Both of these conditions are highly relevant to traumatic brain injury: TBI is a serious and prevalent clinical condition for which unexplained variation in outcome unfortunately persists despite decades of research; moreover, the volume of existing TBI data provides a unique opportunity for secondary analyses [1-2].

This paper reports the use of an exploratory modeling approach known as reconstructability analysis (RA) applied to the secondary analysis of TBI data. The aim is to discover unexpected relationships in the data and to contribute to ongoing efforts of the Brain Trauma Evidence Based Consortium (BTEC) to develop a dynamic model of brain trauma and a new clinically useful TBI classification system.

RA [3-5] is a probabilistic graphical modeling technique, a fusion of information theory and graph theory. Graphs define the models that are considered, and information measures quantify the models' predictive efficacy. In these graphs, a node is a variable and a link is a relation (an association) between two or more variables. If relations link only two nodes, this is an ordinary graph; if some relations link more than two nodes, it is a hypergraph. One is interested in models that are hypergraphs because one is interested in associations among more than two variables. RA is explicitly designed for exploratory modeling, having the capacity to detect non-linear and complex multivariate interactions that are not hypothesized in advance. RA models are also conceptually transparent: an RA model is simply a conditional probability distribution of a dependent variable (DV), given the composite state of a set of independent variables (IVs).

As a probabilistic graphical modeling method, RA overlaps with log-linear modeling, logistic regression, and Bayesian networks. Where it overlaps with these similar methods, it is equivalent to them, although RA has unique features not present in these other methods, and these other methods have unique features not available in RA. All of these probabilistic graphical modeling methods differ from other machine learning methods, such as support vector machines and neural networks, which are designed for continuous variables. RA is attractive for secondary data analysis because other data analysis methods are often not well designed for exploration, have more limited model types, have difficulty with nominal variables or with stochasticity, or are not conceptually transparent.

Footnote: This material is based in part upon work supported by the U.S. Army Contracting Command, Aberdeen Proving Ground, Natick Contracting Division, through a contract awarded to Stanford University (W911QY-14-C0086), a subcontract awarded to the Brain Trauma Foundation, and a second-tier subcontract awarded to Portland State University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Army Contracting Command, Aberdeen Proving Ground, Natick Contracting Division, Stanford University, or the Brain Trauma Foundation.
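To make this concrete, the following minimal Python sketch (illustrative only; the variables A, B, Z and the records are hypothetical, not the TBI data analyzed below) estimates the conditional distribution of a DV Z given the composite states of two IVs A and B, which is exactly the form of an RA model's prediction:

```python
from collections import Counter

# Hypothetical discrete records: (A, B, Z) observations.
records = [(0, 0, 0), (0, 0, 0), (0, 1, 1), (0, 1, 0),
           (1, 0, 1), (1, 0, 1), (1, 1, 1), (1, 1, 0)]

# Count composite IV states and joint (IV state, DV state) occurrences.
iv_counts = Counter((a, b) for a, b, _ in records)
joint_counts = Counter(((a, b), z) for a, b, z in records)

# p(Z | A, B): the conditional probability distribution that
# constitutes an RA model's prediction of the DV.
for (a, b), n in sorted(iv_counts.items()):
    dist = {z: joint_counts[((a, b), z)] / n for z in (0, 1)}
    print(f"p(Z | A={a}, B={b}) = {dist}")
```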

II. DATA

The data analyzed here, obtained from Megan Preece [6-9], is on patients with traumatic brain injury resulting from automobile accidents. There are 52 variables, divided into five types, labeled as P, Y, G, C, and N variables, where P = patient characteristics (17 variables), Y = symptoms, i.e., subjective reports (25 variables), G = signs, i.e., objective indicators (4 variables), C = cognitive deficits (5 variables), and N = neurologic deficits (1 variable). The sample size is 337, reduced to 175 or fewer when missing data are excluded.

The aim of the study is to predict specific deficit (C or N) variables from P, Y, and G variables and from the other deficit variables. In this paper, we report only the prediction of two C variables: the neuropsychological Digit Symbol Substitution Test (DSST), abbreviated as Cdg (N = 255), and the Spatial Reaction-Time Test (RT) normalized for age and sex, abbreviated as Cnr (N = 210). The DSST is a paper-and-pencil or online task requiring the patient to match symbols with their corresponding digits under timed conditions. It is considered to be sensitive to brain injury and to concussion in particular. The RT test, less complex than the DSST, assesses how quickly the patient responds to visual stimuli. The variables involved in the predictive models discussed in this paper, as IVs or DVs or both, are listed in TABLE I.

The first letters of the variables indicate their variable types. The table lists, after the variable abbreviations, their initial cardinalities; some variables were rebinned to lower cardinalities in the analysis. For some records, values of some variables were missing. Being missing is included as an additional possible state; so, for example, binary variables with some values missing are listed as having cardinality 3.

TABLE I. VARIABLES IN MODELS DISCUSSED IN THIS PAPER (short name, initial cardinality, definition)

Ped  8  highest level of education
Pij  5  injury group (patient or control)
Pph  3  previous head injury
Pri  3  recent illness
Psx  2  sex
Pye  6  years of education
Ggc  4  Glasgow coma scale
Gpt  3  post-traumatic amnesia
Cdg  7  Digit Symbol Substitution neuropsychological test
Csr  6  Spatial Reaction Time test (reaction time to visual stimuli)
Cnr  6  Spatial Reaction Time test normalized for age and sex

III. METHODOLOGY

This section provides a brief summary of the main features of reconstructability analysis. RA calculations in this study were performed using the Occam software package developed at Portland State University (PSU) [10]. This package takes standard text input and provides easily interpretable output. It is web-accessible and can be run either in real time, where it provides html output, or in batch (off-line) mode, in which it emails results to the user as a csv file. This software package runs on PSU servers and is openly available for noncommercial research and educational uses.

Being based in information theory, RA is inherently a nominal-data method, but it can be applied also to continuous variables if their values are discretized (binned). Binning procedures are available in many commercial and public-domain software packages; a utility program is also available at the RA web site [11], which outputs a data file in Occam input format. Occam also allows easy rebinning (aggregating existing bins) in the input file. The RA web site includes an Occam user manual and access to many publications that make use of RA methodology.
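As an illustration of this kind of preprocessing (a minimal sketch, not the actual utility program mentioned above; equal-frequency binning is assumed), the following discretizes a continuous variable into states of roughly equal probability, as was done in rebinning the DVs in this study to two states:

```python
import numpy as np

def quantile_bin(values, n_bins):
    """Discretize a continuous variable into n_bins states of
    roughly equal probability (equal-frequency binning)."""
    values = np.asarray(values, dtype=float)
    # Interior quantile cut points, e.g. the median for n_bins = 2.
    cuts = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    # State 0 = lowest bin, ..., n_bins - 1 = highest bin.
    return np.searchsorted(cuts, values, side="right")

# Example: rebin hypothetical reaction times into two states.
rt = np.array([212, 305, 198, 251, 330, 276, 240, 221])
print(quantile_bin(rt, 2))  # 0 = faster half, 1 = slower half
```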

An RA model is simpler – has fewer degrees of freedom (df) – than the data, but captures much of the information in the data. RA searches for good models are of two types: directed and neutral. Directed searches consider models that predict a dependent variable (DV) from a set of independent variables (IVs); neutral searches consider models that do not make any IV-DV distinction. The searches discussed in this paper are directed.

In directed searches, a candidate model is compared to a reference model, which is either the independence model, for which no IV predicts the DV, or the data, for which all the IVs predict the DV in a single interaction effect. For example, consider three IVs, A, B, and C, and one DV, Z. The independence model, at the bottom of the lattice of structures, is ABC:Z, where the colon means 'and.' This model says that there is a relation between A, B, and C, but no relation between any of these IVs and Z. The data, at the top of the lattice of structures, is ABCZ, in which there is a four-way interaction effect where A, B, and C collectively predict Z. In the present study, the independence model is the chosen reference.

A relation includes all its projections (embedded relations). ABC thus includes AB, AC, and BC, and the univariate margins A, B, and C. The order of the relations in a structure is arbitrary, and the order of the variables in a relation is also arbitrary. For example, Z:BAC is identical to ABC:Z.
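Computationally, relations are margins (projections) of the joint probability table, and the independence model predicts the joint distribution as a product of its component margins. A minimal numpy sketch (the distribution is hypothetical, not the TBI data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint distribution p(A, B, C, Z) over binary variables.
p = rng.random((2, 2, 2, 2))
p /= p.sum()

# Projections (margins) are sums over the excluded variables.
p_abc = p.sum(axis=3)          # relation ABC
p_ab  = p.sum(axis=(2, 3))     # embedded relation AB
p_z   = p.sum(axis=(0, 1, 2))  # univariate margin Z

# Independence model ABC:Z predicts p(a,b,c,z) = p(a,b,c) * p(z).
q_indep = np.einsum('abc,z->abcz', p_abc, p_z)
print(round(q_indep.sum(), 6))  # 1.0: a proper probability distribution
```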

An example of a model intermediate between the independence model and the data is ABC:BZ, which says that there is a relation between A, B, and C, which is non-predictive since it doesn't involve the DV, and there is also a predictive relation between B and Z. The ABC in ABC:BZ is called the 'IV component' since it includes all the IVs, and in Occam output the model is referred to as IV:BZ. In directed-search models, an IV component is always included to allow for relations among the IVs. When a predictive relation – here BZ – is included in a model, this does not mean that the relation is strong; it just means that this relation is being modeled.

Models with one predicting relation, e.g., ABC:BZ, do not have loops, while models with multiple predicting relations, e.g., ABC:AZ:BZ, have loops. (The loop here consists of AZ, ZB, and BA; the last of these is embedded in ABC.) In this latter model, AZ and BZ are separate, but they are not simply additive contributions to the prediction of Z. A conventional three-way interaction effect between A, B, and Z would be represented by an ABZ relation, as in model ABC:ABZ, but the AZ and BZ relations in ABC:AZ:BZ also constitute a (lesser) type of interaction effect [12]. Models without loops are computationally simple, since they can be fit algebraically. Models with loops can present challenging computational space and time demands, since they must be fit iteratively. For many variables, nearly all models have loops. One drawback of Bayesian networks (BN) is that they cannot have loops; RA, by contrast, encompasses such models, though RA in turn doesn't consider all BN models [12].
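Iterative fitting of models with loops is standardly done with iterative proportional fitting (IPF). The sketch below is a minimal, generic IPF loop over a model's relations, for illustration only; it is not Occam's implementation:

```python
import numpy as np

def ipf(p_data, relations, n_iter=500):
    """Iterative proportional fitting: find the maximum-entropy
    distribution whose margins on the given relations match those
    of p_data. A relation is a tuple of axis indices; e.g., for
    axes (A, B, C, Z), model ABC:AZ:BZ is [(0,1,2), (0,3), (1,3)]."""
    q = np.full_like(p_data, 1.0 / p_data.size)  # start uniform
    axes = set(range(p_data.ndim))
    for _ in range(n_iter):
        for rel in relations:
            others = tuple(axes - set(rel))
            target = p_data.sum(axis=others, keepdims=True)  # data margin
            current = q.sum(axis=others, keepdims=True)      # model margin
            ratio = np.divide(target, current,
                              out=np.zeros_like(target), where=current > 0)
            q = q * ratio  # rescale q so its margin matches the data's
    return q

# Example: fit the loopy model ABC:AZ:BZ to a hypothetical table.
rng = np.random.default_rng(1)
p = rng.random((2, 2, 2, 2)); p /= p.sum()
q = ipf(p, [(0, 1, 2), (0, 3), (1, 3)])
print(np.allclose(q.sum(axis=(0, 2)), p.sum(axis=(0, 2))))  # BZ margin matches
```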

Models are specified by sets of relations, each relation (a subset of the variables) indicating a projection of the data that is preserved in the model. The above models are all 'variable-based.' Another type of model includes components that specify particular states of variables. An example is ABC:Z:A1B2Z. The first two components of this model, namely ABC and Z, together define the variable-based independence model. Addition of the A1B2Z component, however, makes this a state-based model. This third component means that the probability that A = 1, B = 2, and any value of Z is either unusually high or unusually low. State-based models pick out informationally salient states. In results reported below, the independence part of the state-based model is often omitted for simplicity.
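The idea of an informationally salient state can be illustrated directly: a state-based component flags cells whose observed probability departs strongly from what the independence margins predict. A minimal sketch (hypothetical numbers; salience is assessed here simply by deviation from independence, a simplification of Occam's statistical machinery):

```python
import numpy as np

# Hypothetical joint distribution p(A, B), 3 states x 3 states.
p = np.array([[0.14, 0.10, 0.10],
              [0.10, 0.02, 0.22],  # for A=1: B=1 unusually low, B=2 high
              [0.10, 0.12, 0.10]])

# Expected cell probabilities under independence: p(a) * p(b).
q = np.outer(p.sum(axis=1), p.sum(axis=0))

# A state-based component flags cells where p deviates most from q.
salience = np.abs(np.log(p / q))
print("most salient state (A, B):",
      np.unravel_index(salience.argmax(), p.shape))  # -> (1, 1)
```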


The predictive success of (equivalently, the information captured in) a model is quantified by %ΔH, the reduction of uncertainty (Shannon entropy) of the DV if one knows the values of the predicting IVs. Like variance, H is a measure of spread, here the spread of a probability distribution, but unlike variance, low values of uncertainty reduction, even as low as 8%, can indicate big effect sizes. Uncertainty reduction is the central information-theoretic measure of predictive efficacy, but since it is useful to compare RA results to other methods that don't generate this measure, Occam also reports the more general accuracy measure of %correct, displayed in Occam as %c, and the related measures of true and false positives and negatives, sensitivity, and specificity.
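In symbols, %ΔH = 100 · (H(DV) − H(DV | IVs)) / H(DV), where H is Shannon entropy. A minimal sketch of this computation, together with %correct, for a hypothetical joint distribution:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability array."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Hypothetical joint distribution p(IV state, DV state): rows are
# composite IV states, columns are DV states.
p = np.array([[0.30, 0.10],
              [0.10, 0.20],
              [0.05, 0.25]])

h_dv = entropy(p.sum(axis=0))            # H(DV)
h_ivs = entropy(p.sum(axis=1))           # H(IVs)
h_dv_given_ivs = entropy(p.flatten()) - h_ivs  # H(DV | IVs) = H(IVs,DV) - H(IVs)

pct_dH = 100 * (h_dv - h_dv_given_ivs) / h_dv
pct_correct = 100 * p.max(axis=1).sum()  # predict the modal DV state per IV state
print(f"%dH = {pct_dH:.1f}, %correct = {pct_correct:.1f}")
```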

Uncertainty reduction roughly tracks with %correct – the more the uncertainty of the DV is reduced, the higher the accuracy of prediction tends to be – but these measures do not track perfectly. Moreover, they track best when the marginal probability distribution of the DV is approximately uniform. For skewed distributions, models can reduce uncertainty but still not improve accuracy. In such cases, the real predictive strength of the model is its uncertainty reduction, not its %correct. Uncertainty reduction, for example, registers the difference, for a binary variable, between predicting a state because it has a probability of .55 or because it has a probability of .95, despite the fact that both probability values give the same prediction and contribution to %correct.

A good model has high uncertainty reduction or %correct; it also has low complexity, defined as degrees of freedom, or low Δdf, the difference between df(model) and df(reference), where the reference here is independence. These two aspects of goodness oppose one another, so a good model is really one that optimally trades off accuracy (uncertainty reduction, information captured) and simplicity. This tradeoff is either explicit, as in the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC), which compute weighted sums of error and complexity (the opposites of accuracy and simplicity), or implicit, as in a Chi-square p-value calculation, also a standard way of selecting a model.
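For log-likelihood based criteria of the standard form (BIC = −2 ln L + df·ln N; AIC = −2 ln L + 2·df), the reference-minus-model differences reported below can be sketched as follows (a generic illustration under these standard definitions, not Occam's exact computation):

```python
import numpy as np

def delta_criteria(p_data, q_model, q_ref, n, delta_df):
    """Generic dBIC and dAIC (reference minus model), computed from
    observed cell probabilities p_data and fitted cell probabilities
    q_model and q_ref; large positive values favor the model."""
    mask = p_data > 0
    ll_model = n * (p_data[mask] * np.log(q_model[mask])).sum()
    ll_ref = n * (p_data[mask] * np.log(q_ref[mask])).sum()
    lr = 2 * (ll_model - ll_ref)  # likelihood-ratio statistic
    return lr - delta_df * np.log(n), lr - delta_df * 2  # dBIC, dAIC

# Example with a hypothetical 2x2 table, n = 100: saturated model
# (q_model = data) vs. the independence reference, delta_df = 1.
p = np.array([[0.30, 0.20], [0.10, 0.40]])
q_ref = np.outer(p.sum(axis=1), p.sum(axis=0))  # independence fit
print(delta_criteria(p, p, q_ref, n=100, delta_df=1))
```

Applied to q_model from an IPF fit and q_ref from the independence model, this yields the kind of ΔBIC values tabulated below in TABLES II and IV.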

BIC penalizes more for complexity than AIC, and is thus more conservative than AIC. A third model selection criterion in Occam is 'incremental p-value' (IncrP), which uses Chi-square p-values to pick models. The IncrP model is the model with the highest uncertainty reduction whose difference from (the bottom reference of) independence is statistically significant, and for which a path exists from independence to the model in which every incremental increase in complexity is statistically significant. BIC and AIC are given in Occam output as differences between these measures for the reference minus their values for the model. Large positive differences indicate good models.

Occam offers three types of searches that differ in refinement and thus predictive power: (1) a coarse search, using variable-based models without loops, which have only one predicting relation, e.g., IV:BZ; (2) a fine search, using variable-based models with loops, which have multiple predicting relations, e.g., IV:AZ:BZ; and (3) an ultra-fine search, which uses state-based models, e.g., IV:Z:A1B2Z. Coarse searches are fast and can handle many variables; fine searches are slow and can handle at most 100s of variables; ultra-fine searches are very slow, and can handle fewer than 10 variables. Differences between these three searches are illustrated in Fig. 1. In this figure, a level in red represents the model selected by the search. Fine searches consider more models, at smaller increments of Δdf, than coarse searches, and ultra-fine searches more models than fine searches. More refined searches are advantageous because they might yield more complex and thus more predictive models that are still statistically justified, or they might yield models that are equally predictive but simpler (smaller Δdf) than those obtained from less refined searches. Fig. 1 illustrates the first of these possible benefits: the fine search selects a more complex, and thus more predictive, model that is not considered by the coarse search, and the ultra-fine search selects a still more complex model that is not considered by the fine search.

Fig. 1. Three types of model searches.

IV. RESULTS

This paper reports the results of coarse, fine, and ultra-fine searches for two DVs: the Digit Symbol Substitution Test (Cdg) and the Normalized Reaction-Time Test (Cnr). For each DV, a final best model was selected from the ultra-fine search, and for this model the conditional probability distribution of the DV, given the predicting IVs, is shown and then also summarized in a decision tree.

A. Predicting performance on the Digit Symbol Substitution test

TABLE II presents the results of coarse, fine, and ultra-fine searches that attempt to predict Cdg after this DV has been rebinned to two states, roughly equal in probability. In listing the models, the table omits the non-predicting IV component. For the coarse search, the six top single predicting IVs are listed with their complexities (Δdf), the p-values that assess the significance of their difference from independence, their %reduction of DV uncertainty (%ΔH), their %correct (%c), and their ΔBIC from independence. The single predictors are ordered by their uncertainty reductions, which is different from the order of their ΔBIC values, since ΔBIC considers not only uncertainty reduction but also complexity.


TABLE II. CDG MODEL SEARCHES

Model                                     Δdf    p     %ΔH    %c    ΔBIC / criterion
REFERENCE (independence): Cdg               0   1.00    0.0   50.9    0.0
COARSE& (single predictors)
  Pij Cdg                                   3   0.00   11.9   68.3   47.6
  Ped Cdg                                   7   0.00   11.7   65.0    5.9
  Ggc Cdg                                   3   0.00    5.6   65.0   18.3
  Cnr Cdg                                   5   0.00    3.5   60.8    6.1
  Pye Cdg                                   1   0.00    3.0   68.3   27.9
  Csr Cdg                                   5   0.00    2.5   63.3    0.4
FINE*
  Pij Cdg : Pye Cdg                         4   0.00   25.5   72.9   BIC
  Pij Cdg : Pye Cdg : Cnr Cdg               9   0.00   32.8   76.7   AIC
  Pij Cdg : Pye Cdg : Cnr Cdg : Psx Cdg    10   0.00   32.9   76.3   IncrP
ULTRA-FINE#
  Pij2 Cnr1 Cdg : Pye0 Cdg                  2   0.00   13.5   68.6   BIC

Pij = patient injury type; Pye = years of education; Ped = education level; Csr = Spatial Reaction Test; Ggc = Glasgow coma scale; Psx = sex; Cnr = Normalized Spatial Reaction Test.
& N = 240.  * N = 240, |Cnr| = 6, including missing.  # N = 275, |Cnr| = 2, no missing.

The table shows that Pij (patient injury type) is the best single predictor in terms both of uncertainty reduction and ΔBIC, but these two measures differ in their ranking of Pye (years of education). Pye is the fifth best predictor in terms of uncertainty reduction, but the second best in ΔBIC, because it adds only 1 degree of freedom to the independence model. In the fine search, BIC picks a model with Pij and Pye as predictors, not surprisingly since these are, by ΔBIC, the first and second best single predicting IVs in the coarse search. The fine search results illustrate the fact that BIC selects simpler models (Δdf = 4) than AIC (Δdf = 9) and IncrP (Δdf = 10). The additional degree of freedom in the IncrP model beyond the AIC model is due to adding Psx (sex) as an additional predictor.

The ultra-fine (state-based) search gives the BIC model IV : Cdg : Pij2 Cnr1 Cdg : Pye0 Cdg. This very simple (Δdf = 2) model includes all three predictors from the more complex (Δdf = 9) AIC fine search model, but it selects only one state of each of these predictors as salient. It also shows Pij and Cnr interacting in their prediction of Cdg, which is not seen in the AIC fine search model. This ultra-fine BIC model is only about half as predictive (%ΔH = 13.5) as the fine BIC model (%ΔH = 25.5), but it is also half as complex (Δdf = 2 as opposed to 4). Using the most conservative criterion to select models, either of these two BIC models could be chosen as the 'best model,' but because the state-based model has an additional predictor (Cnr), and is thus potentially more interesting, it has been selected as the Cdg best model.

TABLE III shows the conditional probability distribution, p(Cdg | Pij Pye Cnr), for the data and for this best model. The DV states, Cdg0 and Cdg1, mean low and high Digit Symbol scores, respectively, so a high probability of Cdg0 indicates a cognitive deficit. Alongside the conditional probability values, the table lists, for each composite IV state, the probability of a high score divided by the probability of a low score, namely Odds = p(Cdg1 | Pij Pye Cnr) / p(Cdg0 | Pij Pye Cnr). High Odds values are good outcomes, low Odds are poor outcomes, while Odds near 1 have IV conditional probabilities that are close to the marginal probabilities for the whole sample. To the right of the Odds column is the p-value that assesses the significance of the difference between conditional and marginal probabilities.

Comparing the 3rd and 4th rows of TABLE III shows that for orthopedic (control) injuries and high education, a difference in performance on the Reaction-Time Test (Cnr) does not predict any difference in the Odds. Comparing the 3rd and 7th rows shows that for high education and fast reaction time, a difference in injury type (Pij), either head injury or merely orthopedic, also does not predict an Odds difference. All three of these rows (IV states) have the same Odds, namely 2.7.

This table can be summarized in the decision tree shown in Fig. 2. The leaves of the tree are the Odds values followed by the p-values. Odds with significant p-values (at or near a 0.05 cutoff level) are shown in larger font. The decision tree can be summarized verbally as follows. For all patients, education predicts performance on the Digit Symbol Substitution Test: more education predicts better performance. Education is thus a confounding variable for the Digit Test in discriminating concussion, and must be controlled for. This is not surprising, given the complexity of the DSST. For orthopedic injury patients, reaction time does not predict digit symbol score. For patients with mild head injury, fast reaction time predicts better digit symbol performance beyond the influence of education.

TABLE III. BEST (BIC) CDG MODEL: CONDITIONAL PROBABILITIES OF THE DV

IV states                         Data            Model
Pij      Pye    Cnr      N    Cdg0   Cdg1     Cdg0   Cdg1    Odds    p
orthop   low    fast    18    0.50   0.50     0.59   0.41    0.7    0.41
orthop   low    slow    22    0.68   0.32     0.59   0.41    0.7    0.36
orthop   high   fast    38    0.21   0.79     0.27   0.73    2.7    0.01
orthop   high   slow    20    0.35   0.65     0.27   0.73    2.7    0.05
head     low    fast    15    0.53   0.47     0.59   0.41    0.7    0.45
head     low    slow    24    0.88   0.13     0.86   0.14    0.2    0.00
head     high   fast    18    0.33   0.67     0.27   0.73    2.7    0.06
head     high   slow    20    0.60   0.40     0.62   0.38    0.6    0.26
(margins)              175    0.49   0.51     0.49   0.51    1.00

IVs: Pij (patient injury type): orthopedic (control) vs head injury; Pye (years of education): low vs high; Cnr (Normalized Reaction-Time Test): fast (normal) vs slow (deficit).
DV: Cdg (Digit Symbol Test): Cdg0 low (deficit) vs Cdg1 high (normal).


Fig. 2. Decision tree for BIC best Cdg model

B. Predicting performance on the Normalized Reaction Test

TABLE IV shows the results of coarse, fine, and ultra-fine searches for the Normalized Reaction-Time Test (Cnr) after this DV has been rebinned to two equally sampled bins.

For the coarse search, the table lists the models selected by the three criteria, rather than tabulating the best single predictors. Three IVs show up in these models: Cdg, performance on the Digit Symbol Substitution Test (since Cnr predicts Cdg, it's not surprising that Cdg also predicts Cnr); Gpt, amnesia; and, for the IncrP model, also Pph, previous head injury. These IVs show up as 3- and 4-way joint interaction effects.

The fine search BIC model, Cdg Cnr : Gpt Cnr, includes Cdg and Gpt as separate rather than joint predictors, but the more aggressive AIC and IncrP criteria highlight a Cdg Gpt Cnr interaction effect, and also add Pph plus two additional IVs not found in the best coarse models: Pri, recent illness, in the AIC model, and Pye, years of education, in the IncrP model.

TABLE IV. CNR MODEL SEARCHES (N = 175)

Model                                    Δdf    p     %ΔH    %c    Criterion
REFERENCE (independence): Cnr              0   1.00    0.0   50.9
COARSE
  Cdg Gpt Cnr                              3   0.00   10.6   64.6   BIC, AIC
  Pph Cdg Gpt Cnr                          7   0.00   13.1   66.9   IncrP
FINE
  Cdg Cnr : Gpt Cnr                        2   0.00    8.8   64.6   BIC
  Pri Cnr : Pph Cnr : Cdg Gpt Cnr          6   0.00   14.7   70.3   AIC
  Pye Cnr : Pph Cnr : Cdg Gpt Cnr          5   0.00   12.9   67.4   IncrP
ULTRA-FINE
  Pph1 Cdg1 Cnr : Cdg0 Gpt1 Cnr            2   0.00   12.4   64.8   BIC

Cdg = Digit Symbol Substitution Test; Gpt = amnesia; Pph = previous head injury; Pri = recent illness; Pye = years of education.

The ultra-fine search retains several of the IVs found in the coarse search, but indicates specific states of these variables: Pph1 is previous head injury, Cdg1 is a high Digit Test score, and Gpt1 is the presence of amnesia. Note that this Δdf = 2 ultra-fine BIC model has a higher uncertainty reduction (%ΔH = 12.4) than the more complex (Δdf = 3) coarse BIC model (%ΔH = 10.6) and the equally complex (Δdf = 2) fine BIC model (%ΔH = 8.8). Adding back IV : Cnr, the independence part of the ultra-fine model, the full state-based model is IV : Cnr : Pph1 Cdg1 Cnr : Cdg0 Gpt1 Cnr. This is selected as the best Cnr model. TABLE V shows the conditional probability distribution for this model. The Odds value is the probability of fast (normal) reaction time divided by the probability of slow reaction time, given a particular IV state, i.e., Odds = p(Cnr0 | Pph Cdg Gpt) / p(Cnr1 | Pph Cdg Gpt). Again, high values of Odds are good, low values point to a deficit, and values near 1 indicate similarity to the marginal probability distribution of the overall sample.

Comparing the 2nd and 4th rows of TABLE V shows that for those patients who score low on the Digit Symbol Substitution Test and have amnesia, the presence or absence of a previous head injury does not matter: both have Odds = 0.2. Comparing the 7th and 8th rows shows that if the patient has had a previous head injury and scores high (normal) on the Digit Symbol Test, the absence or presence of amnesia also does not matter: both have Odds = 2.7.

TABLE V can be summarized in the decision tree shown in Fig. 3, which shows Odds followed by p-values. To summarize this decision tree: for low performance on the Digit Symbol Test, amnesia predicts slow reaction time. For normal performance on the Digit Symbol Test, previous head injury increases the probability of fast (normal) reaction time; this latter result is anomalous.

TABLE V. BEST (BIC) CNR MODEL: CONDITIONAL PROBABILITIES OF THE DV

IV states                    Data            Model
Pph    Cdg    Gpt     N    Cnr0   Cnr1    Cnr0   Cnr1    Odds    p
no     low    no     20    0.40   0.60    0.52   0.48    1.1    0.92
no     low    yes    19    0.16   0.84    0.16   0.84    0.2    0.00
yes    low    no     30    0.57   0.43    0.52   0.48    1.1    0.90
yes    low    yes    18    0.17   0.83    0.16   0.84    0.2    0.00
no     high   no     24    0.50   0.50    0.52   0.48    1.1    0.91
no     high   yes    13    0.61   0.39    0.52   0.48    1.1    0.93
yes    high   no     38    0.76   0.23    0.73   0.27    2.7    0.01
yes    high   yes    14    0.64   0.36    0.73   0.27    2.7    0.09
(margins)           176    0.51   0.49    0.51   0.49    1.0

IVs: Pph (previous head injury): no vs yes; Cdg (Digit Symbol Substitution Test): low (deficit) vs high (normal); Gpt (amnesia): no vs yes.
DV: Cnr (Reaction-Time Test): Cnr0 fast (normal) vs Cnr1 slow (deficit).

Fig. 3. Decision tree for BIC best Cnr model

V. SUMMARY

This analysis of the Preece data is a test bed for future analyses of other TBI data, which hopefully will include other types of IVs, such as imaging, genomic, and proteomic measures. The specific findings reported here are tentative and should be subjected to confirmatory tests with new data. This is particularly true of the anomalous finding in the Cnr model in which previous head injury predicted better reaction-time scores than the absence of previous injury. One possible explanation of this anomaly is that prior exposure to the Reaction Time test introduces a practice effect. But if reaction time is so vulnerable to a practice effect that it no longer discriminates the concussed from the non-concussed, then it is probably not an appropriate measure for this purpose. Another finding of potential interest is the indication by the Cdg model that level of education may be a confounding factor in assessing TBI patients with the Digit Symbol Test.


This illustrates the type of results that can be obtained from exploratory modeling with RA and demonstrates the possibility of using RA to better understand – and potentially to improve – clinical outcomes. Analyses can be done at three different levels of refinement. Models are conditional probability distributions of a DV given the states of IV predictors, distributions that are readily summarized with easily interpretable decision trees. Since RA is conceptually transparent and can handle both nominal and continuous data and both deterministic and stochastic relations, it is well suited for exploratory analyses of biomedical data.

REFERENCES

[1] National Center for Injury Prevention and Control (Division of Unintentional Injury Prevention, Epidemiology, and Rehabilitation), Centers for Disease Control and Prevention, Report to Congress on Traumatic Brain Injury in the United States, Atlanta, 2015.
[2] U. Samadani and S. Daly, "When will a clinical trial for traumatic brain injury succeed?," Neurosurgeon, vol. 5, no. 4, 2016.
[3] G. Klir, The Architecture of Systems Problem Solving. New York: Plenum Press, 1985.
[4] K. Krippendorff, Information Theory: Structural Models for Qualitative Data (Quantitative Applications in the Social Sciences Monograph #62). Beverly Hills: Sage, 1986.
[5] M. Zwick, "An overview of reconstructability analysis," Kybernetes, vol. 33, 2004, pp. 877-905. https://www.pdx.edu/sysc/sites/www.pdx.edu.sysc/files/overview.pdf
[6] M. H. W. Preece, The Effect of Traumatic Brain Injury on Drivers' Hazard Perception, PhD dissertation, University of Queensland, 2012.
[7] M. H. W. Preece, M. S. Horswill, and G. M. Geffen, "Assessment of drivers' ability to anticipate traffic hazards after traumatic brain injury," J. Neurol. Neurosurg. Psychiatry, vol. 82, 2011, pp. 447-451. doi:10.1136/jnnp.2010.215228
[8] M. H. W. Preece, G. M. Geffen, and M. S. Horswill, "Return-to-driving expectations following mild traumatic brain injury," Brain Injury, vol. 27, no. 1, January 2013, pp. 83-91.
[9] M. H. W. Preece, M. S. Horswill, and G. M. Geffen, "Driving after concussion: the acute effect of mild traumatic brain injury on drivers' hazard perception," Neuropsychology, vol. 24, no. 4, 2010, pp. 493-503.
[10] K. Willett and M. Zwick, "A software architecture for reconstructability analysis," Kybernetes, vol. 33, 2004, pp. 997-1008. http://www.sysc.pdx.edu/download/papers/kenpitf.pdf
[11] Discrete Multivariate Modeling (RA) web site: https://www.pdx.edu/sysc/research-discrete-multivariate-modeling
[12] M. Zwick, "Reconstructability analysis of epistasis," Annals of Human Genetics, vol. 75, no. 1, 2011, pp. 157-171. doi:10.1111/j.1469-1809.2010.00628.x. https://www.pdx.edu/sites/www.pdx.edu.sysc/files/AHG_final__unformatted-1.pdf

