Purpose: In epidemiological research, measurements affected by medication, for example blood pressure lowered by antihypertensives, are common. Different ways of handling medication use are required depending on the research question and on whether the affected measurement is the exposure, the outcome, or a confounder. This study aimed to review the handling of medication use in observational research. Methods: PubMed was searched for etiological studies published between 2015 and 2019 in 15 high-ranked journals from cardiology, diabetes, and epidemiology. We selected studies that analyzed blood pressure, glucose, or lipid measurements (whether exposure, outcome, or confounder) by linear or logistic regression. Two reviewers independently recorded how medication use was handled and assessed whether the methods used were in accordance with the research aim. We report the methods used per variable category (exposure, outcome, confounder). Results: A total of 127 articles were included. Most studies did not apply any method to account for medication use (exposure 58%, outcome 53%, confounder 45%). Restriction (exposure 22%, outcome 23%, confounder 10%) and adjustment for medication use with a binary indicator (exposure 18%, outcome 19%, confounder 45%) were also used frequently. No advanced methods were applied. In 60% of the studies, the validity of the methods could not be judged because the research aim was reported ambiguously. Invalid approaches were used in 28% of the studies, mostly when the affected variable was the outcome (36%). Conclusion: Many studies stated the research aim ambiguously and used invalid methods to handle medication use. Researchers should choose a valid methodological approach based on their research question.
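The two approaches the review encounters most often, restriction and adjustment with a binary medication indicator, can be illustrated in a few lines. The sketch below is not taken from any of the reviewed studies; it simulates a hypothetical blood-pressure analysis (the variable names sbp, exposure, age, and on_antihypertensives are invented) and fits both approaches with statsmodels.

```python
# Minimal sketch (not from the paper): two common ways of handling medication
# use when the affected measurement is the outcome of a linear regression.
# All variable names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "exposure": rng.normal(size=n),
    "age": rng.normal(60, 10, size=n),
    "on_antihypertensives": rng.binomial(1, 0.3, size=n),
})
# Simulated systolic blood pressure, lowered by ~10 mmHg under treatment
df["sbp"] = (120 + 5 * df["exposure"] + 0.3 * df["age"]
             - 10 * df["on_antihypertensives"] + rng.normal(0, 8, size=n))

# "Binary indicator" approach: include medication use as a covariate
indicator_fit = smf.ols("sbp ~ exposure + age + on_antihypertensives", data=df).fit()

# "Restriction" approach: analyse untreated individuals only
restricted_fit = smf.ols("sbp ~ exposure + age",
                         data=df[df["on_antihypertensives"] == 0]).fit()

print(indicator_fit.params["exposure"], restricted_fit.params["exposure"])
```

Which of the two, if either, is valid depends on the research question, which is precisely the point the review makes.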
Advances in gene expression measurement in longitudinal studies enable the identification of genes associated with disease severity over time. Problems arise, however, when the technology used to measure gene expression differs between time points: observed differences between the results obtained at different time points may be caused by technical differences. Modeling the two measurements jointly over time can provide insight into the causes of these discrepant results. Our work is motivated by a study of gene expression in blood samples from Huntington disease patients, obtained using two different sequencing technologies. At time point 1, gene expression was measured with DeepSAGE technology, with a subsample also measured using RNA-Seq; at time point 2, all samples were measured using RNA-Seq. Significant associations between disease severity and gene expression measured by DeepSAGE at the first time point could not be replicated with the RNA-Seq data from the second time point. We modeled the relationship between the two sequencing technologies using the data from the overlapping samples, fitting linear mixed models with either the DeepSAGE or the RNA-Seq measurements as the dependent variable and disease severity as the independent variable. In conclusion, (1) for one out of 14 genes, the initial significant result could be replicated with both technologies using data from both time points; and (2) statistical efficiency is lost because of disagreement between the two technologies, measurement error when predicting gene expression, and the need to include additional parameters to account for possible differences.
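As a rough illustration of the modeling strategy described above, the sketch below fits a linear mixed model with gene expression as the dependent variable, disease severity as the independent variable, a random intercept per patient, and a fixed technology term to absorb platform differences. The data and column names (expression, severity, patient, technology) are simulated and hypothetical, not the Huntington disease data.

```python
# Illustrative sketch only: a linear mixed model relating gene expression to
# disease severity across two sequencing platforms. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_patients, n_visits = 40, 2
patient = np.repeat(np.arange(n_patients), n_visits)
severity = rng.normal(size=n_patients * n_visits)
technology = np.tile(["DeepSAGE", "RNA-Seq"], n_patients)   # platform per visit
patient_effect = rng.normal(0, 1, n_patients)[patient]       # random intercepts
expression = 2.0 + 0.5 * severity + patient_effect + rng.normal(0, 0.5, len(severity))

df = pd.DataFrame({"expression": expression, "severity": severity,
                   "patient": patient, "technology": technology})

# Random intercept per patient; the technology term captures platform offsets
model = smf.mixedlm("expression ~ severity + technology", df, groups=df["patient"])
result = model.fit()
print(result.summary())
```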
Nab, L.; Groenwold, R.H.H.; Welsing, P.M.J.; Smeden, M. van 2019
In randomised trials, continuous endpoints are often measured with some degree of error. This study explores the impact of ignoring measurement error and proposes methods to improve statistical inference in its presence. Three main types of measurement error in continuous endpoints are considered: classical, systematic, and differential. For each type, a corrected effect estimator is proposed. The corrected estimators and several methods for confidence interval estimation are tested in a simulation study. These methods combine information about error-prone and error-free measurements of the endpoint in individuals not included in the trial (an external calibration sample). We show that, if measurement error in a continuous endpoint is ignored, the treatment effect estimator is unbiased when the measurement error is classical, although the Type-II error rate is increased at a given sample size. In contrast, the estimator can be substantially biased when the measurement error is systematic or differential. In those cases, bias can largely be prevented, and inferences improved, by using information from an external calibration sample, whose required size increases as the association between the error-prone and error-free endpoint weakens. Measurement error correction using even a small external calibration sample is shown to improve inferences and should be considered in trials with error-prone endpoints. Implementation of the proposed correction methods is accommodated by a new software package for R.
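The following sketch conveys the general idea of such a correction for systematic measurement error, not the paper's estimators verbatim: a naive treatment effect is estimated from the error-prone endpoint in the trial and then rescaled by the calibration slope estimated in an external sample in which both the error-prone and the error-free endpoint are observed. All quantities here are simulated.

```python
# Rough sketch under stated assumptions (not the paper's exact estimator):
# correcting a treatment effect for systematic measurement error in the
# endpoint using an external calibration sample.
import numpy as np

rng = np.random.default_rng(0)

# --- Trial: only the error-prone endpoint y_star is observed ---
n = 400
treat = rng.binomial(1, 0.5, n)
y_true = 10 + 2.0 * treat + rng.normal(0, 2, n)      # true endpoint, effect = 2
y_star = 1.0 + 0.8 * y_true + rng.normal(0, 1, n)    # systematic error model

# Naive effect estimate: difference in means of the error-prone endpoint
naive_effect = y_star[treat == 1].mean() - y_star[treat == 0].mean()

# --- External calibration sample: both measurements observed ---
m = 100
y_cal = 10 + rng.normal(0, 2, m)
y_star_cal = 1.0 + 0.8 * y_cal + rng.normal(0, 1, m)
theta1 = np.polyfit(y_cal, y_star_cal, 1)[0]          # calibration slope

# Corrected estimate: undo the calibration slope
corrected_effect = naive_effect / theta1
print(naive_effect, corrected_effect)
```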
Spoel, E. van der; Choi, J.; Roelfsema, F.; Cessie, S. le; Heemst, D. van; Dekkers, O.M. 2019
Whenever parameter estimates are uncertain or observations are contaminated by measurement error, the Pearson correlation coefficient can severely underestimate the true strength of an association. Various approaches exist for inferring the correlation in the presence of estimation uncertainty and measurement error, but none are routinely applied in psychological research. Here we focus on a Bayesian hierarchical model proposed by Behseta, Berdyyeva, Olson, and Kass (2009) that allows researchers to infer the underlying correlation between error-contaminated observations. We show that this approach may also be applied to obtain the underlying correlation between uncertain parameter estimates, as well as the correlation between uncertain parameter estimates and noisy observations. We illustrate the Bayesian modeling of correlations with two empirical data sets; in each, we first infer the posterior distribution of the underlying correlation and then compute Bayes factors to quantify the evidence that the data provide for the presence of an association.
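The sketch below does not implement the Behseta et al. hierarchical model; it only simulates the attenuation problem the abstract describes and applies the classical disattenuation correction under reliabilities assumed to be known, to show why the observed Pearson correlation underestimates the underlying one.

```python
# Illustration only: attenuation of the Pearson correlation under measurement
# error, with the classical disattenuation correction (known error variance).
import numpy as np

rng = np.random.default_rng(7)
n, true_rho = 200, 0.7

# Correlated latent scores
cov = np.array([[1.0, true_rho], [true_rho, 1.0]])
latent = rng.multivariate_normal([0, 0], cov, size=n)

# Error-contaminated observations
noise_sd = 0.8
observed = latent + rng.normal(0, noise_sd, size=latent.shape)

r_obs = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]
reliability = 1.0 / (1.0 + noise_sd**2)      # var(latent) / var(observed)
r_corrected = r_obs / reliability            # Spearman's disattenuation formula
print(true_rho, r_obs, r_corrected)
```

The Bayesian hierarchical approach discussed in the abstract addresses the same attenuation, but additionally propagates the uncertainty in each observation into the posterior for the underlying correlation rather than relying on a point correction.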