Lung congestion is a risk factor for all-cause and cardiovascular mortality in patients on chronic hemodialysis, and its estimation by ultrasound may be useful to guide ultrafiltration and drug therapy in this population. In an international, multi-center randomized controlled trial (NCT02310061) we investigated whether a lung ultrasound-guided treatment strategy improved a composite end point (all-cause death, non-fatal myocardial infarction, decompensated heart failure) vs usual care in patients receiving chronic hemodialysis with high cardiovascular risk. Patient-reported outcomes (depression and the Short Form 36 Quality of Life Questionnaire, SF-36) were assessed as secondary outcomes. A total of 367 patients were enrolled: 183 in the active arm and 180 in the control arm. In the active arm, the pre-dialysis lung scan was used to titrate ultrafiltration during dialysis and drug treatment. Three hundred and seven patients completed the study: 152 in the active arm and 155 in the control arm. During a mean follow-up of 1.49 years, lung congestion was relieved significantly more frequently in the active arm (78%) than in the control arm (56%), and the intervention was safe. The primary composite end point did not differ significantly between the two study arms (hazard ratio 0.88; 95% confidence interval: 0.63-1.24). The risk of all-cause and cardiovascular hospitalization and the changes in left ventricular mass and function did not differ between the two groups. A post hoc analysis of recurrent episodes of decompensated heart failure (0.37; 0.15-0.93) and cardiovascular events (0.63; 0.41-0.97) showed a risk reduction for these outcomes in the active arm. There were no differences in patient-reported outcomes between groups.
Thus, in patients on chronic hemodialysis with high cardiovascular risk, a treatment strategy guided by lung ultrasound effectively relieved lung congestion but was not more effective than usual care in improving the primary or secondary end points of the trial.
Janse, R.J.; Hoekstra, T.; Jager, K.J.; Zoccali, C.; Tripepi, G.; Dekker, F.W.; Diepen, M. van 2021
The correlation coefficient is a statistical measure often used in studies to show an association between variables or to look at the agreement between two methods. In this paper, we will discuss not only the basics of the correlation coefficient, such as its assumptions and how it is interpreted, but also important limitations when using the correlation coefficient, such as its assumption of a linear association and its sensitivity to the range of observations. We will also discuss why the coefficient is invalid when used to assess agreement of two methods aiming to measure a certain value, and discuss better alternatives, such as the intraclass correlation coefficient and Bland-Altman's limits of agreement. The concepts discussed in this paper are supported with examples from the literature in the field of nephrology.
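The distinction the abstract draws between correlation and agreement can be sketched in a few lines of Python. The creatinine readings below are illustrative values invented for the example, not data from the paper: method B reads systematically higher than method A, so the two methods do not agree, yet the Pearson correlation is close to 1. Bland-Altman's limits of agreement expose the bias that the correlation coefficient hides.

```python
from statistics import mean, stdev

# Hypothetical serum creatinine readings (mg/dL) from two assays;
# method B reads systematically ~0.5 mg/dL higher than method A.
a = [0.8, 1.0, 1.2, 1.5, 2.0, 2.6, 3.1, 4.0]
b = [1.3, 1.5, 1.8, 2.0, 2.5, 3.0, 3.7, 4.5]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = (sum((xi - mx) ** 2 for xi in x)
           * sum((yi - my) ** 2 for yi in y)) ** 0.5
    return num / den

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diffs = [bi - ai for ai, bi in zip(a, b)]
bias = mean(diffs)
loa = (bias - 1.96 * stdev(diffs), bias + 1.96 * stdev(diffs))

# r is near 1 even though B is biased upward by about 0.5 mg/dL
print(f"Pearson r = {pearson_r(a, b):.3f}")
print(f"bias = {bias:.2f} mg/dL, 95% LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```

A high r here only says the two methods vary together linearly; the non-zero bias and the limits of agreement show how far apart individual readings can be, which is the question agreement studies actually ask.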
Epidemiological studies often aim to investigate the causal contribution of a risk factor to a disease or other outcome. In etiological research, one is usually interested in the (biological) mechanism(s) underlying the studied relationship. Inappropriate conduct of an etiological study may have major implications for the correctness of the results and interpretation of the findings. Therefore, in this paper, we aim to describe step by step how etiological research should be carried out, together with its common pitfalls. These steps involve finding and formulating a well-defined etiological research question, choosing an appropriate study design including a suitable comparison group, adequate modelling, and adequate reporting and interpretation of the results.
Roumeliotis, S.; Abd ElHafeez, S.; Jager, K.J.; Dekker, F.W.; Stel, V.S.; Pitino, A.; ... ; Tripepi, G. 2021
Ecological studies are observational studies commonly used in public health research. The main characteristic of this study design is that the statistical analysis is based on pooled (i.e., aggregated) rather than on individual data. Thus, patient-level information such as age, gender, income and disease condition are not considered as individual characteristics but as mean values or frequencies, calculated at country or community level. Ecological studies can be used to compare the aggregated prevalence and incidence data of a given condition across different geographical areas, to assess time-related trends of the frequency of a pre-defined disease/condition, to identify factors explaining changes in health indicators over time in specific populations, to discriminate genetic from environmental causes of geographical variation in disease, or to investigate the relationship between a population-level exposure and a specific disease or condition. The major pitfall in ecological studies is the ecological fallacy, a bias which occurs when conclusions about individuals are erroneously deduced from results about the group to which those individuals belong. In this paper, by using a series of examples, we provide a general explanation of ecological studies and provide some useful elements to recognize or suspect ecological fallacy in this type of study.
Background: Uncontrolled hypertension notwithstanding the use of at least three drugs, or hypertension controlled with at least four drugs, the widely accepted definition of treatment-resistant hypertension (TRH), is considered a common problem in the hemodialysis population. However, to date there is no estimate of the prevalence of this condition in hemodialysis patients. Methods: We estimated the prevalence of TRH by 44-h ambulatory BP monitoring (ABPM) in 506 hemodialysis patients in 10 renal units in Europe included in the registry of the European Renal and Cardiovascular Medicine working group (EURECA-m) of the European Renal Association-European Dialysis and Transplant Association (ERA-EDTA). In a sub-group of 114 patients, we tested the relationship between fluid overload (Body Composition Monitor) and TRH. Results: The prevalence of hypertension by 44-h ABPM criteria was estimated at 85.6% (434 out of 506 patients). Of these, 296 (58%) patients were classified as uncontrolled hypertensive patients by 44-h ABPM criteria (≥130/80 mmHg). Two hundred and thirteen patients had uncontrolled hypertension while on treatment with less than three drugs and 210 patients were normotensive while on drug therapy (n = 138) or off drug treatment (n = 72). The prevalence of TRH was 24% (93 among 386 treated hypertensive patients). The prevalence of predialysis fluid overload was 33% among TRH patients, 34% in uncontrolled hypertensive patients and 26% in normotensive patients. The vast majority (67%) of hemodialysis patients with TRH had no fluid overload. Conclusion: TRH occurs in about one in four treated hypertensive patients on hemodialysis. Fluid overload per se explains TRH only in part, and 67% of these patients show no fluid overload.
In nephrology, a great deal of information is measured repeatedly in patients over time, often alongside data on events of clinical interest. In this introductory article we discuss how these two types of data can be simultaneously analysed using the joint model (JM) framework, illustrated by clinical examples from nephrology. As classical survival analysis and linear mixed models form the two main components of the JM framework, we will also briefly revisit these techniques.
Study quality depends on a number of factors, one of them being internal validity. Such validity can be affected by random and systematic error, the latter also known as bias. Both make it more difficult to assess a correct frequency or the true relationship between exposure and outcome. Whereas random error can be addressed by increasing the sample size, a systematic error in the design, the conduct or the reporting of a study is more problematic. In this article, we will focus on bias, discuss different types of selection bias (sampling bias, confounding by indication, incidence-prevalence bias, attrition bias, collider stratification bias and publication bias) and information bias (recall bias, interviewer bias, observer bias and lead-time bias), indicate the type of studies where they most frequently occur and provide suggestions for their prevention.
Background. Population-specific consensus documents recommend that the diagnosis of hypertension in haemodialysis patients be based on 48-h ambulatory blood pressure (ABP) monitoring. However, until now there has been just one study, in the USA, on the prevalence of hypertension in haemodialysis patients by 44-h recordings. Since there is a knowledge gap on the problem in European countries, we reassessed the problem in the European Cardiovascular and Renal Medicine working group Registry of the European Renal Association-European Dialysis and Transplant Association. Methods. A total of 396 haemodialysis patients underwent 48-h ABP monitoring during a regular haemodialysis session and the subsequent interdialytic interval. Hypertension was defined as (i) pre-haemodialysis blood pressure (BP) ≥140/90 mmHg or use of antihypertensive agents and (ii) ABP ≥130/80 mmHg or use of antihypertensive agents. Results. The prevalence of hypertension by 48-h ABP monitoring was very high (84.3%) and close to that by pre-haemodialysis BP (89.4%), but the agreement between the two techniques was not of the same magnitude (κ statistic = 0.648; P < 0.001). In all, 290 participants were receiving antihypertensive treatment. In all, 9.1% of haemodialysis patients were categorized as normotensive, 12.6% had controlled hypertension confirmed by the two BP techniques, while 46.0% had uncontrolled hypertension with both techniques. The prevalence of white coat hypertension was 18.2% and that of masked hypertension 14.1%. Of note, hypertension was confined to night-time only in 22.2% of patients, while just 1% of patients had only daytime hypertension. Pre-dialysis BP ≥140/90 mmHg had 76% sensitivity and 54% specificity for the diagnosis of BP ≥130/80 mmHg by 48-h ABP monitoring. Conclusions.
The prevalence of hypertension in haemodialysis patients assessed by 48-h ABP monitoring is very high. Pre-haemodialysis BP poorly reflects the 48-h ABP burden. About a third of the haemodialysis population has white coat or masked hypertension. These findings add weight to consensus documents supporting the use of ABP monitoring for proper hypertension diagnosis and treatment in this population.
The internal validity of an epidemiological study can be affected by random error and systematic error. Random error reflects a problem of precision in assessing a given exposure-disease relationship and can be reduced by increasing the sample size. On the other hand, systematic error or bias reflects a problem of validity of the study and arises because of any error resulting from methods used by the investigator when recruiting individuals for the study, from factors affecting the study participation (selection bias) or from systematic distortions when collecting information about exposures and outcomes (information bias). Another important factor which may affect the internal validity of a clinical study is confounding. In this article, we focus on two categories of bias: selection bias and information bias. Confounding will be described in a future article of this series. Copyright (C) 2010 S. Karger AG, Basel
Tripepi, G.; Jager, K.J.; Dekker, F.W.; Zoccali, C. 2010
Stratification allows one to control for confounding by creating two or more categories or subgroups in which the confounding variable either does not vary or does not vary very much. The Mantel-Haenszel formula is applied in cohort and in case-control studies to calculate an overall, unconfounded effect estimate of a given exposure for a specific outcome by combining stratum-specific relative risks (RRs) or odds ratios (ORs). Stratum-specific RRs or ORs are calculated within each stratum of the confounding variable and compared with the corresponding effect estimates in the whole group (that is, with the unstratified RR or OR). The use of the Mantel-Haenszel formula has some limitations: (1) if there is more than a single confounder, the application of this formula is laborious and demands a relatively large sample size, and (2) this method requires continuous confounders to be constrained into a limited number of categories, thus potentially generating residual confounding (a phenomenon particularly relevant when the variable is categorized into few strata). In stratified analysis, residual confounding can be minimized by increasing the number of strata, a possibility strictly dependent on sample size. Copyright (C) 2010 S. Karger AG, Basel
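The pooling step the abstract describes can be sketched in Python. For strata with 2x2 tables (a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls), the Mantel-Haenszel pooled odds ratio is the sum of a*d/n over strata divided by the sum of b*c/n. The counts below are illustrative, chosen so that each stratum has OR = 2.25 while the crude (unstratified) OR is pulled toward 1.5 by confounding:

```python
# Each stratum of the confounder is a 2x2 table:
# (exposed cases a, exposed controls b, unexposed cases c, unexposed controls d)
strata = [
    (20, 80, 10, 90),   # e.g. younger patients (illustrative counts)
    (30, 20, 40, 60),   # e.g. older patients (illustrative counts)
]

def mh_odds_ratio(tables):
    """Mantel-Haenszel pooled odds ratio across strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def crude_odds_ratio(tables):
    """Odds ratio from the collapsed (unstratified) table."""
    a = sum(t[0] for t in tables); b = sum(t[1] for t in tables)
    c = sum(t[2] for t in tables); d = sum(t[3] for t in tables)
    return (a * d) / (b * c)

print(f"crude OR = {crude_odds_ratio(strata):.2f}")           # 1.50
print(f"Mantel-Haenszel OR = {mh_odds_ratio(strata):.2f}")    # 2.25
```

The gap between the crude and the Mantel-Haenszel estimate is exactly the confounding that stratification removes; with many confounders or continuous ones, the limitations listed in the abstract apply.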
Tripepi, G.; Jager, K.J.; Dekker, F.W.; Zoccali, C. 2010
Standardization is a method used to compare observed and expected rates of a given disease/outcome by removing the influence of factors that may confound the comparison. There are two major standardization methods: one is used when the 'standard' is the structure of a population (direct method) and the other when the 'standard' is a set of specific event rates (indirect method). The direct standardization is commonly used for large populations while the indirect one is applied to populations of relatively small dimensions. Copyright (C) 2010 S. Karger AG, Basel
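Both methods reduce to simple weighted sums, which can be sketched in Python. All rates, population sizes and observed counts below are illustrative numbers, not data from the paper: the direct method weights each population's age-specific rates by a common standard age structure, while the indirect method applies standard rates to a small study population to get expected events and a standardized ratio.

```python
# Direct method: age-specific event rates (per 1000 person-years) in two
# populations, weighted by the age structure of a chosen standard population.
age_bands  = ["<45", "45-64", "65+"]
rates_a    = [2.0, 10.0, 40.0]          # population A (illustrative)
rates_b    = [3.0, 12.0, 35.0]          # population B (illustrative)
standard_n = [50_000, 30_000, 20_000]   # standard population per age band

def direct_standardized_rate(rates, standard):
    """Weight each age-specific rate by the standard population's structure."""
    return sum(r * n for r, n in zip(rates, standard)) / sum(standard)

print(f"A: {direct_standardized_rate(rates_a, standard_n):.1f} per 1000 py")
print(f"B: {direct_standardized_rate(rates_b, standard_n):.1f} per 1000 py")

# Indirect method: apply standard rates to the small study population's
# structure to get expected events, then SMR = observed / expected.
standard_rates = [2.5, 11.0, 38.0]       # per 1000 py, reference population
study_n        = [10_000, 8_000, 2_000]  # person-years in the study population
observed       = 180                      # observed events (illustrative)
expected = sum(r * n / 1000 for r, n in zip(standard_rates, study_n))
print(f"expected = {expected:.0f}, SMR = {observed / expected:.2f}")
```

The two standardized rates are now comparable because both use the same age weights; the SMR compares a small population to the standard without needing stable stratum-specific rates of its own, matching the use cases the abstract assigns to each method.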
Tripepi, G.; Jager, K.J.; Dekker, F.W.; Zoccali, C. 2010
The study of the relationship between risk factors and outcomes is important both in etiological and prognostic research. To assess the strength of a given risk factor-outcome relationship we use measures that are calculated in relative and absolute terms. Risk ratio, incidence rate ratio and odds ratio are relative measures of this relationship. Risk difference (or attributable risk) and rate difference (or attributable rate) are absolute measures of the same relationship. Risk difference and rate difference are calculated by subtracting the risk and the incidence rate in unexposed individuals from those in exposed individuals, respectively. The choice of these measures depends on the study aim. Relative measures are commonly used in etiological studies while absolute measures are mainly used in public health research. Copyright (C) 2010 S. Karger AG, Basel
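The relative and absolute measures named above all come from the same 2x2 table, as this short Python sketch shows; the counts are illustrative, not from the paper:

```python
# 2x2 table (illustrative counts): exposure status vs event status
a, b = 30, 70    # exposed: events, non-events
c, d = 10, 90    # unexposed: events, non-events

risk_exposed   = a / (a + b)    # 0.30
risk_unexposed = c / (c + d)    # 0.10

# Relative measures of the exposure-outcome relationship
risk_ratio = risk_exposed / risk_unexposed
odds_ratio = (a / b) / (c / d)

# Absolute measure: risk in unexposed subtracted from risk in exposed
risk_difference = risk_exposed - risk_unexposed   # attributable risk

print(f"RR = {risk_ratio:.2f}, OR = {odds_ratio:.2f}, "
      f"RD = {risk_difference:.2f}")   # RR = 3.00, OR = 3.86, RD = 0.20
```

The RR says exposed individuals have three times the risk (an etiological statement), while the RD says exposure accounts for 20 extra events per 100 exposed individuals (the public-health-relevant quantity); note the OR overstates the RR here because the outcome is not rare.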