This paper proposes a novel framework for the development of interventions in vulnerable populations. The framework combines a complex systems lens with syndemic theory. Whereas funding bodies, research organizations and reporting guidelines tend to encourage intervention research that (i) focuses on singular and predefined health outcomes, (ii) searches for generalizable cause-effect relationships, and (iii) aims to identify universally effective interventions, the paper suggests that a different direction is needed for addressing health inequities: We need to (i) start with exploratory analysis of population-level data, and (ii) invest in contextualized in-depth knowledge of the complex dynamics that produce health inequities in specific populations and settings, while we (iii) work with stakeholders at multiple levels to create change within systems.
The GDPR imposes special requirements on the processing of sensitive data, but it is unclear whether these requirements are sufficient to prevent the risks associated with such processing, because those risks are not clearly defined.
Furthermore, the GDPR’s clauses on the processing of sensitive data, and on profiling based on such data, do not sufficiently account for the fact that individual data subjects are part of complex systems whose emergent properties can reveal sensitive traits from non-sensitive data.
The algorithms used to process big data are largely opaque to both controllers and data subjects: if an algorithm’s output has discriminatory effects that coincide with sensitive traits because the algorithm has inadvertently discerned an emergent property, this may go unnoticed. At present, there are no remedies that can prevent the inference of sensitive traits from non-sensitive data.
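To make the proxy problem concrete, consider a minimal sketch (a toy illustration, not an example from the paper; all names and numbers, such as `postal_area`, are hypothetical): a scoring model is trained only on non-sensitive features, yet its scores end up stratified along a sensitive trait that was never part of its input, because that trait is correlated with the non-sensitive features in the population.

```python
# Toy sketch: a model trained only on "non-sensitive" features can
# still act as a proxy for a sensitive trait when the two are
# correlated in the population.  All names and coefficients here are
# hypothetical; this illustrates the mechanism, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive trait: never shown to the model.
sensitive = rng.integers(0, 2, size=n)

# A "non-sensitive" feature (think: postal area) that happens to be
# correlated with the sensitive trait via residential patterns.
postal_area = 0.8 * sensitive + rng.normal(0.0, 0.5, size=n)
income_score = rng.normal(0.0, 1.0, size=n)
X = np.column_stack([postal_area, income_score])

# Outcome the controller cares about (e.g. default risk), itself
# correlated with the sensitive trait in this synthetic population.
y = (0.6 * sensitive + rng.normal(0.0, 1.0, size=n)) > 0.5

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# The scores differ systematically between the sensitive groups even
# though the trait was never an input: an emergent proxy that neither
# controller nor data subject may notice.
print("mean score, group 0:", scores[sensitive == 0].mean())
print("mean score, group 1:", scores[sensitive == 1].mean())
```

Because the proxy lives in the joint distribution of the features rather than in any single column, simply deleting the sensitive attribute from the dataset does not remove it.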
Managing the risks resulting from processing data that can reveal sensitive traits requires a strategy combining precautionary measures, public discourse, and enforcement until the risks are more completely understood. Insights from complex systems science are likely to be useful in better understanding these risks.
In this thesis, we extend the concept of null models as canonical ensembles of multi-graphs with given constraints and present new metrics able to characterize real-world layered systems based on their correlation patterns. We make extensive use of the maximum-entropy method to derive analytical expressions for the expectation values of several topological quantities; furthermore, we employ the maximum-likelihood method to fit the models to real datasets. One of the main contributions of the present work is providing models and metrics that can be directly applied to real data. We introduce improved measures of overlap between the layers of a multiplex and exploit these quantities to provide a new network reconstruction method applicable to multi-layer graphs. This methodology, applicable to a specific class of multi-layer networks, can be successfully employed to reconstruct the World Trade Multiplex. Furthermore, we illustrate that maximum-entropy models also allow us to find the so-called backbone of a real network, i.e. the information that is irreducible to single-node properties and is therefore peculiar to the network itself. We conclude the thesis by moving our attention to a different dataset, namely the scientific publication system.
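As a hedged illustration of the combined maximum-entropy and maximum-likelihood machinery (reduced here to a single-layer binary undirected graph rather than the multi-graphs treated in the thesis), the sketch below fits the standard configuration model: maximum entropy under degree constraints yields link probabilities p_ij = x_i x_j / (1 + x_i x_j), and maximum likelihood fixes the hidden variables x_i so that each node’s expected degree matches its observed degree. The function name `fit_ubcm` and the fixed-point scheme are illustrative choices, not taken from the thesis.

```python
# Minimal sketch of a maximum-entropy null model fitted by maximum
# likelihood: the binary undirected configuration model, whose only
# constraints are the observed degrees k_i.  Under maximum entropy
# the connection probabilities take the form
#     p_ij = x_i * x_j / (1 + x_i * x_j),
# and maximum likelihood requires <k_i> = k_i for every node.
import numpy as np

def fit_ubcm(k, n_iter=5000, tol=1e-10):
    """Fixed-point iteration solving <k_i> = k_i for the hidden
    variables x_i (one standard way to maximize the likelihood)."""
    k = np.asarray(k, dtype=float)
    x = k / np.sqrt(k.sum())              # common starting guess
    for _ in range(n_iter):
        denom = 1.0 + np.outer(x, x)      # 1 + x_i x_j
        s = x[None, :] / denom            # terms x_j / (1 + x_i x_j)
        np.fill_diagonal(s, 0.0)          # exclude j == i
        x_new = k / s.sum(axis=1)
        if np.max(np.abs(x_new - x)) < tol:
            x = x_new
            break
        x = x_new
    p = np.outer(x, x) / (1.0 + np.outer(x, x))
    np.fill_diagonal(p, 0.0)
    return x, p                           # p[i, j] = link probability

# Tiny check: the expected degrees p.sum(axis=1) should reproduce the
# observed degree sequence (approximately, up to numerical tolerance).
k_obs = np.array([1, 2, 2, 3, 2])
x, p = fit_ubcm(k_obs)
print(np.round(p.sum(axis=1), 3))         # ~ [1. 2. 2. 3. 2.]
```

Broadly speaking, the fitted probabilities then serve as the null-model baseline against which empirical quantities, such as the overlap between layers of a multiplex, can be compared.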