This dissertation focuses on developing new mathematical and statistical methods to properly represent time-varying covariates and model them within the context of time-to-event analysis. This research topic is motivated by specific clinical questions aimed at gaining insights into personalised treatments for cardiological and oncological patients. The main purpose is to enrich the knowledge available for modelling patients' survival with relevant features related to the time-varying processes of interest.

The efforts of this work address the complexities of both (i) developing adequate dynamic characterizations of the processes under study (i.e., the representation problem) and (ii) identifying and quantifying the association between time-varying processes and patient survival (i.e., the time-to-event modelling problem). In both cases, the main challenge is dealing with complex data sources while taking into account the nature of the processes and managing the trade-off between clinical interpretability and mathematical formulation.

By resolving these statistical complexities, this work does not only serve the community of researchers in mathematics and statistics. The novel methodologies developed here may represent a significant step forward in the definition of customized, flexible monitoring tools that support doctors and clinicians in their work.

This doctoral dissertation was part of a cotutelle agreement between the Politecnico di Milano and Leiden University.
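To make the time-to-event modelling problem concrete, a standard reference point (a common baseline in this literature, not necessarily the exact model developed in the dissertation) is the Cox proportional hazards model extended with time-varying covariates, in which the hazard for patient i at time t depends on the current value of the covariate process x_i(t):

\[
  h\bigl(t \mid x_i(t)\bigr) = h_0(t)\,\exp\bigl\{\beta^\top x_i(t)\bigr\},
\]

where h_0(t) is an unspecified baseline hazard and the coefficient vector β quantifies the association between the time-varying process and survival. In these terms, the representation problem amounts to choosing a suitable functional summary x_i(t) of the raw longitudinal measurements, and the modelling problem to estimating and interpreting β.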
Estimates differ, but in general it is assumed that about half of our spoken and written language consists of stock phrases, formulaic sequences, and common, semantically transparent combinations of words, also known as lexical bundles. There is a growing body of experimental work showing that lexical bundles are read, understood, and pronounced faster than their infrequent matched controls, which suggests that these bundles function as units in processing, much like single words.

This thesis focuses on the processing of Dutch lexical bundles and considers them from different angles: How do we read lexical bundles? Are there differences in processing between age groups? How do we process spoken lexical bundles? And how do we produce them? In answering these questions, a wide range of experimental methods is employed, together with both statistical and computational modeling.
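As an illustration of how lexical bundles are commonly operationalized, namely as recurrent n-grams above a frequency threshold, the following minimal Python sketch counts candidate bundles in a token stream. The toy corpus, bundle length, and threshold are hypothetical illustrations, not choices taken from the thesis:

    from collections import Counter

    def candidate_bundles(tokens, n=3, min_count=2):
        """Count n-grams and return those at or above a frequency threshold.

        Lexical bundles are often operationalized as recurrent n-grams;
        the length n and the threshold are analysis choices, not fixed values.
        """
        ngrams = zip(*(tokens[i:] for i in range(n)))
        counts = Counter(ngrams)
        return {ng: c for ng, c in counts.items() if c >= min_count}

    # Toy example (hypothetical mini-corpus, not from the thesis):
    tokens = "ik weet het niet maar ik weet het wel".split()
    print(candidate_bundles(tokens, n=3, min_count=2))
    # {('ik', 'weet', 'het'): 2}

In practice such raw counts would be normalized per million words and filtered by dispersion across texts, but the frequency-based core of the extraction is as above.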
Grouping techniques employ similarities within data to create new entities, which lend themselves to the interpretation process. This article presents three different grouping approaches, each originally developed independently, applied to a common dataset of archaeological finds. The aim is not to search competitively for the one right approach or result, but rather to present the methods as complementary. It is also our intention to stress that a tight connection between theory and statistical modelling is indispensable: the use of a particular methodology must be supported by a suitable theory, and, conversely, a theory without a proper methodological realisation will often have no actual utility. This integration of theory and method is exemplified in three case studies. The first method uses metal objects as cultural indicators. The study area is divided into a set of identical geographical units, each characterized according to the types and proportions of indicators present, and the units are grouped using hierarchical clustering. The second approach treats cultures as standardisations between individuals, using the 'Typenspektrum' as the significant data for identifying different cultures; groups are defined through kernel density estimation and a cluster analysis, followed by internal and external validation techniques. A third method characterizes the funerary ritual and grave-goods, using a similarity algorithm coupled with clustering procedures to compare the graves with one another; the outcome is validated with exploratory methods and compared to patterns from different contexts. The complementarity of the results shows that each approach sheds light on a certain facet of the same whole.
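To give a flavour of the first approach in code, the following Python sketch clusters geographical units by their indicator proportions using SciPy's hierarchical clustering. The data, distance measure, linkage rule, and number of clusters are hypothetical illustrations, not the article's actual analytical choices:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Hypothetical data: each row is a geographical unit, each column the
    # proportion of one metal-object type found in that unit (rows sum to 1).
    rng = np.random.default_rng(0)
    raw = rng.random((12, 5))
    profiles = raw / raw.sum(axis=1, keepdims=True)

    # Ward linkage on Euclidean distances between proportion profiles;
    # both the distance measure and the linkage rule are analysis choices.
    dist = pdist(profiles, metric="euclidean")
    tree = linkage(dist, method="ward")

    # Cut the dendrogram into a chosen number of groups (here 3).
    labels = fcluster(tree, t=3, criterion="maxclust")
    print(labels)

The same skeleton carries over to the other two approaches: only the input representation (kernel density estimates of a 'Typenspektrum', or a grave-by-grave similarity matrix) and the validation steps change.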