In analogy to mathematical proofs, the goal of a proof system is for a prover to convince a verifier of the correctness of a claim. By contrast, however, probabilistic proofs allow the verifier to make mistakes, i.e., to accept false claims or reject true claims. Further, probabilistic proofs may have multiple rounds of interaction between the prover and the verifier, in which case they are also referred to as interactive proofs. These two relaxations revolutionized the theory of proofs. For instance, by trading absolute certainty for high probability and allowing interaction, it is possible to prove claims without revealing anything beyond their correctness, i.e., in zero-knowledge. Nowadays, zero-knowledge proofs are widely deployed; they are for instance essential in the public-key infrastructures (PKIs) that manage digital identities and secure communication channels on the internet. The theory of Σ-protocols in particular provides a well-understood basis for the modular design of zero-knowledge proof systems in a wide variety of application domains. Recently, however, a new folding mechanism was introduced as a drop-in replacement for Σ-protocols, significantly reducing the communication costs in many practical scenarios. In this dissertation, we show that the folding mechanism can be cast as a significant strengthening, rather than a replacement, of Σ-protocol theory, thereby reconciling it with the established theory. In addition, we close several gaps in the theory of probabilistic proofs that were exposed by the introduction of these efficiency improvements.
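As a concrete illustration of the three-move commit-challenge-response structure of a Σ-protocol, the following is a minimal sketch of Schnorr's identification protocol, not taken from the dissertation; the toy group parameters (p, q, g) and variable names are purely illustrative.

```python
# Minimal sketch of a Sigma-protocol (Schnorr): prover convinces verifier
# that it knows x with h = g^x mod p, in three moves. Toy parameters only.
import random

p, q, g = 23, 11, 2          # g generates a subgroup of prime order q in Z_p^*

x = random.randrange(1, q)   # prover's secret (the "witness")
h = pow(g, x, p)             # public statement: h = g^x mod p

# Move 1 (commit): prover sends a = g^r for a fresh random r.
r = random.randrange(1, q)
a = pow(g, r, p)

# Move 2 (challenge): verifier sends a random challenge c.
c = random.randrange(0, q)

# Move 3 (response): prover sends z = r + c*x mod q.
z = (r + c * x) % q

# Verification: accept iff g^z == a * h^c (mod p).
assert pow(g, z, p) == (a * pow(h, c, p)) % p
print("verifier accepts")
```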
Combinatorial games are games for two competing players, moving in a turn-by-turn fashion, in which there is no chance and no hidden information. Chess, checkers and the simpler tic-tac-toe are well-known examples of this class of games, as is the game of Go. Though these games are by no means simple, there does exist a beautiful mathematical framework for their analysis. Using this theory, it is possible to efficiently determine the outcome of a given position of a game without having to explicitly compute the results for every possible move. Moreover, the theory provides a measure of how profitable a given position is to either player, often denoted by the 'value' of a position. An example application of the theory is research on endgames in Go. However, not all games are combinatorial games. The game of poker, for example, introduces hidden information. In practice, impressive results have been obtained for these non-combinatorial games using artificial intelligence, but theory and understanding are perhaps lacking. In this thesis, the main question we address is whether the existing theory for combinatorial games can be adapted or extended to non-combinatorial ones.
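To illustrate how the theory can determine an outcome without searching every line of play, here is a small sketch (not from the thesis) using Nim, a standard example from combinatorial game theory: Sprague-Grundy theory reads off the outcome from the XOR ("nim-sum") of the pile sizes, and the sketch checks this against an explicit game-tree search.

```python
# Minimal sketch: outcome of a Nim position via exhaustive search vs. nim-sum.
from functools import lru_cache, reduce
from operator import xor

@lru_cache(maxsize=None)
def brute_force_win(piles):
    """Explicit search: does the player to move have a winning strategy?"""
    if all(n == 0 for n in piles):
        return False                      # no moves left: player to move loses
    for i, n in enumerate(piles):
        for take in range(1, n + 1):
            child = list(piles)
            child[i] -= take
            if not brute_force_win(tuple(child)):
                return True
    return False

def nim_sum_win(piles):
    """Closed-form outcome: the first player wins iff the nim-sum is nonzero."""
    return reduce(xor, piles, 0) != 0

position = (3, 5, 7)
assert brute_force_win(position) == nim_sum_win(position)
print("first player wins:", nim_sum_win(position))
```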
Organisms often need to adapt and devise new strategies to survive difficult ecological circumstances. Many mammals, for instance, spend the winter in hibernation to conserve energy and resources for later use. Microbial populations possess similar characteristics, where organisms enter a state of low metabolic activity in response to adverse environmental conditions. In plant populations, the analogous strategy is the suspension of seed germination for an extended period of time. Several studies suggest that this bet-hedging strategy has important evolutionary consequences and plays a crucial role in maintaining genetic diversity in a population. In this thesis, we draw motivation from biological populations featuring this trait and investigate its effect in a probabilistic framework. In particular, we introduce a mathematical notion of dormancy in several well-known stochastic interacting systems and study how it changes the qualitative and quantitative properties of the systems by characterizing their long-run behavior. The construction of our models builds upon a well-known stochastic process in mathematical population genetics called the Moran model. The Moran model describes the genetic evolution of a single, reproductively active, finite population without a seed-bank. We modify the model to include dormancy and extend it to the context of spatially structured populations with varying sizes.
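For readers unfamiliar with the classical (seed-bank-free) Moran model referred to above, the following is a minimal simulation sketch of the neutral two-type version: at each step a uniformly chosen individual reproduces and its offspring replaces a uniformly chosen individual. The function and parameter names are illustrative, not from the thesis.

```python
# Minimal sketch of the neutral two-type Moran model without dormancy.
import random

def moran_fixation(N=100, initial_A=50, seed=0):
    """Run the neutral Moran model until one type fixes; return (#steps, winner)."""
    rng = random.Random(seed)
    k = initial_A                               # current number of type-A individuals
    steps = 0
    while 0 < k < N:
        parent_is_A = rng.random() < k / N      # who reproduces
        victim_is_A = rng.random() < k / N      # who is replaced (uniform over population)
        k += int(parent_is_A and not victim_is_A) - int(victim_is_A and not parent_is_A)
        steps += 1
    return steps, "A" if k == N else "a"

print(moran_fixation())
```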
Explaining treatment response variability between and within patients can support treatment and dosing optimization, and thereby improve the treatment of individual patients. This thesis discusses multiple aspects of treatment response variability and the statistical learning techniques that can be used to explain and/or predict part of that variability. Although the availability of high-throughput measurement technologies has created many new opportunities to develop improved treatment strategies, deriving actionable insights from such data remains a challenge. To this end, longitudinal and high-dimensional data analysis techniques are needed to explore omics data for explaining treatment response and clinical course, and to answer clinical questions using routine healthcare data from hospitals and research institutes.
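As one example of the kind of high-dimensional analysis alluded to (and only as a hedged sketch, not the method used in this thesis), penalized regression is a standard way to relate an omics matrix with far more features than patients to a treatment-response outcome; the data below are synthetic and all names are illustrative.

```python
# Minimal sketch: lasso regression on synthetic high-dimensional omics-style data.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 80, 1000                               # 80 patients, 1000 omics features
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]        # only a few features truly matter
y = X @ beta + rng.standard_normal(n)         # observed treatment response

model = LassoCV(cv=5).fit(X, y)               # cross-validated sparsity penalty
selected = np.flatnonzero(model.coef_)
print("features selected as predictive:", selected[:10])
```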
This dissertation consists of two parts, each of which considers a different research area related to random interval maps. In the first part we are interested in random interval maps that are critically intermittent. In Chapter 2 we consider a large class of such systems and demonstrate the existence of a phase transition, where the absolutely continuous invariant measure changes between finite and infinite. For a closely related class we derive in Chapter 3 statistical properties like decay of correlations and the Central Limit Theorem. In Chapter 4 we investigate whether a similar phase transition persists when the critical behaviour is toned down. Random interval maps can also be used to generate number expansions, which are the main object of study in the second part. In Chapter 5 we generalize Lochs' Theorem, which compares the efficiency of representing real numbers by decimal expansions and by regular continued fraction expansions, to a wide class of pairs of random interval maps that produce number expansions. Closely related to this result, we study in Chapter 6 the efficiency of beta-encoders as a potential source for pseudo-random number generation by comparing the output of a beta-encoder with its corresponding binary expansion.
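To make the quantity behind Lochs' Theorem concrete, the following sketch (illustrative only, not from the dissertation) counts how many regular continued fraction digits of a number are already determined by its first n decimal digits; Lochs' classical result states that for Lebesgue-almost every number this count divided by n tends to 6 ln 2 ln 10 / π² ≈ 0.97.

```python
# Minimal sketch of the quantity in Lochs' Theorem, computed for a sample number.
from fractions import Fraction
import math

def cf_digits(x, count):
    """First `count` regular continued fraction partial quotients of x."""
    digits = []
    for _ in range(count):
        a = math.floor(x)
        digits.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return digits

def determined_cf_digits(decimal_digits, n):
    """CF digits shared by all reals having the given first n decimal digits."""
    lo = Fraction(int(decimal_digits[:n]), 10 ** n)   # e.g. "1415..." -> 0.1415...
    hi = lo + Fraction(1, 10 ** n)
    a, b = cf_digits(lo, 2 * n), cf_digits(hi, 2 * n)
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return k

# Fractional part of pi as a string of decimal digits (illustrative test value).
pi_digits = "1415926535897932384626433832795028841971"
n = 30
print(determined_cf_digits(pi_digits, n) / n)   # ratio roughly of the order 0.97
```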
The main theme of this thesis is the theoretical study of Gaussian processes as a tool in Bayesian nonparametric statistics. We are interested in the frequentist properties of Bayesian nonparametric techniques in an asymptotic regime. We focus specifically on consistency, convergence rates, uncertainty quantification and adaptation. These properties are studied in the context of nonparametric problems, that is to say, models with few modeling constraints. Moreover, the thesis covers the topic of scalability of Bayesian techniques. Bayesian methods are computationally demanding and rapidly become intractable as the number of observations grows. This issue led to the introduction of distributed Bayesian methods in order to decrease the computational complexity of these techniques.
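As a minimal sketch of why exact Gaussian process inference becomes intractable for large numbers of observations, the code below (illustrative only; kernel, length scale and variable names are assumptions, not the thesis's setup) computes the posterior mean of a GP regression model via the usual n-by-n linear solve, which costs on the order of n³ operations and motivates distributed approximations.

```python
# Minimal sketch: Gaussian process regression posterior mean with a
# squared-exponential kernel; the n x n solve is the computational bottleneck.
import numpy as np

def sq_exp_kernel(a, b, length_scale=0.3):
    """Squared-exponential covariance between two vectors of 1-d inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)   # noisy observations

sigma2 = 0.1 ** 2
K = sq_exp_kernel(x, x) + sigma2 * np.eye(n)    # O(n^2) memory, O(n^3) solve
x_test = np.linspace(0, 1, 5)
K_star = sq_exp_kernel(x_test, x)

alpha = np.linalg.solve(K, y)                   # the expensive step
posterior_mean = K_star @ alpha
print(posterior_mean)
```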