This thesis focuses on multimodal understanding and Visual Question Answering (VQA) via deep learning methods. As its technical contributions, the thesis first improves multimodal fusion schemes via multi-stage vision-language interactions. It then addresses the language-bias challenge to build robust VQA models, and extends the study of bias to the more complex audio-visual-textual question answering task. Furthermore, the thesis explores the open-world applicability of VQA algorithms from the perspectives of lifelong learning and federated learning, thereby enabling continuous and distributed training. The efficacy of the proposed methods is verified by extensive experiments. The thesis also gives an overview of challenges, benchmarks, and strategies for robust VQA algorithms.
Dreuning, H.; Bal, H.E.; Nieuwpoort, R.V. van 2023
Deep Learning (DL) model sizes are increasing at a rapid pace, as larger models typically offer better statistical performance. Modern Large Language Models (LLMs) and image processing models contain billions of trainable parameters. Training such massive neural networks incurs significant memory requirements and financial cost. Hybrid-parallel training approaches have emerged that combine pipelining with data and tensor parallelism to facilitate the training of large DL models on distributed hardware setups. However, existing approaches to designing a hybrid-parallel partitioning and parallelization plan for DL models focus on achieving high throughput rather than on minimizing memory usage and financial cost. We introduce CAPTURE, a partitioning and parallelization approach for hybrid parallelism that minimizes peak memory usage. CAPTURE combines a profiling-based approach with statistical modeling to recommend a partitioning and parallelization plan that minimizes the peak memory usage across all the Graphics Processing Units (GPUs) in the hardware setup. Our results show a reduction in memory usage of up to 43.9% compared to partitioners in state-of-the-art hybrid-parallel training systems. The reduced memory footprint enables the training of larger DL models on the same hardware resources and training with larger batch sizes. CAPTURE can also train a given model on a smaller hardware setup than other approaches, reducing the financial cost of training massive DL models.
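The abstract does not detail CAPTURE's algorithm, but the underlying objective of pipeline partitioning for minimal peak memory can be sketched with a generic toy example: given per-layer memory costs, split the layers into contiguous pipeline stages so that the largest per-stage memory sum (a stand-in for peak GPU memory) is as small as possible. The function name, the binary-search formulation, and the memory numbers below are illustrative assumptions, not CAPTURE's actual method, which additionally uses profiling and statistical modeling.

```python
def min_peak_partition(layer_mem, num_stages):
    """Toy pipeline partitioner: split layers into at most `num_stages`
    contiguous stages, minimizing the largest per-stage memory sum.
    (Illustrative only -- not CAPTURE's actual algorithm.)"""
    # The optimal peak lies between the largest single layer and the total.
    lo, hi = max(layer_mem), sum(layer_mem)

    def stages_needed(cap):
        # Greedily pack layers into stages without exceeding `cap`.
        count, cur = 1, 0
        for m in layer_mem:
            if cur + m > cap:
                count += 1
                cur = m
            else:
                cur += m
        return count

    # Binary search for the smallest feasible peak memory.
    while lo < hi:
        mid = (lo + hi) // 2
        if stages_needed(mid) <= num_stages:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Example: six layers (memory in GB) split across three pipeline stages.
print(min_peak_partition([4, 2, 7, 1, 3, 5], 3))  # -> 8
```

Minimizing this peak, rather than balancing compute for throughput, is what allows a larger model or batch size to fit on the same GPUs.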