Co-located interaction in interactive art takes place among two or more co-located audience members and the technical system of an artwork. In this paper, we aim to assess the descriptive and comparative qualities of our previously developed relational model for describing and analysing such forms of interaction. The model focuses on specifying the actions of the interacting elements, such as the audience and art system, and the various forms of communication between them. To assess its significance, we first develop selection criteria and classification dimensions to select eight artworks that are representative of diverse forms of co-located interaction. The relational model is shown to be suitable for describing the selected artworks and comparing their similarities and differences. As an outcome, it reveals different types of relationships between the actions of interacting elements that would otherwise not be highlighted. As such, it provides a context for analysing and discussing strategies for co-located interaction and points to opportunities for research and creation in this field.
Pipeline-parallel training has emerged as a popular method to train large Deep Neural Networks (DNNs), as it allows the use of the combined compute power and memory capacity of multiple Graphics Processing Units (GPUs). However, with the sustained increase in Deep Learning (DL) model sizes, pipeline parallelism provides only a partial solution to the memory bottleneck in large-scale DNN training. Careful partitioning of the DL model over the available GPUs based on memory usage is required to further alleviate the memory bottleneck and train larger DNNs. mCAP is such a memory-oriented partitioning approach for pipeline-parallel systems, but it does not scale to models with many layers or to very large hardware setups, as it requires extensive profiling and fails to efficiently navigate the partitioning space to find the most memory-friendly partitioning. In this work, we propose CAPSlog, a scalable memory-centric partitioning approach that can recommend model partitionings for larger and more heterogeneous DL models and for larger hardware setups than existing approaches. CAPSlog introduces a new profiling method and a new, much more scalable algorithm for recommending memory-efficient partitionings. CAPSlog reduces profiling time by 67% compared to existing approaches, searches the partitioning space for the optimal solution orders of magnitude faster, and can train significantly larger models.
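To make the idea of memory-centric pipeline partitioning concrete, the sketch below splits a model's layers into contiguous pipeline stages so that the maximum per-stage memory footprint is minimized, using a classic linear-partition dynamic program. This is a minimal illustration of the general problem, not the algorithm used by mCAP or CAPSlog: the per-layer memory estimates and the `min_max_memory_partition` helper are hypothetical, and real systems work from profiled activation, weight, and optimizer-state memory rather than a single number per layer.

```python
from typing import List, Tuple

def min_max_memory_partition(mem: List[float], n_stages: int) -> Tuple[float, List[List[int]]]:
    """Split contiguous layers into n_stages pipeline stages, minimizing
    the largest per-stage memory footprint (linear-partition DP).
    Illustrative sketch only; mem[i] is a toy per-layer memory estimate."""
    n = len(mem)
    prefix = [0.0]
    for m in mem:
        prefix.append(prefix[-1] + m)  # prefix sums for O(1) stage-memory queries

    INF = float("inf")
    # dp[k][i]: minimal max-stage memory when the first i layers form k stages
    dp = [[INF] * (n + 1) for _ in range(n_stages + 1)]
    cut = [[0] * (n + 1) for _ in range(n_stages + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_stages + 1):
        for i in range(k, n + 1):
            for j in range(k - 1, i):
                # Stage k holds layers j..i-1; its memory is prefix[i] - prefix[j]
                cand = max(dp[k - 1][j], prefix[i] - prefix[j])
                if cand < dp[k][i]:
                    dp[k][i] = cand
                    cut[k][i] = j
    # Walk the recorded cut points backwards to recover the stage boundaries
    stages, i = [], n
    for k in range(n_stages, 0, -1):
        j = cut[k][i]
        stages.append(list(range(j, i)))
        i = j
    stages.reverse()
    return dp[n_stages][n], stages

# Toy example: per-layer memory estimates (GB) for an 8-layer model on 3 GPUs
peak, stages = min_max_memory_partition([1.2, 0.8, 2.5, 1.1, 0.9, 3.0, 1.4, 0.7], 3)
print(peak, stages)
```

The exhaustive DP above runs in O(n_stages · n²) time, which is exactly the kind of search that stops scaling for models with many layers and large GPU counts; the abstract's claim that CAPSlog navigates this space orders of magnitude faster refers to avoiding such brute-force enumeration.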