Scalability of Learning Tasks on 3D CAE Models Using Point Cloud Autoencoders

Geometric Deep Learning (GDL) methods have recently gained interest as powerful, high-dimensional models for approaching various geometry processing tasks. However, training deep neural network models on geometric input requires considerable computational effort, even more so for typical problem sizes found in application domains such as engineering, where geometric data are often orders of magnitude larger than the inputs currently considered in the GDL literature. Hence, an assessment of the scalability of the training task is necessary, mapping model and data set parameters to the computational demand during training. The present paper therefore studies the effects of data set size and the number of free model parameters on the computational effort of training a Point Cloud Autoencoder (PC-AE). We further review pre-processing techniques for obtaining efficient representations of high-dimensional inputs to the PC-AE and investigate the effects of these techniques on the information abstracted by the trained model. We perform these experiments on synthetic geometric data inspired by engineering applications, using recent graphics processing units (GPUs) with high memory specifications. The present study thus provides a comprehensive evaluation of how to scale geometric deep learning architectures to high-dimensional inputs, allowing for an application of state-of-the-art deep learning methods in real-world tasks.


I. INTRODUCTION
Geometric deep learning (GDL) methods have recently gained interest as powerful, high-dimensional models for application in geometry processing tasks such as segmentation [1], classification [2], object recognition [3], and others [4], [5]. The increasing availability of 3D data and powerful computing hardware drives the adaptation of successful deep learning architectures to the 3D and, in general, non-Euclidean domain [6]. However, training deep neural network models on geometric input requires considerable computational effort, and real-world input data sizes are often prohibitively large. For example, in engineering applications, where 3D data is ubiquitous, typical problem sizes are orders of magnitude larger than the inputs currently considered in the GDL literature. For these domains it is necessary to assess whether deep learning models can be applied in their current form, and where further development is necessary to harness the power of recent architectures for solving real-world tasks.
Central to an efficient processing of 3D data is the chosen data representation. Here, various approaches have been proposed [4], [5]: For example, volumetric approaches represent shapes as occupied voxels in a 3D grid, which, due to its Euclidean structure, allows for an adaptation of deep learning concepts from image processing to 3D input. However, the computational and storage demand of voxel representations is cubic in the input size, which severely limits the maximum achievable resolution [5], [7]. More efficient representations have been proposed in the non-Euclidean domain, such as polygon meshes and point clouds.
Polygon meshes represent 3D shapes as vertices and their Cartesian coordinates, with an embedded description of the connectivity between these vertices. Meshes are popular due to their efficiency in describing surfaces with high resolution; however, translating successful concepts like shift-invariance from the 2D to the non-Euclidean domain is not straightforward [6]. Attempts to adapt these concepts have been made by replacing convolutional filters with local patch operators in the 3D domain [8], [9]. Yet, calculating local operators per node is computationally expensive and becomes infeasible for large mesh sizes. On the other hand, mesh-based approaches that do not define local operators on the shape, such as spectral approaches, are currently limited to topologically similar shapes, due to requirements like mesh isomorphism or close correspondence between vertices (e.g., [10]).
Point clouds have recently been introduced as powerful and efficient representations of shapes for geometric deep learning [11], [12]. In comparison to voxel or mesh representations, they are memory efficient while preserving a high amount of geometric detail. Architectures proposed for the processing of point clouds do not require topological similarity in the input, as mesh-based models do. Point clouds have further been popularized through advances in data acquisition and 3D scanning technologies, especially in the fields of computer vision, robotics, and autonomous driving, which use point clouds for the representation of objects and scenes [13]–[15]. Generating point clouds is computationally cheap, especially compared to other data generation algorithms such as meshing: they may either be acquired directly from physical objects via various data acquisition techniques that typically require only minor post-processing for denoising and interpolation of occluded regions [16]–[18]; alternatively, point clouds may be acquired from other 3D representations through virtual sampling of surfaces or by re-using mesh vertices, where the latter approach allows preserving valuable information like surface normals.
The discussed characteristics make point clouds a promising representation, e.g., for 3D data from computer aided engineering (CAE), where shapes are typically represented as surface meshes. For CAE data, point clouds can be sampled virtually from an initial representation based on meshes or on parametric surface descriptions (e.g., NURBS) [19]. The quality of the obtained point cloud, and how well it can be used in machine learning tasks, then depends on the characteristics of the sampling algorithm, in particular the density and regularity of the sampling, which determine how and which geometric information is preserved. Therefore, the choice of sampling algorithm is essential for the performance of the model subsequently trained on the data.
Yet, point clouds also pose challenges for GDL. In particular, point clouds are unordered sets, which requires operations on point clouds to be invariant under permutations of the ordering of input points. Several architectures addressing this problem using permutation-invariant operations within the network have been proposed [11], [20], [21]. A second challenge is the dimensionality of the input, especially if a high resolution of the input representations is required. In CAE applications this may generally be the case, even more so if the machine learning task to be solved is to learn a predictive or regression model from data [22].
Currently, an assessment of the scalability of point cloud GDL architectures to input sizes that allow for an application in typical data analysis tasks from various application domains is lacking. This paper therefore investigates how point cloud models scale with increasing input size, where we extend an existing point cloud autoencoder (PC-AE) architecture and loss function and investigate a (re-)sampling scheme as a pre-processing technique to encode relevant geometric information more efficiently. Autoencoders learn low-dimensional, latent representations of complex input data, either to identify latent variables underlying the data generation process or to use the low-dimensional representations as input to further machine learning tasks [23]. We perform experiments on synthetic geometric data inspired by engineering applications, where we have full control over the data generation process. We evaluate whether the PC-AE is able to recover the latent features underlying the data generation process as a function of the sampling scheme used. In particular, by focusing on the learning of continuously varying features within an object class, we evaluate to what extent the learned variables are able to represent such finer geometric features, which may be relevant in tasks such as predictive modelling. We complement our experiments with an evaluation of practical running times on recent graphics processing units (GPUs) with high memory specifications. The present study thus provides a first evaluation of how targeted sampling of input data can be used to scale geometric deep learning architectures to high-dimensional inputs, allowing for an application of state-of-the-art deep learning methods in real-world tasks.

II. LITERATURE REVIEW
In order to address the scalability of a PC-AE to high-dimensional point clouds, it is important to first identify the constraints for training such models. Mandikal and Babu explore in [24] a deep learning architecture for the reconstruction of dense 3D point clouds from RGB images. The authors point out that scaling the size of the point clouds increases the number of free parameters and, therefore, also the difficulty of abstracting the dataset. Furthermore, permutation-invariant loss functions such as the Chamfer Distance (CD) [11], [21], [25] are computed point-wise and thus might lead to prohibitive computational costs when applied to large point clouds. Starting with the first constraint, in the following we review existing GDL architectures for processing point clouds, with a focus on the number of model parameters and how this number scales as a function of input size.
Qi et al. [11] were the first to propose an architecture taking point clouds as inputs, called PointNet, which consists of multiple fully connected layers with a pre-processing step to make the input representation order-invariant. The work was later extended in [20] to PointNet++, which consists of stacked PointNet architectures and a clustering operation to support the identification of local features.
In order to address unsupervised learning tasks while significantly reducing network size, Yang et al. proposed an autoencoder architecture named FoldingNet [26]. On the encoder side, FoldingNet uses the PointNet architecture, while the decoder architecture is fundamentally different from previous approaches: it uses two 3-layer perceptrons to perform an operation analogous to folding a 2D grid into the format of the output shape in order to generate a point cloud from the latent representation. The authors claim that this approach reduces the number of parameters in the decoder to 7 % of the number required with conventional fully connected layers.
The PC-AE proposed by Achlioptas et al. in [21] is an example of the most recent architectures, where convolutions substitute fully connected layers and the input data remains in point cloud format. The encoder part of the network consists of five layers with 1-D convolutions followed by rectified linear unit (ReLU) activations, which are in turn followed by a maximum pooling operation over the calculated features (Fig. 1). The unidimensional convolutions allow the network to address each point individually, which, together with the maximum pooling operation, makes the network order-invariant, overcoming one of the major difficulties in data processing with point clouds.
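To make this design concrete, the following is a minimal sketch of such an encoder written with the tf.keras API (our experiments used the older TensorFlow 1.x stack); the layer widths are illustrative assumptions, not the exact configuration of [21]:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder(num_points, latent_dim):
    """Sketch of a PC-AE encoder in the style of [21]: per-point 1-D
    convolutions followed by a symmetric (max) pooling operation."""
    inputs = layers.Input(shape=(num_points, 3))
    x = inputs
    # kernel_size=1 processes each point independently with weights
    # shared across all points, keeping the parameter count constant
    # with respect to the number of input points.
    for width in (64, 128, 128, 256):  # illustrative layer widths
        x = layers.Conv1D(width, kernel_size=1, activation="relu")(x)
    x = layers.Conv1D(latent_dim, kernel_size=1)(x)
    # Max pooling over the point dimension discards the ordering,
    # making the latent code permutation-invariant.
    latent = layers.GlobalMaxPooling1D()(x)
    return Model(inputs, latent)

encoder = build_encoder(num_points=2048, latent_dim=32)
```

Note that only the decoder size depends on the number of output points; the convolutional encoder parameters are independent of the input size, which is the property underlying the scalability argument below.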
It is expected that the PC-AE approach in [21] scales better to larger input sizes than the previous approaches in [11], [20]. In the PC-AE, the convolution operation shares the weights between the input points, instead of assigning a parameter to each input node as in fully connected layers. This reduces the number of free parameters of the model and, therefore, the architectures in [11], [20] should scale less favorably to larger input sizes.
However, when increasing the size of input point clouds, oversampling of the shapes becomes an issue. In fact, Tenbrinck et al. report in [27] that dense point clouds are prone to contain redundant data. Hence, when increasing the sampling density, one should exclude irrelevant or redundant geometric information, which unnecessarily increases the computational effort and might mislead the optimizer during training.
Along the lines of avoiding an oversampling of the shape, Gadelha et al. propose a multi-resolution PC-AE [28], where a large base point cloud is downsampled to two lower-dimensional ones and provided as parallel input to the autoencoder. Throughout the architecture, the convolutions applied to each resolution are dependent on each other; therefore, the features related to fine details contained in the highest-resolution point cloud can be abstracted, while the characteristics related to the positioning and distribution of the geometry over space are enforced by the lower-dimensional representations. However, the results achieved for classification and shape reconstruction were only comparable to the state of the art [11], [20], while the complexity of the network increased.
An alternative approach to modifying the architecture to handle large inputs is to change how the point cloud is sampled, either from physical objects or from other 3D representations. In particular, a sampling scheme targeted at certain features can be used to adapt the dimensionality of input shapes to the PC-AEs available in the literature. In [27], the authors propose a weighted graph-based sparsification method for point clouds, inspired by the Cut Pursuit algorithm [29] and motivated by the sensitivity of current methods, such as random sampling and tree-based selection, to noise, as well as by their lack of adaptability. The authors claim that the method can be extended to higher-dimensional data structures and to higher computational efficiency. Interestingly, it is common to represent physical domains in CAE applications as undirected graphs (meshes); therefore, graph-based sampling has great potential to reduce the dimensionality of such representations and increase their efficiency in machine learning tasks.

Fig. 2. From left to right: top view of a point cloud, followed by downsampled representations obtained with random uniform selection, low- and high-pass filters, according to the proposal in [30].
Chen et al. discuss in [30] fast resampling methods for point clouds based on graph operators, where the problem is approached from a theoretical signal processing perspective. Their approach aims at achieving an optimal sampling distribution while maintaining certain features contained in the shape. To this end, the authors define filters using an approximation of the adjacency matrix, from which a metric is calculated that indicates the probability of each point being kept in the representation. Fig. 2 shows examples of resampled point clouds using three sampling schemes: random uniform sampling, and low- and high-pass filtering as proposed in the paper. The low-pass approach leads to a higher number of samples in smooth areas, such as the flat faces of the plate, while the high-pass filter increases the probability of points on edges and at abrupt changes in the mesh being selected.
In contrast to graph-based sampling approaches, Öztireli et al. point out in [31] that manifold sampling is essentially a hard problem, since standard signal processing methods are inapplicable and there is no single parameterization for the complete domain. In order to overcome these difficulties and avoid the use of piece- or patch-wise methods, the authors propose sampling techniques based on spectral properties of manifolds: instead of operating on graphs, the method assumes that a set of points with corresponding normal vectors and a defined kernel function are provided, and the essence of the algorithm relies on measuring the relevance of a point to the manifold using the Laplace-Beltrami spectrum and matrix perturbation theory. According to their research, the spectral characteristics of a manifold are nearly unique, i.e., different manifolds rarely share the same spectrum, enabling the use of those properties as metrics for comparing manifolds. The quality of the surface reconstruction achieved in the experiments was comparable to or better than that of the methods available in the literature, within reasonable computational effort. However, there is no theoretical guarantee of optimal sampling, and all experiments were performed using nearly uniformly sampled geometries. Hence, the reported performance might decrease in the general case.

III. EXPERIMENTAL SET-UP FOR EVALUATING PC-AE TRAINING EFFICIENCY AND SAMPLING STRATEGIES
We investigated the scalability of PC-AEs using a two-step approach: First, we investigated the computational demand of the training for increasing point cloud sizes in terms of running time and GPU memory usage. In particular, we determined limits in computing capabilities with respect to the maximum point cloud size that could be processed on current GPU hardware. All tests were performed using a PC-AE adapted from [21] in order to reduce the computational demand during training and to improve the ability to learn latent representations for use in further machine learning applications. Second, we investigated whether targeted sampling of shapes can achieve a better trade-off between point cloud size and the encoding of relevant geometric information. To this end, we applied a high-pass filtering sampling approach that specifically encodes high-frequency features of the shape. We compared the high-pass filtered sampling to random uniform sampling in terms of reconstruction loss on the training and test sets. The control over the data generation process also allowed us to evaluate the learned latent features by correlating them to the known true parameters underlying the generation of the input data set.
All experiments were performed on a machine with two Intel Xeon CPUs clocked at 2.10 GHz (16 cores, 2-way hyperthreaded) and two Nvidia Quadro RTX 8000 GPUs (48 GB). Each model was trained using a single GPU, such that two models could be trained simultaneously per experiment. The methods were implemented in Python 2.7, with TensorFlow 1.3.0, CUDA 8.0, and TFLearn 0.3.2. The point clouds used in the experiments were stored as *.xyz files and read as data frames using functions from the Pandas library.

A. Dataset Generation
To be able to analyze learned latent features, we generated a synthetic data set from a controlled number of parameters, such that the behaviour of the learned latent variables could be related to the true parameters underlying the data generation process.In particular, we investigated the performance of the PC-AE on finer, i.e., high-frequency, features that may be of particular interest in application domains such as CAE.
The proposed synthetic dataset was generated from a parameterized 3D shape, based on the model used in [32]: a thin plate as base shape, to which an elliptic orifice was added (Fig. 3). The orifice was parameterized by the coordinates $(x_1, x_2)$ of the ellipse's center, the orientation $x_3$ of the principal axis, and the aspect ratio $x_4$ between the lengths of the principal and secondary axes. The range of parameter values was constrained such that the external borders of the plate were preserved and no two designs were symmetric to each other.
For the study, 1000 geometries with parameter values drawn randomly from a uniform distribution were generated using a pipeline of custom Python code, FreeCAD, and Meshlab. Data were generated in FreeCAD as solid and base STL meshes. The latter were then refined in Meshlab to a size of approximately 25,000 nodes, which was considered comparable to resolutions found in CAE applications.
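For illustration, a simplified NumPy stand-in for this pipeline is sketched below: it samples points uniformly on the top face of the plate and rejects those inside the rotated elliptic orifice. The actual dataset was meshed with FreeCAD and Meshlab; the principal-axis length `a` and the parameter ranges used here are assumptions for the sketch.

```python
import numpy as np

def sample_plate_with_orifice(x1, x2, x3, x4, n_points=2048,
                              width=1.0, height=1.0, a=0.15):
    """Uniformly sample the plate surface, rejecting points inside
    the elliptic orifice centered at (x1, x2), rotated by x3, with
    principal-axis length a and aspect ratio x4 (assumed values)."""
    pts = []
    c, s = np.cos(x3), np.sin(x3)
    while len(pts) < n_points:
        p = np.random.uniform([0.0, 0.0], [width, height])
        # Transform into the ellipse frame and evaluate its implicit equation.
        u = c * (p[0] - x1) + s * (p[1] - x2)
        v = -s * (p[0] - x1) + c * (p[1] - x2)
        if (u / a) ** 2 + (v / (a * x4)) ** 2 > 1.0:  # outside the orifice
            pts.append([p[0], p[1], 0.0])  # z = 0: plate mid-surface
    return np.asarray(pts)

# 1000 designs with uniformly drawn parameters (ranges are assumptions).
params = np.random.uniform([0.3, 0.3, 0.0, 0.3],
                           [0.7, 0.7, np.pi, 1.0], size=(1000, 4))
clouds = [sample_plate_with_orifice(*p) for p in params]
```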

B. Point Cloud Sampling
As a baseline for all experiments, we used random-uniform sampling (RUS), which does not rely on domain-specific assumptions and can be easily implemented in several applications. We compared RUS to sampling using the graph-based high-pass filter (HPF) proposed in [30]. The filter was applied to the vertices of the refined mesh, where the filter response of the $i$-th graph vertex is obtained as

$$f(\mathbf{x}_i) = \Big\lVert \mathbf{x}_i - \sum_{j \in \mathcal{N}} A_{ij}\,\mathbf{x}_j \Big\rVert_2, \qquad A = D^{-1} W,$$

where $\mathcal{N}$ represents the point cloud domain, $\mathbf{x}_i$ is the coordinate vector of the $i$-th point, and $A$ is the transition matrix, with $W$ the symmetric adjacency matrix representing the connectivity between nodes and $D$ the diagonal degree matrix obtained from the sums of the columns of $W$. When using the transition matrix, the filter reflects how much information about a point is known from its neighborhood, and it is furthermore shift- and rotation-invariant. The optimal resampling distribution $\pi^*$ is proportional to the filter response of the graph, i.e., the vector of responses normalized by its $\ell_1$ norm, such that the components sum to 1. Components of higher probability are associated with points close to abrupt changes in the geometries, e.g., edges, while components of lower probability are associated with smoother surfaces. In [30], the authors assume that typically only the point cloud is known, but not the adjacency matrix $W$, which therefore has to be estimated. This step is omitted in the present work, since the point clouds are extracted from polygonal meshes with known connectivity. The proposed modification was verified through visual inspection of re-sampled point clouds from geometries randomly selected from the generated dataset (Fig. 4).
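A minimal NumPy sketch of this sampling scheme, under our assumption that the connectivity is known from the mesh (given here as an edge list), is shown below; dense matrices are used for clarity only and would be sparse in practice:

```python
import numpy as np

def hpf_sampling_distribution(points, edges):
    """High-pass graph-filter response per point, following [30]:
    f_i = || x_i - sum_j A_ij x_j ||_2 with A = D^{-1} W.
    points: (n, 3) vertex coordinates; edges: (m, 2) index pairs from
    the mesh connectivity. Assumes no isolated vertices."""
    n = len(points)
    W = np.zeros((n, n))
    W[edges[:, 0], edges[:, 1]] = 1.0
    W = np.maximum(W, W.T)                    # symmetric adjacency matrix
    d_inv = 1.0 / W.sum(axis=1)               # inverse vertex degrees
    A = d_inv[:, None] * W                    # transition matrix D^{-1} W
    f = np.linalg.norm(points - A @ points, axis=1)
    return f / f.sum()                        # pi*: normalized by the l1 norm

def hpf_resample(points, edges, n_samples, rng=np.random.default_rng()):
    """Draw a subset of points with probability proportional to the
    high-pass filter response (favoring edges and abrupt changes)."""
    pi = hpf_sampling_distribution(points, edges)
    idx = rng.choice(len(points), size=n_samples, replace=False, p=pi)
    return points[idx]
```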

C. Point Cloud Autoencoder Architecture and Training
The PC-AE implemented for the experiments is based on the architecture presented in [21]. The architecture was modified by replacing the ReLU activation function in the last convolutional layer of the encoder with a hyperbolic tangent function, and by adding a sigmoid activation after the last decoder layer. We chose the sigmoid function such that the range of the activation function matches the normalized coordinate values of the shapes in the dataset. In the latent layer, the hyperbolic tangent yields value-bounded latent variables, potentially increasing the interpretability of the latent variables.
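In terms of the encoder sketch given earlier, this amounts to setting `activation="tanh"` on the final convolution; the corresponding decoder could look as follows (a sketch with assumed layer widths, not the exact configuration):

```python
from tensorflow.keras import layers, Model

def build_decoder(latent_dim, num_points):
    """Sketch of the modified decoder: fully connected layers as in
    [21], with a final sigmoid so the output range matches the
    normalized coordinates (layer widths are assumptions)."""
    z = layers.Input(shape=(latent_dim,))
    x = layers.Dense(256, activation="relu")(z)
    x = layers.Dense(512, activation="relu")(x)
    # Sigmoid bounds each predicted coordinate to (0, 1), matching
    # the normalization of the dataset.
    x = layers.Dense(num_points * 3, activation="sigmoid")(x)
    return Model(z, layers.Reshape((num_points, 3))(x))
```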
Further modifications were made to the training procedure. Since the loss function is one of the potential bottlenecks for training large network architectures, the training was divided into two parts: In the first, a computationally cheaper coarse training was performed, using the mean square distance between points as loss function, thus imposing the same order of points for input and output; hence, the costly search part of the Chamfer Distance (CD) was avoided [25]. In the second part, triggered by a stagnation of the loss function, the conventional CD was used. The CD algorithm used in the experiments was implemented according to [21], and the epoch at which the loss function was switched was defined based on observations made during tests with the dataset, but kept the same for all experiments.
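The following NumPy sketch illustrates the two loss stages (the actual training used a TensorFlow implementation; the switch epoch shown is the value from the ShapeNet verification below and is otherwise illustrative):

```python
import numpy as np

def mse_loss(pred, target):
    """Stage 1: mean square distance between correspondingly ordered
    points -- no nearest-neighbor search required."""
    return np.mean(np.sum((pred - target) ** 2, axis=-1))

def chamfer_distance(pred, target):
    """Stage 2: symmetric Chamfer Distance. The full pairwise distance
    matrix makes the point-wise search cost (O(n*m)) explicit."""
    d2 = np.sum((pred[:, None, :] - target[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def two_stage_loss(pred, target, epoch, switch_epoch=200):
    # Cheap ordered loss first; switch to the CD once it stagnates.
    if epoch < switch_epoch:
        return mse_loss(pred, target)
    return chamfer_distance(pred, target)
```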
In order to impose an ordering on the point clouds, a data partitioning tree algorithm was adopted as a pre-processing step [28]: the point cloud is recursively divided into two parts according to the mean value of the points' coordinates, alternating the axis at each partitioning step. Hence, the points are organized as a list of patches that is consistent across all geometries in the dataset. A disadvantage of the method is that the size of the point clouds is constrained to powers of the branching factor, which in this case was set to two.
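A minimal sketch of this ordering step is given below; for a balanced, always-terminating illustration it splits the sorted halves (median split), whereas the description above splits at the mean coordinate:

```python
import numpy as np

def partition_order(points, axis=0):
    """Recursive binary space partitioning to impose a consistent
    ordering (cf. [28]). The point count must be a power of two."""
    n = len(points)
    if n == 1:
        return points
    order = np.argsort(points[:, axis])   # sort along the current axis
    half = n // 2
    nxt = (axis + 1) % points.shape[1]    # alternate the splitting axis
    return np.vstack([partition_order(points[order[:half]], nxt),
                      partition_order(points[order[half:]], nxt)])

# Example: impose the ordering before feeding a cloud to the PC-AE.
# cloud = partition_order(cloud)  # cloud: (2048, 3)
```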
In order to verify the effects of the modifications to the architecture, a model was trained on the car class from ShapeNet Core [33] and compared to the performance reported in [21]. In total, 6350 shapes in batches of 50 were used for training and 375 shapes for testing. Parameters were fitted using the Adam optimizer [34] with a learning rate of $5 \times 10^{-4}$, $\beta_1 = 0.9$, and $\beta_2 = 0.99$ over 500 epochs. The switch to the CD was performed after 200 epochs, where this value was identified manually. The reconstruction loss obtained with the proposed architecture was comparable to the results reported in [21] (Table I). We therefore considered the architecture feasible for the performed experiments.

D. Experiments
We ran experiments in order to compare RUS to sampling with the HPF in terms of training time as well as reconstruction loss. We varied the input point cloud size over 2048, 4096, and 8192 points, where 2048 is the input size commonly used in the literature (e.g., [11], [20], [21]). We furthermore analyzed 8-, 16-, and 32-dimensional latent representations, i.e., multiples of four, the number of parameters underlying the generation of the input data set. Most of the training hyperparameters were kept the same as in [21], except for the number of iterations, which was increased to 8000. We furthermore applied data augmentation by generating three random rotations within the interval $[-\pi/2, \pi/2]$ around the z-axis for each geometry [21].
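This augmentation step corresponds to the following sketch:

```python
import numpy as np

def augment_rotations(cloud, n_rot=3, rng=np.random.default_rng()):
    """Generate n_rot copies of a point cloud, each rotated around the
    z-axis by an angle drawn uniformly from [-pi/2, pi/2]."""
    out = []
    for _ in range(n_rot):
        t = rng.uniform(-np.pi / 2, np.pi / 2)
        R = np.array([[np.cos(t), -np.sin(t), 0.0],
                      [np.sin(t),  np.cos(t), 0.0],
                      [0.0,        0.0,       1.0]])
        out.append(cloud @ R.T)   # rotate all points at once
    return out
```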
In order to analyze the maximum point cloud size that could be processed with the given hardware and PC-AE, we performed a run-to-crash test by systematically increasing the size of the point clouds with redundant data.
Finally, the analysis of learned or abstracted features was performed in two steps: First, models were evaluated using shapes from different datasets and different sampling schemes, in order to identify overfitting and potential for generalization. In a second step, the models were verified by visually comparing the reconstructed point clouds with their respective inputs, as well as by performing interpolations within the learned latent variables.

IV. RESULTS

A. Running Time for PC-AE Training
Neglecting the pre-processing costs, we evaluated the running time with respect to the latent representation and point cloud sizes. The latter had the higher influence on the elapsed time per iteration (Table II), as expected, since it increases the number of operations in the encoder and the number of parameters in the decoder.

B. Evaluation of Maximum Input Size
In this experiment, we gradually increased the size of the point clouds by replicating points and started a training process limited to 20 iterations. The maximum number of points before crashing the system was 131,072 ($2^{17}$), with an average GPU memory usage of (56.49 ± 14.58) % for a single graphics card and an elapsed time of 260 s per 10 iterations. This increased to 200,000 points when the data partitioning tree was not applied, leading to an average memory usage of (45.38 ± 19.91) % and an elapsed time of 450 s per 10 iterations.

C. Reconstruction Quality for RUS and HPF Sampling Schemes
We compared the reconstruction error of PC-AEs trained on data sampled either with RUS or with the HPF and found that both sampling schemes led to losses of comparable order of magnitude (Table III). However, networks trained on geometries sampled with the HPF performed better during training, while RUS led to better performance on unseen data.
In order to verify the capacity of the network to generalize to unseen data, the models were tested on 500 shapes sampled with the opposing approach, i.e., networks trained on RUS-sampled geometries were tested on HPF data, and vice versa (Table IV, where $\mathrm{CD}_{a,b}$ indicates training on dataset $a$ and testing on dataset $b$). Results indicate that models trained on the RUS dataset tended to handle unseen data better, except for increasing sizes of point clouds and latent representations. The behavior observed in the experiment may be explained by the variety of information contained in the dataset, which is expected to be higher with the RUS approach, since the method is not feature-aware and the sampling follows a uniform distribution. Hence, as the number of samples increases, the HPF approach starts to select points that are less relevant with respect to changes in the geometry, approaching the RUS method and increasing the generalization ability. An increase in the dimensionality of the latent space furthermore increases the number of parameters in the network and, therefore, its ability to abstract data, which is also in line with the observed results.

D. Evaluation of Learned Latent Representations
To illustrate that a network trained on HPF-sampled point clouds learned different features than one trained using RUS, we show the reconstruction of an exemplary shape, once with the example shape sampled using the same scheme as the training data set, and once with mismatched sampling schemes (Fig. 5). When comparing the reconstructions (red) to the reference point clouds (blue), it can be seen that both models provide a good approximation of the geometry when the sampling scheme matches the one used in the training data set, and that this is no longer the case for a mismatch in sampling schemes. Furthermore, the PC-AE trained on RUS training data led to a visually better approximation of the input than the HPF example, matching the quantitative results reported above.
As a second visual verification, we interpolated between two shapes in the latent space, using PC-AEs trained on the different datasets (see Fig. 6 for an example). Regardless of the accuracy of the reconstruction, the trained models were capable of smoothly interpolating between several different shapes, indicating that the latent representation was able to successfully abstract and represent the geometric features.
We further verified that the PC-AE indeed learned the latent features underlying the synthetic data set by calculating the Pearson correlation coefficient between the design variables (DVs) $x_i$ and the learned latent variables (LVs) on 150 shapes from each dataset. As a validation, we calculated the pairwise correlations between DVs to assert that the design variables were indeed independent; the highest magnitude among the coefficients was 0.24, between variables $x_2$ and $x_3$, and all other correlations were close to zero.
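A sketch of this correlation analysis (array shapes follow the set-up described above) could look as follows:

```python
import numpy as np

def correlation_matrix(dv, lv):
    """Pearson correlations between design variables dv (150, 4) and
    latent variables lv (150, 8), as used for Figs. 7 and 8."""
    dvn = (dv - dv.mean(axis=0)) / dv.std(axis=0)   # standardize DVs
    lvn = (lv - lv.mean(axis=0)) / lv.std(axis=0)   # standardize LVs
    return dvn.T @ lvn / len(dv)                    # (4, 8) DV-vs-LV block
```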
Next, correlations between LVs and DVs were calculated. For simplicity, only the models with eight latent variables trained on the datasets with 8192 points were considered for the PC-AEs trained on the RUS and HPF datasets, respectively (Fig. 7). Based on the similar magnitude of the correlations found for both autoencoders, we conclude that both sampling schemes lead to a comparable ability of the PC-AE to abstract the features of the dataset, although the model trained on the RUS dataset presented higher correlations overall.
Finally, when analyzing the pairwise correlations between LVs for both models, the HPF method yielded less correlated variables than RUS (Fig. 8). Potentially, HPF led to a more homogeneous problem and hence to more uniform features across the shapes in the HPF dataset. On the other hand, when RUS is used, the sampling of points is independent for each geometry, such that the sampled geometries are less similar. Therefore, for RUS, abstracting the features used to generate the dataset becomes harder, and the point distributions of the geometries are mapped differently. This is in line with the superior generalization capabilities observed for PC-AE models trained on RUS data.

V. CONCLUSION
Research on GDL has advanced considerably in recent years, and the boost provided by the development of powerful processing units such as GPUs has played an important role in this process. Among the data representations used for GDL, point clouds have become a promising approach due to their simple structure, efficiency, and potential to resolve 3D objects with high resolution, which is required for applications in automotive digital development. The present work investigated the scalability of PC-AEs from two perspectives.
First, we proposed a modification of the loss function in order to avoid point-wise operations during the complete training phase, which are one of the bottlenecks for training on large point clouds. Our approach was tested on the car class from ShapeNet Core [33] and achieved performance comparable to the architecture proposed in [21]. The performance of the PC-AE in the experiments was evaluated in terms of computational effort and abstraction of features. In terms of computational effort, our approach could be scaled up to 200,000 points, two orders of magnitude more than the models used in most of the reviewed works. Comparing the runtime and memory usage for different combinations of latent representation and point cloud sizes, the latter had the higher influence on the computational costs, due to the resulting increase in the number of operations and free parameters in the encoder and decoder, respectively.
Second, we proposed reducing the dimensionality of the point clouds by a sampling scheme based on fine-feature detection (HPF). In order to assess the effects of the sampling scheme on the learned features, a synthetic dataset was generated, allowing for full control over the data generation process in terms of the known number and types of parameters, as well as their values. This dataset allowed for a straightforward evaluation of the learned features through the correlation of latent space activations and design parameters.
When comparing HPF sampling to random uniform sampling, the achieved reconstruction losses indicated that models trained on randomly sampled geometries generalized better than those trained on the dataset sampled using the HPF, since they had more information about the overall geometry, while the HPF focused on the detection of edges and vertices.
Finally, the latent variables of the PC-AE trained on the RUS dataset showed a stronger correlation to the design parameters, which supports the good generalization capability of the model. However, the model trained on the HPF-sampled dataset showed weaker correlations within the latent space, a possible sign that the sampling method enabled the autoencoder to differentiate the geometric features more efficiently. Future work should investigate whether tasks other than reconstruction, as solved by the autoencoder, benefit from the proposed pre-processing through feature detection.

Fig. 3. Parameterization of the base geometry used for generating the dataset.

Fig. 4. Point cloud resampling according to the modified high-pass filtering approach proposed in [30]. From left to right: full point cloud (25,000 points), 256-point and 2048-point representations.

Fig. 5. Reconstruction of geometries using the architectures trained on different datasets. The blue markers indicate the points of the input point cloud.

Fig. 6. Interpolation between geometries using the dataset sampled with the HPF method. The progression goes from left to right, top to bottom.

Fig. 7. Pearson correlation between the design variables $x_i$ and latent variables (LV) obtained from the networks trained on the RUS (left) and HPF dataset (right).

Fig. 8. Pearson correlation between latent variables (LV) obtained from the networks trained on the RUS (left) and HPF dataset (right).

TABLE I. RECONSTRUCTION LOSS OF THE PC-AE TRAINED ON THE CAR CLASS FROM SHAPENET CORE

TABLE II. TIME AND MEMORY REQUIREMENTS FOR DIFFERENT POINT CLOUD (PC) AND LATENT REPRESENTATION (LR) SIZES

TABLE III. CHAMFER DISTANCE (CD) CALCULATED ON THE TRAINING AND TEST SETS SAMPLED ACCORDING TO THE RUS APPROACH, FOR DIFFERENT POINT CLOUD (PC) AND LATENT REPRESENTATION (LR) SIZES