The rapid development of machine learning technologies in recent years has led to the emergence of CNN-based sensors, or ML-enabled smart sensor systems, which are intensively used in medical analytics, unmanned driving of cars, Earth sensing, and other fields. In practice, the accuracy of CNN-based sensors is highly dependent on the quality of the training datasets. The preparation of such datasets faces two fundamental challenges: data quantity and data quality. In this paper, we propose an approach aimed at solving both of these problems and investigate its efficiency. We present a pipeline for image dataset augmentation by synthesis with computer graphics and generative neural network approaches. Our solution improves training datasets, and we validate it in several different applications: object classification and detection, depth buffer reconstruction, and panoptic segmentation. The solution is well controlled and allows us to generate datasets in a reproducible manner with a desired distribution of features, which is essential for conducting specific experiments in computer vision.

We developed a content creation pipeline targeted at creating realistic image sequences with highly variable content. Our technique allows rendering a single 3D object or 3D scene in a variety of ways, including changes of geometry, materials, and lighting. By using synthetic data in training, we have improved the accuracy of CNN-based sensors compared to using only real-life data.

The purpose of our work is to propose a controllable and customizable way of inserting virtual objects into a real dataset and varying their distribution and appearance. This allows us both to improve the accuracy of CNN-based methods and to conduct experiments studying the influence of various factors on that accuracy. In most cases, it is impossible to determine a priori the initial distribution of the features we are interested in. The mechanism we propose allows us to investigate the impact of a given feature by adding objects that possess it; in some cases, the impact of a particular feature may turn out to be negligible. Unlike existing works, we propose a general dataset augmentation pipeline that we have tested on many different scenarios and datasets. Therefore, our contribution is a successfully validated software solution that allows one to augment and expand training datasets in a controlled way. Some of the results of our work are shown in Figure 1.

Hodan et al. achieved high quality by using an existing film production content creation pipeline with Autodesk Maya and the Arnold rendering system, high-quality 3D models, and physics simulation as the main randomizing tool. This is fundamentally different from previous approaches. Obviously, it suffers from the main disadvantage of current film production pipelines: high cost and high labor input; only six scenes with 30 different objects were used. Other disadvantages of Hodan's approach include the use of the non-freely available tools Maya and Arnold (which limits the adoption of this work) and the slow rendering reported by the authors: they needed 15 to 720 seconds per image (depending on quality) on a 16-core Xeon CPU with 112 GB of RAM. Therefore, reasonably fast creation of a dataset is possible only with significant computational resources. It is important to note that the authors reported that accurate modeling of the scene context was more significant (+16% CNN precision) than accurate light transport simulation (+6% CNN precision).

Unlike the other works mentioned earlier, a specialized content creation pipeline has been proposed whose authors stated explicitly that their goal was to design an open-source and universal pipeline. Its sampler modules provide randomization capabilities, which are the most interesting part of the pipeline. Sampler modules can generate positions for object placement (cameras, lights, 3D models) with various distributions and constraints, such as proximity checks. For example, object positions can be generated on a spherical surface, but the objects will not be too close (for instance, cameras will not end up looking straight at a wall) and/or will not collide with each other. Sampler modules can also select objects based on user-defined conditions and manipulate their properties (for example, enabling physics simulation for 3D models). The MaterialManipulator and MaterialRandomizer modules have been implemented to produce variation in appearance.
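As an illustration of the sampler idea above, the following is a minimal sketch of position sampling on a sphere with a proximity constraint. It is not code from the cited pipeline; the function name `sample_positions_on_sphere` and its parameters are assumptions made for this example, and the constraint is enforced by simple rejection sampling.

```python
import numpy as np

def sample_positions_on_sphere(n, radius=1.0, min_dist=0.3,
                               max_tries=10000, seed=0):
    """Rejection-sample n points on a sphere of the given radius so that
    every pair of accepted points is at least min_dist apart.

    Illustrative sketch only, not the API of any existing pipeline."""
    rng = np.random.default_rng(seed)
    accepted = []
    tries = 0
    while len(accepted) < n and tries < max_tries:
        tries += 1
        v = rng.normal(size=3)
        p = radius * v / np.linalg.norm(v)  # uniform direction on the sphere
        if all(np.linalg.norm(p - q) >= min_dist for q in accepted):
            accepted.append(p)
    if len(accepted) < n:
        raise RuntimeError("could not satisfy the proximity constraint")
    return np.stack(accepted)

# Example: eight candidate camera positions; the fixed seed makes the
# placement reproducible from run to run.
cameras = sample_positions_on_sphere(8, radius=2.5, min_dist=0.8, seed=42)
print(cameras.shape)  # (8, 3)
```

In practice, a real sampler would combine several such constraints (distance to other objects, distance to scene geometry, camera orientation checks) before accepting a candidate position.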
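Similarly, appearance variation of the kind provided by the MaterialManipulator and MaterialRandomizer modules can be approximated by perturbing material parameters within user-defined ranges. The sketch below is a hypothetical, self-contained example (the dict-based material and the `randomize_material` helper are assumptions, not the modules' actual API); a fixed seed keeps the variation reproducible.

```python
import random

def randomize_material(material, rng, color_jitter=0.05, rough_range=(0.2, 0.8)):
    """Perturb a simple material description in place: jitter the base color
    within +/- color_jitter and resample roughness within rough_range.

    Hypothetical example, not the API of any existing pipeline."""
    jittered = []
    for c in material["base_color"]:
        c = c + rng.uniform(-color_jitter, color_jitter)
        jittered.append(min(1.0, max(0.0, c)))  # keep channels in [0, 1]
    material["base_color"] = tuple(jittered)
    material["roughness"] = rng.uniform(*rough_range)
    return material

# Example: a fixed seed makes every generated variation reproducible.
rng = random.Random(123)
mat = {"base_color": (0.6, 0.4, 0.3), "roughness": 0.5}
print(randomize_material(mat, rng))
```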