Publications

Journals (12)

The creation of truly believable simulated natural environments remains an unsolved problem in Computer Graphics. This is, in part, due to a lack of visual variety. In nature, apart from variation due to abiotic and biotic growth factors, a significant role is played by disturbance events, such as fires, windstorms, disease, and death and decay processes, which give rise to both standing dead trees (snags) and downed woody debris (logs). For instance, snags constitute on average 10% of unmanaged forests by basal area, and logs account for 2.5 times this quantity. While previous systems have incorporated individual elements of disturbance (e.g., forest fires) and decay (e.g., the formation of humus), there has been no unifying treatment, perhaps because of the challenge of matching simulation results with generated geometric models. In this paper, we present a framework that combines an ecosystem simulation, which explicitly incorporates disturbance events and decay processes, with a model realization process, which balances the uniqueness arising from life history with the need for instancing due to memory constraints. We tested our hypothesis concerning the visual impact of disturbance and decay with a two-alternative forced-choice experiment (n = 116). Our findings are that the presence of dead wood in various forms, as snags or logs, significantly improves the believability of natural scenes, while, surprisingly, general variation in the number of model instances, with up to 8 models per species, and a focus on disturbance events, does not.

@article{Peytavie2024deadwood,
    title = {DeadWood: Including disturbance and decay in the depiction of digital nature},
    author = {Peytavie, Adrien and Gain, James and Gu\'{e}rin, Eric and Argudo, Oscar and Galin, Eric},
    journal = {ACM Transactions on Graphics},
    year = {2024}
}

High-end Terrestrial Lidar Scanners are often equipped with RGB cameras that are used to colorize the point samples. Some of these scanners produce panoramic HDR images by encompassing the information of multiple pictures with different exposures. Unfortunately, exported RGB color values are not in an absolute color space, and thus point samples with similar reflectivity values might exhibit strong color differences depending on the scan the sample comes from. These color differences produce severe visual artifacts if, as usual, multiple point clouds colorized independently are combined into a single point cloud. In this paper we propose an automatic algorithm to minimize color differences among a collection of registered scans. The basic idea is to find correspondences between pairs of scans, i.e. surface patches that have been captured by both scans. If the patches meet certain requirements, their colors should match in both scans. We build a graph from such pair-wise correspondences, and solve for the gain compensation factors that best uniformize color across scans. The resulting panoramas can be used to colorize the point clouds consistently. We discuss the characterization of good candidate matches, and how to find such correspondences directly on the panorama images instead of in 3D space. We have tested this approach to uniformize color across scans acquired with a Leica RTC360 scanner, with very good results.

@article{MunozPandiella2022gain,
  title = {Gain compensation across LIDAR scans},
  journal = {Computers \& Graphics},
  volume = {106},
  pages = {174--186},
  year = {2022},
  issn = {0097-8493},
  author = {Imanol Munoz-Pandiella and Marc {Comino Trinidad} and Carlos And\'{u}jar and Oscar Argudo and Carles Bosch and Antonio Chica and Beatriz Mart\'{i}nez},
}
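As a rough sketch of the pair-wise solve described above (this is my own illustrative formulation, with hypothetical names such as `solve_gains`; the paper's actual objective, weighting and graph construction may differ), per-scan gain factors can be recovered in the log domain, where the constraint g_i * c_i = g_j * c_j becomes linear:

```python
import numpy as np

def solve_gains(pairs, n_scans, reg=1.0):
    """Least-squares gain factors from pairwise color correspondences.

    pairs: list of (i, j, ci, cj) where ci, cj are the mean intensities
           of the same surface patch as seen in scans i and j.
    Working in the log domain turns g_i * ci = g_j * cj into the
    linear equation log g_i - log g_j = log(cj / ci).
    """
    rows, rhs = [], []
    for i, j, ci, cj in pairs:
        r = np.zeros(n_scans)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(np.log(cj / ci))
    # Regularization: keep each log-gain close to 0 (gain close to 1),
    # which also fixes the global scale ambiguity of the system.
    for k in range(n_scans):
        r = np.zeros(n_scans)
        r[k] = reg
        rows.append(r)
        rhs.append(0.0)
    log_g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(log_g)
```

With a single correspondence between two scans, e.g. `solve_gains([(0, 1, 0.5, 0.6)], 2, reg=1e-3)`, the compensated colors `g[0]*0.5` and `g[1]*0.6` agree, and the regularizer balances the gains symmetrically around 1.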

Glaciers are some of the most visually arresting and scenic elements of cold regions and high mountain landscapes. Although snow-covered terrains have previously received attention in computer graphics, simulating the temporal evolution of glaciers as well as modeling their wide range of features has never been addressed. In this paper, we combine a Shallow Ice Approximation simulation with a procedural amplification process to author high-resolution realistic glaciers. Our multiresolution method allows the interactive simulation of the formation and the evolution of glaciers over hundreds of years. The user can easily modify the environment variables, such as the average temperature or precipitation rate, to control the glacier growth, or directly use brushes to sculpt the ice or bedrock with interactive feedback. Mesoscale and small-scale landforms that are not captured by the glacier simulation, such as crevasses, moraines, seracs, ogives, or icefalls are synthesized using procedural rules inspired by observations in glaciology and according to the physical parameters derived from the simulation. Our method lends itself to seamless integration into production pipelines to decorate reliefs with glaciers and realistic ice features.

@article{Argudo2020glaciers,
    title = {Simulation, Modeling and Authoring of Glaciers},
    author = {Argudo, Oscar and Galin, Eric and Peytavie, Adrien and Paris, Axel and Gu\'{e}rin, Eric},
    journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2020)},
    year = {2020},
    volume = {39},
    number = {6}
}

Mountainous digital terrains are an important element of many virtual environments and find application in games, film, simulation and training. Unfortunately, while existing synthesis methods produce locally plausible results they often fail to respect global structure. This is exacerbated by a dearth of automated metrics for assessing terrain properties at a macro level. We address these issues by building on techniques from orometry, a field that involves the measurement of mountains and other relief features. First, we construct a sparse metric computed on the peaks and saddles of a mountain range and show that, when used for classification, this is capable of robustly distinguishing between different mountain ranges. Second, we present a synthesis method that takes a coarse elevation map as input and builds a graph of peaks and saddles respecting a given orometric distribution. This is then expanded into a fully continuous elevation function by deriving a consistent river network and shaping the valley slopes. In terms of authoring, users provide various control maps and are also able to edit, reposition, insert and remove terrain features all while retaining the characteristics of a selected mountain range. The result is a terrain analysis and synthesis method that considers and incorporates orometric properties, and is, on the basis of our user study, more visually plausible than existing terrain generation methods.

@article{Argudo2019orometry,
    title = {Orometry-based Terrain Analysis and Synthesis},
    author = {Argudo, Oscar and Galin, Eric and Peytavie, Adrien and Paris, Axel and Gain, James and Gu\'{e}rin, Eric},
    journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2019)},
    year = {2019},
    volume = {38},
    number = {6}
}
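A first ingredient of such an orometric analysis is locating the peaks of a heightfield. A minimal sketch (my own illustration; the name `find_peaks` and the 8-neighbour strict-maximum criterion are assumptions, and the paper's metric additionally requires saddles and a merge-tree-style prominence computation):

```python
import numpy as np

def find_peaks(dem):
    """Return (row, col) indices of strict local maxima in a heightfield.

    A toy stand-in for the peak-detection step of an orometric analysis:
    a cell is a peak if it is strictly higher than all 8 neighbours.
    """
    h, w = dem.shape
    # Pad with -inf so border cells compare only against real neighbours.
    p = np.pad(dem, 1, constant_values=-np.inf)
    neigh = np.stack([p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0)])
    mask = dem > neigh.max(axis=0)
    return list(zip(*np.nonzero(mask)))
```

On a tiny elevation grid with a single summit, e.g. `[[0,1,0],[1,5,1],[0,1,0]]`, only the centre cell `(1, 1)` is reported.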

We present an interactive aeolian simulation to author hot desert scenery. Wind is an important erosion agent in deserts, yet it has been largely neglected in computer graphics. Our framework overcomes this limitation: it generates a variety of sand dunes, including barchans, longitudinal and anchored dunes, and simulates abrasion, which erodes bedrock and sculpts complex landforms. Given an input time-varying high-altitude wind field, we compute the wind field at the surface of the terrain according to the relief, and simulate the transport of sand blown by the wind. The user can interactively model complex desert landscapes and control their evolution over time, either with a variety of interactive brushes or by prescribing events along a user-defined timeline.

@article{Paris2019Desert,
    author = {Paris, Axel and Peytavie, Adrien and Gu\'{e}rin, Eric and Argudo, Oscar and Galin, Eric},
    title = {Desertscapes Simulation},
    journal = {Computer Graphics Forum},
    volume = {38},
    number = {7},
    year = {2019},
}

The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes and the transfer of RGBA color from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes.

@article{Argudo2020treevariations,
    title = {Image-Based Tree Variations},
    author = {Argudo, Oscar and Andújar, Carlos and Chica, Antoni},
    journal = {Computer Graphics Forum},
    year = {2020},
    volume = {39},
    number = {1},
    pages = {174--184}}

Despite recent advances in surveying techniques, publicly available Digital Elevation Models (DEMs) of terrains are low-resolution except for selected places on Earth. In this paper we present a new method to turn low-resolution DEMs into plausible and faithful high-resolution terrains. Unlike other approaches for terrain synthesis/amplification (fractal noise, hydraulic and thermal erosion, multi-resolution dictionaries), we benefit from high-resolution aerial images to produce highly detailed DEMs mimicking the features of the real terrain. We explore different architectures for Fully Convolutional Neural Networks to learn upsampling patterns for DEMs from detailed training sets (high-resolution DEMs and orthophotos), yielding up to one order of magnitude more resolution. Our comparative results show that our method outperforms competing data amplification approaches in terms of elevation accuracy and terrain plausibility.

@article{Argudo2018terrain, 
    title = {Terrain Super-resolution through Aerial Imagery and Fully Convolutional Networks},
    author = {Argudo, Oscar and Chica, Antoni and And\'{u}jar, Carlos},
    journal = {Computer Graphics Forum},
    volume = {37},
    number = {2},
    pages = {101--110},
    year = {2018}}

The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast at both training and inference time. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring efforts.

@article{Argudo2017segmentation, 
    title = { Segmentation of Aerial Images for Plausible Detail Synthesis }, 
    author = { Argudo, Oscar and Comino, Marc and Chica, Antoni and And\'{u}jar, Carlos and Lumbreras, Felipe }, 
    journal = { Computers \& Graphics }, 
    year = { 2018 }, 
    volume = { 71 }, 
    pages = { 23--34 } }

We present an efficient method for generating coherent multi-layer landscapes. We use a dictionary built from exemplars to synthesize high-resolution fully featured terrains from input low-resolution elevation data. Our example-based method consists in analyzing real-world terrain examples and learning the procedural rules directly from these inputs. We take into account not only the elevation of the terrain, but also additional layers such as the slope, orientation, drainage area, the density and distribution of vegetation, and the soil type. By increasing the variety of terrain exemplars, our method allows the user to synthesize and control different types of landscapes and biomes, such as temperate or rain forests, arid deserts and mountains.

@article{Argudo2017coherent, 
    title = { Coherent multi-layer landscape synthesis }, 
    author = { Argudo, Oscar and And\'{u}jar, Carlos and Chica, Antoni and Gu\'{e}rin, Eric and Digne, Julie and Peytavie, Adrien and Galin, Eric }, 
    journal = { The Visual Computer }, 
    year = { 2017 }, 
    volume = { 33 }, 
    number = { 6 }, 
    pages = { 1005--1015 }, }

State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach are that it recovers the overall shape from a single tree image, requires no tree modeling knowledge and only minimal authoring effort, and that the associated image-based representation is easy to compress and thus suitable for network streaming.

@article{Argudo2016singlepicture, 
    title = { Single-picture reconstruction and rendering of trees for plausible vegetation synthesis }, 
    author = { Argudo, Oscar and Chica, Antoni and And\'{u}jar, Carlos }, 
    journal = { Computers \& Graphics }, 
    year = { 2016 }, 
    volume = { 57 }, 
    pages = { 55--67 }, }

The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications like the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during the interactive inspection sessions. This is, however, not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of rendering front in multiresolution trees, their properties and the algorithms that construct the hierarchy and efficiently render it, applied to very complex CAD models, so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models which uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects and real-time collision detection during these displacements. Our solution, based on the analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper.

@article{Argudo2016interactive, 
    title = {Interactive inspection of complex multi-object industrial assemblies}, 
    author = {Argudo, Oscar and Besora, Isaac and Brunet, Pere and Creus, Carles and Hermosilla, Pedro and Navazo, Isabel and Vinacua, \`{A}lvar},
    journal = {Computer-Aided Design}, 
    volume = {79}, 
    pages = {48--59}, 
    year = {2016} }

We discuss biharmonic fields that approximate signed distance fields, and show that this approximation can be a powerful tool for mesh completion in general and complex cases. We present an adaptive, multigrid algorithm to extrapolate signed distance fields. By defining a volume mask in a closed region bounding the area that must be repaired, the algorithm computes a signed distance field in well-defined regions and uses it as an over-determined boundary condition constraint for the biharmonic field computation in the remaining regions. The algorithm operates locally, within an expanded bounding box of each hole, and therefore scales well with the number of holes in a single, complex model. We discuss this approximation in practical examples in the case of triangular meshes resulting from laser scan acquisitions, which require massive hole repair. We conclude that the proposed algorithm is robust and general, and is able to deal with complex topological cases.

@article{Argudo2015biharmonic, 
    title = { Biharmonic Fields and Mesh Completion }, 
    author = { Argudo, Oscar and Brunet, Pere and Chica, Antoni and Vinacua, \`{A}lvar }, 
    journal = { Graphical Models }, 
    year = { 2015 }, 
    volume = { 82 }, 
    pages = { 137--148 }, }
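The core idea, a field whose discrete biharmonic operator vanishes where data is missing, constrained by known values elsewhere, can be illustrated with a 1D toy analogue (my own sketch, not the paper's adaptive multigrid volumetric solver; `biharmonic_fill_1d` and the penalty weighting are assumptions):

```python
import numpy as np

def biharmonic_fill_1d(values, known):
    """Fill unknown samples of a 1D signal with a discrete biharmonic field.

    values: array with arbitrary numbers at unknown positions.
    known:  boolean mask, True where samples are trusted.
    Enforces the fourth-difference (discrete biharmonic) equation at
    interior samples and pins the known samples, in a least-squares sense.
    """
    n = len(values)
    rows, rhs = [], []
    # Fourth-difference stencil [1, -4, 6, -4, 1] ~ discrete biharmonic operator.
    for i in range(2, n - 2):
        r = np.zeros(n)
        r[i - 2:i + 3] = [1.0, -4.0, 6.0, -4.0, 1.0]
        rows.append(r)
        rhs.append(0.0)
    # Heavily weighted constraints act as the boundary condition.
    w = 1e6
    for i in np.flatnonzero(known):
        r = np.zeros(n)
        r[i] = w
        rows.append(r)
        rhs.append(w * values[i])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

Since the fourth difference of a cubic polynomial is exactly zero, holes punched into samples of a cubic are recovered essentially exactly, which is the smoothness property that makes biharmonic extrapolation attractive for hole repair.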

Conferences (4)

Sant Quirze de Pedret is a Romanesque church located in Cercs (Catalonia, Spain) at the foothills of the Pyrenees. Its walls harbored one of the most important examples of mural paintings in Catalan Romanesque Art. However, in two different campaigns (in 1921 and 1937) the paintings were removed using the strappo technique and transferred to museums for safekeeping. This detachment protected the paintings from being sold in the art market, but at the price of breaking the integrity of the monument. Nowadays, the paintings are exhibited in the Museu Nacional d’Art de Catalunya - MNAC (Barcelona, Catalonia) and the Museu Diocesà i Comarcal de Solsona - MDCS (Solsona, Catalonia). Some fragments of the paintings are still on the walls of the church. In this work, we present the methodology to digitally reconstruct the church building at its different phases and group the dispersed paintings in a single virtual church, commissioned by the MDCS. We have combined 3D reconstruction (LIDAR and photogrammetric using portable artificial illumination) and modeling techniques (including texture transfer between different shapes) to recover the integrity of the monument in a single 3D virtual model. Furthermore, we have reconstructed the church building at different significant historical moments and placed actual paintings on its virtual walls, based on archaeological knowledge. This set of 3D models allows experts and visitors to better understand the monument as a whole, the relations between the different paintings, and its evolution over time.

@inproceedings{MunozPandiella2022pedret, 
    title = { Digital Reintegration of Distributed Mural Paintings at Different Architectural Phases: the Case of St. Quirze de Pedret }, 
    author = { Munoz-Pandiella, Imanol and Argudo, Oscar and Lor\'{e}s Otzet, Imma and Font Comas, Joan and \`{A}vila Casademont, Gen\'{i}s and Pueyo, Xavier and And\'{u}jar, Carlos }, 
    year = { 2022 },
    booktitle = {Eurographics Workshop on Graphics and Cultural Heritage},
    publisher = {The Eurographics Association},
    ISSN = {2312-6124},
    ISBN = {978-3-03868-178-6},
    DOI = {10.2312/gch.20221227}
}

Structure-from-motion along with multi-view stereo techniques jointly allow for the inexpensive scanning of 3D objects (e.g. buildings) using just a collection of images taken from commodity cameras. Despite major advances in these fields, a key limitation of dense reconstruction algorithms is that correct depth/normal values are not recovered on specular surfaces (e.g. windows) and parts lacking image features (e.g. flat, textureless parts of the facade). Since these reflective properties are inherent to the surface being acquired, images from different viewpoints hardly contribute to solve this problem. In this paper we present a simple method for detecting, classifying and filling non-valid data regions in depth maps produced by dense stereo algorithms. Triangle meshes reconstructed from our repaired depth maps exhibit much higher quality than those produced by state-of-the-art reconstruction algorithms like Screened Poisson-based techniques.

@inproceedings{Andujar2018depthmap, 
    title = { Depth Map Repairing for Building Reconstruction }, 
    author = { And\'{u}jar, Carlos and Argudo, Oscar and Besora, Isaac and Brunet, Pere and Chica, Antoni and Comino, Marc }, 
    booktitle = {Spanish Computer Graphics Conference (CEIG)},
    year = { 2018 } }

The cost-effective generation of realistic vegetation is still a challenging topic in computer graphics. The simplest representation of a tree consists of a single texture-mapped billboard. Although a tree billboard does not support top views, this is the most common representation for still image generation in areas such as architecture rendering. In this paper we present a new approach to generate new tree models from a small collection of RGBA images of trees. Key ingredients of our method are the representation of the tree contour space with a small set of basis vectors, the automatic crown/trunk segmentation, and the continuous transfer of RGBA color from the exemplar images to the synthetic target. Our algorithm allows the efficient generation of an arbitrary number of tree variations and thus provides a fast solution to add variety among trees in outdoor scenes.

@inproceedings{Argudo2017treevariations, 
    title = { Tree variations }, 
    author = { Argudo, Oscar and And\'{u}jar, Carlos and Chica, Antoni }, 
    booktitle = {Spanish Computer Graphics Conference (CEIG)},
    year = { 2017 } }

In this paper we propose a photon mapping-based technique for the efficient rendering of urban landscapes. Unlike traditional photon mapping approaches, we accumulate the photon energy into a collection of persistent 2D photon buffers encoding the incoming radiance for a superset of the surfaces contributing to the current image. We define an implicit parameterization to map surface points onto photon buffer locations. This is achieved through a cylindrical projection for the building blocks plus an orthogonal projection for the terrain. An adaptive scheme adjusts the resolution of the photon buffers to the viewing conditions. Our customized photon mapping algorithm combines multiple acceleration strategies to provide efficient rendering during walkthroughs and flythroughs with minimal temporal artifacts. To the best of our knowledge, the algorithm we present in this paper is the first one to address the problem of interactive global illumination for large urban landscapes.

@inproceedings{Argudo2012interactive, 
    title = { Interactive rendering of urban models with global illumination }, 
    author = { Argudo, Oscar and And\'{u}jar, Carlos and Patow, Gustavo A. }, 
    booktitle = { Proceedings of Computer Graphics International }, 
    year = { 2012 } }

Other (1)

In recent years, we have witnessed significant improvements in digital terrain modeling, mainly through photogrammetric techniques based on satellite and aerial photography, as well as laser scanning. These techniques allow the creation of Digital Elevation Models (DEM) and Digital Surface Models (DSM) that can be streamed over the network and explored through virtual globe applications like Google Earth or NASA WorldWind. The resolution of these 3D scenes has improved noticeably, reaching in some urban areas resolutions of 1 m or better for the DEM and buildings, and less than 10 cm per pixel in the associated aerial imagery. However, in rural, forest or mountainous areas, the typical resolution of elevation datasets ranges between 5 and 30 meters, and the typical resolution of the corresponding aerial photographs ranges between 25 cm and 1 m. This level of detail is sufficient only for aerial points of view; as the viewpoint approaches the surface, the terrain loses its realistic appearance.

One approach to augment the detail on top of currently available datasets is adding synthetic details in a plausible manner, i.e. including elements that match the features perceived in the aerial view. By combining the real dataset with the instancing of models on the terrain and other procedural detail techniques, the effective resolution can potentially become arbitrary. There are several applications that do not need an exact reproduction of the real elements but would greatly benefit from plausibly enhanced terrain models: videogames and entertainment applications, visual impact assessment (e.g. how a new ski resort would look), virtual tourism, simulations, etc.

In this thesis we propose new methods and tools to help the reconstruction and synthesis of high-resolution terrain scenes from currently available data sources, in order to achieve realistically looking ground-level views. In particular, we decided to focus on rural scenarios, mountains and forest areas. Our main goal is the combination of plausible synthetic elements and procedural detail with publicly available real data to create detailed 3D scenes from existing locations. Our research has focused on the following contributions:
- An efficient pipeline for aerial imagery segmentation
- Plausible terrain enhancement from high-resolution examples
- Super-resolution of DEMs by transferring details from the aerial photograph
- Synthesis of arbitrary tree picture variations from a reduced set of photographs
- Reconstruction of 3D tree models from a single image
- A compact and efficient tree representation for real-time rendering of forest landscapes