Neural reflectance transformation imaging

Dulecha, Tinsae G.; Fanni, Filippo A.; Ponchio, Federico; Pellacini, Fabio; Giachetti, Andrea
2020-01-01

Abstract

Reflectance transformation imaging (RTI) is a computational photography technique widely used in the cultural heritage and material science domains to characterize relieved surfaces. It consists of capturing multiple images from a fixed viewpoint under varying lights. Handling the potentially huge amount of information stored in an RTI acquisition, typically 50–100 RGB values per pixel, while allowing data exchange, interactive visualization, and material analysis is not easy. The solution used in practical applications consists of creating “relightable images” by approximating the pixel information with a function of the light direction, encoded with a small number of parameters. This encoding allows the estimation of images relighted from novel, arbitrary lights, with a quality that, however, is not always satisfactory. In this paper, we present NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relighted from novel directions, especially in the case of challenging glossy materials. We also address the problem of validating the relighting quality on different surfaces, proposing a specific benchmark, SynthRTI, which includes image collections synthetically created with physically based rendering and featuring objects with different materials and geometric complexity. On this dataset, as well as on a collection of real acquisitions performed on heterogeneous surfaces, we demonstrate the advantages of the proposed relightable image encoding.
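The core idea described above, a per-pixel autoencoder that compresses the stack of measured RGB values into a few coefficients and decodes them together with a light direction, can be sketched as below. This is a minimal, hypothetical PyTorch sketch: the layer widths, ELU activations, 9-coefficient code size, and training loop are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a pixel-wise relightable-image autoencoder in the
# spirit of NeuralRTI: the encoder compresses all per-pixel RGB measurements
# into a small code, and the decoder reproduces the pixel colour for an
# arbitrary light direction. Sizes and activations are illustrative only.
import torch
import torch.nn as nn


class PixelRelightAutoencoder(nn.Module):
    def __init__(self, n_lights: int, code_size: int = 9, hidden: int = 150):
        super().__init__()
        # Encoder input: the 3 * n_lights RGB values measured for one pixel.
        self.encoder = nn.Sequential(
            nn.Linear(3 * n_lights, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, code_size),
        )
        # Decoder input: the per-pixel code plus a 2D light direction (lx, ly).
        self.decoder = nn.Sequential(
            nn.Linear(code_size + 2, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, 3),  # relighted RGB for that pixel
        )

    def forward(self, pixel_samples: torch.Tensor, light_dir: torch.Tensor):
        code = self.encoder(pixel_samples)                            # (B, code_size)
        relit = self.decoder(torch.cat([code, light_dir], dim=-1))    # (B, 3)
        return relit, code


# Training sketch: each sample is one pixel's stack of measurements; the loss
# compares the decoded colour with the colour captured under the
# corresponding acquisition light.
if __name__ == "__main__":
    n_lights = 50
    model = PixelRelightAutoencoder(n_lights)
    pixels = torch.rand(4096, 3 * n_lights)   # fake batch of per-pixel stacks
    lights = torch.rand(4096, 2) * 2 - 1      # fake (lx, ly) light directions
    target = torch.rand(4096, 3)              # measured RGB under those lights
    pred, code = model(pixels, lights)
    loss = nn.functional.mse_loss(pred, target)
    loss.backward()
```

At inference time, the stored per-pixel codes replace the full image stack, and relighting from a novel direction amounts to one decoder pass per pixel.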
Keywords: Reflectance Transformation Imaging; Relighting; Neural RTI; Autoencoder; Benchmark
Files in this record:
File: 10.1007_s00371-020-01910-9.pdf
Access: open access
License: Creative Commons
Size: 3.29 MB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1021905
Citations
  • PMC: n/a
  • Scopus: 12
  • Web of Science: 9