Originally on arXiv under the title: Natural Illumination from Multiple Materials Using Deep Learning
Now to appear in ICCV'17
How much does a single image reveal about the environment it was taken in? In this paper, we investigate how much of that information can be retrieved from a foreground object, combined with the background (i.e. the visible part of the environment). Assuming it is not perfectly diffuse, the foreground object acts as a complexly shaped and far-from-perfect mirror. An additional challenge is that the object's appearance confounds the light coming from the environment with the unknown materials it is made of.
We propose a learning-based approach to predict the environment from multiple reflectance maps that are computed from approximate surface normals. The proposed method allows us to jointly model the statistics of environments and material properties. We train our system on synthesized training data, but demonstrate its applicability to real-world data. Interestingly, our analysis shows that the information obtained from objects made out of multiple materials is often complementary and leads to better performance.
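To illustrate the idea of a reflectance map, here is a minimal sketch of how one can be computed from an image and per-pixel surface normals: colors of object pixels are accumulated into a grid indexed by the normal direction on the front-facing hemisphere. The function name and the simple orthographic (nx, ny) parameterization are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def reflectance_map(image, normals, res=32):
    """Accumulate observed pixel colors into a normal-indexed grid.

    image:   (H, W, 3) float array of object pixel colors
    normals: (H, W, 3) float array of unit surface normals (camera space)
    res:     resolution of the output reflectance map
    Returns a (res, res, 3) map over the front-facing hemisphere.
    (Illustrative sketch; the paper's exact parameterization may differ.)
    """
    # Keep only pixels whose normal faces the camera.
    mask = normals[..., 2] > 0
    n = normals[mask]          # (N, 3)
    c = image[mask]            # (N, 3)

    # Map the (nx, ny) normal components from [-1, 1] to grid indices.
    u = np.clip(((n[:, 0] + 1) / 2 * res).astype(int), 0, res - 1)
    v = np.clip(((n[:, 1] + 1) / 2 * res).astype(int), 0, res - 1)

    # Average all colors that fall into the same normal bin.
    rmap = np.zeros((res, res, 3))
    count = np.zeros((res, res, 1))
    np.add.at(rmap, (v, u), c)
    np.add.at(count, (v, u), 1)
    return rmap / np.maximum(count, 1)
```

For a mirror-like material this grid directly samples the environment; for glossier materials it is a blurred version of it, which is why learning the joint statistics of environments and materials helps.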
This webpage contains the following material:
To verify the effectiveness in a real relighting application, we show what re-rendering with a new material looks like when illumination is captured using our method vs. a light probe. The traditional setup (which we also used to acquire the reference for our test data) requires multiple exposures, (semi-automatic) image alignment, and a mirror ball with known reflectance and geometry. Instead, we have an unknown object with unknown material and a single LDR image. The following video shows how similar the two rendered results are for several real examples (in a similar format as Figure 9 in the paper). This is only possible when the HDR illumination is also correctly recovered. By contrast, a nearest-neighbor oracle approach already performs worse: the reflection alone is plausible, but far from the reference.
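For context, the multi-exposure step of the traditional light-probe pipeline can be sketched as a weighted merge of aligned LDR frames into an HDR radiance map. This is a minimal illustration under simplifying assumptions (linear, already-aligned inputs and a hat weighting function), not the exact procedure used to acquire our reference data.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge aligned LDR exposures into one HDR radiance map.

    images:         list of (H, W) float arrays in [0, 1], assumed linear
    exposure_times: matching list of exposure times in seconds
    A hat weighting discounts under- and over-exposed pixels.
    (Illustrative sketch of the classic multi-exposure merge.)
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peak weight at mid-gray
        num += w * img / t                  # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)
```

Our method sidesteps all of this: no exposure bracketing, no alignment, and no calibrated mirror ball, just a single LDR image of an unknown object.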