Dr. Kenneth Vanhoey

Researcher at ETH Zürich - Computer Vision Laboratory

Research interests

  • 3D reconstruction: geometry and (higher-dimensional) appearance
  • Texturing: real-time, on-the-fly, by example
  • Computational photography: texture/color/style transfer, inpainting, etc.
  • Perception in computer graphics: perceptual quality metrics for 3D objects, perception of artifacts in textures
  • Deep learning: applied to computer graphics and vision



International conference papers
  • DARN: a Deep Adversarial Residual Network for Intrinsic Image Decomposition
    L. Lettry, K. Vanhoey, L. Van Gool. In proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV) 2018.
    Abstract :

    We present a new deep supervised learning method for intrinsic decomposition of a single image into its albedo and shading components. Our contributions are based on a new fully convolutional neural network that estimates absolute albedo and shading jointly. Our solution relies on a single end-to-end deep sequence of residual blocks and a perceptually-motivated metric formed by two adversarially trained discriminators. As opposed to classical intrinsic image decomposition work, it is fully data-driven, hence does not require any physical priors like shading smoothness or albedo sparsity, nor does it rely on geometric information such as depth. Compared to recent deep learning techniques, we simplify the architecture, making it easier to build and train, and constrain it to generate a valid and reversible decomposition. We rediscuss and augment the set of quantitative metrics so as to account for the more challenging recovery of non-scale-invariant quantities. We train and demonstrate our architecture on the publicly available MPI Sintel dataset and its intrinsic image decomposition, show attenuated overfitting issues and discuss generalizability to other data. Results show that our work outperforms state-of-the-art deep algorithms both qualitatively and quantitatively.
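    The reversibility constraint mentioned above (the decomposition must reconstruct the input as albedo times shading) can be illustrated with a minimal numpy sketch; function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def reversible_decomposition(image, albedo_pred, eps=1e-6):
    """Project a predicted albedo onto a valid, reversible decomposition:
    shading is recovered so that albedo * shading reconstructs the image."""
    albedo = np.clip(albedo_pred, eps, 1.0)
    shading = image / albedo  # I = A * S  =>  S = I / A
    return albedo, shading

# Toy grayscale "image": an albedo ramp under constant shading.
h, w = 4, 4
true_albedo = np.linspace(0.2, 0.8, h * w).reshape(h, w)
true_shading = np.full((h, w), 0.5)
image = true_albedo * true_shading

albedo, shading = reversible_decomposition(image, true_albedo)
```

    In the actual network both components are predicted jointly; the sketch only shows the algebraic constraint that ties them together.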


International journal articles
  • Visual Quality Assessment of 3D Models: on the Influence of Light-Material Interaction
    PDF BibTeX (Supplemental material here)
    K. Vanhoey, B. Sauvage, P. Kraemer, G. Lavoué. ACM Transactions on Applied Perception, vol. 15, issue 1 (October 2017).
    Abstract :

    Geometric modifications of 3D digital models are commonplace for the purpose of efficient rendering or compact storage. Modifications imply visual distortions which are hard to measure numerically. They depend not only on the model itself but also on how the model is visualized. We hypothesize that the model’s light environment and the way it reflects incoming light strongly influences perceived quality. Hence, we conduct a perceptual study demonstrating that the same modifications can be masked, or conversely highlighted, by different light-matter interactions. Additionally, we propose a new metric that predicts the perceived distortion of 3D modifications for a known interaction. It operates in the space of 3D meshes with the object’s appearance, i.e. the light emitted by its surface in any direction given a known incoming light. Despite its simplicity, this metric outperforms 3D mesh metrics and competes with sophisticated perceptual image-based metrics in terms of correlation to subjective measurements. Unlike image-based methods, it has the advantage of being computable prior to the costly rendering steps of image projection and rasterization of the scene for given camera parameters.
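    As a toy illustration of measuring distortion in appearance space rather than on raw geometry, assuming a purely diffuse (Lambertian) light-material interaction; this is a simplified stand-in, not the metric proposed in the paper:

```python
import numpy as np

def lambertian_radiance(albedo, normals, light_dir):
    """Per-vertex outgoing radiance of a diffuse surface under a
    directional light (clamped n.l shading)."""
    l = light_dir / np.linalg.norm(light_dir)
    ndotl = np.clip(normals @ l, 0.0, None)
    return albedo * ndotl

def appearance_distance(albedo, normals_ref, normals_dist, light_dir):
    """Mean per-vertex radiance difference between a reference mesh and a
    geometrically distorted one, under one known light-material interaction."""
    r_ref = lambertian_radiance(albedo, normals_ref, light_dir)
    r_dist = lambertian_radiance(albedo, normals_dist, light_dir)
    return np.mean(np.abs(r_ref - r_dist))

# Toy data: two vertices whose normals get slightly distorted.
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
noisy = normals + np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
albedo = np.array([0.5, 0.5])
light = np.array([0.0, 0.0, 1.0])
d = appearance_distance(albedo, normals, noisy, light)
```

    The point of such a distance is that it can be evaluated before any camera is chosen, unlike image-based metrics that require rendering first.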

International conference papers
  • DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks
    PDF BibTeX (project page)
    A. Ignatov, N. Kobyshev, R. Timofte, K. Vanhoey, L. Van Gool. Presented by A. Ignatov at the International Conference on Computer Vision (ICCV) 2017, October 22-29, 2017, Venice, Italy. In proceedings of ICCV 2017.
    Abstract :

    Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations (small sensor size, compact lenses and the lack of specific hardware) prevent them from achieving the quality results of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera.
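    The composite loss described above can be sketched as a weighted sum of its three terms; the blur-based color term, the weights and the placeholder texture score are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def box_blur(img, k=3):
    """Crude box blur: lets us compare color independently of fine texture."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def composite_loss(pred, target, w_content=1.0, w_color=0.5, w_texture=0.1,
                   texture_score=0.0):
    """Weighted sum of three loss terms: content and color are analytic;
    the texture term would come from an adversarially trained discriminator
    (here just a placeholder scalar)."""
    content = np.mean((pred - target) ** 2)
    color = np.mean((box_blur(pred) - box_blur(target)) ** 2)
    return w_content * content + w_color * color + w_texture * texture_score

rng = np.random.default_rng(0)
target = rng.random((8, 8))
pred = target + 0.1
loss = composite_loss(pred, target)
```

    In practice each term would be computed on deep features rather than raw pixels; only the weighted-sum structure is taken from the abstract.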

  • Repeated Pattern Detection using CNN activations
    PDF BibTeX (Slides) (Supplemental material here)
    L. Lettry, M. Perdoch, K. Vanhoey, L. Van Gool. Presented by L. Lettry at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2017, March 27-29, 2017, Santa Rosa, California, USA.
    (also presented by K. Vanhoey at the J-FIG 2016 annual meeting, December 2nd, 2016, Grenoble, France)
    In proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
    Abstract :

    We propose a new approach for detecting repeated patterns on a grid in a single image. To do so, we detect repetitions in the space of pre-trained deep CNN filter responses at all layer levels. These encode features at several conceptual levels (from low-level patches to high-level semantics) as well as scales (from local to global). As a result, our repeated pattern detector is robust to challenging cases where repeated tiles show strong variation in visual appearance due to occlusions, lighting or background clutter. Our method contrasts with previous approaches that rely on keypoint extraction, description and clustering or on patch correlation. These generally only detect low-level feature clusters that do not handle variations in visual appearance of the patterns very well. Our method is simpler, yet incorporates high level features implicitly. As such, we can demonstrate detections of repetitions with strong appearance variations, organized on a nearly-regular axis-aligned grid. Results show robustness and consistency throughout a varied database of more than 150 images.
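    A core ingredient, detecting regular repetition inside a CNN feature map, can be approximated by looking for off-center peaks in the map's autocorrelation. The following numpy sketch is illustrative and much simpler than the paper's detector:

```python
import numpy as np

def autocorrelation(feat):
    """2D circular autocorrelation of a mean-centered feature map (via FFT),
    normalized so the zero-lag value is 1; periodic responses produce
    regularly spaced off-center peaks."""
    f = feat - feat.mean()
    spectrum = np.fft.fft2(f)
    ac = np.fft.ifft2(spectrum * np.conj(spectrum)).real
    return np.fft.fftshift(ac / ac[0, 0])

def dominant_period(feat):
    """Offset of the strongest off-center peak along the horizontal axis
    of the zero-vertical-lag autocorrelation row."""
    ac = autocorrelation(feat)
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    row = ac[cy]
    candidates = [(row[cx + s], s) for s in range(2, ac.shape[1] - cx)]
    return max(candidates)[1]

# Toy "feature map": a response repeating every 8 pixels horizontally.
x = np.arange(32)
feat = np.tile(np.cos(2 * np.pi * x / 8), (32, 1))
period = dominant_period(feat)
```

    Running this over filter responses from several layers, instead of a single synthetic map, would be the rough analogue of the multi-level detection the abstract describes.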

  • Comparison of Texture Synthesis Methods for Content Generation in Ultrasound Simulation for Training
    PDF BibTeX
    O. Mattausch, E. Ren, M. Bajka, K. Vanhoey, O. Göksel. Presented by O. Mattausch at the SPIE Medical Imaging 2017 conference, Orlando, Florida, USA. In proceedings of SPIE Medical Imaging 2017.
    Abstract :

    Navigation and interpretation of ultrasound (US) images require substantial expertise, the training of which can be aided by virtual-reality simulators. However, a major challenge in creating plausible simulated US images is the generation of realistic ultrasound speckle. Since typical ultrasound speckle exhibits many properties of Markov Random Fields, it is conceivable to use texture synthesis for generating plausible US appearance. In this work, we investigate popular classes of texture synthesis methods for generating realistic US content. In a user study, we evaluate their performance for reproducing homogeneous tissue regions in B-mode US images from small image samples of similar tissue and report the best-performing synthesis methods. We further show that regression trees can be used on speckle texture features to learn a predictor for US realism.

General-public communications
  • VarCity: the Video (YouTube)
    K. Vanhoey (Director, Writer and co-Producer)
    C. E. Porto de Oliveira (Animation, Edition and Composition)
    H. Riemenschneider (Associate Writer and co-Producer)
    L. Van Gool (co-Producer)
    A. Bódis-Szomorú (Associate Writer)
    S. Manén Freixa (Associate Writer)
    D. P. Paudel (Associate Writer)
    Premiere at the Xenix movie theater, May 19th, 2017, Zürich.
    K. Vanhoey and C. E. Porto de Oliveira presented a SIGGRAPH Talk PDF BibTeX (Slides) about this video production on August 2nd, 2017 in Los Angeles, California, USA.
    Abstract :

    VarCity - The Video showcases some of the building blocks of creating an entire city from images.

    VarCity was a 5-year research project financed by the European Research Council and awarded to ETH Professor Luc Van Gool in 2012. After 5 years of research at the Computer Vision Lab, ETH Zurich, it resulted in over 70 research papers published in top-tier conferences in Computer Vision and Graphics. We summarized the achievements in a documentary video.

    Details can be found on the VarCity website and on IMDB.


International conference papers with proceedings published in a journal
  • Simplification of Meshes with Digitized Radiance
    PDF BibTeX Video (Slides) (Supplemental material here)
    K. Vanhoey, B. Sauvage, P. Kraemer, F. Larue, J.-M. Dischler. Presented by K. Vanhoey at the Computer Graphics International 2015 conference, June 24-26, Strasbourg, France. Acceptance rate: 21%. The Visual Computer, vol. 31, issue 6-8 (Proceedings of Computer Graphics International 2015). Impact factor: 1.29 (source: ResearchGate).
    Abstract :

    View-dependent surface color of virtual objects can be represented by outgoing radiance of the surface. In this paper we tackle the processing of outgoing radiance stored as a vertex attribute of triangle meshes. Data resulting from an acquisition process can be very large and computationally intensive to render. We show that when reducing the global memory footprint of such acquired objects, smartly reducing the spatial resolution is an effective strategy for overall appearance preservation. Whereas state-of-the-art simplification processes only consider scalar or vectorial attributes, we conversely consider radiance functions defined on the surface for which we derive a metric. For this purpose, several tools are introduced like coherent radiance function interpolation, gradient computation, and distance measurements. Both synthetic and acquired examples illustrate the benefit and the relevance of this radiance-aware simplification process.
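    The idea of driving simplification with a metric that accounts for both geometry and per-vertex radiance can be caricatured as a cost on edge collapses; the cost function, weights and coefficient-vector representation below are illustrative assumptions, not the paper's derived metric:

```python
import numpy as np

def edge_collapse_cost(p1, p2, rad1, rad2, alpha=1.0, beta=1.0):
    """Toy cost of collapsing an edge (p1, p2): geometric length plus a
    distance between the two endpoint radiance functions, here represented
    as coefficient vectors in a common (e.g. spherical-harmonic) basis."""
    geom = np.linalg.norm(p1 - p2)
    radiance = np.linalg.norm(rad1 - rad2)  # L2 between coefficient vectors
    return alpha * geom + beta * radiance

def cheapest_edge(edges, positions, radiances):
    """Pick the edge whose collapse is predicted to change appearance least."""
    costs = [edge_collapse_cost(positions[i], positions[j],
                                radiances[i], radiances[j])
             for i, j in edges]
    return edges[int(np.argmin(costs))]

positions = np.array([[0.0, 0, 0], [1.0, 0, 0], [1.05, 0, 0]])
radiances = np.array([[1.0, 0.0], [0.2, 0.9], [0.25, 0.85]])  # per-vertex coefficients
edges = [(0, 1), (1, 2)]
best = cheapest_edge(edges, positions, radiances)
```

    Here the short edge between two vertices with similar radiance is collapsed first, which is the qualitative behavior a radiance-aware simplifier should exhibit.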

  • Unifying Color and Texture Transfer for Predictive Appearance Manipulation
    PDF BibTeX (Slides) (Supplemental material in html or ZIP archive)
    F. Okura, K. Vanhoey, A. Bousseau, A. Efros, G. Drettakis. Presented by Adrien Bousseau at the Eurographics Symposium on Rendering 2015 conference, June 24-26, Darmstadt, Germany.
    (also presented by K. Vanhoey at the GT Rendu (GdR IGRV) meeting, June 10th 2015)
    Acceptance rate: 30%. Computer Graphics Forum, vol. 34, issue 4 (Proceedings of the Eurographics Symposium on Rendering 2015). Impact factor: 2.24 (source: ResearchGate).
    Abstract :

    Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as changes of season (e.g., leaves on bare trees or piles of snow on a street) and flooding.
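    The color-transfer building block that the analysis phase evaluates can be illustrated by the classic global statistics-matching baseline (per-channel mean and standard deviation), which is far simpler than the local method used in the paper:

```python
import numpy as np

def color_transfer(source, exemplar):
    """Global statistics-based color transfer: match per-channel mean and
    standard deviation of the source to those of the exemplar."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        s, e = source[..., c], exemplar[..., c]
        scale = e.std() / max(s.std(), 1e-8)
        out[..., c] = (s - s.mean()) * scale + e.mean()
    return out

rng = np.random.default_rng(1)
src = rng.random((16, 16, 3)) * 0.5          # "sunny": one set of statistics
exm = rng.random((16, 16, 3)) * 0.2 + 0.6    # "overcast": different statistics
result = color_transfer(src, exm)
```

    Such a transfer can shift mood but, as the abstract points out, it cannot invent new content like leaves on a bare tree; that is exactly the gap texture transfer fills.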


International conference papers with proceedings published in a journal
  • Local random-phase noise for procedural texturing
    PDF BibTeX Video (Slides) (text of talk here) (Supplemental material here)
    G. Gilet, B. Sauvage, K. Vanhoey, J.-M. Dischler, D. Ghazanfarpour. Presented by G. Gilet at the ACM SIGGRAPH Asia 2014 conference, December 3-6, Shenzhen, China. Acceptance rate: 18%. ACM Transactions on Graphics, vol. 33, issue 6 (Proceedings of SIGGRAPH Asia 2014). Impact factor: 5.70 (source: ResearchGate).
    Abstract :

    Local random-phase noise is an efficient noise model for procedural texturing. It is defined on a regular spatial grid by local noises, which are sums of cosines with random phase. Our model is versatile thanks to separate samplings in the spatial and spectral domains. Therefore, it encompasses Gabor noise and noise by Fourier series. A stratified spectral sampling allows for a faithful yet compact and efficient reproduction of an arbitrary power spectrum. Noise by example is therefore obtained faster than state-of-the-art techniques. As a second contribution we address texture by example and generate not only Gaussian patterns but also structured features present in the input. This is achieved by fixing the phase on some part of the spectrum. Generated textures are continuous and non-repetitive. Results show unprecedented framerates and a flexible visual result: users can modify noise parameters to interactively edit visual variants.
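    The core noise model, a sum of cosines with random phases evaluated at chosen frequencies, can be sketched in a few lines of numpy; the frequency and amplitude samples below are arbitrary placeholders rather than a stratified sampling of a real power spectrum:

```python
import numpy as np

def random_phase_noise(shape, frequencies, amplitudes, seed=0):
    """Noise as a sum of cosines with random phases: each component has a
    2D frequency vector, an amplitude and a uniformly random phase."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    noise = np.zeros(shape)
    for (fx, fy), a in zip(frequencies, amplitudes):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        noise += a * np.cos(2.0 * np.pi * (fx * xs + fy * ys) + phi)
    return noise

# A few frequency samples standing in for a target power spectrum.
freqs = [(0.05, 0.0), (0.0, 0.08), (0.1, 0.1)]
amps = [1.0, 0.5, 0.25]
n = random_phase_noise((64, 64), freqs, amps)
```

    Fixing the phases (instead of drawing them randomly) on part of the spectrum is, per the abstract, how structured features of an input can be reproduced rather than only Gaussian patterns.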

  • Traitement conjoint de la géométrie et de la radiance d'objets 3D numérisés (Joint processing of the geometry and radiance of digitized 3D objects)
    PhD thesis
    PDF BibTeX (Slides) (Slides)
    K. Vanhoey. Publicly defended by K. Vanhoey at the Université de Strasbourg on February 18th, 2014.
    Abstract :

    Over the past few decades, the computer graphics and computer vision communities have contributed to the emergence of technologies for digitizing 3D objects. A growing demand for these technologies comes from cultural stakeholders, notably for archiving, remote study and restoration of cultural heritage objects: statuettes, caves and buildings, for example. Beyond geometry, it can be valuable to digitize photometry at various levels of detail: a simple texture (2D), a light field (4D), an SV-BRDF (6D), etc. We formulate concrete solutions for creating and processing surface light fields represented by radiance functions attached to the surface. We address the construction of these functions from several views of the object captured under "on-site" conditions: unstructured, possibly sparse and noisy sampling. We propose a process for robust reconstruction that generates a surface light field ranging from "predictable" and artifact-free to excellent, depending notably on the sampling conditions. Next, we propose a simplification algorithm that reduces the memory and computational complexity of these sometimes heavy models. For this, we introduce a metric that jointly measures the degradation of geometry and radiance. Finally, an interpolation algorithm for radiance functions is proposed to provide a smooth and natural visualization that is not very sensitive to the spatial density of the functions. This visualization is especially beneficial when the model is simplified.


International journal articles
  • Robust Fitting on Poorly Sampled Data for Surface Light Field Rendering and Image Relighting
    PDF BibTeX (Slides) (Slides)
    K. Vanhoey, B. Sauvage, O. Génevaux, F. Larue, J.-M. Dischler. Computer Graphics Forum, vol. 32, issue 6. Impact factor: 2.68 (source: ResearchGate). Presented (invited CGF paper) by K. Vanhoey at the 25th Eurographics Symposium on Rendering, June 25-27 2014, Lyon, France.
    (also presented at the GT Rendu (GdR IGRV) meeting, March 8th 2013, and the journées "De l'acquisition à la compression des objets 3D" (GDR ISIS), May 23rd 2013: see alternative slides)
    Abstract :

    2D parametric color functions are widely used in Image-Based Rendering and Image Relighting. They make it possible to express the color of a point depending on a continuous directional parameter: the viewing or the incident light direction. Producing such functions from acquired data is promising but difficult. Indeed, an intensive acquisition process resulting in dense and uniform sampling is not always possible. Conversely, a simpler acquisition process results in sparse, scattered and noisy data on which parametric functions can hardly be fitted without introducing artifacts.

    Within this context, we present two contributions. The first one is a robust least-squares based method for fitting 2D parametric color functions on sparse and scattered data. Our method works for any amount and distribution of acquired data, as well as for any function expressed as a linear combination of basis functions. We tested our fitting for both image-based rendering (surface light fields) and image relighting using polynomials and spherical harmonics. The second one is a statistical analysis to measure the robustness of any fitting method. This measure assesses a trade-off between precision of the fitting and stability w.r.t. input sampling conditions. This analysis, along with visual results, confirms that our fitting method is robust and reduces reconstruction artifacts for poorly sampled data while preserving the precision for a dense and uniform sampling.
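    A minimal version of fitting a 2D parametric color function on scattered directional samples is ordinary regularized least squares on a polynomial basis; the ridge term below is a simple stand-in for the paper's robustness machinery:

```python
import numpy as np

def poly_basis(dirs, degree):
    """2D polynomial basis u^i v^j with i + j <= degree, per sample."""
    u, v = dirs[:, 0], dirs[:, 1]
    return np.stack([u**i * v**j
                     for i in range(degree + 1)
                     for j in range(degree + 1 - i)], axis=1)

def fit_directional_function(dirs, colors, degree=2, ridge=1e-3):
    """Regularized least-squares fit of a color function c(u, v) on scattered
    directional samples; the ridge term keeps the normal equations stable
    when samples are sparse or unevenly distributed."""
    b = poly_basis(dirs, degree)
    a = b.T @ b + ridge * np.eye(b.shape[1])
    return np.linalg.solve(a, b.T @ colors)

rng = np.random.default_rng(2)
dirs = rng.uniform(-1, 1, size=(30, 2))             # scattered view directions
colors = 0.3 + 0.5 * dirs[:, 0] + 0.1 * dirs[:, 1]  # ground-truth linear function
coeffs = fit_directional_function(dirs, colors)
pred = poly_basis(dirs, 2) @ coeffs
```

    With dense uniform sampling the ridge term is nearly irrelevant; with a handful of clustered samples it is what prevents the oscillation artifacts the abstract mentions.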

International conference papers with proceedings published in a journal
  • On-the-Fly Multi-Scale Infinite Texturing from Example
    PDF BibTeX Video (Slides) (Supplemental material here)
    K. Vanhoey, B. Sauvage, F. Larue, J.-M. Dischler. Presented by K. Vanhoey at the ACM SIGGRAPH Asia 2013 conference, November 19-22, Hong Kong.
    (also presented at the GT Rendu (GdR IGRV) meeting, October 17th 2013)
    Acceptance rate: 22%. ACM Transactions on Graphics, vol. 32, issue 6 (Proceedings of SIGGRAPH Asia 2013). Impact factor: 6.53 (source: ResearchGate).
    Abstract :

    In computer graphics, rendering visually detailed scenes is often achieved through texturing. We propose a method for on-the-fly non-periodic infinite texturing of surfaces based on a single image. Pattern repetition is avoided by defining patches within each texture whose content can be changed at runtime. In addition, we consistently manage multi-scale using one input image per represented scale. Undersampling artifacts are avoided by accounting for fine-scale features while colors are transferred between scales. Eventually, we allow for relief-enhanced rendering and provide a tool for intuitive creation of height maps. This is done using an ad-hoc local descriptor that measures feature self-similarity in order to propagate height values provided by the user for a few selected texels only. Thanks to the patch-based system, manipulated data are compact and our texturing approach is easy to implement on GPU. The multi-scale extension is capable of rendering finely detailed textures in real-time.
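    The patch-based system's key property, that tile content can be decided at runtime without storing a layout, can be sketched with a stateless coordinate hash; the hash constants are arbitrary and the sketch is not the paper's actual patch mechanism:

```python
def tile_content(tile_x, tile_y, num_variants, seed=42):
    """Deterministically pick a patch variant from tile coordinates, so an
    infinite non-periodic tiling needs no stored layout (hash constants
    are arbitrary; arithmetic is wrapped to 64 bits)."""
    mask = (1 << 64) - 1
    h = seed
    for v in (tile_x, tile_y):
        h = ((h ^ (v * 2654435761)) * 0x100000001b3) & mask
    return h % num_variants

# The same tile always resolves to the same patch variant...
a = tile_content(3, 7, 4)
b = tile_content(3, 7, 4)
# ...while a row of tiles does not simply repeat one variant.
row = [tile_content(x, 0, 4) for x in range(16)]
```

    Because the choice is a pure function of tile coordinates, the texture can be evaluated lazily on the GPU for any position, which is what makes "infinite" texturing feasible with compact data.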

Posters at international conferences
  • Simplification of triangle meshes with radiance attribute
    PDF BibTeX
    K. Vanhoey, B. Sauvage, F. Larue, J.-M. Dischler. Presented by K. Vanhoey at the 11th Eurographics Symposium on Geometry Processing, July 3-5 2013, Genoa, Italy.
    Abstract :

    View-dependent surface colour of virtual objects can be represented by outgoing radiance. Data resulting from an acquisition process can be very large and computationally intensive to visualise. Mesh simplification can provide a trade-off between efficiency and precision. In this paper we propose a new metric for simplifying meshes with radiance attribute through successive edge collapses. Whereas state-of-the-art simplification methods consider scalar or vectorial attributes, we conversely consider hemispherical functions. Our approach exploits a symmetrised representation of radiance functions. This enables us to coherently define distance measurement, interpolation and improved rendering, as shown by our results. Both synthetic and acquired examples illustrate the benefit and the relevance of this process.


Other communications
  • Reconstruction robuste et simplification de champs de lumière (Robust reconstruction and simplification of light fields)
    K. Vanhoey. Presentation for the mid-term evaluation of the PhD project, June 2012


Other communications
  • Construction et simplification de fonctions de couleur sur maillages surfaciques (Construction and simplification of color functions on surface meshes)
    K. Vanhoey, B. Sauvage, J.-M. Dischler. Poster for the "Journée posters" of the MSII doctoral school, October 2011, Université de Strasbourg
    Abstract :

    Digitization, in particular the digitization of works of art, is a topical issue because of its many applications: creation of digital content (databases, media production, etc.), services to public or private actors in the art and media sectors, etc.
    Many problems related to the digitization of 3D objects nevertheless remain. A faithful digital copy must make it possible to reproduce the appearance of the object under varying lighting conditions. It is therefore necessary to acquire not only its geometry, but also all the physical characteristics related to its appearance (hence its texture, or even its bidirectional reflectance).
    The very large amount of data produced by the measuring instruments makes direct use of the data impossible: one must first reconstruct a compressed virtual object (a mesh with attributes) and then simplify it.

    Reconstruction of the object's appearance: the processing chain we consider reconstructs a surface mesh from a set of position samples (3D points from geometric scans of the object) and appearance samples (photographs that associate colors with a 3D point depending on the viewing conditions). The result is a description of color as a function of viewing conditions. The main work consists in determining, on the one hand, the nature of this function (its form) and, on the other hand, the method for constructing the function from the samples (the fitting), so as to best represent the measured color variations.

    Visualization: the generated surface mesh is large, and real-time visualization requires a mechanism for trading off speed against visual fidelity. One way to reach such a trade-off is to simplify (approximate) the mesh and its attributes with minimal loss of fidelity with respect to the detailed model. The task is thus to devise a simplification method capable of discriminating important (visual) details to preserve from barely visible details to approximate.


  • Simplification de maillages surfaciques avec champs de lumière (Simplification of surface meshes with light fields)
    Master's thesis:
    (Report) BibTeX
    K. Vanhoey, supervised by B. Sauvage and J.-M. Dischler. Master's thesis in computer science, spécialité Informatique et Sciences de l'Image
    Internship from January to June 2010, LSIIT laboratory, Université de Strasbourg
    Abstract :

    Rendering models with light fields adds a great deal of realism to digitized virtual objects. They allow the color of an object to vary with the user's viewpoint, which makes it possible to model specular reflections in particular.

    However, the meshes representing such models are often very detailed and difficult to display in real time.

    To date, no publication addresses the simplification of meshes with light fields. We therefore first analyze simplification methods for simpler meshes, then compare them and draw conclusions.

    Next, we propose a first simplification method based on successive edge collapses. To this end, we associate with each edge a local error measure evaluating the bias that its collapse would introduce. This measure combines a geometric error measure and an error measure on the light fields.

    In addition, we propose several ways of associating a new light field with the vertex resulting from an edge collapse.

National conference papers
  • Simplification de maillages avec champs de lumière (Simplification of meshes with light fields)
    PDF BibTeX (Slides)
    K. Vanhoey, B. Sauvage, J.-M. Dischler. Proceedings of the 23rd annual meeting of the Association Française d'Informatique Graphique (AFIG), November 17-19, 2010, Dijon, France. Acceptance: reviewed by a reading committee without a selective acceptance decision. Presented by K. Vanhoey at the AFIG 2010 conference, November 17-19, 2010, Dijon, France.
    Abstract :

    The demand for realistic rendering of digitized objects is substantial, notably in the fields of art and heritage preservation. To guarantee this realism, it is useful to attach to surface models not only color but reflectance fields. This, however, produces complex models that are hard to render in real time, hence a need for simplification.

    While the simplification of meshes without attributes or with vectorial attributes (color) has been extensively studied, no simplification method exists for models with functional attributes such as reflectance functions or surface light fields.

    In this paper, we propose a first simplification method for polygonal models with light fields, based on edge collapses. We define a local error measure that evaluates the bias introduced by an edge collapse in terms of geometry and light fields. We additionally propose several ways of associating new attributes with the simplified model.

    Finally, we compare the proposed methods on two models in order to highlight the characteristics of each of them.