<?xml version="1.0"?>
                <!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "journalpublishing3.dtd">
                <article article-type="research-article" xmlns:mml="http://www.w3.org/1998/Math/MathML"
                xmlns:xlink="http://www.w3.org/1999/xlink"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                dtd-version="3.0">
                <front>
                    <journal-meta>
                    <journal-id journal-id-type="publisher-id">ei</journal-id>
                    <journal-title>Electronic Imaging</journal-title>
                    <issn pub-type="ppub">2470-1173</issn><issn pub-type="epub">2470-1173</issn>
                    <publisher>
                        <publisher-name>Society for Imaging Science and Technology</publisher-name>
                        <publisher-loc>IS&amp;T 7003 Kilworth Lane, Springfield, VA 22151 USA</publisher-loc>
                    </publisher>
                    </journal-meta>
                    <article-meta>
                    <article-id pub-id-type="doi">10.2352/EI.2025.37.8.IMAGE-274</article-id>
                    <article-id pub-id-type="publisher-id">IMAGE-274</article-id>
                    <article-categories>
                        <subj-group>
                        <subject>Proceedings Paper</subject>
                        </subj-group>
                    </article-categories>
                    <title-group>
                        <article-title>RGBD Routed Blending: A 3D Reconstruction Pipeline for Video Conferencing</article-title>
                    </title-group><contrib-group content-type="all"><contrib contrib-type="author"><name>
                            <surname>Bu</surname>
                            <given-names>Fan</given-names>
                           </name> <xref ref-type="aff" rid="aff1author1"/></contrib><aff id="aff1author1">Purdue University, US</aff></contrib-group><contrib-group content-type="all"><contrib contrib-type="author"><name>
                            <surname>Lin</surname>
                            <given-names>Qian</given-names>
                           </name> <xref ref-type="aff" rid="aff2author2"/></contrib><aff id="aff2author2">HP Labs, HP Inc., US</aff></contrib-group><contrib-group content-type="all"><contrib contrib-type="author"><name>
                            <surname>Allebach</surname>
                            <given-names>Jan</given-names>
                           </name> <xref ref-type="aff" rid="aff1author3"/></contrib><aff id="aff1author3">Purdue University, US</aff></contrib-group><abstract>
                    <title>Abstract</title>
                    <p>With the widespread use of video conferencing for remote communication in the workforce, there is increasing demand for natural face-to-face interaction between the two parties. To address the difficulty of acquiring frontal face images, multiple RGB-D cameras have been used to capture and render the frontal faces of target subjects. However, noise in the depth cameras can introduce geometry and color errors into the reconstructed 3D surfaces. In this paper, we propose RGBD Routed Blending, a novel two-stage pipeline for video conferencing that fuses multiple noisy RGB-D images in 3D space and renders virtual color and depth output images from a new camera viewpoint. The first stage, the geometry fusion stage, consists of an RGBD Routing Network followed by a Depth Integrating Network that fuses the RGB-D input images into a 3D volumetric geometry. This fused geometry is then passed, together with the input color images, to the second stage, the color blending stage, which renders a new color image from the target viewpoint. We quantitatively evaluate our method on two datasets, a synthetic dataset (DeformingThings4D) and a newly collected real dataset, and show that it outperforms state-of-the-art baseline methods in both geometry accuracy and color quality.</p>
                    </abstract><pub-date>
                        <day>2</day>
                        <month>2</month>
                        <year>2025</year>
                        </pub-date><volume>37</volume>
                    <issue-acronym></issue-acronym>
                    <issue-title>Imaging and Multimedia Analytics at the Edge 2025</issue-title>
                    <issue seq="274">8</issue>
                    <fpage>274-1</fpage>
                    <lpage>274-11</lpage>
                    <permissions>
                         <copyright-statement>© 2025, Society for Imaging Science and Technology</copyright-statement>
                        <copyright-year>2025</copyright-year>
                    </permissions><kwd-group><kwd>3D Reconstruction</kwd><kwd>3D Video Conferencing</kwd><kwd>Deep learning</kwd></kwd-group></article-meta>
                </front>
                </article>