Three-dimensional reconstruction of surfaces and objects is a central problem in computer vision. Often such reconstructions are obtained using images or 3D scans taken from multiple viewpoints and locations. Consequently, a key problem in reconstruction is calibrating the geometry of the individual cameras. Most 3D reconstruction approaches involve highly complex non-linear optimizations that simultaneously solve for the requisite geometric calibration and the desired 3D reconstruction. An alternative approach is that of motion averaging, wherein the problem of geometric calibration is separated from that of 3D reconstruction. Here, a redundant set of pairwise relative motion measurements is averaged into a globally consistent solution for the camera geometry. The Lie group formulation of motion averaging exploits the geometry of the underlying motion group, yielding solutions that are significantly more efficient, accurate, and robust than those of conventional approaches. This has led to the adoption of motion averaging in state-of-the-art 3D reconstruction pipelines. In this talk, I will outline and develop the motion averaging framework and present multiple real-world 3D reconstructions using both camera images and 3D scanners.
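As a minimal illustration of the Lie group machinery underlying motion averaging (a sketch only, not the full relative-motion-averaging pipeline of the talk), the following Python code computes the intrinsic (Karcher) mean of a set of rotation estimates on SO(3) by repeatedly averaging in the tangent space via the logarithm map and re-projecting via the exponential map. All function names and tolerances are illustrative choices.

```python
import numpy as np

def hat(w):
    # Skew-symmetric matrix of a 3-vector (an element of the Lie algebra so(3)).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    # Exponential map so(3) -> SO(3) via Rodrigues' formula.
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def log_so3(R):
    # Logarithm map SO(3) -> so(3), returned as a 3-vector (axis * angle).
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def karcher_mean(rotations, iters=20):
    # Intrinsic mean on SO(3): map the residuals into the tangent space at the
    # current estimate, average them there, and update along the geodesic.
    R = rotations[0]
    for _ in range(iters):
        w = np.mean([log_so3(R.T @ Ri) for Ri in rotations], axis=0)
        R = R @ exp_so3(w)
        if np.linalg.norm(w) < 1e-10:
            break
    return R
```

For example, averaging two rotations of +0.1 and -0.1 radians about the z-axis recovers the identity, which is the geodesic midpoint between them; the same tangent-space average-and-update step is the basic building block that motion averaging applies to redundant pairwise relative motions.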