Channel 10 video: Interview with PhotoSynth team.
This is a very good video, as it gives you an overview of how PhotoSynth works. It also shows how a scientific prototype was turned into an application that runs on almost any machine out there. (The first version took days and ran on a cluster of machines; my synth from 11 images took just over a minute on my old machine.)
One of the breakthroughs the team made in creating the 3D environment was figuring out the least amount of processing and scene generation needed to give us a sense of the true 3D environment that the 2D photographs represent. In their case it turned out to be a subset of the blobs used to perform the matching, which are then projected onto planes that represent the locations of the images as perceived by the algorithm. (In other words, the algorithm is not trying to determine each plane defined by the different objects in the scene, but the one big plane that the entire image can be projected onto. It is a bit like figuring out where to place a projector screen so that the projected image looks as close as possible to the real world.)
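To make the "projector screen" idea concrete, here is a minimal numpy sketch (my own illustration, not PhotoSynth's actual code): given the reconstructed 3D positions of the matched blobs seen in one photo, the best-fit plane is the one of least variance through the point cloud, which you can get from the smallest singular vector of the centered points. The `fit_plane` helper below is a hypothetical name for this step.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of 3D points.

    Returns (centroid, unit_normal): the plane passes through the
    centroid, and the normal is the direction of least variance,
    i.e. the smallest right singular vector of the centered cloud.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]  # rows of vt are singular vectors; last = smallest
    return centroid, normal

# Toy demo: feature points scattered near the plane z = 0.
rng = np.random.default_rng(0)
pts = np.zeros((100, 3))
pts[:, :2] = rng.uniform(-1, 1, size=(100, 2))   # spread in x, y
pts[:, 2] = rng.normal(scale=0.01, size=100)     # small noise in z

centroid, normal = fit_plane(pts)
```

The recovered normal should point (up to sign) along the z axis, so projecting the photo onto this plane would place the "screen" where the scene geometry roughly is.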