In 2012 I was contacted by Panomax, the provider of high-resolution 360° panoramic live camera streams, to do some research on how to integrate 3D rendered content into their panocam live streams. This would enable them to place virtual, non-existent objects inside their live panoramas for any conceivable purpose.

The goal was to create renderings that respect the ever-changing light and weather conditions and still produce a plausible image composition every few minutes. An automated solution that does all this in a fraction of the panocam update interval was therefore key. So I came up with a scripted backend for 3ds Max that solves this using the following steps:

  • launch 3ds Max in headless mode and instruct it to watch a specific folder for incoming 360° panoramic images
  • pull new panoramic images (usually well above 6K in resolution) into a customized 3ds Max scene
  • adapt the scene's environment and lighting conditions depending on time of day, season and weather
  • render the region of interest (the virtual object's location), using matte techniques to catch shadows and lighting
  • composite the rendered output onto the original panoramic image at the correct position
  • send the updated panoramic image to the Panomax hosting server
  • loop through the above steps indefinitely

Finally, here are some image sequences produced by this method:

P3D_Car_RealVsRendered
P3D_Cars_At_DifferenteTime
The following example image shows a 3D rendered xDrive Cup column seamlessly integrated into the scenery:

P3D_xDrive_Cup

Here’s a full-day sequence with 3D rendered architecture and a red BMW integrated into the environment:

P3D_Arch&CarTest
