The virtual camera was done during a summer project, but the idea, IIRC, is quite simple.
I will try a hasty writeup ;)
1) Create a virtual image plane (or a cylinder/sphere for a panoramic camera) in 3D space. For this you need to specify the virtual camera's position, orientation, focal length and FOV.
2) Define pixels on the image plane. Specify the image resolution and create a grid on the image plane; each pixel then corresponds to a 3D point in the world coordinate system.
3) To get the color of each virtual pixel, project its corresponding 3D point into all real cameras. Usually only some of the cameras see the point. Each camera that sees it gives you a color, taken from the pixel the point projects to in its image, and you blend these colors. What worked well in our case was giving more weight to cameras where the point projected close to the center of their image.
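The three steps above can be sketched roughly as follows. This is a minimal NumPy sketch under my own assumptions, not the original code: pinhole cameras given as (K, R, t) with X_world = R·x_cam + t, the virtual image plane placed at a fixed depth, nearest-neighbor sampling, and a simple 1/(1+distance-to-center) blending weight. All function names are hypothetical.

```python
import numpy as np

def pixel_grid_world(K, R, t, w, h, depth):
    """Steps 1-2: lay a w x h pixel grid on the virtual image plane
    and back-project it to 3D world points at a fixed depth."""
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T      # viewing rays in camera coords
    return (rays * depth) @ R.T + t      # camera-to-world pose: R x + t

def project(K, R, t, pts):
    """Project world points into a real camera; returns pixel
    coordinates and depth along the optical axis."""
    cam = (pts - t) @ R                  # world-to-camera: R^T (X - t)
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]

def render_virtual(virt, reals, images, depth):
    """Step 3: color each virtual pixel by projecting its 3D point
    into every real camera and blending, weighting cameras where the
    point lands near the image center more heavily."""
    Kv, Rv, tv, w, h = virt
    pts = pixel_grid_world(Kv, Rv, tv, w, h, depth)
    acc = np.zeros((w * h, 3))
    wsum = np.zeros(w * h)
    for (K, R, t), img in zip(reals, images):
        ih, iw = img.shape[:2]
        uv, z = project(K, R, t, pts)
        vis = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < iw) \
                      & (uv[:, 1] >= 0) & (uv[:, 1] < ih)
        ui = np.clip(np.floor(uv[:, 0]).astype(int), 0, iw - 1)
        vi = np.clip(np.floor(uv[:, 1]).astype(int), 0, ih - 1)
        centre = np.array([iw / 2.0, ih / 2.0])
        dist = np.linalg.norm(uv - centre, axis=1)
        wgt = np.where(vis, 1.0 / (1.0 + dist), 0.0)
        acc += wgt[:, None] * img[vi, ui]   # nearest-neighbor sample
        wsum += wgt
    out = np.where(wsum[:, None] > 0,
                   acc / np.maximum(wsum, 1e-9)[:, None], 0.0)
    return out.reshape(h, w, 3)
```

The fixed-depth assumption is the big simplification here: without scene geometry you have to pick some surface to back-project onto, and the mismatch between that surface and the real scene is exactly where the artifacts mentioned below come from.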
As you can see, there will definitely be some artifacts, usually worse the farther the virtual camera is from the real cameras. However, to my surprise it worked quite well. It was even possible to look at the robot from a third-person view even though all of the real cameras were mounted on it.
It is also crucial to have well-calibrated cameras. Apart from their projection matrices, you also need to account for radial and tangential distortion, which was very significant with our camera models.
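For completeness, distortion is applied to the normalized camera coordinates before the intrinsic matrix K, using coefficients obtained from calibration. A sketch of the radial + tangential (Brown-Conrady) model that OpenCV's calibration routines use, assuming only the first two radial terms:

```python
import numpy as np

def distort(pts_norm, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates, i.e. (x, y) = (X/Z, Y/Z) before
    multiplying by K. This is the Brown-Conrady model."""
    x, y = pts_norm[:, 0], pts_norm[:, 1]
    r2 = x * x + y * y                       # squared radius from center
    radial = 1 + k1 * r2 + k2 * r2 * r2      # radial scaling factor
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=1)
```

In practice you would likely just pass the coefficients to `cv2.projectPoints` (or undistort the input images once up front) rather than hand-rolling this, but the model itself is this small.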
https://www.wikiwand.com/en/Pinhole_camera_model
https://www.wikiwand.com/en/Camera_matrix
https://www.wikiwand.com/en/Distortion_(optics)