Eliot Han, Michael Tu, Alejandro Garcia
We built a nonlinear ray tracer that renders images of a Schwarzschild black hole. We referenced an existing implementation in C#, converted it to C++, and added multithreading to achieve roughly a 6x speed-up.
Ray tracing is a computer graphics technique used to generate images of 3D scenes by tracing the paths of "light rays" sent from the observer into the scene. A ray tracing algorithm simulates the behaviour of light as it hits objects in the scene, taking into account interactions like shading, reflection, and refraction. Ray tracing is a good way to simulate physical phenomena and can render artificial images with a high degree of realism.
In this project, we use ray tracing to render a black hole. A black hole is an extreme astronomical object whose gravitational field is so intense that it bends the paths of light rays. Black holes are expected to form when massive stars collapse at the end of their life cycles. Their incredible gravitational field causes all kinds of visible distortions in a phenomenon known as gravitational lensing. Our ray tracer renders the simplest kind of black hole, the Schwarzschild black hole, which has no charge or angular momentum.
To render a black hole, we built a nonlinear ray tracer, one that works in the curved space-time near the black hole. Since we can no longer rely on the geometry of straight lines, we calculate the path of each ray in a stepwise fashion. In each time step, we update the velocity and position of the ray using a simple update equation derived from the equations of motion for a null geodesic (a massless particle) around a Schwarzschild black hole. The full derivation is beyond our understanding, as it requires knowledge of astrophysics and general relativity that none of us have, but we chose this equation for its simplicity and speed. The derivation is worked out by Riccardo Antonelli in his project Starless.
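In Cartesian form, the Starless equation applies the acceleration a = -(3/2) h^2 * pos / r^5, where h = |pos x vel| is conserved along the ray and units are chosen so the Schwarzschild radius is 1. As a minimal sketch of one integration step (semi-implicit Euler), with Vec3 and stepRay as illustrative names rather than the project's actual code:

    #include <cmath>

    // Minimal 3-vector with just the operations the step needs.
    struct Vec3 { double x, y, z; };
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // One time step of a light ray near the black hole. h2 = |pos x vel|^2
    // is computed once when the ray is launched and stays constant.
    void stepRay(Vec3& pos, Vec3& vel, double h2, double dt) {
        double r2 = dot(pos, pos);
        double r5 = r2 * r2 * std::sqrt(r2);
        Vec3 accel = pos * (-1.5 * h2 / r5);  // pulls the ray toward the hole
        vel = vel + accel * dt;               // update velocity
        pos = pos + vel * dt;                 // update position
    }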
We ray trace by drawing a ray from the camera position through each pixel into our scene. However, instead of traveling in a straight line, the ray curves according to our equation of motion. We step the ray forward and stop when it intersects the scene.
Our scene consists of primitives the ray can intersect. These are the main parts of our black hole: the accretion disk, the event horizon, and the background sky. Upon intersection with a primitive, we retrieve the RGB values for our pixel using texture mapping techniques. The sky and horizon use spherical mapping, and the disk, which is specified by an inner and outer radius, uses disc mapping.
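To make the control flow concrete, here is a rough sketch of the per-pixel loop, reusing the Vec3/stepRay sketch above. The constants, the disk test, and the sample* stubs are hypothetical stand-ins for the project's primitives; the real versions do proper texture lookups:

    #include <cstdint>

    // Hypothetical constants, in units of the Schwarzschild radius.
    const double DT = 0.1, R_INNER = 2.0, R_OUTER = 8.0, R_SKY = 30.0;
    const int    MAX_STEPS = 10000;

    struct Rgb { uint8_t r, g, b; };

    // Placeholder lookups; the real versions map into textures using
    // spherical mapping (sky, horizon) and disc mapping (accretion disk).
    Rgb sampleSky(Vec3)     { return {20, 20, 60}; }
    Rgb sampleHorizon(Vec3) { return {0, 0, 0}; }
    Rgb sampleDisk(Vec3)    { return {255, 180, 90}; }

    Rgb tracePixel(Vec3 pos, Vec3 vel) {
        Vec3 h = cross(pos, vel);
        double h2 = dot(h, h);                         // conserved along the ray
        for (int i = 0; i < MAX_STEPS; ++i) {
            double prevY = pos.y;
            stepRay(pos, vel, h2, DT);
            double r = std::sqrt(dot(pos, pos));
            if (r < 1.0)   return sampleHorizon(pos);  // fell through the horizon
            if (r > R_SKY) return sampleSky(vel);      // escaped to the sky sphere
            // Disk test: crossed the equatorial plane inside the annulus?
            if (prevY * pos.y < 0.0 && r > R_INNER && r < R_OUTER)
                return sampleDisk(pos);
        }
        return {0, 0, 0};
    }

For the spherical mapping itself, the usual formula converts a direction into (u, v) texture coordinates:

    // Spherical mapping: direction -> (u, v) in [0, 1] x [0, 1].
    void sphericalUV(Vec3 d, double& u, double& v) {
        double r = std::sqrt(dot(d, d));
        u = 0.5 + std::atan2(d.z, d.x) / (2.0 * M_PI);
        v = std::acos(d.y / r) / M_PI;
    }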
Illustration: ESO, ESA/Hubble, M. Kornmesser/N. Bartmann
Something of interest to note in our renderings is that the black circle in the middle of the black hole is not actually the event horizon but the shadow of the photon sphere, a region where gravity is so strong that light can travel in circular orbits. This means that what you see at that location is not actually there in space: photons that cross into the photon sphere from outside cannot escape and are "sucked into" the event horizon. As depicted below, parallel light rays from an observer bend into the black hole's event horizon unless they pass more than about 2.6 Schwarzschild radii from the center.
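For reference, these are standard results for a Schwarzschild black hole (we did not derive them ourselves): the photon sphere sits at r_photon = (3/2) r_s, and a ray arriving from far away is captured whenever its impact parameter b satisfies b < b_crit = (3√3/2) r_s ≈ 2.6 r_s, which is where the 2.6 figure above comes from.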
Taken from youtu.be/zUyH3XhpLTo
Instead of directly converting Starless, we decided to convert a C# project by Dmitry Brant to C++ instead. We chose this because Brant's project was based in part on Starless and because we thought a C# to C++ conversion would be easier than Python to C++. Our group started the project by experimenting with various libraries to replace the Microsoft System package used by the C# project. For image manipulation and bitmap operations, we settled on OpenCV's imread/imwrite functions and its Mat class. We replaced other C# built-ins like ARGBColor and the Matrix/Vector operations with our own implementations, similar to the ones found in the CGL library we used in the other class projects.
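For reference, this is roughly how the OpenCV pieces fit together (the filenames and dimensions here are placeholders, not our actual scene):

    #include <opencv2/opencv.hpp>

    int main() {
        // Load a texture; a Mat stores pixels in BGR channel order.
        cv::Mat sky = cv::imread("sky.png", cv::IMREAD_COLOR);
        if (sky.empty()) return 1;

        // Allocate the output image: 8-bit unsigned, 3 channels (CV_8UC3).
        cv::Mat out(1080, 1920, CV_8UC3, cv::Scalar(0, 0, 0));

        // Write a single pixel; note the (B, G, R) order.
        out.at<cv::Vec3b>(0, 0) = cv::Vec3b(90, 180, 255);

        cv::imwrite("render.png", out);
        return 0;
    }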
Something that was pretty scary throughout most of the project was that we were unable to render anything until almost all the pieces fit together. Even after we thought we had completed the necessary components, we were unable to produce any good results for a long time. One of the major problems was a color space conversion issue between OpenCV's BGR format in its Mats and our custom ArgbColor, which stored color as float values like CGL does. We fixed this by storing the a, r, g, b values as bytes instead and ensuring that all C++ operations worked on uint8_t. (Side note: OpenCV data type names are confusing, e.g. CV_8UC3.)
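A minimal sketch of the fixed conversion, assuming a byte-based ArgbColor like the one described above (the struct and helper names are illustrative, not our exact code):

    #include <cstdint>
    #include <opencv2/opencv.hpp>

    // Byte-based color, mirroring the fix: everything stays uint8_t.
    struct ArgbColor { uint8_t a, r, g, b; };

    // OpenCV stores channels as (B, G, R), so swap the order both ways.
    ArgbColor readPixel(const cv::Mat& img, int row, int col) {
        cv::Vec3b px = img.at<cv::Vec3b>(row, col);
        return {255, px[2], px[1], px[0]};
    }

    void writePixel(cv::Mat& img, int row, int col, ArgbColor c) {
        img.at<cv::Vec3b>(row, col) = cv::Vec3b(c.b, c.g, c.r);
    }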
First render where something happened
You can see rays of some kind
There were also a lot of reference/pointer issues when converting from C#. An interesting one was that, at one point, our images were completely white because the ray never detected that it "hit" an object in our core loop: the "hit" flag was passed by value to the primitives' hit function and thus never became true back in the core loop. For our multithreading implementation, we emulated the style of multithreading found in project 3-1's Pathtracer. We had anticipated multithreading to be much more difficult and started the project aiming for a single-threaded approach first, so it was surprising to have it work on the first go.
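The reference bug in miniature: C#'s ref parameters have to become references (or pointers) in C++, otherwise the callee only mutates a copy. A hypothetical reconstruction:

    // Buggy port: 'hit' is a copy, so the caller's flag never becomes true.
    void intersectBuggy(bool hit) { hit = true; }

    // Fixed port: C#'s 'ref bool hit' becomes 'bool& hit' in C++.
    void intersectFixed(bool& hit) { hit = true; }

And a rough sketch of Pathtracer-style row-based multithreading, under the same assumptions as earlier (the chunking scheme is illustrative, and 'shade' stands in for tracing one pixel):

    #include <algorithm>
    #include <functional>
    #include <thread>
    #include <vector>
    #include <opencv2/opencv.hpp>

    // Split the image rows across numThreads threads. Each thread owns a
    // disjoint band of rows, so no locking is needed when writing pixels.
    void renderParallel(cv::Mat& out, int numThreads,
                        const std::function<cv::Vec3b(int, int)>& shade) {
        std::vector<std::thread> workers;
        int rowsPer = (out.rows + numThreads - 1) / numThreads;
        for (int t = 0; t < numThreads; ++t) {
            int y0 = t * rowsPer, y1 = std::min(out.rows, y0 + rowsPer);
            workers.emplace_back([&, y0, y1] {
                for (int y = y0; y < y1; ++y)
                    for (int x = 0; x < out.cols; ++x)
                        out.at<cv::Vec3b>(y, x) = shade(x, y);
            });
        }
        for (auto& w : workers) w.join();
    }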
First image with some kind of black hole
First image with correct orientation
Brant's C# implementation, 8 threads, 1920 by 1080: 294.67 s
Our C++ implementation, 8 threads, 1920 by 1080, same textures: 53.25 s
Our ray tracer was up to 6 times faster than Brant's C# implementation (roughly 5.5x on the averaged numbers above). We also ran the same test on the Starless Python project, which took 113.5 s to render the same scene. We ran all these tests on a single computer and averaged the results, but it is worth noting that timings varied even when running the same scene on the same project. For more accurate testing, we should have used a VM on one of the instructional machines.
Despite our imperfect testing methodology, our implementation was always the fastest. We surmise this speed increase is due to several factors: our choice of the highly optimized open source OpenCV as our image manipulation library, a native C++ executable being faster than JIT-compiled C# or interpreted Python, and our well-optimized multithreading implementation.
Going forward, some things we could take on next are simulating Kerr black holes, the kind of black hole that spins, or rendering diffuse bodies like planets in our scene.