Why Some Cameras Are Better in Low Light - Video Tech Explained

[ Tripod Rustling ] [ Footsteps ] OK. Let's see how this looks. OK, it's still underexposed. I knew I should have brought some more lights out here, it's way too dark. [ Transporter Buzzing ] DARK CAMON: Dude, you should have known better than to film in low light with a camera like that. It's super high resolution, like 33 megapixels. That means each pixel is only, what, 5 micrometers across? That's way too small. They're hardly

collecting any light at all. CAMON: Well, what would you suggest then? DARK CAMON: You should be using this instead. It's a Sony A7S III, and they do super well in low light because they have huge pixels that can collect a lot of light, even from dim environments. - What? That's not how that works. - Yeah, it is. I mean, why else would the A7S cameras have a reputation for their great low light performance? They're only 12 megapixels, so each individual pixel has a lot of surface area, and so it can collect a lot of light.

- But it's actually not 12 megapixels, though. - Wait, what? - Yeah, actually, someone put the A7S III sensor under a microscope and found out that it actually has a 48 megapixel quad-bayer sensor. - Wait, so that would mean it actually has a higher resolution than that other camera? - Yep. - So then, why does it do so well in low light? [ Music Swells ] - Well, how much time do you have? [ Music Swells ] Wait, no, no, no, no, no, no! [ Music ] All right, so let's talk about low light performance.

It's fairly well known that if you take a photograph of a scene without a lot of light, then that photo will tend to have a lot more ugly noise in it than a photo taken with plenty of light. But why is that? Well, there are a number of factors which can cause this noise to appear, and we will talk about those in detail. But there are also some common myths about low light performance that I'd like to dispel. As I alluded to in the opening, there's a common idea among photographers and videographers that pixel size determines the amount of light a camera can collect. The idea is that pixels with more surface area collect more light from the environment, and therefore produce less noisy images.

I understand where this idea came from, because there is a correlation between the size of a camera's pixels and its performance in low light, but the explanation as to why this is the case is wrong. In reality, the size of the pixels has a negligible impact on the amount of light the camera collects from the environment. And to understand why, let's start with an analogy. Imagine that the photons making up the light in a particular environment are like raindrops falling down and hitting the ground. A camera's sensor is sort of like a bucket aimed up at the sky, collecting rainfall in order to measure its intensity.

The larger the bucket is, the more rain it will collect in a certain time frame. If the bucket in this analogy is like a camera sensor, then this would seem to imply that it's the size of the sensor which determines the amount of light a camera can collect. But this isn't quite right, because in the real world we don't leave our sensors exposed to the open air like this. If we did, light coming in from all angles would randomly impact the sensor and create an indistinct blur of an image. In order to create usable images, we have to use lenses to focus the incoming light down onto the sensor. In this analogy, the camera lens would be like a big funnel that collects light and redirects it down into a collection area.

And notice that with the introduction of a funnel, the bucket size no longer matters. If each funnel were sized in proportion to its respective bucket, then the collection rates would be different. But if we make the funnel over the small bucket have the same size opening as the one over the large bucket, then the system will have the same amount of surface area facing the sky and will therefore collect the same amount of rainfall.

In the real world, if the funnel represents a lens, then the funnel diameter represents the diameter of the lens's entrance pupil. Every lens has an aperture inside which serves to limit the amount of light which reaches the sensor. Some portion of the light which enters the front of a lens will pass through freely, while some will hit the aperture blades and be blocked. The larger the opening created by the aperture, the more light will pass through the lens.

Technically though, the optics in front of the aperture could also have an impact on how much of the incoming light passes through. So to be more precise, we consider the size of a lens's entrance pupil to be the size of its aperture when viewed through the front of the optical system. This size may be larger or smaller than the physical aperture.

Anyway, we can see that if two lenses have the same size entrance pupil, they'll collect the exact same amount of light from the environment. But it's at this point that our analogy comparing lenses to funnels breaks down. When talking about a real camera lens, there's no guarantee that all of the light passing through will actually end up hitting the sensor. Some of it may miss if the sensor is too small.

In that case, the sensor would only be receiving a fraction of the total light collected. Fortunately, it's not hard to see why this is the case. If we follow the light paths in reverse, we can see that a smaller sensor actually has a narrower field of view than a larger one. The extra light that it's missing corresponds to the parts of the image which are just outside its field of view. This problem can be fixed

by changing the focal length of the lens while keeping its entrance pupil the same. The focal length of a lens essentially determines how wide of a field of view the resulting image will have. When the focal length is adjusted such that the field of view between the two systems matches, the two sensors will end up collecting the same amount of light. And this is important to remember. If two camera systems have the same field of view and the same size entrance pupil, then they will always collect the same amount of light.

Okay, so let's take what we've learned so far and try applying it to a real world scenario. Let's say that we have a pair of cameras with two different sensor sizes and we'd like to equalize the amount of light that each sensor receives. So first of all, let's make sure that the field of view is the same between both cameras. The camera on the right has an APS-C sized sensor and a lens with a focal length of 30 millimeters. The camera on the left has a full frame sensor which is one and a half times the size. If the focal

lengths of the two cameras were the same, then the camera with the larger sensor would have a field of view that's one and a half times wider. So to counteract this, we need to use a lens with a focal length that's one and a half times longer. In this case, 45 millimeters. Now we can see that both cameras have the same field of view, so we can turn our attention to the lens's aperture. Let's

set both cameras to use the same aperture setting, f/2.8, and that should mean they each collect the same amount of light, right? Actually, no, they don't. Remember, we're trying to make sure that both lenses have the same diameter entrance pupil, and if we look through the front of our two lenses, we can see that the entrance pupil is actually smaller on the smaller camera. This is because the

f number in our camera's menu actually doesn't directly correspond to the size of the lens's aperture. It would be easy to make this mistake because the setting is often referred to as the aperture setting. The f number actually refers to the density of light transmitted from the lens to the sensor, that is, the amount of light per square millimeter. But the sensors are different sizes, so the total amount of light collected is different. We can see that this is true when we look at the equation used to calculate a lens's f number. The f number is equal to the lens's focal length divided by the diameter of its entrance pupil. Note that the equation specifies

focal length, not field of view. Our cameras have the same f number and the same field of view, but the actual physical focal lengths of the two lenses are different. One is 30 millimeters and the other is 45 millimeters. So using this equation, we can see that the 30 millimeter lens will have a smaller entrance pupil at the same f number. To make sure that the entrance pupils are the same, we need to use an f number on the smaller camera that's one and a half times lower than the one on the larger camera. And hey, would you look at that, that's the ratio between the sizes of the two sensors, which of course isn't a coincidence. In order for a lens

on a small sensor camera to collect the same amount of light as one on a large sensor camera, both its focal length and its f number need to be divided by the ratio of the two sensor sizes. And if we do this, we can see that yes, in fact, the two cameras are now collecting the same total amount of light. Okay, so I hope all of that was sufficient to convince you that it's the lens, not the sensor, which determines the total light collected by a camera.
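
If you want to check the numbers yourself, here's a quick sketch of that arithmetic in Python. The focal lengths, f-numbers, and 1.5x crop factor are just the example values from above.

```python
# Entrance pupil diameter follows from the f-number definition: N = f / D, so D = f / N.
def entrance_pupil_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

crop_factor = 1.5  # full frame vs. APS-C (Sony-style crop)

# Full frame: 45 mm at f/2.8
d_full_frame = entrance_pupil_mm(45, 2.8)                 # ~16.1 mm
# APS-C: 30 mm at the same f/2.8 -> smaller pupil, less total light
d_apsc_same_f = entrance_pupil_mm(30, 2.8)                # ~10.7 mm
# Divide the f-number by the crop factor and the pupils (and total light) match
d_apsc_equiv = entrance_pupil_mm(30, 2.8 / crop_factor)   # ~16.1 mm

print(d_full_frame, d_apsc_same_f, d_apsc_equiv)
```

Dividing both the focal length and the f-number by the crop factor is exactly the equivalence described above.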

It doesn't matter how the sensor is subdivided into pixels or how big that sensor is. The amount of light collected is dependent on the entrance pupil diameter, field of view, and exposure time. That's it. But with that established, let's go back to our original question and try to uncover what actually causes the correlation between pixel size and low light performance. Let's start by establishing what it means for a camera to be good in low light. As mentioned earlier,

when working in a low light environment, the images produced by a given camera will tend to have a lot more ugly noise than they do in well lit environments. If an image is too noisy, then it won't really be usable. So for a camera to have good low light performance, it needs to be able to operate in darker environments and still produce images with acceptable levels of noise. So where does this noise come from? Well, in the realm of digital photography, noise comes from two main sources: shot noise and read noise. Let's look at shot noise first. In order

to understand what shot noise is, we need to consider what light is actually made of. While light may seem smooth and continuous on human scales, in reality it consists of discrete packets of energy called photons. And to see how these photons can end up causing noise, let's look at another analogy.

You may be familiar with demonstrations like this depicting a pair of pendulums with ever so slightly different starting positions. At first, the motion of the two pendulums is quite similar, but over time, small differences are magnified until the two pendulums are following completely different paths. Similarly, particles of light may start out with similar positions and directions of motion, but over time the smallest variances will be magnified, and eventually it will be next to impossible to predict the path of any given photon. When attempting to estimate the amount of light that reflects off of a given surface, it may be intuitive to expect a smooth, even distribution of photons based on the positions of nearby light sources. But in reality, every surface is exposed to a chaotic shower of photons that are all following very different paths. So if we want

to learn anything about the lighting of a particular surface, we have to approach it from a statistical perspective. The path of a single photon doesn't really tell us anything, but if we observe many different photons, then we'll eventually be able to build up a picture of what the lighting tends to look like on average. This is effectively what a camera's pixels do. They measure how many photons from a particular slice of the scene reflected towards the camera during the exposure. The value they record will be slightly random, but if the total number of photons collected is high enough, then a meaningful signal can emerge from the noise. To put it mathematically, the amount of noise in a pixel's measurement is proportional to the square root of the signal's total intensity.
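
If you'd like to see that square-root relationship for yourself, here's a tiny simulation. It models photon arrivals as a Poisson process, which is the standard way to model shot noise; the photon counts are just illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulate many exposures of a single pixel at two different light levels.
for mean_photons in (100, 10_000):
    counts = rng.poisson(lam=mean_photons, size=1_000_000)
    noise = counts.std()            # shot noise ~= sqrt(mean_photons)
    snr = counts.mean() / noise     # signal-to-noise ratio ~= sqrt(mean_photons)
    print(f"{mean_photons:>6} photons: noise ~ {noise:.1f}, SNR ~ {snr:.1f}")
```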

And consider the implications of this. Dark scenes aren't just noisy because of a camera's limitations. They're noisy because there just aren't that many photons bouncing around. So it's really hard to get a clear picture of what the image is supposed to look like. I mean, take

a look at these military-grade night vision goggles that work by amplifying the ambient light in an analog way. They're still noisy because that's just the inherent nature of dark scenes. Shot noise like this is inescapable. It

doesn't matter how good your equipment is. There will always be a certain amount of noise that can be attributed to the nature of light itself. Now let's see what happens when we look at the shot noise recorded by a hypothetical large-pixel camera compared to a camera with four times the resolution. So let's imagine that the value recorded by our large pixel is equal to the true signal value, s, plus or minus some amount of noise, which as we established will be proportional to the square root of the signal value. This noise is unpredictable and can either increase or decrease the value the pixel records.

If the high-resolution camera has four times the resolution, then the same amount of light will be split among four pixels instead of one. We can see that the small pixels' values are equal to a quarter of the signal captured by the large pixel, plus or minus a proportional amount of noise, so the square root of s over four. When we downscale the image produced by the high-resolution sensor, the values from the four pixels are combined together. The

signal values combine to the same value s as the low-resolution sensor, which makes sense, but we have to use a different approach to combine the noise values together. See, each of these noise values could be either positive or negative, so it's possible that the noise recorded by one pixel will slightly or entirely cancel out the noise recorded by another. To properly combine the noise values, we need to use a root-sum-square. That is, square each noise value, add them together, and then take the square root of the result. If we simplify this

equation down, we can discover that the resulting noise level is in fact identical to the noise level of the low-resolution sensor. In practical terms, this means that if a high-resolution image is scaled down to the same size as a low-resolution one, their noise levels will be more or less identical. So at least as far as shot noise is concerned, there's no disadvantage to shooting with a high-resolution sensor. A higher-resolution sensor allows the user to trade resolution for noise performance in post.
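
Here's the same argument as a quick simulation, assuming a perfectly noiseless readout so that only shot noise is present: one large pixel versus four small pixels whose values get summed back together in the downscale.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
s = 4000            # mean photons landing on the area of one large pixel
trials = 1_000_000

# One large pixel catches all of the light.
large = rng.poisson(s, size=trials)

# Four small pixels split the same light, then get combined in the downscale.
small = rng.poisson(s / 4, size=(trials, 4)).sum(axis=1)

print(large.std())   # ~ sqrt(s) ~ 63.2
print(small.std())   # ~ sqrt(4 * s/4) = sqrt(s) ~ 63.2, i.e. identical
```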

They could choose between a higher resolution with more noise and a lower resolution with less noise after the image has already been captured. But while high-resolution sensors experience the same amount of shot noise, that's only half of the equation. Remember, there's another kind of noise that we need to take into consideration: read noise. To put it simply, read noise occurs because the individual electronic components inside of the sensor are never going to be perfect. Essentially, all electrical components introduce small fluctuations in voltage due to things like environmental conditions and small manufacturing defects. When the value

recorded by a particular pixel is read and digitized, these tiny fluctuations will manifest as noise in the resulting image. The important point to consider here is that just because a pixel is smaller, that doesn't necessarily mean that there will be less read noise. The level of read noise will instead be determined by component choice and manufacturing. So if the level of read noise is kept constant, adding more pixels will also add more sources of read noise into the image.

Let's go back to our earlier example and demonstrate this mathematically. When factoring in both shot noise and read noise, the value recorded by a large pixel should be equal to the signal, s, plus the shot noise, the square root of s, plus some constant read noise. And looking at a sensor with four times the resolution, the signal and shot noise values will be the same as before, plus the same constant amount of read noise. When the values of neighboring pixels are combined in order to match the resolution of the large pixel sensor, the signal and shot noise values will match those of the other camera just like before, but the read noise values will not. The large pixel camera has just a single read noise quantity added to it, while the small pixel camera is averaging together four separate read noise values.
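
And here's that root-sum-square step in code, with made-up numbers: every readout is assumed to add the same constant read noise r, and combining four small pixels back into one big one stacks up twice the read noise of a single large-pixel readout.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
r = 3.0              # read noise per pixel readout (arbitrary units), assumed equal
trials = 1_000_000

# Large pixel: one readout, one dose of read noise.
large_read = rng.normal(0, r, size=trials)

# Small pixels: four readouts get combined when downscaling.
small_read = rng.normal(0, r, size=(trials, 4)).sum(axis=1)

print(large_read.std())   # ~ r = 3.0
print(small_read.std())   # ~ sqrt(4 * r**2) = 2r = 6.0
```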

Using the root-sum-square to compute the total amount of read noise, we find that the small pixel camera will, on average, have twice the amount of read noise as the large pixel one. So while there's no disadvantage to using a high resolution sensor when it comes to shot noise, there is a disadvantage when factoring in read noise. The severity of this disadvantage, though, can vary quite a bit depending on the specifics of the scenario. As I said earlier, this analysis assumes that the smaller pixels produce the same amount of read noise, and this may not always be the case. Sometimes smaller pixels do have lower read noise than larger ones, though usually not quite enough to totally eliminate the disadvantage. So let's see how our predictions match up with reality by comparing these two cameras again. In

order to measure the amount of read noise that each one produces, I'm going to cover the sensor so that it's receiving no light, and then take a picture to see how much read noise there is. Comparing the two images one to one can be difficult, so to see which one has more noise, I ran the numbers, and it turns out that when comparing pixels one to one, the higher resolution camera actually does have less read noise. To see why it was designed this way, we need to consider how read noise will affect the image if I open the cameras up to receive light again. The smaller pixels will each be receiving a smaller amount of light than the large ones, so the read noise will make up a larger proportion of the signal. If

the per-pixel level of read noise were the same between the two cameras, that would mean the higher resolution one would have more read noise overall. Downscaling the images to the same size would mitigate the disadvantage, but not entirely eliminate it. So in order to compensate for this disadvantage, the camera had to use better designed, less noisy pixels. So I hope you can see that high resolution sensors don't necessarily have to have more read noise, but bringing them up to the same level of performance could incur additional costs. To recap, the amount of shot noise in two similarly exposed images will always be the same as long as the images are scaled to the same size. But the amount of

read noise will vary depending on both the number of pixels and their specific design. It's important to be mindful of both sources of noise, because either one can end up being the limiting factor. In well-lit scenes, the signal reaching the sensor will easily overpower the read noise, rendering it insignificant. The limiting factor in these cases tends to be the shot noise in the scene itself. So when there's plenty of light, the pixel size doesn't matter much. But as the light level decreases,

the read noise starts to become more and more significant, until eventually it's the read noise rather than the shot noise that's limiting the clarity of the image. If the light level is too low, then it will be next to impossible to distinguish the signal from the read noise, so it'll be impossible to create a usable image without introducing more light. Since the amount of read noise varies from camera to camera, this noise floor varies too, and it's the main factor limiting a camera's low light performance. So there you have it. All else being

equal, cameras with smaller pixels will perform worse in low light due to read noise. All else usually isn't equal though. This disadvantage can be overcome with better design, so it's important to actually measure a sensor's noise performance before making assumptions. Beyond the pixel size, there's also another factor which can play an important role in determining a camera's low light performance: a second native ISO. But in order to explain what that is and how it works, let's examine how the electronics inside of a pixel actually work. Alright, here

it is. This is a model of all the major components that comprise a typical pixel. Now this model may not be 100% accurate, since the exact designs of each sensor's pixels are closely guarded secrets, but this should still serve as a fairly good approximation. First, on the left, we have a photodiode. A photodiode is a device that converts incoming light energy into electrical energy. It's kind of like a solar panel actually, just on a much smaller scale. The more light that hits the diode, the

more electrical current it produces. However, at this point it's necessary to convert the increase in electrical current into a voltage that can be read by an analog to digital converter. So when it's time to read out the value recorded by a pixel, a switch connects the photodiode to a capacitor. As the capacitor accumulates charge, the voltage across it increases, and this voltage is then fed into an amplifier which boosts the voltage by some amount. The output from

this amplifier is then fed into an analog to digital converter, or ADC, which passes a digital representation of the voltage level on to the rest of the camera's processing pipeline. So how does ISO factor into this? Well, when the user raises the ISO setting, the amplifier is turned up, so the signal will be boosted more before being digitized. The shot noise, as well as any read noise introduced prior to the amplifier, will be boosted by the same amount as the actual signal, so raising the ISO doesn't actually have any impact on the image's signal-to-noise ratio. Images captured at

high ISOs are noisy not because the ISO is high, but because they're underexposed. There isn't enough light to create a clean image, but the amplifier is boosting the image such that it doesn't look underexposed. ISO will only affect the signal-to-noise ratio if there's a significant amount of noise introduced by the amplifier itself or the analog-to-digital converter. If these sources of noise are negligible, then we can say that the sensor is ISO invariant. In this case, the noise levels would be the same, regardless of whether an image was captured with a high ISO in camera, or if it was captured at a low ISO and then the brightness was raised in post. Modern cameras tend to be fairly ISO-invariant, though you may still run into situations where the ADC introduces a meaningful amount of noise, and boosting the signal prior to conversion can actually improve noise levels.
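
Here's a toy model of that idea. The noise figures are invented, and shot noise is lumped into a single pre-amplifier term for simplicity, but it shows why analog gain only helps when there's meaningful noise added after the amplifier, such as in the ADC.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
trials = 1_000_000
signal = 50.0          # dim scene: few photo-electrons per pixel
pre_amp_noise = 2.0    # shot + read noise before the amplifier (simplified, assumed)
adc_noise = 4.0        # noise added after the amplifier, e.g. by the ADC (assumed)

def snr(gain):
    s = gain * (signal + rng.normal(0, pre_amp_noise, trials))  # amplified signal + pre-amp noise
    s += rng.normal(0, adc_noise, trials)                       # ADC noise is NOT amplified
    return (gain * signal) / s.std()

print(snr(gain=1))    # low ISO, brightened later in post: ADC noise hurts
print(snr(gain=16))   # high ISO in camera: same pre-amp noise, but ADC noise becomes negligible
```

With the ADC noise set to zero, both printouts would match, which is exactly what being ISO invariant means.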

There is something else you should know about ISO though. Keen-eyed viewers may have caught me in a bit of a lie earlier when I showed the two cameras with different sensor sizes having the same image brightness. Zoom and enhance! That's right, I had to set the ISO on the smaller camera lower than the ISO on the larger camera, and this is because ISO values actually cannot be compared between different sensors.

You might expect that ISO 100 on one camera would always be the same as ISO 100 on another camera, but that isn't the case, and to understand why, we need to talk about what these numbers actually mean. ISO stands for International Standardization Organization for... Wait, that might be International Organization for Standardization. This name is profoundly undescriptive, so to know where these values come from, we have to take a look at a bit of history. Back in the day, different film stocks had different inherent sensitivities to light, which kind of matters when trying to achieve proper exposure, so most stocks had their sensitivities measured by either the ISO or another organization like the ASA, who would provide a number that represented the stock's level of sensitivity. Higher ISOs meant

more sensitive film stocks, and these values could be used to decide what kind of stock was most appropriate for a given situation. Fast forward to the digital age, and cameras now had a built-in amplifier that could control how bright or dark an image appeared. And just like higher ISO film stocks produced brighter-looking images in dark scenes, turning up the amplifier in a digital camera had the same effect. So although the mechanism was entirely different, the amplification setting was called the ISO setting, so that photographers used to shooting on film could better understand what it did.

And although this is a bit of a misnomer, the name stuck around. Some cameras refer to the amplification setting as the exposure index, but most still call it ISO. Every camera has a base or native ISO, which corresponds to when the amplifier is set to its minimum intensity, and the values can be raised proportionally from there. But just like

with film, different digital sensors can also have different native sensitivities even before factoring in the amplifier. So if different cameras can have different sensitivities at their base ISO, how do manufacturers decide what number to use when labeling the base ISO in the menu? Well, unfortunately that decision can be kind of arbitrary. Manufacturers may choose to find a film stock with a similar sensitivity to the camera and set its base ISO to match that film, but they're also perfectly free to choose a totally arbitrary value. Since

digital sensors and film stocks are fundamentally different, it isn't wrong to define a sensor's base ISO arbitrarily. All this is to say, just because two digital cameras are set to the same ISO number, that doesn't mean that their sensitivity to light is the same. These two cameras are receiving the same amount of light, and their ISOs are set to the same value, and yet one of the images looks significantly brighter than the other. In this example, the manufacturer has decided that the base ISO should always be called ISO 100, even though these cameras have inherently different sensitivities. Since we can't rely on the

ISO number to tell us how sensitive a particular camera is to light, then how can we figure that out? Well, determining a sensor's true light sensitivity requires knowing how many photons must hit a given pixel in order for the signal value to reach a certain level. The smaller sensor is receiving the same number of photons, but that light level corresponds to a higher signal value because the sensor has a higher native sensitivity. So what is it in the pixel design that determines its native sensitivity? Well, there are a few factors that can play a role, but the biggest contributor is the capacitor right here. Remember, the photodiode collects photons and uses them to charge up the capacitor, and then the resulting voltage across it determines the signal value. And one of the properties of capacitors is that the voltage across them is related to both the stored charge and the capacitance.

The capacitance, measured in farads, is equal to the amount of charge stored divided by the voltage across the capacitor. Rearranging this equation, we can see that the voltage, and therefore the signal value, is equal to the stored charge divided by the capacitance. What this means is that different capacitors can give different output voltages for the same charge level. Assuming the photodiode always collects and transfers the same amount of charge to the capacitor, the signal value will be inversely proportional to the capacitance. If

one circuit has a capacitor with half the capacitance, it'll produce twice the signal value for the same light level. This means that a more sensitive circuit will require less amplification to achieve acceptable brightness for a certain light level, and therefore the read noise will be less significant. Shot noise would still play a role, but at low light levels, it's usually the read noise which is more of a concern. So then why aren't all sensors designed with the maximum possible sensitivity for solid low light performance? Well, there's a trade-off, of course. Smaller capacitors will make the sensor perform better in low light, but they'll also fill up quicker, meaning that brighter scenes will clip much earlier. The choice of capacitor is therefore a delicate balance between low light performance and normal light performance.
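
As a concrete, entirely hypothetical example of that relationship, here's V = Q / C with some ballpark numbers for a pixel's sense capacitor. Real sense-node capacitances are on the order of a few femtofarads, though the exact figures here are made up.

```python
E_CHARGE = 1.602e-19       # charge of one electron, in coulombs

def signal_voltage(electrons, capacitance_farads):
    # V = Q / C: same collected charge, smaller capacitor -> larger voltage swing
    return electrons * E_CHARGE / capacitance_farads

print(signal_voltage(2_000, 4e-15))    # ~0.08 V: dim exposure, 4 fF capacitor
print(signal_voltage(2_000, 1e-15))    # ~0.32 V: same exposure, 1 fF capacitor gives 4x the signal
print(signal_voltage(30_000, 1e-15))   # ~4.8 V: a bright scene overflows the small capacitor's usable range
```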

Ideally we would want a sensor that can perform well in low light, while also being able to capture well lit images without clipping. Maybe we can't achieve that with a single circuit, but what if we did this? Here we have two capacitors, one large and one small, and a gate to switch between them. In normal light, we can open up the lens's aperture to allow as much light in as possible, and then use the larger capacitor to capture this signal without clipping.

Meanwhile, in low light environments, we can choose to use the smaller capacitor to minimize the amount of read noise. A sensor like this effectively has two native sensitivities that it can switch between, almost like a dual native ISO. Now this may not be precisely what's going on inside of dual ISO cameras, since the details of their sensor designs are proprietary secrets, but it's almost certainly something similar to this.
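
Here's a rough sketch of how a pixel like that might decide which capacitor to use. The capacitances, full-well depths, and read-noise figures are invented purely for illustration and aren't taken from any real sensor.

```python
# Hypothetical dual-conversion-gain pixel: two sense capacitors with different trade-offs.
# All numbers are invented for illustration.
HIGH_GAIN = {"name": "small capacitor", "full_well_e": 15_000, "read_noise_e": 1.5}
LOW_GAIN  = {"name": "large capacitor", "full_well_e": 60_000, "read_noise_e": 3.5}

def pick_capacitor(expected_electrons):
    # Prefer the small capacitor (less effective read noise) unless the scene would clip it.
    if expected_electrons < HIGH_GAIN["full_well_e"]:
        return HIGH_GAIN
    return LOW_GAIN

for scene in (800, 40_000):   # dim scene vs. bright scene, in photo-electrons
    cap = pick_capacitor(scene)
    print(scene, "->", cap["name"], "| read noise:", cap["read_noise_e"], "e-")
```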

Something else you may notice when comparing different cameras is that the ones with smaller sensors tend to have higher native sensitivities. This is because smaller sensors are often paired with smaller lenses that collect less light from the environment. In order to allow the user to achieve reasonable exposures with small lenses, the native sensitivity is set higher than it would be on a large sensor camera. This is also why smaller sensors tend to produce noisier images. Their lenses typically collect less light, but their higher sensitivity disguises this fact. But despite this, less light still means more shot noise. So this

higher level of shot noise combined with the sensor's read noise results in a noisier image overall. Wow, it's been a long journey, but we're finally ready to answer the question posed at the very beginning of this video. Why do some cameras perform better in low light than others, and what does pixel size have to do with it? Let's summarize what we've learned by comparing the Sony a7 IV to the Sony a7S III. Okay, so the a7 IV has a 33 megapixel full frame sensor, while the a7S III has a 48 megapixel full frame sensor with a quad-bayer filter arrangement. However, while the a7S III does have 48 megapixels, in all of its photo and video modes, it uses a technique called pixel binning to combine the charges of neighboring pixels together and then read them out as a single value. This effectively combines four pixels into one, and since they're read out as a single value, read noise is only applied once.
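
Here's a quick simulation of why binning in the charge domain helps, again with made-up numbers: summing four pixels after they've each been read out stacks up four doses of read noise, while binning the charge first and reading once incurs the read noise only a single time.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
trials = 1_000_000
photons_per_small_pixel = 10   # very dim scene
read_noise = 3.0               # electrons of read noise per readout (assumed)

# Read each of the four pixels separately, then sum digitally: 4 doses of read noise.
digital_sum = (rng.poisson(photons_per_small_pixel, (trials, 4))
               + rng.normal(0, read_noise, (trials, 4))).sum(axis=1)

# Bin the charge of all four pixels first, then read once: 1 dose of read noise.
charge_binned = (rng.poisson(photons_per_small_pixel, (trials, 4)).sum(axis=1)
                 + rng.normal(0, read_noise, trials))

print(digital_sum.std())     # ~ sqrt(40 + 4 * 3**2) ~ 8.7
print(charge_binned.std())   # ~ sqrt(40 + 3**2) ~ 7.0
```

With plenty of light the difference is tiny, which matches the earlier point that read noise only really matters when the signal is small.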

So the a7S III effectively functions as a 12 megapixel camera despite not technically being one. So if both cameras are using the same lens and the same aperture setting, then in well-lit environments the a7 IV will tend to perform a bit better. As I discussed earlier, its pixels have less read noise than the a7S III's, so even though they're receiving less light individually, they still manage to maintain a similar level of dynamic range. And when the a7 IV's images are

scaled down to the same size, the oversampling serves to reduce the noise even further, putting it in the lead. So to examine the low light performance of these two cameras, let's set them both up with the same exact exposure and then reduce that exposure one stop at a time. As we do so, we'll increase the gain using the ISO setting to compensate, so we can stay focused on the noise and observe any differences between the two cameras. And when we do this, we see that the relative performance of these two cameras remains the same as before up until around two stops underexposed. The a7 IV has a second native ISO that's two stops more sensitive than the first, so when an image is around two stops underexposed, it switches over to that second circuit. Now when it does this, the shot

noise remains unchanged since the same amount of light is entering the camera, but we've gone from boosting the read noise of the first base ISO by two stops to not boosting the read noise of the second base ISO at all. So once this happens, the shadows of the image become noticeably cleaner and dynamic range improves, but the a7S III doesn't get to benefit from this because its second native ISO is much higher up. And if we continue reducing the exposure past this point, the a7 IV remains in the lead up until around four and a third stops underexposed. At that point, the a7S III switches over to its second native ISO and takes the lead. The a7S line is designed

to maximize low light performance, so its second native ISO is much more sensitive than typical. As such, once the a7S III switches over to its second native ISO, its noise floor is much lower and it can see much further into the shadows before the image becomes unusable. Huh. So the reason the a7S III does so well in low light is just because of the second native ISO? The lower resolution has nothing to do with it? I mean, kinda, yeah.

I mean, the low resolution sensor does have some advantages, like allowing Sony to use noisier pixels without reducing dynamic range, but that's not the main reason it does so well in low light. Hang on, wait a minute, that means I gave you good advice at the start. The a7S III would have been better in low light, so why did you spend the last however long lecturing me if I was right in the first place? Because you were right for the wrong reason, and I think that it's important to understand the real mechanisms underlying all this stuff. Now as for why it took so long, it's because I'm kind of obsessive and I think about this stuff way too much. Most people find it off-putting, so now I confine my nerd rants to my YouTube channel where people are free to click away if they find me annoying. But if you're still watching this far into the video, then I assume that you like learning about this stuff too. And

well, there you have it. I hope you all enjoyed this video, I certainly put a lot of effort into making it, so I appreciate you watching. My name is Camon Crocker, signing off.
