Conversation with Gemini
what is the sensor size of the samsung s26 sensor?
Gemini said
The Samsung Galaxy S26 series, which launched in February 2026, features different sensor sizes depending on the model and specific camera lens.
Galaxy S26 & S26+ Main Sensor
The standard S26 and the S26+ both use the same primary sensor:
Sensor Size: 1/1.56-inch
Resolution: 50 MP (Samsung ISOCELL GN3)
Pixel Size: 1.0 μm
Aperture: f/1.8
Galaxy S26 Ultra Main Sensor
The Ultra model continues to use a significantly larger high-resolution sensor:
Sensor Size: 1/1.3-inch
Resolution: 200 MP (Samsung ISOCELL HP2)
Pixel Size: 0.6 μm
Aperture: f/1.4 (an upgrade from the S25 Ultra's f/1.7, allowing for ~47% more light intake)
Full Sensor Breakdown (S26 Series)
| Camera Type | Galaxy S26 / S26+ | Galaxy S26 Ultra |
| --- | --- | --- |
| Main (Wide) | 1/1.56" (50MP) | 1/1.3" (200MP) |
| Ultrawide | 1/2.55" (12MP) | 1/2.52" (50MP) |
| Telephoto (3x) | 1/3.94" (10MP) | 1/3.94" (10MP) |
| Periscope (5x) | N/A | 1/2.52" (50MP) |
| Front (Selfie) | 1/3.2" (12MP) | 1/3.2" (12MP) |
Key Takeaways
The "HPC" Rumor: While there were rumors of a new "ISOCELL HPC" sensor, the S26 Ultra launched using a refined version of the HP2 sensor paired with a much faster f/1.4 aperture lens to improve low-light performance.
AI Integration: A major part of this year's "sensor" story is the ProVisual Engine and the Snapdragon 8 Elite Gen 5's AI ISP, which handles noise reduction and "Nightography" more aggressively than previous hardware-only solutions.
Would you like me to compare the S26's sensor performance against the S25 or other flagships like the iPhone 17?
What are the dimensions of a 1/1.3" sensor?
Gemini said
It’s a bit of a "tech industry prank" that camera sensor sizes are measured in "optical inches," which don't actually match a ruler. This naming convention dates back to 1950s vacuum TV camera tubes, where the measurement referred to the outside diameter of the glass tube, not the light-sensitive area inside.
For a 1/1.3" sensor (like the one in the S26 Ultra), here are the actual physical dimensions:
The Real Measurements
Since almost all smartphone sensors use a 4:3 aspect ratio, we can calculate the exact footprint:
Diagonal: ~12.3 mm
Width: ~9.8 mm
Height: ~7.4 mm
Total Surface Area: ~72.5 mm²
How it Compares
To give you some perspective on where this sits in the smartphone hierarchy, look at the surface area jump between common sizes:
| Sensor Type | Diagonal (Approx.) | Surface Area | Notes |
| --- | --- | --- | --- |
| 1/2.55" | 6.3 mm | ~24 mm² | Common for Ultrawide lenses |
| 1/1.56" | 10.2 mm | ~50 mm² | Standard S26 / S26+ Main |
| 1/1.3" | 12.3 mm | ~72.5 mm² | S26 Ultra Main |
| 1.0-inch | 15.9 mm | ~116 mm² | Specialized "Camera" phones |
Why These Millimeters Matter
A 1/1.3" sensor is roughly 45% larger in surface area than the 1/1.56" sensor found in the base S26. This extra real estate allows for:
Better Low Light: Larger "buckets" (pixels) to catch photons.
Natural Bokeh: A larger physical sensor creates a shallower depth of field, giving you that blurry background without needing "Portrait Mode" software tricks.
Dynamic Range: Better ability to hold detail in both very bright clouds and dark shadows simultaneously.
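The "roughly 45% larger" figure can be sanity-checked from the 4:3 footprints. A minimal sketch, using the approximate diagonals from the table above (not official datasheet dimensions):

```python
def footprint(diag_mm):
    """Width, height, and area of a 4:3 sensor from its diagonal (mm).

    For a 4:3 aspect ratio the sides form a 3-4-5 triangle, so the
    diagonal is 5 parts when the width is 4 and the height is 3.
    """
    w = diag_mm * 4 / 5
    h = diag_mm * 3 / 5
    return w, h, w * h

w_u, h_u, a_u = footprint(12.3)    # 1/1.3"  (S26 Ultra main)
w_b, h_b, a_b = footprint(10.26)   # 1/1.56" (S26 / S26+ main)
print(f"Ultra: {w_u:.1f} x {h_u:.1f} mm, {a_u:.1f} mm^2")
print(f"Base:  {w_b:.1f} x {h_b:.1f} mm, {a_b:.1f} mm^2")
print(f"Area advantage: {a_u / a_b - 1:.0%}")  # roughly 44% with these inputs
```

With these rounded diagonals the advantage lands at about 44%, consistent with the "roughly 45%" figure.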
Would you like me to calculate the "crop factor" of this sensor compared to a full-frame DSLR camera?
Yes
Gemini said
```python
import math

# Full-frame dimensions (mm)
ff_w = 36
ff_h = 24
ff_diag = math.sqrt(ff_w**2 + ff_h**2)

# 1/1.3" sensor dimensions (calculated previously):
# diagonal ~12.3 mm, width ~9.8 mm, height ~7.4 mm
s_w = 9.8
s_h = 7.4
s_diag = math.sqrt(s_w**2 + s_h**2)

crop_factor = ff_diag / s_diag
print(f"{ff_diag=}")
print(f"{s_diag=}")
print(f"{crop_factor=}")
```

```
ff_diag=43.266615305567875
s_diag=12.280065146407
crop_factor=3.523321317088221
```
The "crop factor" is a number that tells you how much smaller a sensor is than a "full-frame" (35 mm) camera sensor. It is the key to understanding why your phone lens can be tiny but still have a "wide-angle" view.
For the 1/1.3" sensor in the Galaxy S26 Ultra, the crop factor is approximately 3.52×.
What This Means for Your Photos
1. Focal Length Equivalence
The physical focal length of the S26 Ultra’s main lens is roughly 6.8 mm. However, because of the 3.52× crop factor, it behaves like a 24 mm lens on a professional DSLR.
Calculation: 6.8 mm × 3.52 ≈ 24 mm
2. Depth of Field (Background Blur)
The crop factor also applies to the aperture (f/1.4). While the lens lets in as much light as an f/1.4 lens, the depth of field (how much of the background is blurry) is equivalent to roughly f/4.9 on a full-frame camera.
Calculation: f/1.4 × 3.52 ≈ f/4.9
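Both equivalences follow from simply multiplying by the crop factor. A quick sketch using the figures quoted above (6.8 mm and f/1.4 are the approximate values from this answer, not official specifications):

```python
crop = 3.52      # crop factor from the diagonal ratio computed earlier
focal_mm = 6.8   # physical focal length of the main lens (approximate)
f_number = 1.4

equiv_focal = focal_mm * crop         # full-frame equivalent focal length
equiv_dof_aperture = f_number * crop  # equivalent aperture for depth of field

print(f"Equivalent focal length: {equiv_focal:.0f} mm")     # ~24 mm
print(f"Equivalent DoF aperture: f/{equiv_dof_aperture:.1f}")  # ~f/4.9
```

Note that the equivalent aperture applies only to depth of field; per unit of sensor area, the lens still gathers light like an f/1.4 lens.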
Comparison Across the S26 Lineup
| Model | Sensor Size | Diagonal | Crop Factor |
| --- | --- | --- | --- |
| Full-Frame DSLR | 36×24 mm | 43.27 mm | 1.0× |
| S26 Ultra (Main) | 1/1.3" | 12.28 mm | 3.52× |
| S26 / S26+ (Main) | 1/1.56" | 10.26 mm | 4.22× |
| S26 (Ultrawide) | 1/2.55" | 6.27 mm | 6.90× |
The Verdict
A crop factor of 3.52× is incredibly impressive for a smartphone. For context:
Standard budget phones often have crop factors of 6.0× or higher.
The S26 Ultra’s sensor is large enough that it is actually approaching the territory of "1-inch" enthusiast compact cameras (which have a crop factor of 2.7×).
This relatively low crop factor is why the S26 Ultra can produce natural-looking bokeh (background blur) without needing to use software-based Portrait Mode for every shot.
What are the side lengths of a 1/1.3" sensor?
Gemini said
For a 1/1.3" sensor, the physical side lengths are approximately 9.8 mm (width) and 7.4 mm (height).
As we touched on earlier, these numbers can be confusing because "1/1.3 inches" is an optical format, not a direct physical measurement. Here is the technical breakdown of how those side lengths are determined.
The Dimensions Breakdown
Most smartphone sensors, including those used by Samsung and Apple, use a 4:3 aspect ratio.
| Measurement | Value (Approx.) |
| --- | --- |
| Diagonal | 12.3 mm |
| Width (Long Side) | 9.8 mm |
| Height (Short Side) | 7.4 mm |
| Total Surface Area | 72.5 mm² |
The "Why" Behind the Math
To understand why a 1/1.3" sensor isn't actually 0.77 inches (19.5 mm) wide, you have to look at the "Optical Inch" rule:
The 16mm Rule: In the world of digital sensors, 1 "optical inch" is actually equal to about 16 mm.
The Calculation:
1 ÷ 1.3 = 0.769 optical inches
0.769 × 16 mm ≈ 12.3 mm (the diagonal)
The Sides: Using the Pythagorean theorem (a² + b² = c²) for a 4:3 ratio, the 12.3 mm diagonal translates to the 9.8 × 7.4 mm footprint.
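The conversion above can be wrapped into a small helper. This is a sketch of the "16 mm rule" and 3-4-5 geometry, not an official conversion (real sensors deviate slightly from the nominal format):

```python
OPTICAL_INCH_MM = 16.0  # the "16 mm rule": 1 optical inch ~ 16 mm of diagonal

def sensor_dimensions(format_denominator):
    """Approximate 4:3 dimensions for a '1/x-inch' optical format.

    For a 4:3 aspect ratio the diagonal is 5 parts when the sides
    are 4 and 3 (a 3-4-5 triangle).
    """
    diag = (1 / format_denominator) * OPTICAL_INCH_MM
    width = diag * 4 / 5
    height = diag * 3 / 5
    return diag, width, height

diag, w, h = sensor_dimensions(1.3)
print(f"Diagonal: {diag:.1f} mm, Width: {w:.1f} mm, Height: {h:.1f} mm")
# Diagonal: 12.3 mm, Width: 9.8 mm, Height: 7.4 mm
```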
How Large is This Relative to a Penny?
If you want a mental image of the scale:
A U.S. penny has a diameter of 19.05 mm, giving it a face area of roughly 285 mm², about four times the sensor's 72.5 mm² surface area.
While that sounds small, in the world of mobile phones, this is considered a "Large Format" sensor.
Since the S26 Ultra uses a 200MP resolution on this sensor, would you like to see how small each individual pixel is?
Yes
Gemini said
```python
import math

# Sensor dimensions in mm
width_mm = 9.8
height_mm = 7.4
total_pixels = 200_000_000

# 4:3 aspect ratio pixel grid:
# w * h = 200,000,000 and w / h = 4 / 3  =>  w = 4/3 * h
# (4/3 * h) * h = 200,000,000  =>  h^2 = 200,000,000 * 3 / 4
h_pixels = math.sqrt(total_pixels * 3 / 4)
w_pixels = h_pixels * 4 / 3

pixel_size_w = width_mm / w_pixels * 1000   # in micrometers
pixel_size_h = height_mm / h_pixels * 1000  # in micrometers

# Samsung ISOCELL HP2 (S26 Ultra sensor) officially lists 0.6 um
print(f"{w_pixels=}, {h_pixels=}")
print(f"{pixel_size_w=}, {pixel_size_h=}")

# Binned pixels
binned_4_in_1 = pixel_size_w * 2
binned_16_in_1 = pixel_size_w * 4
print(f"{binned_4_in_1=}")
print(f"{binned_16_in_1=}")
```

```
w_pixels=16329.93161855452, h_pixels=12247.44871391589
pixel_size_w=0.6001249869818788, pixel_size_h=0.6042074698865173
binned_4_in_1=1.2002499739637575
binned_16_in_1=2.400499947927515
```
For a 200MP sensor packed into a 1/1.3" frame, the individual pixels are incredibly tiny—among the smallest ever used in a smartphone.
The Individual Pixel Size
Each of the 200 million pixels on the S26 Ultra sensor is approximately 0.6 μm (micrometers) in size.
To give you an idea of how small 0.6 μm really is:
Human Hair: A single strand of human hair is about 70 μm thick. You could fit roughly 115 of these pixels across the width of a single hair.
Bacteria: These pixels are roughly the same size as a single bacterium.
Pixel Binning: Why 0.6 μm Isn't Everything
On its own, a 0.6 μm pixel is too small to capture much light, which would normally result in "noisy" or grainy photos. To fix this, Samsung uses a technology called Tetra²pixel (16-in-1 binning).
The camera dynamically changes its "effective" pixel size based on the lighting:
| Shooting Mode | Pixel Grouping | Effective Resolution | Effective Pixel Size |
| --- | --- | --- | --- |
| High Detail | None (1×1) | 200 MP | 0.6 μm |
| Balanced | 2×2 Binning | 50 MP | 1.2 μm |
| Low Light | 4×4 Binning | 12.5 MP | 2.4 μm |
When you take a standard photo in a dark room, the sensor treats groups of 16 pixels as one "super-pixel" (2.4 μm). This allows it to act like a much lower-resolution but much more light-sensitive camera.
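The super-pixel idea is easy to sketch in code. A toy 2×2 binning pass over made-up pixel values (real sensors combine charge in the analog domain before readout, so this is only an illustration of the grouping):

```python
# A tiny 4x4 "frame" of raw pixel readings (values are invented for the demo)
raw = [
    [10, 12, 50, 52],
    [11,  9, 51, 49],
    [30, 28, 70, 72],
    [29, 31, 71, 69],
]

def bin_2x2(frame):
    """Sum each 2x2 block of pixel values into one super-pixel."""
    out = []
    for r in range(0, len(frame), 2):
        row = []
        for c in range(0, len(frame[0]), 2):
            row.append(frame[r][c] + frame[r][c + 1]
                       + frame[r + 1][c] + frame[r + 1][c + 1])
        out.append(row)
    return out

print(bin_2x2(raw))  # [[42, 202], [118, 282]]
```

Each output value pools four readings, so random noise partially cancels while the signal adds up; a 4×4 (16-in-1) pass is the same idea with a larger block.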
Comparison: S26 vs. S26 Ultra
Even though the S26 Ultra has the "better" camera, its individual pixels are actually smaller than the base models because it tries to fit so many more of them on the chip.
S26 / S26+: Uses 1.0 μm pixels (50MP total).
S26 Ultra: Uses 0.6 μm pixels (200MP total).
The Trade-off: The Ultra gets much better digital zoom and detail in bright light, while the base S26 relies on its naturally larger 1.0 μm pixels to keep noise down without needing as much "binning" math.
Would you like to know how the S26's sensor area compares to the iPhone 17 Pro's 48MP sensor?
How can a 600 nm sensor pixel capture red light?
Gemini said
That is a brilliant observation. You’ve touched on one of the most difficult engineering hurdles in modern physics: the diffraction limit.
If red light has a wavelength between 620 nm and 750 nm, and your pixel is only 600 nm wide, it feels like you're trying to fit a large truck into a small garage. Here is how the S26 Ultra’s sensor manages to pull it off.
1. A Pixel is a Bucket, Not a Pipe
The most common misconception is that a light wave travels "sideways" across the pixel. In reality, the light hits the pixel head-on (vertically).
Think of a pixel as a bucket and light as a falling raindrop. Even if the raindrop is "wider" than the bucket (in terms of its wave-like probability), the bucket only needs to catch the energy of that photon to trigger an electrical signal. As long as the photon hits the silicon, it knocks an electron loose. The size of the wave doesn't prevent it from being absorbed by a smaller physical area.
2. The Real Problem: The "Airy Disk"
The issue isn't whether the pixel can detect the light; it’s whether the lens can focus the light tightly enough to hit only one pixel.
When light passes through a lens, it doesn't focus into a perfect point; it creates a blurred circle called an Airy Disk.
For red light at f/1.4, the Airy Disk is roughly 2,000 nm wide.
Your pixel is only 600 nm wide.
This means a single "point" of red light actually spills over onto 9 or more pixels at once. This is why 200MP photos often look "soft" or "mushy" when you zoom in 100%—the physics of light literally prevents the lens from being as sharp as the sensor.
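The Airy-disk figure above comes from the standard first-minimum formula, d ≈ 2.44 · λ · N. A quick sketch, assuming a mid-red wavelength of 650 nm (the exact wavelength choice shifts the result somewhat):

```python
wavelength_nm = 650  # assumed mid-red wavelength
f_number = 1.4
pixel_nm = 600

# Diameter of the Airy disk to its first minimum
airy_diameter_nm = 2.44 * wavelength_nm * f_number
pixels_across = airy_diameter_nm / pixel_nm

print(f"Airy disk: ~{airy_diameter_nm:.0f} nm across")
print(f"Spans ~{pixels_across:.1f} pixels per side "
      f"(~{pixels_across**2:.0f} pixels total)")
```

With these inputs the disk is about 2,200 nm wide and spills across more than a dozen 600 nm pixels, in line with the "9 or more" figure above.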
3. How Samsung Engineers Around This
To prevent this "spillover" (crosstalk) from ruining the image, Samsung uses two key technologies in the S26's sensor:
Deep Trench Isolation (DTI): Samsung builds physical "walls" between the 600 nm pixels. These walls are filled with a reflective material so that if a red light photon strikes at an angle and tries to bounce into the neighbor's pixel, it gets reflected back into its own "bucket."
Microlenses: Every single one of those 200 million pixels has its own microscopic curved lens on top. These act like tiny funnels, grabbing light that would have landed on the "walls" and forcing it down into the center of the 600 nm pixel.
4. Absorption Depth
There is one more catch: Red light has less energy than blue light, so it penetrates deeper into the silicon before it’s absorbed.
Blue light is absorbed almost instantly at the surface.
Red light can travel several micrometers deep.
Because of this, 600 nm pixels have to be very tall/deep. Samsung makes the pixels look like deep wells rather than flat squares to ensure the red light has enough "room" to be captured before it passes through the sensor entirely.
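The depth argument can be made concrete with the Beer–Lambert law: the fraction of photons absorbed within depth d is 1 − exp(−d/L), where L is the absorption length. The absorption lengths below are rough textbook-order values for silicon, assumed for illustration, not sensor datasheet numbers:

```python
import math

# Approximate absorption lengths in silicon (assumed values, in micrometers)
absorption_length_um = {"blue (~450 nm)": 0.4, "red (~650 nm)": 3.0}

depth_um = 3.0  # assumed photodiode depth
for color, L in absorption_length_um.items():
    absorbed = 1 - math.exp(-depth_um / L)
    print(f"{color}: {absorbed:.0%} absorbed within {depth_um} um")
```

With these assumed values, essentially all blue light is absorbed in the first 3 μm, but only about 63% of red light, which is why red-sensitive pixels need to be deep.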
The Bottom Line: The pixel doesn't need to be wider than the wavelength to capture it, but it does suffer from blur because the light wave is physically too "fat" for the lens to aim it perfectly into such a tiny target.
Would you like to see how "Pixel Binning" helps solve this blur by turning those tiny 600 nm pixels into one giant 2,400 nm pixel?
