My Lumix camera has a similar mode (it uses sensor shift and 4 exposures to fake a composite 100MP image from my 25MP sensor), and I will say that even with a rock-solid tripod, you're not _actually_ getting a 4x increase in quality alongside the 4x resolution. If you zoom in in Lightroom, the resulting image is at best a bit sharper in really high-contrast areas, and at worst full of blatant overlapping and stitching artifacts.
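For the curious, the rough idea looks something like this (a toy numpy sketch of the general interleaving trick, not Panasonic's actual algorithm, which isn't public; the offsets and four-frame order here are assumptions):

    # Toy pixel-shift composite: four exposures, each offset by half a
    # pixel, interleaved into a 2x2-upsampled grid. Treats each frame as
    # already demosaiced; real cameras also have to juggle the Bayer mosaic.
    import numpy as np

    def composite_pixel_shift(frames):
        """frames: four HxW exposures at assumed sensor offsets
        (0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5), in pixels."""
        assert len(frames) == 4
        h, w = frames[0].shape
        out = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
        out[0::2, 0::2] = frames[0]  # unshifted
        out[0::2, 1::2] = frames[1]  # half a pixel right
        out[1::2, 0::2] = frames[2]  # half a pixel down
        out[1::2, 1::2] = frames[3]  # half a pixel right + down
        return out

Any subject motion between the four exposures breaks that interleaving assumption, which is exactly where the overlapping/stitching artifacts come from.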
I've never figured out a good use-case for that mode, and I've tried to use it quite a few times to shoot landscapes or static scenes with a tripod.
Maybe Canon does it better, but perhaps they dropped it because it's just not producing worthwhile results.
If it's the Lumix S5, I have the same camera, and as far as I know you are getting genuinely more resolution, but it relies on there being absolutely no movement in the scene. So: great for buildings and architecture, terrible in forests or on city streets.
It's a GH6, so probably the same system roughly...
And yes that's what I said, sorry if I wasn't clear.
It does get more pixels (the resulting file is 100MP), but it's certainly not 4x the resolution in terms of image quality. Not that I was expecting that, but it's not even a middle ground that I consider usable, at least not good enough to use over a normal 25MP picture.
I've done a bunch of landscape shots (with a very solid tripod, camera in silent mode, shutter release delay, no wind, etc.) and generally I've found the results mixed-to-bad.
Like I said, the algorithm often seems to have a hard time stitching the images together, so if you zoom into the details (and really, if you're using this mode, it's ostensibly because you want more detail), things get fuzzy or muddy in most of the pictures I've taken.
To be fair, I took maybe 20-30 test shots in the first 6 months I owned that camera, and I haven't gone back to the mode since. Maybe some of it is user error, but I really did try to make it work because it seemed like a cool feature.
What I struggle with: aren't even small disturbances, say someone walking around or slight air movement, producing enough tremor to influence this? We're talking about moving the sensor/scene by one half of a pixel, which is a minuscule amount (2 micrometers or so).
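Back-of-envelope, assuming GH6-ish Micro Four Thirds numbers (~17.3 mm sensor width, ~5776 photosites across; both figures approximate):

    # Rough pixel-pitch arithmetic for a ~25MP Micro Four Thirds sensor.
    sensor_width_um = 17300            # ~17.3 mm sensor width
    pixels_across = 5776               # ~25MP at 4:3
    pitch_um = sensor_width_um / pixels_across
    print(pitch_um)                    # ~3.0 um per photosite
    print(pitch_um / 2)                # half-pixel shift step: ~1.5 um

So yes, any vibration with an amplitude comparable to a micrometer or two during the burst would smear the sub-pixel information the mode depends on.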
Yes, hence tripod and photographing buildings, as mentioned in an ancestor comment[1]. On a windy day (enough to shake the tripod) or on shaky ground (cars, trams) I guess this might still not be enough.
I do not, but as I mentioned in my previous post, the camera supports a shutter delay (to account for the movement of pressing the button), which I used for this mode.
I feel like I really did everything I could to create the ideal conditions for this mode to produce good results, and it simply didn't.
That said, all this talk has made me want to go try again. :-)
Hah, yes, the GH6 also claims that the IBIS (in-body image stabilization) can allow you to use the pixel-shift mode handheld, but given the mixed results I've had on a tripod, I haven't tried this either.
The GFX 100 has a similar mode. It's not surprising that stitching multiple images of a moving subject, or thousands of moving subjects in the case of leaves and blades of grass, fails to produce something similar to a pixel-shifted image of something static.
So, they just literally determine every fourth pixel in the final image from the fourth image? That's a solution to a niche problem. I hope they didn't sell that as general-purpose...
Nah, the manual and specs are pretty clear on the limitations of the mode and how it works. I personally find it pretty useless, but it's easy to just ignore it entirely, since it's a dedicated mode on the dial.
Canon doesn't do it better. I tried the R5 mode a few times when it came out and it's complete garbage. More like something an intern did as a proof of concept than an actual well-implemented feature. The stitching could be done a lot better, but it isn't. Purely a marketing gimmick.
The new R5 Mk II AI upscale is the same, a gimmick that got way too much attention from the press solely for using the word AI.
Canon does use AI features in their R5 AF tracking, and it's really impressive. But Canon calls those features deep learning or other more specific terms.
The A7III has a dedicated DSP for focus & tracking, plus a computer vision model that can detect eyes and focus on them (even when you're wearing sunglasses it can spot eyes perfectly). Moreover, you can register 5 faces with focus priority, and if it finds any of these faces in the frame, it focuses on that one (the lower the number, the higher the priority). The system runs at 30 calculations per second, correcting focus 30 times per second as required.
And this is Sony's AF system from 2 generations ago. I'm sure the big three can all do these tricks, at least at 30fps. The latest, highest-end Sony cameras do it at 120fps, and they can keep tracking the whole body after losing the face, so they stay locked on the same person in the frame after latching onto them.
Interesting. I have a pen-f and the high-res mode absolutely increases the resolution. Everything is very sharp when viewed at 100%. If the scene is stationary (no water or leaves blowing in the wind) I can't detect any stitching artifact.
I also think that the lenses have to be able to resolve this resolution. My 12mm prime looks somewhat sharper than my 8-25 zoom in high-res mode, even though there is no difference in regular mode.
I'm not sure how to quantify the increase in quality, but the details are sharp. However, I'd say that the real benefit of this mode is that the image is smoother than in the normal mode. Tonal transitions, in particular.
I can't find any specifics, but just taking a picture in this mode and timing it by my watch, I think this particular camera takes 8 exposures.
Yeah, in the states we say 4x5, but in either case it’s the most common large format film size. You buy them in boxes and use them one sheet at a time.
I don’t know how much you’re actually missing. Pixel shift is nice for architecture but if the purpose is to increase resolution, it’ll just kind of work for landscape and certainly not street. Leaves move in the air, and water moves. People definitely move. Birds, it takes practice to get the right shutter speed.
There is this kind of weird and intense competition for features and specs in this space of photography, and it doesn’t always make sense. I think if Canon dropped pixel shift, they probably figured that people wouldn’t terribly miss it, but for a little while, it was the feature of the moment. I did appreciate that they added it in firmware for the R5, but I never found a use for it.
As someone experienced with cameras, I'd say we are living in an age where high-end smartphones look superior to many cinema production cameras on paper.
Yet the smartphone picture still sucks: it's too compressed, the colors are off, and weird post-processing is applied that ruins the picture. The sensor may have impressive resolution, but all the other specs are not as impressive, and of course the lens assembly doesn't even compare due to size constraints. In the end, the unprocessed, gamma-adjusted picture from the cinema camera will look like the real thing, while the smartphone picture will have flaws that come through or not depending on the source material.
As for features, making more use of existing hardware is good. I wouldn't need pixel shift in most cases, but when I would, it's there.
I can't find it right now, but there's a YouTube video that demonstrates using this mode for astrophotography. It seems that a side effect of the computational process is that noise gets averaged out, resulting in a cleaner (and sharper, as you'd expect anyway) image.
In a sense, it's in-camera image stacking, where the sensor noise can be removed while retaining actual image data.
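The averaging effect itself is easy to demonstrate (a minimal sketch of the statistics, not of what the camera firmware actually does):

    # Averaging N independently noisy exposures of a static scene cuts
    # the noise standard deviation by roughly sqrt(N).
    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.full((100, 100), 100.0)   # "true" static scene
    frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]

    print(np.std(frames[0] - scene))           # single frame: ~10
    print(np.std(np.mean(frames, 0) - scene))  # stack of 8: ~10/sqrt(8) ~ 3.5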
Nikon bodies were known to eat stars with their in-camera noise removal, worse than other makers; it was one of the reasons the standard astro recommendation is not to use that feature on any body.
This concept was used in microscopy before high-resolution sensors were available. I don't know why they are putting it on a camera that photographs moving scenes most of the time.
If you have that kind of requirement, budget, and setup, why wouldn't you just invest in a camera whose native megapixel count is closer to what you need?
Right tool for the job and all that.
And I'll tell you, I did test this. I set up ideal conditions in my home studio with a tripod, good lights and - short of ground tremors - zero movement, and the results still weren't that usable.
Comparing the 100MP image to a 25MP version, you really couldn't see that much more detail, and when you zoomed in on the 100MP image, imperfections were quickly visible.
Yeah, it's a real problem. It's not 4x better since the assumptions don't hold up. Check out Figure 5 from [0] to see what the software is up against. I have no idea how well first-party software handles this motion.
It depends on how you want to count "better", and megapixels. Yes, you need 4 images perfectly aligned to sample each color at each subpixel location (assuming your lens + aperture + shutter shake + subject movement allow you to have that much resolution), and I agree that you won't see a 4x improvement in resolution, since the thing we compare to isn't a binning of each Bayer subpixel into a single final pixel; it's the result of a complex interpolation scheme. Then there's the issue of what we mean by resolution. Are we talking about MTF curves? And in that case, is it black-and-white alternating lines, or colored ones?
I would much prefer for these pixel shift modes to produce an image where each pixel has an RGB value associated with it, rather than yet another Bayer image that's just bigger.
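Concretely, what I mean is the full-pixel-shift variant: shift by a whole pixel four times so every scene position gets sampled through one R, two G, and one B filter, and no demosaicing interpolation is needed. A sketch of the idea, assuming an RGGB layout and an offset order I made up (real cameras differ in the details):

    # Full-pixel-shift RGB reconstruction sketch (the RGGB layout and
    # the offset order are assumptions, not any camera's documented scheme).
    import numpy as np

    BAYER = np.array([[0, 1],   # R G   (channel indices: 0=R, 1=G, 2=B)
                      [1, 2]])  # G B

    def rgb_from_full_pixel_shifts(frames):
        """frames: four HxW raw mosaics at whole-pixel offsets
        (0, 0), (0, 1), (1, 0), (1, 1)."""
        offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
        h, w = frames[0].shape
        yy, xx = np.mgrid[0:h, 0:w]
        rgb = np.zeros((h, w, 3))
        count = np.zeros((h, w, 3))
        for frame, (dy, dx) in zip(frames, offsets):
            # Which color filter sat over each scene position in this frame.
            chan = BAYER[(yy + dy) % 2, (xx + dx) % 2]
            np.add.at(rgb, (yy, xx, chan), frame)
            np.add.at(count, (yy, xx, chan), 1.0)
        return rgb / count  # green is averaged over its two samples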
> I would much prefer for these pixel shift modes to produce an image where each pixel has an RGB value associated with it, rather than yet another Bayer image that's just bigger.
I know that's one way to do it; I was under the impression that for some reason some cameras were using it only to increase resolution, ultimately producing a bigger Bayer image. I'm probably wrong, since I haven't been able to find cameras that do that.