Sony announces a9 III: first full-frame global shutter camera (dpreview.com)
261 points by ksec on Nov 9, 2023 | 158 comments



> Global shutter is a method of ending an exposure where all the image data is captured simultaneously. This is distinct from most existing shutter systems (electronic and mechanical) that start and end progressively: working their way across the sensor.

Buried at the bottom of the article. This is especially interesting for video and sports photography. Some cameras with slower sensors suffer from the rolling shutter effect, where fast moving subjects get distorted because the subject or the camera is moving. Think e.g. lampposts looking diagonal when shot from a fast moving car. Capturing all the pixels in one go solves that.

Cropping the sensor is a common way to counter rolling shutter with current cameras. Basically, that means you are not using all of the pixels on the sensor and capture a smaller area. Aside from the quality, this also means that you don't use everything the lens captured; it's cropped away. So, it looks like you are using a larger focal length than you actually are and you can't capture at the full width of the lens.

Lots of cameras use cropping to support higher resolutions and frame rates because they can't read out the sensor fast enough. This new sensor can use the full resolution and doesn't have to crop.

And like with photography, you use shutter speed and aperture to control what the video looks like. 1/30th of a second is actually a relatively long exposure for a photo. And that's also potentially a lot of rolling shutter to deal with. An instant and fast readout means you have more wiggle room to play with this creatively or shoot in very bright or low light.
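The lamppost distortion above can be roughed out with a quick calculation. This is an illustrative sketch only; the readout time and image speed below are made-up numbers, not the specs of any particular camera.

```python
def skew_px(readout_s, image_speed_px_per_s):
    """Horizontal offset, in pixels, between the first and last sensor row:
    how far the subject's image moved sideways while the sensor was read out.
    This is the slant a rolling shutter puts on a vertical lamppost."""
    return readout_s * image_speed_px_per_s

# Assumed numbers: a ~20 ms full-sensor readout, with the subject's image
# sweeping across the frame at 5000 px/s (e.g. shooting from a moving car).
print(f"{skew_px(0.020, 5000):.0f} px of slant")  # prints "100 px of slant"
```

With a global shutter the readout term is effectively zero, so the skew disappears regardless of how fast the image moves.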


Most higher end cameras like the Panasonic G6, Nikon Z8/Z9, etc. hardly have any rolling shutter. You'll certainly NEVER see rolling shutter in any PRACTICAL application for video unless you pan like a true madman. (I am a professional photographer and videographer.)

In fact for sports and wildlife, you'll NEVER see rolling shutter in pretty much any existing camera, either in photo or video. For photography it's been like this for decades and for videography recently.

Where rolling shutter does appear is in the use of the electronic shutter only, but the Nikon Z9 already solved that, same with the Sony a1.

The main benefit is flash sync and a very SLIGHT improvement in some types of distortion when shooting at very high shutter speeds. I'd argue that the Nikon Z9 is a much better camera if not wanting flash sync due to its lower base ISO.

So to reiterate, the main benefit is not the reduction of rolling shutter, especially since any camera in this price point already has minimal rolling shutter.

Global shutters are COOL, and the reduction of rolling shutter is COOL, but not the main point, though of course most future cameras will probably switch to global shutters at some point if the base ISO can be lowered.


Global shutter allows for much higher flash sync speeds.

I shoot dance, often in close proximity, and rolling shutter on the Canon R6 was reason enough to upgrade to the R6 II.


>You'll certainly NEVER see rolling shutter in any PRACTICAL application for video unless you pan like a true madman

Fast lateral movement doesn't just happen on a whip pan. Any scene involving fast vehicles, high speed machinery or strong vibration is likely to have some level of rolling shutter artifacts. Many perfectly normal tracking shots will have some artifacts, particularly if you're shooting handheld. Modern fast-readout sensors are very good, but they aren't perfect; to a great extent, we've just got used to seeing mild rolling shutter artifacts and don't immediately recognise them as an image quality defect.

There are still lots of good reasons to use rolling readout sensors, but there are also many real advantages to global sensors.


> lots of good reasons to use rolling readout sensors

Like what? Just curious, but the only reason I know of is that they're cheaper, or you already have them.


They also tend to have better dynamic range and noise performance.


It’s not a “very SLIGHT improvement” as you write. It’s VERY SIGNIFICANT. Even the Nikon Z9 has roughly 4ms rolling shutter speed. Light travels faster than you imply here. A lot faster.


It's not the speed of light that matters. It's the speed of objects in the field of view (or, more specifically, the speed of their image on the sensor compared to the speed of the shutter across the sensor).


Also worth noting that, unlike physical objects, images are not bound by the speed of light. Patterns of light and shadow can move across a sensor at unrestricted speeds.


I'm confused what this means. Are patterns of light and shadow not also light, and bound by the speed of light (on the upper end)? How can patterns consisting of light (or the absence of it) move faster than light?


https://physics.stackexchange.com/a/48329

In other words, speed of a projection of light from 3d space to 2d space may be higher than the original speed in 3d. (Because one dimension gets squished to 0, so movement in this dimension is perceived to be instant.)

It's like a diagonal of a cube 1x1x1 has length sqrt(3), but if you apply orthogonal projection onto R^2, its image will be a diagonal of a square and it will have length sqrt(2). Shorter distance -> shorter time to travel.


> It's like a diagonal of a cube 1x1x1 has length sqrt(3), but if you apply orthogonal projection onto R^2, its image will be a diagonal of a square and it will have length sqrt(2). Shorter distance -> shorter time to travel.

This example doesn't make sense to me. In that analogy, wouldn't anything on that diagonal appear to move more slowly in 2D than the same thing moving along the diagonal of a face? The cube diagonal would make it move farther than it does in 2D space.

I remember seeing a simulator in my optics class that combined multiple wavelengths of light. The interference pattern moved faster than the speed of light, but that was fine because information wasn't moving faster. That was just the result of adding them together.


But when you move the laser emitter in your hand you're controlling the speed in that 2d space, not in 3d. You don't ever affect the position of photons in the Z dimension, so you're not constrained by a speed in 3d which would later be slowed down after being projected.

So you move your laser emitter along the diagonal of a face with velocity v. The perceived light that gets projected onto a plane needs to match the position of the emitter on the face, which creates the illusion that light travelled along the longer 3d diagonal faster than v (in order to match the projection, which describes how you/the camera sensor see the light). But in reality the light never travelled along this longer diagonal. It's only an illusion, and it is this illusion that we're measuring the speed of.

Photons on this diagonal arrived straight from the emitter, i.e. each of them appeared in only one point of the diagonal throughout its entire history. In other words, the photon at the beginning of the perceived movement is a different photon than at the end. They travelled along different paths, and when some photons were at the diagonal, some others were on their way there.


Shine a laser into space and the image of your laser can be much faster than the speed of light. Nothing actually moved faster than the speed of light though.


What do you mean by "image faster than light"?

How is an image not light?

Or do you mean a captured image may show items from different points in time?

But that's only relevant after the photo has been created, not during the window of time that a sensor is capturing light.


Stand a meter away from a wall and wave a laser pointer such that the spot travels back and forth between two points a meter apart in one second. Move two meters away, but keep your movement exactly the same; the spot now moves two meters in one second.

Move two light-seconds away and do the same movement. The spot now moves two light-seconds in one second: twice the speed of light. Of course it takes two seconds from when you turn the laser on to when an observer at the wall would see it, and four seconds before you see the spot on the wall, but the spot itself moves faster than light.
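The laser-spot example scales linearly, which a few lines make obvious (a minimal sketch; the 1 rad/s sweep rate is just an assumption for round numbers):

```python
C = 299_792_458  # speed of light, m/s

def spot_speed(distance_m, angular_speed_rad_per_s):
    """Speed of the laser spot across a distant wall (small-angle approximation).
    The spot speed grows linearly with distance, so far enough away it exceeds c.
    No photon or information moves that fast -- only the pattern does."""
    return distance_m * angular_speed_rad_per_s

# Sweeping the pointer at 1 radian per second:
for d in (1.0, 2.0, 2.0 * C):  # 1 m, 2 m, and two light-seconds away
    v = spot_speed(d, 1.0)
    print(f"{d:.3g} m away: spot moves at {v:.3g} m/s ({v / C:.3g} c)")
```

At two light-seconds the spot covers two light-seconds of wall per second: twice c, exactly as the comment describes.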


Ah, so for the sake of capturing conceptual / perceived "objects", the global shutter, at least, can do a better job at what would be perceived during a short period of time that the shutter opens and captures each pixel.

A rolling shutter might capture points along the way but leave gaps in comparison. In the laser pointer example, you'd probably want a longer exposure, but the global shutter would still give you uniform capture better matching what your eyes / brain perceived.


What are you trying to argue? I am simply stating that speed of light is faster than rolling shutter speed. You’re trying to make a point about some edge case scenario that doesn’t apply to 99.999999% of use case for photography and videography.


At 120 fps that’s half the frame time and probably the entire shutter time.


The Z9 has no (mechanical) shutter.


I meant 1/250 "shutter speed", which seems more normal for lighting than 1/120. But I guess for video it doesn't matter and could be 1/120.


For video you typically set shutter speed to half a frame. The most common frame rate is 24FPS, so you get a 1/50 shutter speed (half of a 1/24 frame is 1/48, which cameras offer as 1/50). It’s lousy for a photo, but good for video, since you want motion blur so that the video isn’t janky. You control light through ND filters and LED/tungsten lights, not through shutter speed, since shutter speed affects motion blur and you want to make that choice independently of exposure.

120FPS is good for sports or wildlife, but those are special. For normal stuff, 24FPS is good. And even with wildlife, you’re probably not recording at 120FPS all the time, only during quick action. Otherwise you’re just wasting space on your memory card with identical frames. Battery as well.
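The half-a-frame convention above is the "180-degree rule", and it reduces to one line of arithmetic:

```python
def shutter_180(fps):
    """Shutter time in seconds under the 180-degree rule: half the frame period."""
    return 1.0 / (2 * fps)

# 24 fps gives 1/48, which cameras offer as the 1/50 setting.
for fps in (24, 30, 60, 120):
    print(f"{fps} fps -> 1/{round(1 / shutter_180(fps))}s")
```

At 120FPS the rule lands at 1/240, which is why high-frame-rate clips need so much more light than 24FPS footage.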


> I'd argue that the Nikon Z9 is a much better camera if not wanting flash sync due to its lower base ISO.

Yeah. I think a lot of people focus too much on those "high end" specs and too little about small, practical, quality of life things.

I remember how I instantly liked Fuji X100 for its built in ND filter. Yes, you can screw a filter on by yourself. Then you suddenly find yourself on a sunny beach and you wish you had your photo bag with you.

Not saying global shutter isn't nice feature, but that 250 base ISO can be a bit of a hassle depending on what kind of photography you do just like not everybody is going to benefit from the new shutter.


The faster available shutter speeds will compensate for the higher base ISO, except where the superior dynamic range of a lower ISO is needed or when a longer shutter speed is simultaneously desired.


Well to be fair if the base ISO is 250, even the typical 1/8000 will be enough for most purposes except maybe f/1.2 in sunlight....


True that, unless you also want slower shutter speed for some reason. But that's pretty rare.


Never could get why people liked ND filters until I got a 50mm f/0.95 lens for which exposures are scary short close to sunset...


They're pretty essential for any situation where you can't just stop down the exposure in the camera. If you want a long exposure shot of some crashing waves on a bright day for instance, you might not be able to just increase the shutter speed or narrow the aperture without ruining the intended photo.


I personally like to make dreamy photos wide open with relatively long shutter to accentuate movement. Some of it in full sunlight. Yes, ND filter is essential then.


It's not essential. You can take hundreds of short-exposure photos and blend them in software.


Global shutter is essential for serious drone photography, or photography from platforms with undamped vibrations like machinery. Rolling shutter means you will have discontinuities in features of the picture; you will need to re-stitch the photo or post-process it, and for video it is just bad. An example of such pixel-perfect use: we were able to determine the size of a marking on the ground with centimetre accuracy by counting pixels.

Modern gimbals are amazing and do away with much of the issue but if you are needing pixel accurate features global shutter is the way to go.


An example of global shutter issues https://youtu.be/qqsDMJ7iyM0?t=86


You mean an example of rolling shutter issues.


Yes, thanks for the correction. I cannot edit the typo anymore :(


Not just the global shutter, but 120 FPS with full AF and AE is huge too. The A9III is going to be the best sports camera out there for a few years I think. Of course, you're going to pay through the nose if you want access to that tech.

I can't wait for them to gain back the dynamic range and for it to roll down to less expensive cameras in the future.


The article also mentions the downside of global shutter: lower dynamic range.


Thank you, as I might have overlooked this downside had it not been for this comment.

The article doesn't make this immediately clear:

> It's a 24.6MP camera that Sony says doesn't compromise on ISO performance or dynamic range.

Then later:

> This might explain the base ISO of 250, which will ultimately limit the camera's maximum dynamic range.

Sounds like a compromise to me...


Agreed. Worth mentioning as well: flash sync speed. As you are not dealing with a shutter or progressive capture this will be a thing of the past.


> Some cameras with slower sensors suffer from the rolling shutter effect where fast moving subjects get distorted because the subject or the camera is moving. Think e.g. lampposts looking diagonal when shot from a fast moving car.

These days this is the least of your worries with a camera with rolling shutter. I got a Nikon Z-fc, knowing from the reviews that it would have a rolling shutter, because I don't mind these types of distortions.

What the reviews didn't mention is that many LED based lights actually alternate red, green and blue LEDs in quick succession, and you see that in photos as horizontal streaks of rapid changes in white balance. This also happens with LED based projectors, and with CFLs if their drivers aren't fast enough.
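A rough rule of thumb for those streaks: the number of bands is about the number of light-source cycles that elapse during the sensor readout. Both figures below are hypothetical, just to show the shape of the calculation:

```python
def band_count(readout_s, source_cycle_hz):
    """Approximate number of color/brightness bands a rolling shutter records
    from a light that cycles at source_cycle_hz: one band per source cycle
    that elapses during the sensor readout."""
    return readout_s * source_cycle_hz

# Assumed numbers: a 1/40 s sensor readout and an LED driver cycling at 400 Hz.
print(band_count(1 / 40, 400))
```

That works out to about ten bands across the frame. A global shutter makes readout_s effectively zero, which is why it sidesteps the problem entirely.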


Are you sure that's an artifact of the light source emitting colors sequentially, and not an artifact of the camera sensor detecting colors sequentially?


It only happens with some light sources. If it was the camera it would happen with all of them.


> 1/30th of a second is actually a relatively long exposure for a photo.

As an aside, in video & motion picture cameras the frame rate isn’t the exposure length. You need to account for the time the shutter is closed. So the real exposure is about 1/60, still slow but much more manageable.


Thanks for the explainer - I was wondering what the significance was, and something I do is mess about with cameras for fun.


> Cropping the sensor is a common way to counter rolling shutter with current camera

Not in my experience. For full-frame 35mm digital cameras, the most common way to counter electronic rolling shutter effects is, ironically, a very fast mechanical shutter.

In digital cameras with mechanical shutters, which include all Canon/Nikon/Sony/etc. consumer DSLRs and mirrorless cameras, the electronic shutter is programmed to stay open for much longer than the mechanical shutter, which allows there to be a time instant when all pixels are simultaneously electronically open. The mechanical shutter then opens and closes during that time.

The rolling shutter on a Sony a7 is about 26ms. Mechanical shutters can beat that easily, and can be as fast as <0.2ms.

(Electronic shutters can be faster on a per-line basis. But they will still take 26ms to roll through the image for a rolling shutter sensor. So if your electronic shutter is set to 0.01ms you'll still be seeing 0.01ms exposures at each line but it will take 26ms to roll through the image.)
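The parenthetical can be made concrete: per-row exposure and the roll time add, so even a vanishingly short electronic "shutter speed" still spans the full roll. A quick sketch using the ~26 ms figure from the comment (the 0.01 ms exposure is the comment's own example):

```python
def capture_window_ms(roll_ms, exposure_ms):
    """Total time from the first row opening to the last row closing on a
    rolling-shutter sensor: the roll time plus the per-row exposure."""
    return roll_ms + exposure_ms

# ~26 ms roll with a 0.01 ms per-row electronic exposure:
print(capture_window_ms(26.0, 0.01))
```

The result is still about 26 ms, which is why a short electronic shutter speed alone does nothing about rolling-shutter distortion.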


I don't think you're right about mechanical shutters being faster. The type of mechanical shutter on most cameras like this is a focal plane shutter[0]. It has two curtains - an upper one and a lower one. When the shutter is closed, the two overlap. For a long exposure (typically anything longer than 1/100s although some cameras are faster) the lower curtain will fall all the way, then the upper curtain will fall to close it off again. But at faster shutter speeds, the upper curtain will start falling before the lower curtain has reached the bottom, and so you can get exactly the same rolling shutter effect with a mechanical shutter as with an electronic shutter. For very fast shutter speeds, the exposure is a narrow slit between the two curtains that travels down the image at a much slower speed than the exposure time, and this is why most cameras can't synchronise with a flash at speeds faster than 1/100s or so. So therefore the type of mechanical shutter in most DSLRs and mirrorless cameras effectively have a rolling shutter transition time of 4-10ms, which is much more than the 0.2ms you claim.

Video-specific cameras are where this might be improved, because you can have a shutter which is a rotating disc, and this can allow faster opening/closing times, because it doesn't need to speed up from a standstill for each frame.

There exist central shutters which are mechanical shutters that act as a global shutter, but these are unusual because they need to be built into each lens you attach to the camera. These are typically limited to a maximum speed of 1/500s.

[0] https://en.wikipedia.org/wiki/Focal-plane_shutter


Cropping is common in video. For stills, you're correct.


This is the first time I've heard of C2PA authentication[0]. It looks like they are digitally signing images from the cameras now to create a chain of authenticity. There's even TruePic[1], an authentication service that will show people whether an image came from one of these cameras, overlaid on the image (or video) with some JavaScript. Interesting way to fight AI images and deep fakes. They are also letting people register their deep fakes[2], but I'm not sure why someone would do that.

0: https://c2pa.org/ 1: https://truepic.com/ 2: https://truepic.com/revel/


C2PA is more about tracking origin and edit lineage for e.g. press photographers. It does not determine if an image is a deepfake or not. Create a deepfake then take a picture of it with a C2PA camera, now you've got a C2PA-stamped deepfake.

But suppose you take a C2PA photo and then edit its contents with e.g. generative in-fill. The updated C2PA stamp from that action will get noted and can be audited later on.
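The edit-lineage idea can be sketched as a toy hash chain. To be clear, this is NOT the real C2PA manifest format; it only illustrates how binding each record to the previous one makes the history auditable. The key and payloads are made up:

```python
import hashlib
import hmac

SECRET = b"device-or-editor-signing-key"  # stand-in for a real private key

def record(payload: bytes, prev_sig: bytes = b"") -> bytes:
    """Sign a payload together with the previous record's signature, so every
    later edit is cryptographically bound to the original capture."""
    return hmac.new(SECRET, prev_sig + hashlib.sha256(payload).digest(),
                    hashlib.sha256).digest()

capture_sig = record(b"raw sensor data")
edit_sig = record(b"generative in-fill applied", prev_sig=capture_sig)

# Verifying means replaying the records in order; a mismatch anywhere
# means the history was tampered with.
assert edit_sig == record(b"generative in-fill applied", prev_sig=capture_sig)
print("chain verifies")
```

The real system uses public-key signatures rather than a shared secret, so anyone can verify without being able to forge, but the chaining principle is the same.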


> Create a deepfake then take a picture of it with a C2PA camera, now you've got a C2PA-stamped deepfake.

Okay but then it's a deepfake from the BBC or some other known source, which would be bad for its reputation if people found out.


Right but if it’s a breaking-news-tiktok-video the C2PA will probably be absent and then said media co plays the “did we just Fox News smear Dominion?” game. Which isn’t so bad a tradeoff today while C2PA is new.

The C2PA might theoretically prevent forgery of the C2PA record, but it cannot certify that the pixels happened due to an event or due to a trick.


Sure but you can tell if it's from an organization you trust not to fake stuff or not.


Could it be used to track down journalists or prove that a certain journalist took a specific photograph?

Sounds like the photographic equivalent of guns that put unique stamps on their fired shells - theoretically making it easier to ID the shooter.


Both newspapers and social media routinely reconvert the original camera image into some smaller size or more suitable format. How is this system going to work in practice?


The newspapers and/or social media would sign their scaled pictures - presumably they could also provide the original for external validation as well


Considering this might be targeted by nation-state adversaries for propaganda uses, I don't see the effectiveness, at least on that threat model. AFAIK even if the private keys are buried in some trusted computing chip, there's very little a focused ion beam can't do. And that's in an extreme security scenario, I doubt those cameras even reach that.


Assuming they use different embedded keys per camera (which is unfortunately unlikely) instead of the same key on every chip, this can be defeated by demanding to see the intact camera before accepting the signature. Ion beam assisted probing is a destructive process.


What do you mean? An ion beam can read the private keys from the onboard chip?


It's essentially a million-dollar silicon modding tool. It can, in 99% of cases, read down to the individual transistor level.


Then why can’t the three letter agencies in the US unlock an iPhone? I don’t think it’s that easy. In that case all encryption would be useless if you had physical access to the machine.


It only works for things where the key is stored on the chip or phones where the key is stored in a TPM or equivalent and relies on a PIN to release it, as opposed to typical use of encryption where the full key is entered to unlock. An attacker with physical access can probe the chip but it’s risky - one slip and it’s gone for ever. This technique is most useful when all chips of this type have the same key so you only need to successfully crack one, any one. Ross Anderson’s book Security Engineering uses the example of Sky Pay TV cards https://www.cl.cam.ac.uk/~rja14/Papers/SEv2-c16.pdf


With all the encryption I’ve used these past decades the only time I have entered a full encryption key was for cryptocurrency. Aren't 99% of use cases the case where the key is stored on the client device? It seems like the main deterrent then just comes down to the failure rate of the probes.

On a tangent, a bad actor could then release HDCP keys for TVs from a big brand like Samsung and effectively invalidate all content protection for all the TVs they have already sold (afaik those can’t be remotely updated). If those keys are then revoked, there would be millions of bricked Samsung TVs.


Pretty sure they use a KDF to generate the key from your input, so you don't need to enter the "full key".


> Then why can’t the three letter agencies in the US unlock an iPhone?

They almost certainly can but I believe the point of the FBI making a big song and dance about unlocking that phone (which they did unlock by themselves, btw) was about trying to force Apple into allowing TLA backdoors via the court of public opinion.


> Then why can’t the three letter agencies in the US unlock an iPhone?

Who says they can’t?


I get a flashback of the yellow printer dots. Smells like a privacy threat to me. I wonder if that fingerprinting can be disabled.


Useful for source authentication in news to combat fakes. I am sure if you don't want your identity associated with a picture there will be plenty of methods to avoid this. Indeed some journalists will also need anonymity - but serious photojournalism may come to require signed metadata like GPS location/timestamp, so any camera aimed at that kind of reporting will have to have those features.

Perhaps less interestingly, single frame exposure will defeat the accidental recording of sound into pictures recently reported to be a side effect of rolling shutters.


I think we are going to see more of this. I doubt this is about "deep fakes", as in reality the press doesn't care about it as long as it generates clicks and revenue.

It's more to do with surveillance and being able to more easily track someone trying to play investigative journalist, whistleblower etc. and show them their place.

For instance, if someone takes a photo of a big corporation exec giving a brown envelope to a politician and then another journalist show them to interested parties as a courtesy before publication, they could find out who took the picture and get them killed.

This is very dangerous and stupid what Sony is playing with.


The thing is, it's rather easy to remove this. Even advanced steganographic watermarks fall to current AI and DSP. Non-consensual/forced source ID is a fools errand.

OTOH, it's very hard to firmly attribute source ID in a way that cannot be faked (hashed or reconstructed after the fact).

In other words it's easy to disown, hard to own. Repudiation is a one way function.

Since we're headed for a post-truth, post-trust digital world the desirable sides of this seem to greatly outweigh the apparent "dystopian downsides".


The new Leica camera already does it, give it a look for a working example already in the wild


Seemed fairly dystopian before you get to the examples. It also seems fairly trivial to bypass even if it's through crude brute force methods (ie. screenshots/copying framebuffers).

Is this essentially just enforcement through building a moat around editing applications?


Bing's Dall E-powered image creator uses Content Credentials as well.

https://i.imgur.com/7qDr0Q9.png


> will also add C2PA authentication metadata. C2PA is a combination of the efforts of the Content Authentication Initiative and the separate Project Origin initiative.

First I have heard of this. My initial thought is "gross" but I wonder if it may actually be beneficial for the average consumer, prosumer even.

The spec is absolutely vomit inducing, though -- more ASN.1!


Working for a broadcast company with a good chunk of news content, authenticating content is one of the major challenges we face. It has always existed, but it’s getting worse and worse. That’s the goal of this system; it's not aimed at consumers.


I think ideally we'd get to a point where end users could easily see who signed an image so if someone was claiming an image was from cnn it could be validated. I imagine the end goal would be to put a warning on images that aren't from a signed source or maybe not displayed at all by default - similar to the uptake of https


Agree, but I also think that signing at all should be optional. That is: Give the camera user the option of strongly identifying themselves (with the added trust that brings), or remaining anonymous (as is the case now).


I mean even if the setting isn't there it's easy to strip the metadata.


Global shutter might be the most impressive feature, but also notable is that the a9 includes C2PA authentication metadata. This is a cryptographic chain of provenance on digital media, starting at the hardware camera itself.

In addition to provenance itself, C2PA allows for asserting that an image should not be used for AI training / ML mining.

https://c2pa.org/specifications/specifications/1.3/explainer...


What stops someone from photographing a printed picture of a deepfake?


Nothing, but how do you make that look like a real photo?

I want to know how this process survives edits. Is it like HDMI, where I can't watch cable TV on my Linux machines because I don't have some stupid chain of certs from the hardware to my eyeballs? Will I no longer be able to edit my photos on Linux?


I remember a whole thread about dpreview no longer posting new articles and yet this one seems to be very recent.

https://news.ycombinator.com/item?id=35520495

Sony is rapidly becoming a major factor in the professional world as well. A family member works for a pro camera shop and he's absolutely lyrical about the Sony line. My biggest gripe is that it is hard to interface to the camera for things like shutter control, what looks like a USB cable really isn't (there are a ton of other wires running there that you can just barely see that control these kind of functions). But otherwise it's a very nice piece of gear.


Gear Patrol bought them, so DPReview lives on.

https://www.dpreview.com/site-news/8298318614/dpreview-com-l...


Ah cool, I totally missed that. Thank you!


I love seeing global shutter on this camera. It's been amazing getting shots with cameras with global shutter for me.

Unlike film grain, the weird artifacts for sports/action, camera moves, rotary motion... it's all bad looking and disappears completely with global shutter.


Not just global shutter, but shutter speeds up to 1/80000, flash sync the same (which is simply bonkers, but possible due to global shutter), and pre-recording?

I love my A7IV, but the A9III makes it look like a toy.


While I love ridiculous specs as much as anyone, let's remember that 1/80,000th of a second is.. nuts and as far as I can find there's only one flash unit that can even do that, the profoto pro-11, which is a $20k proposition before you even buy the damn camera. And you will get very, very far from its maximum output for that kind of duration.

So it's very cool, yes, but unless you have a ton of money to spend and a very very good reason to need such extreme speed.. you're never gonna use it.

I am certainly looking at my trusty but aging a7sii with an increasingly contemptuous eye, though...


Flash doesn't have to be faster. Max flash sync speed means that up to this speed, the shutter is guaranteed to be fully open when the flash trigger signal fires. Below this speed is totally fine; above it, you could get partially lit images.

e.g. A camera's max sync speed is 1/60s, you connect a flash unit over a sync cord, and set the camera to 1/250s, ignoring the lightning bolt sign on the dial. You take a picture, and alas, the top half/left half is pitch black and only the bottom half/right half is shown correctly, because you exceeded the max sync speed and the shutter still covered part of the frame when the flash fired.

Or, e.g. A camera's max sync is 1/80ksec, with a random flash unit on top. You set it to that 1/80ksec and press the shutter; the camera triggers the flash, which starts discharging energy into the xenon gas tube. As the light intensity goes up, the image processor on the camera pulls down an internal !global_shutter_trigger signal to lock image data onto per-pixel RAM on the sensor. 1/80ksec later, the signal is released and the intensity is recorded for retrieval.

These faster sync speeds are useful for situations where a) you want to use flash && b) the room/environment is too bright for a flash. Now you can keep the ISO where you want it, the aperture open at f/2.0, etc., without overexposing the image (you can't do f/0.3 at 1/80k; aperture is limited to f/1.8 or slower with some body-recognized lenses attached, according to the product pages).
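The fully-open guarantee described above boils down to a single comparison, which a short sketch makes explicit (the 1/60 max sync is the hypothetical figure from the comment):

```python
def fully_lit(shutter_s, max_sync_s):
    """With a focal-plane or rolling shutter, the flash lights the whole frame
    only when the chosen shutter time is no shorter than the max sync time."""
    return shutter_s >= max_sync_s

print(fully_lit(1 / 250, 1 / 60))  # False: partially black frame
print(fully_lit(1 / 60, 1 / 60))   # True: whole frame exposed
```

With a global shutter the "fully open" window exists at any shutter speed the camera offers, which is how a sync figure like 1/80000 becomes possible at all.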


FYI you cannot use the abbreviation "1/80ksec". It may be interpreted as (1/80)ksec or 1/(80ksec). But you actually intended to mean (1/(80k))sec.

The correct ways to write it include 1/80000 s, 1/80 ms, or 12.5 μs.


Isn't it 12.5 microseconds?


Yes. Edited now.


When I was shooting professionally I certainly would have used the extreme speeds. Perhaps not down to 1/80000, but certainly higher than the "normal" flash sync speeds most cameras support today.

I used to photograph professional skateboarders. My favorite sort of shot would be of the rider doing something down a set of stairs using a telephoto (usually at f4 and up). I'd set up strobes in front of and behind the rider's line to get good rim lighting of their face and body as they nailed the trick.

That was to help isolate them in the shot against the background better, which, because of the f4 and need for a lot of light, we'd shoot in broad daylight.

That meant you needed very bright flashes and very fast shutter speeds. The best my camera could do at the time was 1/250 flash sync, which meant that was the fastest shutter speed I could use. 1/250 is the very bare minimum you can use in those conditions to freeze motion. If I could have shot above 1/500, that would have opened up so many other options for capturing those kinds of shots.

Others have explained that the flash duration itself doesn't need to be as quick as the shutter. If it helps, think of it this way: you can still capture sunlight at 1/80000, even though the sun's "flash" duration is pretty long.


The only reason you need a flash duration that short is if you really want to freeze something that is moving very quickly. You don't need the shutter duration to be that short as well, unless you also want to exclude light that hasn't been produced by your flash.


For me, I've often wanted to shoot with flash at 1/500th or so outdoors, but most mirrorless cameras only go up to like 1/160th. Back in the DSLR days, there were a few camera bodies that would go up to 1/500th. Its insane to see flash sync beyond 1/1000.


Note that a full power strobe will burn for about 1-1.5 ms, so below 1/1000 you start to lose the ability to balance flash versus ambient via shutter speed.
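A toy model makes that tradeoff concrete: treat the flash as a rectangular pulse of fixed duration (real strobes have an exponential tail, so this is only a sketch) and compute how much of the pulse fits inside the shutter window:

```python
def captured_fraction(shutter_s: float, flash_burn_s: float = 0.0015) -> float:
    """Fraction of a simplified rectangular flash pulse the shutter captures.

    Once the shutter is open for less time than the pulse burns, flash
    output is clipped too, so shutter speed no longer adjusts only the
    ambient light.
    """
    return min(1.0, shutter_s / flash_burn_s)

for denom in (250, 1000, 4000, 80_000):
    print(f"1/{denom}: {captured_fraction(1 / denom):.1%} of the flash captured")
```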


It's for when you want to shoot at huge apertures in full sunlight or directly into strong lights without having to grab an ND filter.


FWIW, you could do shutter speeds like that on the Canon PowerShot series of cameras all the way back to ~2005 thanks to the unofficial CHDK - the Canon Hack Development Kit.


Weird artifacts and rotary motion are for all PRACTICAL purposes gone with fast, stacked shutters as in the Nikon Z9. The global shutter's main benefit is flash sync. Grain has nothing to do with global shutter....


Even with a fast readout, PWMed LEDs can still present issues for cameras like the Z9. The global shutter removes that concern entirely with no need for fine-tuning; you still need some logic to catch the peak brightness, but you don't need to fine-tune frame rates by fractions of a hertz to reduce banding.

A global shutter has been the holy grail for a while for cameras like this in wide production. Yes, you can get a machine vision camera with 20 megapixels and a global shutter, but you'll need to strap a whole PC to it to do the data capture. And it'll be manual focus and manual aperture to boot.


As far as I can tell, OP isn't saying that grain has anything to do with global shutter; they're saying that rolling shutter produces artifacts which, unlike other photographic artifacts such as the often pleasant ones produced by film grain, are not at all aesthetically pleasing.


This is what I was getting at, yes.


Looking at the history of this problem, it first appeared with the adoption of focal plane shutters, which became widespread with Leica 35mm cameras and are the technology used by most SLRs. I know of earlier examples: the Speed Graphic large format camera uses a focal plane shutter, though I'm not sure when this was first used.

In the medium format world, lots of camera systems used leaf shutters, and these do not suffer from rolling shutter problems. The exposure is controlled by an aperture within the lens itself which opens the pupil to expose the film/sensor. The problem with leaf shutters is the minimum exposure tends to be limited to 1/500th of a second, although I believe there are some 1/1000th of a second leaf shutters in some lenses. Even some medium format SLRs have leaf shutters, for example, the Hasselblad 500 series cameras, and the Mamiya RZ67. Leaf shutters allow flash sync at all shutter speeds, which is why it was a popular technology for studio work.

In 35mm land there are some leaf shutter cameras; they tend to be fixed focal length designs. The Olympus 35RD has one, for example.

Anyhow, interesting tech. It's not relevant for the sort of photography I do, but I'm pretty sure the videographers will appreciate it!


The Leica Q has a leaf shutter that goes up to (down to?) 1/2000s.


Curious to see how it films a spinning airplane propeller.

Also, can the global shutter be turned off when you don’t need it, so you can get higher dynamic range?


It’s about time Sony threw in a PCIe5/6 slot on these cameras. Ability to simply insert an internal 2-4TB drive would be very welcome. These CFE cards are ridiculously expensive for what they do.


The latest Hasselblad already ships with an internal 1TB SSD, so there's hope.


A handgun bullet travels 1200 ft per second. At 1/80000 of a second, the bullet travels 0.18 inches.

It won't be very good, but you can actually shoot a bullet with this. With enough light of course.
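The arithmetic checks out (a throwaway helper, assuming the 1200 ft/s figure and the camera's 1/80,000 s exposure):

```python
def travel_inches(speed_fps: float, exposure_s: float) -> float:
    """Distance travelled during the exposure, in inches (12 in per ft)."""
    return speed_fps * exposure_s * 12

print(travel_inches(1200, 1 / 80_000))  # 0.18 inches
```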


It's never made sense to me that there's still not a single compact full frame camera. There are plenty of 35mm film cameras that you can literally slip in a pocket and still have pretty good lenses; what's stopping a compact full frame camera these days?

https://casualphotophile.com/2019/05/17/ricoh-gr1v-film-came...


One issue is that you need a bunch of space behind the focal plane in a digital camera (the thickness of the sensor + its mounting system + PCB + cooling gap) while film cameras can mount the film (~zero thickness) right against the back door of the body. Another is that people expect EVF now, which requires more envelope volume than a compact 35mm OVF. Generally people paying $$ for a camera expect it to have removable lenses which adds again to the physical envelope.

There have been some Sony camera with fixed 35mm focal length lens and FF sensor. Like this: https://www.bhphotovideo.com/c/product/1316682-REG/sony_cybe...


There are lots of super thin digital cameras, an inch thick or less (Coolpix, Cybershot, etc). An iPhone, which has a better screen and more capability and CPU power than a camera, is extremely thin; what 'cooling gap'? Are you saying that the thickness of the sensor goes up as sensor size increases?

And there are plenty of $$ cameras with no removable lens: the Fuji X100 series, Sony RX1, Leica Q, etc., like you linked.


I've got the smallest digital full frame (the Sony RX1). Compared with your linked analog Ricoh GR1v, the Sony is bigger because of its much more capable lens. Putting the Ricoh lens on the Sony would have limited the quality a lot. It's a question of "do the elements fit together?".


The Sony lens is so big because it's a lot faster. Sony also makes a pretty compact 35mm f/2.8 lens comparable to what you could fit on a point and shoot. The elements definitely do 'fit together'.

https://electronics.sony.com/imaging/lenses/full-frame-e-mou...


There are pocketable FF cameras with prime lenses and pocketable 1” sensor cameras with zooms. Those old 35mm film pocketable zooms had terrible apertures, while an RX100M3 with an f/1.8-2.8 aperture has a comparable effective aperture to some FF mirrorless cameras with a kit lens.
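That "comparable effective aperture" claim can be sanity-checked with the usual crop-factor approximation: multiply the f-number by the crop factor to get the full-frame depth-of-field / total-light equivalent (the 1-inch crop factor of ~2.7 is an assumption here):

```python
CROP_1_INCH = 2.7  # approximate crop factor of a 1-inch sensor vs. full frame

def ff_equivalent_aperture(f_number: float, crop: float) -> float:
    """Full-frame equivalent f-number (depth of field / total light)."""
    return f_number * crop

# RX100M3's f/1.8-2.8 zoom vs. a typical FF kit lens at f/3.5-5.6
print(ff_equivalent_aperture(1.8, CROP_1_INCH))  # ~f/4.9 equivalent
print(ff_equivalent_aperture(2.8, CROP_1_INCH))  # ~f/7.6 equivalent
```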


Compared to what modern 35mm cameras can deliver, these cameras had very bad resolution. Yes, you could build such a camera today with a 35mm sensor, but the results wouldn't really justify the effort. If you want good compact cameras, get one of the smaller mFT cameras like the E-M10 or OM-5 and mount a pancake lens on it.


“Bad resolution”??? 35mm film can deliver 87 MP, more than any FF digital camera I know of.

https://www.kenrockwell.com/tech/film-resolution.htm#:~:text....


Besides the fact that this number is optimistic and applies only to very high resolution films, the lenses limit the resolution. With the tiny lenses of those cameras, you wouldn't make good use of any large sensor. There is a reason that even mFT lenses are usually larger than those on old compacts. They used 35mm film only because it was the most widely available and cheapest material.


No, it can't. That's at a very low contrast rate, meaning that any detail at the 87MP level is going to be extremely muted and barely noticeable even if it was already high contrast. So basically you'd only ever get that resolution if you were taking picture of a black/white striped test chart, nothing else.

A better benchmark is MTF50, at which you can get about 50 line pairs per millimeter at ISO 50-100, which corresponds to about 10 megapixels. If we want to be generous we might go as low as MTF20, where you'll get about 18 megapixels of low contrast. I'm looking at Velvia 50, the same film as Ken (https://www.ishootfujifilm.com/uploads/VELVIA%2050%20Data%20...)

Ken also makes the point that the Bayer sensor reduces the resolution - and it does - but he forgets that film has different MTFs for different wavelengths, sometimes even as bad as half for red compared to green.
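The lp/mm-to-megapixel conversion behind these figures is straightforward (Nyquist: one line pair needs at least two pixels, across a 36x24 mm frame):

```python
def megapixels(lp_per_mm: float, width_mm: float = 36, height_mm: float = 24) -> float:
    """Megapixels needed to sample a given lp/mm over a 35mm frame.

    Nyquist: each line pair needs at least two pixels, so the required
    pixel density is 2 * lp_per_mm per millimetre in each direction.
    """
    px_per_mm = 2 * lp_per_mm
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

print(megapixels(50))   # ~8.6 MP: the "about 10 MP at MTF50" ballpark
print(megapixels(160))  # ~88 MP: roughly the 87 MP headline figure
```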


Tmax and Delta 100 resolve 150 lines per millimeter, and it's even way higher for certain specialty films like Adox CMS 20 with 250 lines. Though you probably need a drum scanner to resolve to that level. For general use, a high quality scan at 22 MP is about enough. That's with 35mm, though; you get much, much higher resolution with 120 film and larger.

https://jpbuffington.com/?p=167


I was comparing color films, but sure, speciality black and white films can approach the resolution of current high resolution color cameras, with the drawback that the lack of stabilization will make these resolutions only achievable with a tripod and of course only for static subjects, in which case pixel shift is probably a better option.

And yes, you can use medium or large format film, but that comes with very serious drawbacks, and with low-ISO film it is literally impossible without a tripod, in which case you might be better served by sensor shift instead.


Ken Rockwell is the most reliable source of non-fiction fiction on the internet.


There are a few Leicas (Q3), a Sony (RX) and some others.

On APS-C (Fuji) or compact-design-but-with-changable-lens there are a lot more options.


There's the Sigma fp, which is the smallest, I believe.


Sony a7C. Very compact...


It still has a bit of a thick back; compared to the thinner rangefinder film bodies it's a little bulkier.

What I don't like most about all my Sony cameras is the hand grip. It feels made for a robot, and I miss the nice organic shape of Canon and Nikon grips.


> [Sony] also says it will be introducing firmware updates to the a7S III and the a1, adding features such as breathing compensation. The a7S III will also gain DCI 4K capture. (...)

> All three cameras will also add C2PA authentication metadata. C2PA is a combination of the efforts of the Content Authentication Initiative and the separate Project Origin initiative.

Wait, they're adding image attestation in software to existing models?


Everyone wishes for new photography or videography features, and Sony could, with some effort, re-engineer some of these processing pipelines to do more (e.g. focus breathing compensation). But adding workflow capabilities like this is off the main, compute-intensive path of the device and easier to tack on.


Funnily enough I can imagine this awful DRM being one of the few things that would help us avoid deepfakes and similar in the future.


It's not DRM, it's optional provenance metadata.

So, think of it as cryptographically signing the image rather than encrypting the image.


Does this open the door to software lenses and optical beamforming? The progress of acquisition tech is just staggering. ADCs in the last 10 years have reached incredible sampling rates and compactness... The future looks fun for the linear algebra signal/image processing people to go back to the drawing board: now that you have teraflops and these sensors available, let's start again.


Nokia has had something similar for a while now, where the sensor readout was fast enough to prevent the rolling shutter effect. Either way, this was never a huge issue for photographers since you could always fall back to using mechanical shutters.

The more interesting part of this camera is whether this is a trend towards photography being the process of extracting a frame from a video rather than taking a series of shots.


I hope global shutter makes its way to other brands so we can get away from the banding and jelly cam that we put up with when using digital cameras.


That stuff is already mostly gone with fast stacked shutters as in the Nikon Z9. In fact, even cheaper cameras like the Panasonic G6 don't have noticeable banding.


How is this accomplished? Is there a memory cell close to every physical pixel? Did they sandwich two dies together?


Yes, it's a two-layer stacked CMOS sensor.


A camera like this will make it a lot easier (and probably cheaper!) to verify who wins competitions, such as road bike races where a 200km race can be won by millimeters.

And oh hey! dpreview.com lives!


I don't think so. You've posted a link to Wikipedia in an adjacent thread that clearly says that photo finish cameras do NOT use traditional 2D sensors, but a "1D" single-strip sensor with very high refresh rate. They intentionally use an effect similar to rolling shutter where instead of the currently exposed row moving up and down the sensor, there is just one row and the photographed subjects move against the camera.

Yes, you could capture a similar photo by aiming a global-shutter camera at the finish line and firing the shutter exactly when the first contender triggers a laser beam or something, but that would be much more complicated and expensive than just capturing a single slice of space and letting time flow along the perpendicular axis.
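The strip-photography trick is easy to simulate: take the single pixel column aligned with the finish line from each video frame and stack those columns side by side, so the output's horizontal axis is time rather than space. A toy sketch in plain Python (hypothetical photo_finish helper):

```python
def photo_finish(frames, line_x):
    """Build a strip photo from video frames.

    frames: list of frames, each a list of rows (height x width pixels).
    Each frame contributes only its column at line_x, so in the result
    the horizontal axis is the frame index (time), not space.
    """
    height = len(frames[0])
    return [[frame[y][line_x] for frame in frames] for y in range(height)]

# Toy example: 100 frames of a 4-row by 8-column "video"
frames = [[[0] * 8 for _ in range(4)] for _ in range(100)]
strip = photo_finish(frames, line_x=5)
print(len(strip), len(strip[0]))  # height kept, width = frame count
```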


>would be much more complicated and expensive than just capturing a single slice of space and letting time flow along the perpendicular axis

I'm not quite sure. You're replacing a custom rig and technicians, with an off the shelf part. Sounds like it could be easier and cheaper (my point).


Here's an example of how the rolling shutter affects bike race photo finishes,

https://inrng.com/2012/04/photo-finish-camera/


And here's what's really going on with photos like this,

https://en.wikipedia.org/wiki/Photo_finish#Strip_photography


This might be a strange thought, but the idea of the global shutter got me thinking about the promise of a global shutter and the strange implications it could lead to regarding special relativity. I'm not an expert but, as I understand, one aspect of special relativity is that there is no shared now. Like, the idea of simultaneity is different than how we experience it, because two observers can literally witness a different sequence of events and there are no contradictions. (I hope I got that right)

In a traditional shutter, each row of pixels is read one at a time, so there is an inherent arrow of time. However, with a global shutter, each row of pixels is "exposed" simultaneously, theoretically. So what I wonder is what the implications of this would be with respect to special relativity.

For example, if a global shutter camera were used to adjudicate a close race, would it be possible for one row of sensors to witness a contradictory result with another row? Sorry if I'm not explaining this right but I'd be curious to hear what a more knowledgable person's thoughts on the matter are.


There are no implications regarding special relativity. The propagation speeds of the triggering signal are well understood. If for some insane reason it's not considered fast enough to simply ignore delays from one side of the chip to the other (wherever the sync signal comes in), it's straightforward to put in delay lines to ensure everything triggers at the same time to an accuracy that depends on the amount of silicon you want to lay down for those delay lines relative to the capacitors.

Most likely a couple nanoseconds one way or another don't matter (even at timescales of 1/80,000 s) and those ever-so-slight differences are ignored.
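For a sense of scale, even a pessimistic estimate of trigger-signal skew (a signal crossing the full 36 mm sensor width at, say, half the speed of light, an assumed figure) is vanishingly small next to a 12.5 microsecond exposure:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def crossing_time_ns(width_mm: float, fraction_of_c: float = 0.5) -> float:
    """Time for a signal to cross the sensor width, in nanoseconds."""
    return width_mm * 1e-3 / (C * fraction_of_c) * 1e9

skew = crossing_time_ns(36)
print(skew)           # ~0.24 ns of worst-case skew
print(skew / 12_500)  # ~2e-5 of the 12.5 us (12,500 ns) exposure
```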


I meant in a theoretical sense, like as a thought experiment. If the shutter were an idealized, global shutter, where each row of pixels are exposed instantaneously, what would be the special relativity implications, if any?


Unless the sensor is half over the event horizon of a black hole I'm failing to understand how my answer doesn't short-circuit the thought experiment to ground.


If two global shutter cameras take a scene at the same time, they would have the same image. If they don't, then they wouldn't.

Global shutter cameras are also not new. CCD sensors are global shutter and we used to use CCD sensors for cameras decades ago until we switched over to CMOS for most applications due to their larger size and better dynamic range. I have a global shutter HVX200 from 2005 in my closet.


So the whole point of what I was saying is that special relativity proves that whether two spatially separated events occur at the same time is not absolute and it depends on the observer’s reference frame. My question has nothing to do with engineering and wasn’t intended practically.

The point was to imagine light from some event, reaching two parts of the sensor at the same time, being exposed at the same time (ie the global shutter), and yet asking if those two parts of the sensor could show conflicting versions of that same event, since they are nonetheless separated by space.


There is no problem. The sensor is a rigid assembly where all parts are motionless with respect to each other. Simultaneity is a well-defined concept for the sensor.


As for price:

> The a9 III will be available from Spring 2024 at a recommended price of $5999.

> The 300mm F2.8 GM OSS has two XD linear motors to allow it to focus fast enough to work with the a9 III.

> The 300mm F2.8 will also cost $5999 and will also be available in Spring 2024.


> Sony says [it] doesn't compromise on ISO performance or dynamic range. [..] Sony isn't quoting a DR figure for the camera.

Who are they fooling?


No one that bothers to look at the minimum ISO spec of 250.


The two products announced are mostly catered to professional photographers at the Olympics. The camera body and the lens are both very expensive, over $10,000 for the two combined.

That said, the global shutter is interesting, and will hopefully find its way into consumer grade products in coming years. Personally I'm excited about this "obsoleting" existing used cameras so that I can finally afford them.


For many it is irrelevant. No doubt Sony, and video professionals in the Sony camp, would enjoy another upgrade opportunity. But for normal people, it is not just the camera but also the lenses etc. one has to think about. Not to mention some of us still have F-mount lenses to migrate. (Or Canon... and many of these can be used on my Z9.)


How does this compare to the capabilities of DSLR with a mechanical shutter?


More capable mechanical focal plane shutters with a flash sync speed of 1/200s or faster do not show objectionable rolling shutter effects for most practical stills photography, including wildlife and most sports. Better mirrorless cameras use the same types of shutters.

Older film cameras may have had flash sync speeds of 1/60s, and rolling shutter was sometimes a problem with them.

Mirrorless cameras also have a fully electronic shutter, for silent shooting and shooting video. (Some DSLRs could also shoot video this way.) In all but a handful of very expensive cameras, it takes a long time to scan the sensor and rolling shutter is obtrusive – in many cameras it is as slow as 1/15s. In that handful of expensive cameras it is around 1/200s, like a fast mechanical shutter. Nikon’s expensive cameras don’t even have a mechanical shutter any more.

That last bit of speed, from 1/200s to the 0s of a global shutter, is detectable in certain examples Sony showed off in their presentation. Most of these examples relate to high-end sports photography.

Flash sync at any speed means flashes can be built more cheaply and cycle faster. Currently, to shoot faster than the flash sync speed the flash must emit a long burst instead of a single pulse. For this reason, some cameras used a leaf shutter, which can sync at any speed, but they were typically expensive to make and sometimes less reliable.

There are related problems to do with the refresh rate of LEDs in the field of view, which are solved by the global shutter and somewhat clumsily worked around in cameras without one by trying to sync the shutter with the refresh rate. Again, high-end sports photographers are the ones who usually encounter this problem.


So the global shutter will have no benefit today for most users.

In the future though it means we can do away with the mechanical shutter which in theory means smaller, cheaper, more reliable cameras.

However, the feature that captures photos while you hold the focus button down will, I think, be genuinely useful for pretty much everyone. Especially in family situations with kids, pets, etc.


> So the global shutter will have no benefit today for most users.

It will have a benefit for anyone who is also recording video using their DSLR, likely the majority of users. It completely eliminates rolling shutter. Now, is it a benefit that is worth paying $6,000 for? That is a good question! For most users, probably not.


> who is also recording video using their DSLR, likely the majority of users

I doubt that the majority of users are using their DSLR to film video.


> Now, is it a benefit that is worth paying $6,000 for?

This tech will certainly reach all levels of the consumer market. Plus being able to shoot with flash at any speed will indeed benefit tons of amateurs who will be able to do away with the stupid artificial limitations present for the last 150 years


> In the future though it means we can do away with the mechanical shutter

The future arrived a year ago: the Nikon Z9 and Z8 cameras have no mechanical shutter, and their electronic shutter readout is so fast that there are no practical issues capturing even fast motion.


When’s it getting put in a phone?


Very cool technology, for those who don’t want, need or can’t make creative use of the visual artefacts that come with other cameras.



