I pulled out my copy of Knuth vol 2 to answer this (pages 190-193):
* You can negate a number by interchanging 1 and 1̅ (the digit with value −1).
* The sign of a number is given by its most significant non-zero trit.
* Rounding to the nearest integer is the same as truncating the fractional trits (see the sketch after this list).
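A minimal sketch of those three properties in Python (function names are mine, not Knuth's); trits are stored most significant first with values in {-1, 0, 1}:

```python
def to_balanced_ternary(n):
    """Integer -> list of trits, most significant first."""
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:            # a digit of 2 becomes -1 plus a carry
            r = -1
        n = (n - r) // 3
        trits.append(r)
    return trits[::-1]

def from_balanced_ternary(trits):
    value = 0
    for t in trits:
        value = value * 3 + t
    return value

def negate(trits):
    """Property 1: negation is just swapping 1 and -1."""
    return [-t for t in trits]

def sign(trits):
    """Property 2: sign is the most significant non-zero trit."""
    return next((t for t in trits if t != 0), 0)

assert from_balanced_ternary(negate(to_balanced_ternary(42))) == -42
assert all(sign(to_balanced_ternary(n)) == (n > 0) - (n < 0)
           for n in range(-100, 101))

# Property 3, by hand: 7/9 = 1.T1 in balanced ternary (T = -1), i.e.
# 1 - 1/3 + 1/9; dropping the fractional trits leaves 1, the nearest
# integer to 0.777...  This works because the fractional trits can sum
# to at most 1/2 in magnitude, so truncation always rounds to nearest.
```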
By the way, if we had balanced ternary computers today, you wouldn't have programming languages with signed and unsigned integers. They'd be the same thing.
Of course the above might be academic if it's more difficult to implement in CMOS ...
Representing binary numbers with an explicit sign bit and the rest unsigned gets you an even better version of that; this is how floating point handles the sign.
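For illustration, a small sketch of that sign/magnitude split using IEEE 754 doubles (helper names are mine): negation is just toggling the one sign bit, since the remaining 63 bits are an unsigned magnitude.

```python
import struct

def float_bits(x):
    """Raw 64-bit pattern of a double (IEEE 754: 1 sign bit + unsigned rest)."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def flip_sign(x):
    """Negate a double by toggling only bit 63, the sign bit."""
    return struct.unpack("<d", struct.pack("<Q", float_bits(x) ^ (1 << 63)))[0]

assert flip_sign(3.5) == -3.5
assert float_bits(-0.0) == 1 << 63   # magnitude is unsigned, so even 0 carries a sign
```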
I'm skeptical of the value of ternary. The key to the success of binary computers is that binary has great noise immunity; having a _single_ threshold is way simpler to get working than two or more. (Yet we do go there for NAND Flash, but note how speed and endurance worsen dramatically as thresholds are added).
The "most economical radix" is often cited but it's a red herring - it ignores practical concern of greater importance.
Suppose we had robust, practical 3-level logic elements; then we'd potentially need fewer of them for the same width of computation, and thus lower energy consumption, and multiply/divide could probably be done in fewer cycles.
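To put a rough number on "fewer of them": covering the same range of values as n bits takes ⌈n · log 2 / log 3⌉ trits, e.g.:

```python
import math

def trits_for_bits(n):
    """Trits needed to cover the same range of values as n bits."""
    return math.ceil(n * math.log(2) / math.log(3))

print(trits_for_bits(64))   # -> 41: a 64-bit range fits in 41 trits
```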