Hacker News

> Really, I don't see what you like about Halmos definition of the determinant...

Halmos shows (it is almost trivial) that the space W_n of anti-symmetric n-forms on L_n is one-dimensional. Because of this, W_n(Ae1, ..., Aen) = const * W_n(e1, ..., en), and this scalar is called the determinant of A. It has all the properties you would ascribe to a volume, e.g. the volume spanned by collinear column vectors is zero. This is a nice bridge to the geometry of L_n. Also, in the space of just one page (p. 99) he introduces the determinant and proves its main properties, such as det(A*B) = det(A)*det(B) and therefore det(A^-1) = 1/det(A).
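A quick numerical sketch of this characterization (not Halmos' proof; the matrices below are made-up examples): build an alternating n-form as the usual signed permutation sum, define det as the ratio W(Ae1, ..., Aen) / W(e1, ..., en), and check multiplicativity.

```python
from itertools import permutations
from math import prod

def parity(p):
    # Sign of a permutation via inversion count.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def alt_form(vectors):
    # The (unique up to scale) alternating n-form on R^n:
    # sum over permutations p of sign(p) * prod_i vectors[i][p(i)].
    n = len(vectors)
    return sum(parity(p) * prod(vectors[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def det(M):
    # The scalar const: W(Ae1, ..., Aen) / W(e1, ..., en),
    # with the columns of M as the images A e_i of the standard basis.
    n = len(M)
    cols = [[M[r][c] for r in range(n)] for c in range(n)]
    eye = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return alt_form(cols) / alt_form(eye)

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 1.0]]
B = [[1.0, 0.0, 2.0], [2.0, 1.0, 0.0], [0.0, 1.0, 1.0]]

print(det(A))             # 7.0
print(det(B))             # 5.0
print(det(matmul(A, B)))  # 35.0 = det(A) * det(B)
```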




The determinant of n vectors {vi} relative to a particular basis {ei} in an n-dimensional vector space is the scalar-valued ratio:

( v1 ∧ v2 ∧ ··· ∧ vn ) / ( e1 ∧ e2 ∧ ··· ∧ en )

The signed volume per se is just the n-vector: v1 ∧ v2 ∧ ··· ∧ vn

Generally, working with the wedge product is more pleasant and conceptually clearer than working with determinants. Among other things, we don't need to make an arbitrary choice of basis or unit n-vector. There's also no reason to limit ourselves to n terms: v1 ∧ v2 is a perfectly reasonable quantity on its own, etc.
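For instance (a sketch with made-up vectors, coordinates taken relative to some basis): a wedge of just two vectors in R^3 can be represented by its components on the basis 2-blades e1∧e2, e1∧e3, e2∧e3, which are the 2x2 minors of the pair.

```python
from itertools import combinations

def wedge2(v1, v2):
    # Components of the bivector v1 ∧ v2 on the basis 2-blades
    # e_i ∧ e_j (i < j): the 2x2 minors of the two vectors.
    return {(i, j): v1[i] * v2[j] - v1[j] * v2[i]
            for i, j in combinations(range(len(v1)), 2)}

v1 = [1.0, 0.0, 0.0]
v2 = [1.0, 1.0, 0.0]
print(wedge2(v1, v2))  # {(0, 1): 1.0, (0, 2): 0.0, (1, 2): 0.0}
```

The only nonzero component lies in the e1∧e2 plane, with magnitude 1: the area of the parallelogram the two vectors span.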


The beauty of Halmos' derivation, which is similar but not identical to the exterior-algebra (wedge product) approach, is that it is basis-independent: the determinant by his definition is a scalar invariant across all bases. It is very geometric in nature.


The determinant inherently involves a basis (or at the very least a choice of unit n-vector). Or if you like you can think of the determinant as a function of a square matrix (grid of numbers), rather than a function of a collection of vectors.

When you take the basis out, that's the wedge product, which inherently includes the orientation. Conveniently, there is only one degree of freedom for n-vectors in n-dimensional space. When we take the quotient of two n-vectors in n-dimensional space we therefore get a scalar.


Let me sketch a way to get the determinant basis-free:

Say we live in an n-dimensional vector space V and have an endomorphism f : V -> V. Now, we consider the pullback [1] f* : Λⁿ(V) -> Λⁿ(V) induced by f on the vector space of n-linear alternating forms Λⁿ(V) on V.

This is just an endomorphism of Λⁿ(V). However, Λⁿ(V) is one-dimensional, so this endomorphism is necessarily multiplication by a scalar: f* has an eigenvalue (!). This eigenvalue is what we usually call the determinant of f.

This is completely independent of any choice of basis, orientation, or an inner product.

[1] That is, given an element w ∈ Λⁿ(V) and an arbitrary n-tuple v₁, ..., vₙ of vectors from V, we have (f*w)(v₁, ..., vₙ) = w(f(v₁), ..., f(vₙ))
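Numerically (a sketch; the matrix and the scale factors below are made-up choices): take any nonzero alternating n-form w — the `scale` parameter stands in for that arbitrary choice — evaluate the pullback as in [1], and the eigenvalue comes out the same regardless of which w you picked.

```python
from itertools import permutations
from math import prod

def parity(p):
    # Sign of a permutation via inversion count.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def make_form(scale):
    # Λⁿ(V) is one-dimensional: every alternating n-form is a scalar
    # multiple of the permutation-sum form. `scale` picks which one.
    def w(vs):
        n = len(vs)
        return scale * sum(parity(p) * prod(vs[i][p[i]] for i in range(n))
                           for p in permutations(range(n)))
    return w

def apply(F, v):
    # The endomorphism f, represented here by a matrix for computation.
    return [sum(F[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

F = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 2.0]]  # det = -2
vs = [[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

for scale in (1.0, 3.0):
    w = make_form(scale)
    pullback = w([apply(F, v) for v in vs])  # (f*w)(v1, v2, v3)
    print(pullback / w(vs))                  # -2.0 each time
```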


> and have an endomorphism f

And the "outermorphism" f̱ of your linear transformation, when restricted to its action on an arbitrary pseudoscalar, returns another pseudoscalar. Since the pseudoscalars form a one-dimensional space, that restriction is necessarily a scaling operation.

So what we could say in that case is that f̱(p) / p = d (some scalar, the "determinant" of f), where p is any pseudoscalar p = v1 ∧ v2 ∧ ··· ∧ vn.

This turns out to be about the same as what I wrote a few comments upthread. We are just dealing with

f̱( v1 ∧ v2 ∧ ··· ∧ vn ) / ( v1 ∧ v2 ∧ ··· ∧ vn ) = d

= ( f(v1) ∧ f(v2) ∧ ··· ∧ f(vn) ) / ( v1 ∧ v2 ∧ ··· ∧ vn )

instead of ( v1 ∧ v2 ∧ ··· ∧ vn ) / ( e1 ∧ e2 ∧ ··· ∧ en ) = d

And now we are talking about a property of a linear transformation instead of a property of a collection of n vectors.

In many practical situations, though, an oriented quantity like v1 ∧ v2 ∧ ··· ∧ vn is more useful than a scalar ratio d.
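A small numerical illustration of the distinction (the matrix and vectors here are made-up examples): in R^3, the coefficient of a trivector relative to e1 ∧ e2 ∧ e3 is the scalar triple product, so both ratios are easy to compute. The first is a property of the chosen vectors; the second is a property of f alone.

```python
def triple(a, b, c):
    # Coefficient of a ∧ b ∧ c relative to e1 ∧ e2 ∧ e3
    # (the scalar triple product).
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def apply(F, v):
    # Apply the linear transformation F (as a matrix) to a vector.
    return [sum(F[i][j] * v[j] for j in range(3)) for i in range(3)]

F = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # det = 2
vs = [[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]]
fvs = [apply(F, v) for v in vs]

d1 = triple(*vs)                 # (v1∧v2∧v3)/(e1∧e2∧e3): depends on the vs
d2 = triple(*fvs) / triple(*vs)  # f(v1)∧f(v2)∧f(v3) / (v1∧v2∧v3): det of F
print(d1, d2)                    # 3.0 2.0
```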



