If you stick to integer math with 64-bit floating point numbers, everything actually works out pretty reasonably: a double can represent every integer exactly up to 2^53, so in particular all 32-bit ints are exact. As long as you use floor/ceil/round where appropriate, you get most of the benefits of integer arithmetic, minus the type checking (which you lose in a dynamic language anyway).
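A minimal sketch of what that looks like in practice, in TypeScript (where `number` is a 64-bit float, as in JavaScript); the `idiv`/`imod` helper names are my own, not a standard API:

```ts
// Doubles represent every integer exactly up to 2^53.
const MAX_SAFE = Number.MAX_SAFE_INTEGER; // 2^53 - 1 = 9007199254740991

// Integer division: divide, then floor (there is no integer `/`).
function idiv(a: number, b: number): number {
  return Math.floor(a / b);
}

// Remainder consistent with floored division.
function imod(a: number, b: number): number {
  return a - idiv(a, b) * b;
}

console.log(idiv(7, 2));  // 3, not 3.5
console.log(imod(-7, 2)); // 1 (floored modulo)
console.log(MAX_SAFE + 1 === MAX_SAFE + 2); // true: exactness is lost past 2^53
```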
Some people feel strongly against this, but for a dynamically typed language I enjoy its simplicity (in static languages, type systems are meant to enforce constraints on types, so multiple number types can be useful). It has done well in [at least] JavaScript and Lua, and while the former is anything but an example of how to design a programming language, IMHO, the latter may be.
So no, it wouldn't be a bad idea to make all numbers 64-bit floats. It sounds like you'll likely be compiling to JavaScript, in which case using 64-bit floats means you have one less problem to deal with while implementing your language.