People who find code.interact useful should check out IPython (an interactive Python shell with autocompletion and a lot of other features).
Here is how I launch a shell for my projects (it tries to use IPython and falls back to code.interact if that isn't available): http://paste.plurk.com/show/17110/
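In case the paste link rots, the core of it is just a try/import fallback -- something like this (a rough sketch; IPython's embedding API has moved around between versions, so treat IPython.embed as an assumption about a reasonably recent release):

import code

def launch_shell(namespace):
    # Prefer IPython if it's installed; otherwise fall back to the
    # standard library's plain interactive console.
    try:
        import IPython
        IPython.embed(user_ns=namespace)   # IPython >= 0.11
    except ImportError:
        code.interact(banner='IPython not available; plain shell.',
                      local=namespace)

launch_shell({'answer': 42})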
IPython is great and I used to be a huge fan before I found Spyder (formerly Pydee). It is somewhat a matter of taste, but when working interactively I think Spyder is one of the best shells. It is available at http://code.google.com/p/spyderlib/
Not that I know of. In the past I've allowed my process (which was essentially an HTTP-based app server) to evaluate commands from a secure source, then built a Python interpreter front-end that sends what you type to it over HTTP -- so you get a pseudo-shell. At another point I set up a signal handler, e.g. on SIGUSR2, to essentially do the code.interact() thing from the article -- so when you need to, you can kill -USR2 the process and get into a shell. The first method lets you keep the server running, while the second one freezes it while you debug.
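A minimal sketch of the signal approach (it only works while the process still has a usable terminal, since code.interact reads and writes stdin/stdout; the handler name is made up):

import code
import signal

def debug_handler(signum, frame):
    # Merge the interrupted frame's globals and locals so you can poke
    # at whatever the process was doing when the signal arrived.
    namespace = dict(frame.f_globals)
    namespace.update(frame.f_locals)
    code.interact(banner='Got SIGUSR2; Ctrl-D resumes the process.',
                  local=namespace)

signal.signal(signal.SIGUSR2, debug_handler)
# ... long-running server loop here ...

Then kill -USR2 <pid> from another terminal freezes the server and opens the shell on the server's own terminal.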
There are also gdb macros floating around -- you can attach gdb (the C-level debugger) to a Python process and then examine the stack through those macros, but the interactivity is limited (unless someone has since built it up more). That lets you break in at any moment. See http://wiki.python.org/moin/DebuggingWithGdb
In many cases where a production process is doing mysterious things, running strace (or perhaps ltrace) on it can give a good hint as to what it is doing, together with lsof to see which files it's reading and writing.
It's a package for connecting a GUI shell (built on wxPython) to a Python process spawned by the multiprocessing module. It's documented and MIT-licensed.
WinPdb will let you do this. It lets you start a process and then attach a graphical debugger to it over TCP. You can let the process run, break in and examine state whenever you want, set breakpoints, conditionals, etc. Great for debugging things like FCGIs, FUSE plugins, etc.
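Under the hood that's rpdb2, and if memory serves you can also embed it and wait for the GUI to attach -- roughly (the password is whatever you'll type into WinPdb when connecting):

import rpdb2

# Pause here and wait for a WinPdb/rpdb2 front-end to attach with the
# same password; execution continues under the debugger once it does.
rpdb2.start_embedded_debugger('some_password')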
Debugging Python in a server environment is a different kettle of fish, though. I keep running into situations where something goes wrong beneath the surface and there is absolutely no hint of where the problem lies. That's one of the most frustrating parts of Django/Python development as far as I can see.
In PHP it is a very rare occurrence to get an error that does not immediately pinpoint the problem spot. In Django/Python, if you get an error message at all, chances are it will send you off on an hour-plus tour of the documentation trying to figure out what is up.
For fun, give a foreign key an alternate name that ends in _id; you will get errors that have absolutely no bearing on the location of the problem.
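If it's the gotcha I'm thinking of, the collision is with the implicit _id column Django generates for every ForeignKey. A hypothetical model (names made up) showing it:

from django.db import models

class Ticket(models.Model):
    # Django stores a ForeignKey named "owner" in a column called
    # "owner_id", so naming the field itself "owner_id" means the
    # attribute holds the related object, the raw key lives in
    # "owner_id_id", and the resulting errors point nowhere useful.
    owner_id = models.ForeignKey('auth.User', on_delete=models.CASCADE)

(on_delete is only required on newer Django versions.)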
re: logging vs. print. If you do get stuck debugging something that uses print() for all its logging, remember that in extremis you can still redirect it elsewhere:
import sys

old_stdout = sys.stdout            # store the real stdout
logfile = open('/some/file', 'a')
sys.stdout = logfile               # redirect output to the file

# ...code you're debugging here...

sys.stdout = old_stdout            # restore the real stdout
logfile.close()                    # flush what the prints wrote
[Obviously using logging from the start is _far_ preferable]
hmm, it would be really handy if you could connect code.interact to a socket. Then when an error is raised you could telnet to the process and do a post-mortem on it...
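You can get most of the way there with interact's readfunc argument plus a stdout swap. A rough single-client sketch (untested in anger; the port and names are made up, and a real server would call this from an exception handler or a background thread):

import code
import socket
import sys

def serve_shell(port=4444, namespace=None):
    # Listen on localhost and hand the first telnet client a shell.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(('127.0.0.1', port))
    listener.listen(1)
    conn, _ = listener.accept()
    stream = conn.makefile('rw', buffering=1)   # line-buffered text view

    def read_line(prompt=''):
        # code.interact calls this instead of input(); EOFError ends it.
        stream.write(prompt)
        stream.flush()
        line = stream.readline()
        if not line:                            # client hung up
            raise EOFError
        return line.rstrip('\r\n')

    old_out, old_err = sys.stdout, sys.stderr
    sys.stdout = sys.stderr = stream            # shell output -> socket
    try:
        code.interact(banner='Remote shell; Ctrl-D detaches.',
                      readfunc=read_line, local=namespace or {})
    finally:
        sys.stdout, sys.stderr = old_out, old_err
        stream.close()
        conn.close()
        listener.close()

Call serve_shell(namespace=locals()) from an except block and then telnet localhost 4444 gets you your post-mortem.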
Is there a way to attach a Python shell to a running process? I saw something similar for Ruby but could not find a Python equivalent.