As background: we host userpics on the east coast at a place where we can get bandwidth really cheap, and we keep all the active userpics there. When a request comes in for a file that's missing, the east coast server fetches the canonical copy from the west coast, where the rest of the servers are, and keeps it on the east coast until it goes inactive again. That's not slow in itself, but combined with some other historical problems, it's not a wonderful situation.
The filenames on disk have no extensions (jpg/gif/png), so a program (mod_perl code or mod_mime_magic) has to sniff the type from the file contents. We can't use TUX (the in-kernel web server) because it has no mod_mime_magic equivalent, so we were using mod_proxy in front of mod_perl, with mod_perl doing all the path translation and mime checking, and mod_proxy doing the buffering.
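The sniffing itself is cheap: each of the three formats starts with a fixed signature in the first few bytes. A minimal sketch in Python (function name is mine, not anything from our codebase):

```python
def sniff_image_type(data):
    """Guess the mime type of an image from its leading magic bytes."""
    if data.startswith(b"\xff\xd8\xff"):        # JPEG: SOI marker + 0xFF
        return "image/jpeg"
    if data[:6] in (b"GIF87a", b"GIF89a"):      # GIF: version header
        return "image/gif"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):   # PNG: 8-byte signature
        return "image/png"
    return None                                  # unknown format
```

Reading the first 8 bytes of the file is enough to pick the Content-Type, which is all mod_mime_magic was really doing for us here.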
But mod_perl is a pig for serving images, and even though we can run thousands of mod_proxy httpds with lingerd, we can't run too many mod_perl ones before the pain kicks in.
Our new solution is to use mod_rewrite with an external rewrite script (docs) that either tells apache where the file is on disk, or tells apache to redirect the client to us on the west coast, while the script forks off a child to fetch the file itself for local storage. No mod_perl at all.
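The protocol for an external rewrite program is dead simple: apache writes one lookup key per line to the script's stdin and reads one answer per line from its stdout (so output has to be flushed per line or apache hangs). A rough sketch of the idea in Python, with made-up paths and hostnames (`/var/userpics`, `pics.example.com`, and the function names are all hypothetical):

```python
import os
import sys
import urllib.request

PIC_ROOT = "/var/userpics"              # hypothetical local cache root
CANONICAL = "http://pics.example.com"   # hypothetical west coast origin

def answer(key, root=PIC_ROOT):
    """Return the rewrite-map reply for one lookup key: a local
    filesystem path if we have the file, else the canonical URL."""
    local = os.path.join(root, key)
    if os.path.isfile(local):
        return local
    return CANONICAL + "/" + key

def fetch_for_later(key, root=PIC_ROOT):
    """Pull the canonical copy down so the next hit is served locally."""
    urllib.request.urlretrieve(CANONICAL + "/" + key,
                               os.path.join(root, key))

if __name__ == "__main__":
    # RewriteMap "prg:" protocol: one key per stdin line, one reply per
    # stdout line, flushed immediately so apache doesn't block on us.
    for line in sys.stdin:
        key = line.strip()
        reply = answer(key)
        sys.stdout.write(reply + "\n")
        sys.stdout.flush()
        if reply.startswith("http") and os.fork() == 0:
            # Child process fetches in the background; the parent goes
            # straight back to answering lookups.
            fetch_for_later(key)
            os._exit(0)
```

Wiring it up is a `RewriteMap picmap "prg:/path/to/script"` directive plus a `RewriteRule` that substitutes `${picmap:$1}`; the real script would also need error handling so a failed fetch doesn't leave a truncated file in the cache.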
The nice solution one day would be to use a user-space TUX daemon that checks if the file's on disk and lets TUX handle it, otherwise passes through to a script which would fetch it. Unfortunately, TUX docs are somewhat lacking, and the real solution would be to not use TUX and use a nicer webserver with modern event notification and network I/O APIs.
But in any case, our new solution will be a lot faster: no more requests queueing up behind a small pool of heavyweight mod_perl processes.