I'm confused: are languages like Python and Ruby single-threaded, unlike, say, Java? (Asking in the context of web apps.)
Solution 1:
Both Python and Ruby have full support for multi-threading. There are some implementations (e.g. CPython, MRI, YARV) which cannot actually run threads in parallel because they rely on a global interpreter lock, but that's a limitation of those specific implementations, not of the language. This is similar to Java, where there are also some implementations which cannot run threads in parallel, but that doesn't mean that Java is single-threaded.
Note that in both cases there are plenty of implementations which can run threads in parallel: Jython, IronPython, JRuby and IronRuby are just a few examples.
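For illustration, the threading API itself is the same no matter which implementation runs it; here is a minimal sketch using the standard threading module (the worker function is just a placeholder):

    import threading

    def worker(name):
        # Placeholder task; a real program would do actual work here.
        print("hello from", name)

    threads = [threading.Thread(target=worker, args=("thread-%d" % i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Whether those four threads actually execute in parallel is up to the implementation (Jython and IronPython can run them in parallel; CPython interleaves them), but the code is identical either way.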
The main difference between Clojure on the one side and Python, Ruby, Java, C#, C++, C, PHP and pretty much every other mainstream and not-so-mainstream language on the other side is that Clojure has a sane concurrency model. All the other languages hand you raw threads and shared mutable state, which we have known to be a poor concurrency model for at least 40 years. Clojure, OTOH, has a sane update model which allows it to offer not just one but several sane concurrency models to the programmer: atomic updates (atoms), software transactional memory (refs), asynchronous agents, concurrency-aware thread-local global variables (vars), futures, promises, dataflow concurrency, and possibly more in the future.
Solution 2:
A confused question with a lot of confused answers...
First, threading and parallel execution are different things. Python supports threads just fine; what the mainstream implementation doesn't support is running Python code in parallel across those threads. (In CPython, the implementation virtually everyone uses, only one thread can execute Python bytecode at a time; the many attempts to remove that restriction have so far failed.)
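A rough demonstration of the distinction, assuming CPython (the timings are illustrative, not benchmarks):

    import threading
    import time

    def burn(n=10_000_000):
        # Pure-Python busy loop; never releases the GIL voluntarily.
        while n:
            n -= 1

    start = time.perf_counter()
    burn(); burn()
    print("sequential:  %.2fs" % (time.perf_counter() - start))

    start = time.perf_counter()
    t1 = threading.Thread(target=burn)
    t2 = threading.Thread(target=burn)
    t1.start(); t2.start()
    t1.join(); t2.join()
    print("two threads: %.2fs" % (time.perf_counter() - start))

The threads run concurrently (they interleave), but the two-thread version takes about as long as the sequential one, because only one of them can execute bytecode at any instant.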
Second, this is largely irrelevant for web apps. You don't need Python backends to execute in parallel within the same process. You spawn a separate process for each backend worker, and those workers can handle requests in parallel because they aren't tied together at all.
Using threads for web backends is a bad idea. Why introduce the perils of threading--locking, race conditions, deadlocks--to something inherently embarrassingly parallel? It's much safer to tuck each backend away in its own isolated process, avoiding the potential for all of these problems.
(There are advantages to sharing an address space--it saves memory by sharing static code and data--but that can be had without threads, e.g. through copy-on-write pages in forked worker processes.)
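As a loose sketch of the process-per-worker idea described above, here is what it looks like with the standard library's multiprocessing module; handle_request is a hypothetical stand-in for a backend, and real deployments would typically get the same effect from a prefork server such as gunicorn or uWSGI:

    from multiprocessing import Pool

    def handle_request(request_id):
        # Each call runs in its own process: no shared state to lock,
        # and one worker's GIL never blocks another's.
        return "response for request %d" % request_id

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            print(pool.map(handle_request, range(8)))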
Solution 3:
CPython has a Global Interpreter Lock (GIL) which can reduce the performance of multi-threaded code in Python: only one thread may execute Python bytecode at a time, so CPU-bound threads end up taking turns rather than running simultaneously. Not all Python implementations use a GIL, so this does not apply to Jython, IronPython or certain other implementations.
The language itself fully supports threading and other asynchronous operations. Libraries implemented in C can also use native threads internally (and release the GIL while they work) without exposing any of that to Python code.
If you've heard anything negative about Python and threading (or that it doesn't support threading at all), it is probably from someone who ran into a situation where the GIL was the bottleneck.
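Conversely, the GIL is generally not a bottleneck for I/O-bound work, because CPython releases it during blocking calls; a minimal sketch, with time.sleep standing in for a network or database call:

    import threading
    import time

    def fake_io():
        time.sleep(1)  # the GIL is released while blocked here

    start = time.perf_counter()
    threads = [threading.Thread(target=fake_io) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Five one-second waits overlap and finish in roughly one second, not five.
    print("elapsed: %.2fs" % (time.perf_counter() - start))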