Tuning (and understanding) table_cache in MySQL
Solution 1:
From the MySQL documentation:
For example, for 200 concurrent running connections, you should have a table cache size of at least 200 × N, where N is the maximum number of tables per join in any of the queries which you execute. You must also reserve some extra file descriptors for temporary tables and files.
So if your application has a query that joins 4 tables and you want to be able to handle 200 concurrent connections, that statement implies a table_cache of at least 200 × 4 = 800.
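As a rough sketch of how you might apply that value (assuming MySQL 5.1.3 or later, where the variable is named table_open_cache; older versions call it table_cache):

    -- Raise the table cache at runtime; the change is lost on restart:
    SET GLOBAL table_open_cache = 800;

    -- To make it permanent, set it in my.cnf under [mysqld]:
    --   table_open_cache = 800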
As far as memory usage goes, I don't have exact numbers; I suspect it will depend on the size of the tables being cached.
Solution 2:
You should monitor the Opened_tables status variable and see how quickly it increases. If it grows significantly faster than you create new tables (including temporary ones), then your table cache may be too small.
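For example (a minimal sketch, assuming the MySQL 5.1+ variable name table_open_cache), run this a few minutes apart and compare the values:

    -- Cumulative count of tables the server has had to open since startup:
    SHOW GLOBAL STATUS LIKE 'Opened_tables';

    -- Tables currently held open versus the configured cache size:
    SHOW GLOBAL STATUS LIKE 'Open_tables';
    SHOW GLOBAL VARIABLES LIKE 'table_open_cache';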
table_cache should always - well, mostly anyway - be significantly bigger than the total number of tables on the server; otherwise it will keep opening and closing tables.
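A quick sanity check (assuming you can query information_schema) is to compare the table count against the cache size shown above:

    -- Total number of tables the server knows about:
    SELECT COUNT(*) AS total_tables FROM information_schema.tables;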
I can't see how you could get a 2% cache hit rate unless you were measuring just after a server restart or issuing FLUSH TABLES very often (relative to the number of queries). Normally the table cache hit rate should be around 99.9%; otherwise performance will suck.
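If you are on MySQL 5.6.6 or later (an assumption; older versions don't expose these counters), you can compute the hit rate directly from the status counters:

    -- hit rate = hits / (hits + misses); it should be very close to 100%:
    SHOW GLOBAL STATUS WHERE Variable_name IN
        ('Table_open_cache_hits', 'Table_open_cache_misses');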
Don't do a FLUSH TABLES if you can avoid it; it blows the cache away.
Opening tables is expensive because the server has to read the table's .frm file. With MyISAM it's significantly worse than with other engines: when the server closes a table, it also throws out all the blocks in the key cache that came from that table's indexes. So closing a table dumps its indexes from the key cache == not good! Other engines keep their cached blocks but still need to reread metadata and allocate some structures.
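If you want to watch for that effect on a MyISAM-heavy server, the key cache counters give a rough indication (a sketch only; these are server-wide numbers, not per table):

    -- Key cache state; Key_blocks_unused climbing right after tables are
    -- closed suggests their index blocks were evicted from the key cache:
    SHOW GLOBAL STATUS LIKE 'Key_blocks_unused';
    SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';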