Can a pool of memcache daemons be used to share sessions more efficiently?
We are moving from a one-webserver setup to a two-webserver setup and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish sharing sessions between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path):
I replaced:
session.save_handler = files
with:
session.save_handler = memcache
Then on the master webserver I set the session.save_path to point to localhost:
session.save_path="tcp://localhost:11211"
and on the slave webserver I set the session.save_path to point to the master:
session.save_path="tcp://192.168.0.1:11211"
Job done, I tested it and it works. But...
Obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their session will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:
Master:
session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
Slave:
session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"
Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)?
Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol...
Assume: no sticky-sessions, round-robin load balancing, LAMP servers.
Solution 1:
Disclaimer: You'd be mad to listen to me without doing a tonne of testing AND getting a 2nd opinion from someone qualified - I'm new to this game.
The efficiency improvement proposed in this question won't work. The main mistake I made was to think that the order in which the memcached stores are defined in the pool dictates some kind of priority. This is not the case. When you define a pool of memcached daemons (e.g. using session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211") you can't know which store will be used. Data is distributed evenly, meaning that an item might be stored in the first, or it could be in the last (or it could be in both if the memcache client is configured to replicate - note it is the client that handles replication; the memcached server does not do it itself). Either way, using localhost as the first in the pool won't improve performance - there is a 50% chance of hitting either store.
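A rough sketch of why the pool order doesn't matter (my own illustration, assuming the pecl/memcache client and the hostnames from this question): the client hashes each key to pick a server, so putting localhost first gives it no priority.

$pool = new Memcache();
$pool->addServer('localhost', 11211);   // first in the pool
$pool->addServer('192.168.0.2', 11211); // second in the pool

// The key (here a session ID) is hashed to choose a server, so roughly half
// of all sessions end up on the remote box regardless of the order above.
$pool->set('sess_abc123', 'session data', 0, 1440);
var_dump($pool->get('sess_abc123')); // answered by whichever node the hash chose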
Having done a little testing and research I have concluded that you CAN share sessions across servers using memcache BUT you probably don't want to - it doesn't seem to be popular because it doesn't scale as well as using a shared database and it is not as robust. I'd appreciate feedback on this so I can learn more...
Ignore the following unless you have a PHP app:
Tip 1: If you want to share sessions across 2 servers using memcache:
Ensure you answered Yes to "Enable memcache session handler support?" when you installed the PHP memcache client and add the following to your /etc/php.d/memcache.ini file:
session.save_handler = memcache
On webserver 1 (IP: 192.168.0.1):
session.save_path="tcp://192.168.0.1:11211"
On webserver 2 (IP: 192.168.0.2):
session.save_path="tcp://192.168.0.1:11211"
Tip 2: If you want to share sessions across 2 servers using memcache AND have failover support:
Add the following to your /etc/php.d/memcache.ini file:
memcache.hash_strategy = consistent
memcache.allow_failover = 1
On webserver 1 (IP: 192.168.0.1):
session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
On webserver 2 (IP: 192.168.0.2):
session.save_path="tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
Notes:
- This highlights another mistake I made in the original question - I wasn't using an identical session.save_path on all servers.
- In this case "failover" means that should one memcache daemon fail, the PHP memcache client will start using the other one, i.e. anyone who had their session in the store that failed will be logged out. It is not transparent failover.
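To see what the failover is actually doing, here is a rough monitoring sketch (my own addition, assuming the pecl/memcache client): getServerStatus() reports whether the client currently considers a pool member usable.

$mc = new Memcache();
$mc->addServer('192.168.0.1', 11211);
$mc->addServer('192.168.0.2', 11211);

foreach (array('192.168.0.1', '192.168.0.2') as $host) {
    // getServerStatus() returns 0 once the client has marked a server as failed
    echo $host . ': ' . ($mc->getServerStatus($host, 11211) ? 'up' : 'down') . "\n";
}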
Tip 3: If you want to share sessions using memcache AND have transparent failover support:
Same as tip 2 except you need to add the following to your /etc/php.d/memcache.ini file:
memcache.session_redundancy=2
Notes:
- This makes the PHP memcache client write sessions to 2 servers. You get redundancy (like RAID-1): writes are sent to n mirrors and failed gets are retried on the mirrors. This means users do not lose their session if one memcache daemon fails.
- Mirrored writes are done in parallel (using non-blocking I/O), so performance shouldn't drop much as the number of mirrors increases. However, network traffic will increase if your memcache mirrors are distributed on different machines. For example, there is no longer a 50% chance of using localhost and avoiding network access.
- Apparently, the delay in write replication can cause old data to be retrieved instead of a cache miss. The question is whether this matters to your application - how often do you write session data?
- memcache.session_redundancy is for session redundancy, but there is also a memcache.redundancy ini option that can be used by your PHP application code if you want it to have a different level of redundancy.
- You need a recent version (still in beta at this time) of the PHP memcache client - version 3.0.3 from pecl worked for me.
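To illustrate the session vs application redundancy split in the notes above (a sketch only - it assumes memcache.redundancy can be changed at runtime, so check the pecl/memcache 3.x docs for your version):

// Session writes follow memcache.session_redundancy (set in memcache.ini above);
// your own cache writes follow memcache.redundancy.
ini_set('memcache.redundancy', '2'); // assumed to be changeable at runtime

$mc = new Memcache();
$mc->addServer('192.168.0.1', 11211);
$mc->addServer('192.168.0.2', 11211);

// This write should now be mirrored to both daemons in the pool.
$mc->set('app:expensive_query', 'cached result', 0, 300);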
Solution 2:
Re: Tip 3 above (for anyone else who happens to come across this via Google), it seems that, at least at present, you must use memcache.session_redundancy = N+1 for N servers in your pool for this to work - that seems to be the minimum threshold value that works. (Tested with PHP 5.3.3 on Debian stable, pecl memcache 3.0.6, two memcached servers: session_redundancy=2 would fail as soon as I turned off the first server in the save_path, while session_redundancy=3 works fine.)
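In other words, for the two-server pool used throughout this page, the working configuration would look something like this (same format as the memcache.ini snippets above; the redundancy value is N+1 with N = 2):

memcache.hash_strategy = consistent
memcache.allow_failover = 1
memcache.session_redundancy = 3
session.save_handler = memcache
session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"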
This seems to be captured in these bug reports:
- https://bugs.php.net/bug.php?id=58585
- https://bugs.php.net/bug.php?id=59664
- https://bugs.php.net/bug.php?id=60105
Solution 3:
Along with the php.ini settings shown above, ensure the following are set too:
memcache.allow_failover = 1
memcache.hash_strategy = consistent
Then you'll get full failover and client-side redundancy. The caveat with this approach is that if memcached is down on localhost, there will always be a read miss before the PHP memcache client tries the next server in the pool specified in session.save_path.
Just bear in mind that this affects the global settings for the PHP memcache client running on your web server.
Solution 4:
memcached doesn't work that way (please correct me if I'm wrong!)
If you want your application to have redundant session storage, you have to create something that adds/alters/deletes entries in both memcached instances. memcached doesn't handle this; the only thing it provides is key/value (hash) storage. So no replication, no synchronization, nothing, nada.
I hope I am not wrong on this matter, but this is what I know of memcached, been a few years since I touched it.
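To make the point concrete, here is a rough sketch (my own illustration, not production code) of the kind of application-level dual write you would have to build yourself, because memcached will not do it for you:

// Hypothetical helper: write every entry to two independent memcached instances.
class DualWriteCache
{
    private $a;
    private $b;

    public function __construct($hostA, $hostB, $port = 11211)
    {
        $this->a = new Memcache();
        $this->a->addServer($hostA, $port);
        $this->b = new Memcache();
        $this->b->addServer($hostB, $port);
    }

    public function set($key, $value, $expire = 0)
    {
        // Write to both; the application decides how to handle a partial failure.
        $okA = $this->a->set($key, $value, 0, $expire);
        $okB = $this->b->set($key, $value, 0, $expire);
        return $okA && $okB;
    }

    public function get($key)
    {
        // Try the first instance, fall back to the second.
        $value = $this->a->get($key);
        return ($value !== false) ? $value : $this->b->get($key);
    }

    public function delete($key)
    {
        $okA = $this->a->delete($key);
        $okB = $this->b->delete($key);
        return $okA && $okB;
    }
}

$cache = new DualWriteCache('192.168.0.1', '192.168.0.2');
$cache->set('user:42:profile', 'serialised profile data', 600);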
Solution 5:
memcached doesn't replicate out of the box, but repcached (a patched memcached) does. However, if you're already using MySQL, why not just use its master-master replication functionality and get the benefit of full data replication?
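If you do go the MySQL route, a minimal sketch of a database-backed session handler (my own illustration; it assumes a PDO connection and a hypothetical sessions table with id, data and updated_at columns) might look like this:

// CREATE TABLE sessions (id VARCHAR(128) PRIMARY KEY, data BLOB, updated_at INT);
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

session_set_save_handler(
    function () { return true; },               // open
    function () { return true; },               // close
    function ($id) use ($pdo) {                 // read
        $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    },
    function ($id, $data) use ($pdo) {          // write
        $stmt = $pdo->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
        return $stmt->execute(array($id, $data, time()));
    },
    function ($id) use ($pdo) {                 // destroy
        return $pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute(array($id));
    },
    function ($maxlifetime) use ($pdo) {        // gc
        return $pdo->prepare('DELETE FROM sessions WHERE updated_at < ?')->execute(array(time() - $maxlifetime));
    }
);
session_start();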
C.