Common PHP setup insecurity?

I work in systems administration at a university and just stumbled across something that is probably common, but came as quite a shock to me.

All public_html directories and web areas are stored on AFS, with read permission granted to the web servers. Since users are allowed to have PHP scripts in their public_html, this means they can access each other's files from within PHP (and the main web files!).

Not only does this render any .htaccess password protection completely useless, it also lets users read PHP source files containing MySQL database passwords and similar sensitive information. And if they find directories where the web servers have write access (e.g. for personal logs or for saving submitted form data), they can plant files in other people's accounts.

A simple example:

<?php
  // Read another user's .htpasswd straight off AFS and dump it to the browser
  header("Content-type: text/plain");
  print file_get_contents("/afs/example.com/home/smith/public_html/.htpasswd");
?>
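
A script in one user's public_html can just as easily plant files in another user's web-writable directories. A sketch of that variant, with a made-up target path:

<?php
  // Hypothetical path: a directory some other user left writable to the web servers
  $target = "/afs/example.com/home/jones/public_html/logs/planted.txt";
  file_put_contents($target, "written from someone else's PHP script\n");
?>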

Is this a common problem? And how do you typically solve it?

UPDATE:

Thanks for the input. Unfortunately, it seems there is no simple answer. In a big shared environment such as this, users should probably not be given this much freedom. The best approach I can think of is to set open_basedir in the main configuration for all public_html directories, run suPHP, and only allow clean PHP (no CGI scripts, no external commands via backticks, etc.).

Changing the policy like this would break a lot of things though, and quite possibly make users grab their pitchforks and chase us... I will discuss it with my colleagues and update here if we make a decision on how to change the setup.
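
For reference, the "no external commands via backticks" part maps onto PHP's disable_functions directive (the backtick operator is shell_exec internally), which can only be set in php.ini. A rough sketch of what that might look like, not something we have actually deployed, and the function list is only a starting point:

  ; php.ini: disable shell access globally (list is illustrative, not exhaustive)
  disable_functions = exec,passthru,shell_exec,system,proc_open,popen

  ; open_basedir still has to be set per user (per vhost or per directory),
  ; since a single global value would not keep users out of each other's areas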


One can use suPHP, which runs the PHP script with the UID of its owner.

http://www.suphp.org
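
If it helps, turning it on typically comes down to an Apache snippet along these lines; the handler name and details depend on how mod_suphp was built and configured, so treat this as a sketch:

  <IfModule mod_suphp.c>
      AddHandler x-httpd-php .php
      suPHP_AddHandler x-httpd-php
      suPHP_Engine on
  </IfModule>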


My suggestion would be to limit PHP's access to files (via open_basedir and similar directives, on a per-vhost basis): you want users to be able to open/read/write files under their webroot, and perhaps one level above it (for scratch space), but not the directories where they store things like htpasswd files.

A directory structure like:

/Client
    /auth
    /site
        /www_root
        /www_tmp

would meet this requirement: open_basedir can be pointed at /Client/site safely, with htpasswd files stored in /Client/auth (and .htaccess files or httpd.conf adjusted to point at that location instead).
This prevents your clients from opening anyone else's files, and as a bonus, malicious users can't read the stuff in /Client/auth (or anything else on your system, like /etc/passwd :-)

See http://php.net/manual/en/ini.core.php for more details on open_basedir & per-vhost implementation.
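
To make that concrete, here is a minimal sketch of the per-vhost version with mod_php; the hostname and paths are illustrative, following the layout above:

  <VirtualHost *:80>
      ServerName client.example.com
      DocumentRoot /Client/site/www_root

      # Confine PHP to the site tree plus scratch space; /Client/auth stays out of reach
      php_admin_value open_basedir "/Client/site/"
      php_admin_value upload_tmp_dir "/Client/site/www_tmp"
      php_admin_value session.save_path "/Client/site/www_tmp"

      # Password protection can still reference the auth directory from httpd.conf
      <Directory /Client/site/www_root/private>
          AuthType Basic
          AuthName "Members only"
          AuthUserFile /Client/auth/htpasswd
          Require valid-user
      </Directory>
  </VirtualHost>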


No, it's not a common problem, because most shared hosts define an open_basedir restriction in the .htaccess file in each user's public_html directory (or in the vhost, if each user has their own vhost).

Example .htaccess file:

# Assuming these are not set globally - it's good practice to limit:
  php_flag magic_quotes_gpc off
  php_flag magic_quotes_runtime off
  php_flag register_globals off
  php_flag short_open_tag off
  php_value max_execution_time 60

# These set the user-specific paths
  php_value open_basedir /afs/example.com/home/smith:/usr/share/php/include
  php_value session.save_path /afs/example.com/home/smith/tmp
  php_value upload_tmp_dir /afs/example.com/home/smith/tmp

But do make sure you set the right permissions on the .htaccess file to prevent the user from changing the open_basedir (if they try to override it in a subdirectory's .htaccess it shouldn't work, but you should test this to make sure).
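
A quick way to test it is to drop a small probe script into a subdirectory; with open_basedir in effect, the read outside the allowed paths should fail and return false:

<?php
  // Show the effective restriction, then try to read a file outside it
  header("Content-type: text/plain");
  var_dump(ini_get('open_basedir'));
  var_dump(@file_get_contents('/etc/passwd'));
?>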

HTH

C.


AFS ignores ordinary Unix user permissions. suPHP performs a setuid before running the PHP program, but that does not give the process the Kerberos tokens it needs to access AFS and be restricted to that user's permissions. suPHP would have to be modified to obtain those tokens before it could present itself to AFS as that user. As far as I know, that has not been done. (In fact, I found this question while looking to see whether anyone else had done so.)