How do I add ~/bin to PATH for a systemd service?

Solution 1:

You could hardcode the PATH in the systemd service:

[Service]
Environment=PATH=/home/someUser/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
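
If you'd rather not edit the unit file itself, the same line can live in a drop-in. A minimal sketch, assuming your unit is named someService.service (a placeholder): running

sudo systemctl edit someService.service

opens an override file (/etc/systemd/system/someService.service.d/override.conf), where you add:

[Service]
Environment=PATH=/home/someUser/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin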

A more flexible approach is PAM. It's awfully roundabout compared to simply using bash -c '...', but it works.

Create a new PAM configuration in /etc/pam.d (say /etc/pam.d/foo) and add:

session    required     pam_env.so user_envfile=some-file user_readenv=1

And in /home/someUser/some-file, add:

PATH DEFAULT=/home/someUser/bin:${PATH}

Of course, you can adjust the some-file name to something more sensible, but the path in user_envfile has to be relative to the user's home directory (the user that you set in User= in the service).

Then in the service file, in the [Service] section, add (foo being the file in /etc/pam.d created earlier):

PAMName=foo

Now, when you start the service (after reloading, etc.), the session modules in /etc/pam.d/foo will be run, which in this case is just pam_env. pam_env will load environment variables from /etc/environment, subject to constraints in /etc/security/pam_env.conf, and then the user environment from ~/some-file. Since PATH is set to a default value in /etc/environment, the user environment prepends to this default value.

Here, the default value of user_envfile is .pam_environment, which is also read by the PAM configurations of other things like SSH or LightDM login. I used a different file in case you don't want to affect those; you could remove user_envfile=... and use the default ~/.pam_environment. You could also reuse an existing PAM configuration in /etc/pam.d that has user_readenv=1, but its other PAM modules may cause unwanted side effects.
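
Putting this together, a minimal sketch of such a service file (foo, someUser, and some-daemon are placeholders from this example; some-daemon is a hypothetical executable in the user's ~/bin):

[Unit]
Description=Example service with a per-user PATH via PAM

[Service]
User=someUser
PAMName=foo
ExecStart=/home/someUser/bin/some-daemon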

Solution 2:

I know I'm digging up a slightly dated post, but I too was trying to figure out how to configure the PATH/environment variables so that the scheduler would run automatically when the server is up.

I did find a solution that works for me on Ubuntu 18.04 and 18.10.

I provided a full write-up of how to install Airflow with PostgreSQL on the backend in the article linked here.

From the later part of my article: essentially, it comes down to making a specific change to the airflow-scheduler.service file.

This is one of the 'gotchas' of an implementation on Ubuntu. The dev team that created Airflow designed it to run on a different distribution of Linux, so there is a small (but critical) change that needs to be made so that Airflow will automatically run when the server is on. The default systemd service files initially look like this:

[Unit]
Description=Airflow scheduler daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service

[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=airflow
Group=airflow
Type=simple
ExecStart=/bin/airflow scheduler
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

However, this will not work as shipped: the EnvironmentFile line points at /etc/sysconfig/airflow, a Red Hat convention that doesn't exist on Ubuntu 18. Instead, comment out that line and add:

Environment="PATH=/home/ubuntu/anaconda3/envs/airflow/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

You will likely want to create a systemd service file at least for the Airflow scheduler, and probably also for the webserver if you want the UI to launch automatically. Indeed we want both in this implementation, so we will create two files, airflow-scheduler.service and airflow-webserver.service, both of which will be copied to the /etc/systemd/system folder. They are as follows:


airflow-scheduler.service

[Unit]
Description=Airflow scheduler daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service

[Service]
#EnvironmentFile=/etc/default/airflow
Environment="PATH=/home/ubuntu/anaconda3/envs/airflow/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
User=airflow
Group=airflow
Type=simple
ExecStart=/home/ubuntu/anaconda3/envs/airflow/bin/airflow scheduler
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

airflow-webserver.service

[Unit]
Description=Airflow webserver daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service

[Service]
#EnvironmentFile=/etc/default/airflow
Environment="PATH=/home/ubuntu/anaconda3/envs/airflow/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
User=airflow
Group=airflow
Type=simple
ExecStart=/home/ubuntu/anaconda3/envs/airflow/bin/airflow webserver -p 8085 --pid /home/ubuntu/airflow/airflow-webserver.pid
Restart=on-failure
RestartSec=5s
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Finally, with both of those files copied to the /etc/systemd/system folder by way of a superuser copy (sudo cp), it is time to hit the ignition:

sudo systemctl enable airflow-scheduler
sudo systemctl start airflow-scheduler
sudo systemctl enable airflow-webserver
sudo systemctl start airflow-webserver
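
If you edit these unit files again later, remember to make systemd re-read them; you can also confirm that the PATH took effect (unit names as above):

sudo systemctl daemon-reload
systemctl show airflow-scheduler -p Environment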

Solution 3:

It seems terribly hackish, but prepending a $PATH update seems to work.
I'm on the lookout for side effects, however...

Example:

ExecStart=/bin/bash -c "PATH=/home/someUser/bin:$PATH exec /usr/bin/php /some/path/to/a/script.php"
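
For what it's worth, systemd itself expands $PATH in that line before bash ever runs; to defer the expansion to the shell instead, escape the dollar sign as $$ (in this case both forms should come out the same):

ExecStart=/bin/bash -c "PATH=/home/someUser/bin:$$PATH exec /usr/bin/php /some/path/to/a/script.php"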

Solution 4:

In a service I was setting up (Apache Airflow), I had an EnvironmentFile set.

In my /etc/systemd/system/airflow unit file, I had this line:

[Service]
EnvironmentFile=/etc/default/airflow

Opening this environment file, I added the line I needed, in my case:

SCHEDULER_RUNS=5
# systemd does not expand $PATH inside an EnvironmentFile, so spell the PATH out in full
PATH=/opt/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Add whatever paths the service's executables need to be reachable here, and you should be OK. This worked well for me.
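
As a sanity check, here is one way to inspect the PATH a running service actually received (airflow being the unit name from above; --value needs a reasonably recent systemd):

sudo cat /proc/$(systemctl show -p MainPID --value airflow)/environ | tr '\0' '\n' | grep ^PATH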