I'm running a few different services off my Ubuntu VM on ProxMox, and they've all been running great for about 6 months now. However, I'm trying to set up better backups of the individual services, so I wrote a bash script to do that for me and delete older backups once I accumulate enough.
All of that works 100% fine. Like absolutely no issues with the script when I run it myself. However, I cannot for the life of me get crontab to run it.
If I run sudo ./folder/directory/backup.sh then everything runs perfectly. However, if I set up my crontab with 0 * * * * ./folder/directory/backup.sh I get absolutely nothing.
I have also tried setting the crontab with sudo, sh, sudo sh, and both combinations without the dot in front of the path to the shell script.
Does anyone have any idea what I am doing wrong?
Thank you so much for any help
Update: I have edited /etc/crontab with the following 0 * * * * * root /mnt/nas/freshrss/backups/backup.sh. After waiting for the crontab to fire off, nothing happened. Still not really sure what's going on.
The crontab has no concept of . meaning the current directory. Try with the full path to the script. You might also need a user (but you might not if it's a user's crontab as opposed to the system one).
The crontab has no concept of . meaning the current directory.
Not quite true. . exists in all directories, so it will work in any application. But it raises the question of which directory cron is running in. Probably not what you expect, definitely not your user's home dir, and you probably should not rely on it. So you should not use relative paths in a crontab, even if you can get them to work. Best to just stick to absolute paths, or to explicitly cd to the right location beforehand (either on the same cron line or in the script it calls).
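To make that concrete, both options in crontab form (reusing the placeholder directory from the post's example):

```
# Absolute path to the script:
0 * * * * /folder/directory/backup.sh

# Or cd explicitly first, on the same cron line:
0 * * * * cd /folder/directory && ./backup.sh
```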
I have edited /etc/crontab with the following 0 * * * * * root /mnt/nas/freshrss/backups/backup.sh. After waiting for the crontab to fire off, nothing happened.
There's an extra *. There should be 5 time fields, but there's a zero followed by 5 *s. If that's not what's causing it, the next spot I'd check is the output in the cron logs. Not sure where that is on Ubuntu, though; it might be in /var/log/messages or in the systemd journal. Cron sometimes sends mail when there's an error, too, so checking the user's mail might give you some clues as well.
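Another quick way to see what cron is (or isn't) doing is to wrap the job so its output lands somewhere you can read. A minimal sketch, using the script path from the update (the wrapper and log location are arbitrary choices for illustration):

```shell
#!/bin/sh
# Hypothetical wrapper: append everything the backup script prints to a log,
# since cron otherwise mails the output or silently discards it.
LOG=/tmp/backup-cron.log
echo "started at $(date)" >> "$LOG"
/mnt/nas/freshrss/backups/backup.sh >> "$LOG" 2>&1
echo "exit status: $?" >> "$LOG"
```

Point the crontab entry at this wrapper instead of the script, then check /tmp/backup-cron.log after the hour fires; even a single "Permission denied" or "No such file or directory" line there narrows things down a lot.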
I have edited /etc/crontab with the following 0 * * * * * root /mnt/nas/freshrss/backups/backup.sh. After waiting for the crontab to fire off, nothing happened.
So, right now I'm trying the system crontab instead of my user crontab.
Just to reiterate from my post, however, I have tried the full path. I was giving example paths. I should have been more explicit that by "using dot" I meant I tried both relative and absolute paths.
All paths have been full paths from the get-go, though I did try cd-ing into the folder and running it with a relative path. My hope at this point is that it's somehow a permissions issue, as my storage setup is a bit odd with TrueNAS Scale running as a VM on ProxMox. Permissions with docker are usually hell, and I have to run literally everything that touches my NAS as root to get the permissions to play nicely, so it would make sense here that it's just the permissions being upset and preventing access to the files.
I set a backup to run on the hour, so I'll report back with whatever happens.
There are two other things I can think of:
Permissions maybe: Try "sudo chmod +x /path/yourscript.sh" to make your script explicitly executable.
Also, the environment under cron may be different from when you run the script as root or as your user. So you should always use the full path to every command in your script, like "/bin/tar" instead of just "tar". To find out where things are, you can use "whereis tar", and it will tell you whether it's in /bin, /usr/bin or elsewhere.
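A quick sketch of both checks; the demo script path below is made up for illustration:

```shell
#!/bin/sh
# Under cron, PATH is typically just /usr/bin:/bin, so resolve the
# absolute path of each tool the script calls:
command -v tar      # prints the absolute path, if tar is installed
whereis tar         # lists binary, source, and man page locations

# And make the script itself explicitly executable:
script=/tmp/demo-backup.sh          # hypothetical path for illustration
printf '#!/bin/sh\necho backup done\n' > "$script"
chmod +x "$script"
test -x "$script" && echo "executable: yes"
```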
Why do you have root in there? If you need something to run as root do sudo crontab -e and edit the root user's crontab accordingly. The user shouldn't be specified in the crontab directly.
The crontab that is found at /etc/crontab very specifically states that it has a user field. I will readily admit that I might be misunderstanding it, but that feels pretty explicit to me.
What distro are you using? I haven't seen /etc/crontab in quite a while with the advent of the /etc/cron.d directory. That said, crontab -e will handle this stuff for you.
Edit: I see, Ubuntu. I'm not too familiar with what they're doing over there. I have an /etc/cron.d dir on my Arch boxes. Some other stuff to check though: does any cron job run? If not, is the service running? You could also redirect this script's output to a file under /tmp or something to check if it's running and what might be going wrong. Beyond that, check the systemd logs for any errors.
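One cheap way to answer the "does any cron job run?" question is a throwaway entry that just timestamps a file every minute (the file name is arbitrary):

```
* * * * * date >> /tmp/cron-test.log 2>&1
```

If /tmp/cron-test.log grows, cron itself is working and the problem is in your job; if it stays empty, check whether the service is running at all.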
I know it is not really what was asked, but cron is a pain in the ass to handle and manage. I am not sure if it is officially deprecated yet, but I would migrate everything to systemd timers instead; they are so much better. They provide configuration tools and properly integrated logging and troubleshooting tools.
Just create a service file of type oneshot which runs your backup script, and a timer unit with the same base name. Set the timer to hourly, place both files into /etc/systemd/system, do a daemon-reload and enable the timer. You can check the status or journal for output, and list-timers to see the schedule and whether or not it ran.
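A sketch of the two units, assuming hypothetical names backup.service and backup.timer and the script path from the update:

```
# /etc/systemd/system/backup.service
[Unit]
Description=FreshRSS backup

[Service]
Type=oneshot
ExecStart=/mnt/nas/freshrss/backups/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Run the FreshRSS backup hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Then systemctl daemon-reload, systemctl enable --now backup.timer, and later systemctl list-timers to see the schedule and journalctl -u backup.service for the output.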
Usually, if programs run in a user context but don't work as some automated process, it is due to environment differences. Most importantly PATH, which can be solved by using absolute paths for programs. Another very common problem is the system's MAC implementation, although that happens more often with SELinux. Still, you might want to check your AppArmor configuration and (audit) logs.
If you want to stick with cron also make sure to read the mails (/var/mail/root by default), because most cron implementations dump their output/logs there.
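For example, a minimal check of root's mailbox (assuming the default local mail spool and no MAILTO override):

```shell
#!/bin/sh
# Cron mails each job's output to the invoking user's local mailbox.
# Look at the end of root's mailbox for error messages from the failed runs:
MBOX=/var/mail/root
if [ -f "$MBOX" ]; then
    tail -n 50 "$MBOX"
else
    echo "no mailbox at $MBOX"
fi
```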