I have a few applications that I download manually and install in the /opt folder on my 4 Linux computers. Whenever there’s an update to any of them, I have to manually download, unpack and replace the application on every computer. While 4 computers are not a big deal, I would like to save time and make sure I don’t forget one or more computers.
I’m thinking of syncing the /opt directory between those computers using Syncthing. Since all configuration of these applications and user data resides in the home directories I don’t foresee any problem with this approach. Am I missing something? Do you think it is a good idea?
openSUSE installs various software in /opt/, but not AFAICT in /usr/local/. On this 42.3 installation, /opt/ contains 3 directories: kde3, gnome and brother. I put all my outside-the-package-system apps in /usr/local/, and have no problem syncing it among machines.
I assume that at least /opt/kde3 is a leftover from long ago (I have it on a 13.1 that still has KDE3 stuff installed).
I assume that syncing /opt between systems you all manage can be an option to maintain the software located there, as long as no parts of those software products live outside /opt, and as long as there are no configuration files of those products somewhere in /opt that should differ between the systems.
Depends on the meaning of manual. I don’t remember whether the Brother printer drivers came as a mere script or if rpm was involved. I have no idea why gnome is there, as I never install anything Gnome on purpose. It’s nothing but a tree of directories. kde3 is there because that’s where Zypper and/or YaST2 puts it. My own scripts, fonts, Mozilla products and more all go in /usr/local/. I know I’ve seen *office files in /opt/ on some PCs.
I’d never do this with syncing. F.e. Google installs its applications in /opt/google, and Node.js installs built apps in /usr/local.
But a good option for you (and not only for this issue) would be to use SaltStack. It’s not hard to set up, has lots of internal commands that can be used, and lets you manage X machines from a single spot. This:
# salt '*' pkg.upgrade
updates 12 machines, i.e. 3 CentOS VPSes, 4 Leap 15 PCs, 2 Tumbleweed VPSes, my Tw laptop and 2 RPi3s with Leap. And yes, it will invoke ‘yum update’, ‘zypper up’ or ‘zypper dup’ depending on the OS running on the minions (clients).
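The same upgrade can also be expressed declaratively as a salt state applied with `salt '*' state.apply`; a minimal sketch (the file path and state ID are assumptions, not from the posts above):

```yaml
# /srv/salt/packages.sls -- hypothetical state on the master;
# pkg.uptodate tells each minion to bring all packages up to date
# using whatever package manager that minion's OS provides.
update-all-packages:
  pkg.uptodate:
    - refresh: True
```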
Yup. Anything that can be done on a local machine can be done through salt. F.e.
salt 'cloud.knurpht.nl' cmd.run '/home/knurpht/bin/NCupgrade'
calls a script on host cloud.knurpht.nl that pulls in the latest Nextcloud archive, sets the running version to maintenance mode, performs the upgrade (incl. a backup), and removes maintenance mode. Please note the replacement of ‘*’ by a single hostname, where wildcards could have been used as well, f.e. 'centos*'. Pretty easy to pull in a script on multiple machines and run it. It could even be done without the script, but instead by creating a so-called salt-state file on the master (the salt server).
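For the /opt apps from the original question, the per-host script that salt runs could be as simple as an unpack-and-symlink helper. A sketch (the function name, the `OPT_ROOT` override and the versioned-directory layout are my assumptions, not anything from the posts above):

```shell
# install_opt_app: unpack an app tarball into a versioned directory under
# /opt and repoint a stable symlink at it, so launchers can always use
# /opt/<app>. OPT_ROOT can be overridden for testing.
install_opt_app() {
    app="$1"        # e.g. "myapp"
    tarball="$2"    # e.g. "/tmp/myapp-1.2.3.tar.gz"
    version="$3"    # e.g. "1.2.3"
    opt_root="${OPT_ROOT:-/opt}"

    dest="$opt_root/$app-$version"
    mkdir -p "$dest"
    tar -xzf "$tarball" -C "$dest"
    # -n repoints the symlink instead of descending into the old target
    ln -sfn "$dest" "$opt_root/$app"
}
```

With something like this on every minion, `salt '*' cmd.run 'install_opt_app myapp /tmp/myapp-1.2.3.tar.gz 1.2.3'` would update all machines in one go, and old versions stay around for rollback until you clean them up.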
It’s very interesting that you recommend a configuration management solution. I’ve been wanting to try them out for a while but didn’t have much time. I see that the popular ones are Ansible, Chef, Salt and Puppet. Can you advise on which one is the easiest to use and fastest to set up for a very small network and a couple of VPSes?
edit: also, can salt or the others add the menu item for these manually extracted applications to the KDE menu?
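For what it’s worth, a KDE menu entry is just an XDG .desktop file under /usr/share/applications/ (or ~/.local/share/applications/ per user), so any of these tools could drop one in place, e.g. with salt’s file.managed state. A minimal sketch, with a hypothetical app name and paths:

```ini
# /usr/share/applications/myapp.desktop -- hypothetical example entry
[Desktop Entry]
Type=Application
Name=MyApp
Exec=/opt/myapp/bin/myapp
Icon=/opt/myapp/share/icon.png
Categories=Utility;
```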