Problem making an SSH connection

I am trying to connect between two machines on the same subnet. I know the IP address of the remote machine and I believe I have opened port 22 on the firewall.
I am able to ping the address, but when I try to make an ssh connection I get:-

alastair@localhost:~> ssh alastair@192.168.xxx.xxx 
ssh: connect to host 192.168.xxx.xxx port 22: Connection refused 
alastair@localhost:~> 

I suspect that the remote machine has not been set up correctly but am not sure what I should do next. I only have occasional access to the remote machine so need to get it working so I can access it 24/7.

Budge

  1. 192.168.xxx.yyy is a private network and will not be routed to the Internet, so you can show the whole IP.

  2. ssh -vv USERNAME@IP would show more.

  3. Is the USERNAME set up on the server?

  4. Is the server running, and was it restarted after changes: systemctl restart sshd.service
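A quick way to work through points 3 and 4 on the remote machine (a sketch, assuming a systemd-based openSUSE install; run as root):

```shell
# Is the ssh daemon actually running?
systemctl status sshd.service

# Is anything listening on port 22?
ss -tlnp | grep ':22'

# If sshd is stopped, start it
systemctl start sshd.service
```

"Connection refused" usually means nothing is listening on the port at all, which points at the daemon rather than the firewall.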

Forgot to mention I am using KDE desktop.

Maybe I have messed up the firewall, because although I thought I had enabled port 22 I get the following result when scanning ports:-

alastair@localhost:~> nmap -Pn 192.168.XXX.XXX 
Host discovery disabled (-Pn). All addresses will be marked 'up' and scan times will be slower. 
Starting Nmap 7.91 ( https://nmap.org ) at 2021-08-07 20:34 BST 
Nmap scan report for 192.168.XXX.XXX 
Host is up (0.55s latency). 
Not shown: 991 filtered ports 
PORT     STATE  SERVICE 
22/tcp   closed ssh 
1717/tcp closed fj-hdnet 
1718/tcp closed h323gatedisc 
1719/tcp closed h323gatestat 
1720/tcp closed h323q931 
1721/tcp closed caicci 
1723/tcp closed pptp 
1755/tcp closed wms 
1761/tcp closed landesk-rc 

Nmap done: 1 IP address (1 host up) scanned in 70.71 seconds 
alastair@localhost:~>

I may be confused by the zones but it is clear from the above scan that port 22/tcp is closed.
Please help!

As root:

firewall-cmd --get-default-zone
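If the default zone turns out not to allow ssh, it can be opened with firewall-cmd (a sketch, run as root; substitute the zone name reported by the command above if it differs from the default):

```shell
# Show the active zone and what it currently allows
firewall-cmd --get-default-zone
firewall-cmd --list-all

# Permanently allow the ssh service, then reload to apply
firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload
```

Note that nmap reporting the port as "closed" rather than "filtered" suggests the firewall is already letting the probe through and nothing is listening behind it.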

Many thanks for your prompt reply.
Here is the result from your command:-

alastair@localhost:~> ssh -vv alastair@192.168.xxx.yyy 
OpenSSH_8.4p1, OpenSSL 1.1.1k  25 Mar 2021 
debug1: Reading configuration data /usr/etc/ssh/ssh_config 
debug1: /usr/etc/ssh/ssh_config line 24: include /etc/ssh/ssh_config.d/*.conf matched no files 
debug1: /usr/etc/ssh/ssh_config line 26: Applying options for * 
debug2: resolve_canonicalize: hostname 192.168.xxx.yyy is address 
debug2: ssh_connect_direct 
debug1: Connecting to 192.168.xxx.yyy [192.168.xxx.yyy] port 22. 
debug1: connect to address 192.168.xxx.yyy port 22: Connection refused 
ssh: connect to host 192.168.xxx.yyy port 22: Connection refused 
alastair@localhost:~> 


Looks like the daemon is not running. I thought this was started automatically. Should this be started on my local machine or the remote machine?

So that was it. Is there something in YaST to make this service start after every reboot, because the remote machine is running TW and needs restarting very frequently? I looked but couldn’t find an obvious line in the Services tab.
I looked in the wrong place; it is in System > Services Manager. All good now, thanks.

There should be a services line for “sshd” (or maybe it is just “ssh” or “openssh”).
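The same thing the YaST Services Manager does can be done from a terminal (as root), which may be quicker on a machine that is rebooted often:

```shell
# Enable sshd at every boot and start it right now
systemctl enable --now sshd.service

# Confirm it will come up after the next reboot
systemctl is-enabled sshd.service
```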

debug1: Reading configuration data /usr/etc/ssh/ssh_config

Why /usr/etc on Leap 15.3?

Post:

zypper lr -d

Now I have my new system up and running and a working connection, I wish to copy my multimedia directory from a remote machine to my new machine. I am overwhelmed by the number of options but am seeking connection speed and reliability.

Reading other posts it is clear that, among others, two options are to use rsync or to use NFS and, once mounted, copy the files. I favour rsync but am thinking of using luckyBackup as the rsync tool because I can be more confident about “tuning” the system to check what I want.

Once I have the connection confirmed, how do I define the remote source please, i.e. the syntax for pointing to the remote directory?
Budge.

I should add I have been trying to use the remote directory as 192.168.xxx.xxx:/home/alastair/mastermedia but I am getting a message that one or other of the directories does not exist. Both source and destination directories are actually soft links to directories on different drives. So can I use soft links this way or do I have to connect between the original drives?

My remote system has:-

lrwxrwxrwx 1 alastair users 22 Aug 15 2019 mastermedia -> /multimedia/multimedia

and my local drives have:-

lrwxrwxrwx 1 alastair users 12 Aug 16 21:59 multimedia -> /mastermedia

I know this should have brackets but they have not appeared on my browser this morning.

You want to copy files from server A to client B?

Hi and thanks for the reply.

If the remote machine is running sshd and it is called Server A, and I am at my local workstation B with an empty mounted disk, I want to copy the multimedia directories and contents from A to B.
The multimedia data are on a separate drive on Server A and the directory has a soft link to the directory in my /home/alastair/mastermedia.
The drive to receive the data is on local machine B with the disk mounted at /mastermedia and has a soft link to /home/alastair/mastermedia. (May have mixed up mastermedia and multimedia but not at the machine at present.)

If you can ssh to Server A from Client B, you can use Dolphin/Krusader with the fish protocol.

Or use the terminal with:

scp

see

man scp

Hi and many thanks. I have used scp effectively for moving files onto an RPi etc., but I am trying to copy several TB of data. That is why I wanted to use rsync, but it is above my pay grade for soft links etc.
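For reference, a hedged sketch of the rsync syntax for a remote source over ssh, using the host and paths from the posts above (the flags are standard rsync options; adjust to taste):

```shell
# Trailing slash on the source copies the directory's CONTENTS.
# -a  archive mode (recursive, preserves permissions and times)
# -v  verbose, -P  show progress and resume partial transfers
# --copy-links  replaces symlinks on the source side with the
#               real files/directories they point to
rsync -avP --copy-links \
    alastair@192.168.xxx.yyy:/home/alastair/mastermedia/ \
    /home/alastair/mastermedia/
```

Since mastermedia is itself a symlink on both machines, another option is to point rsync at the real directories (e.g. /multimedia/multimedia on the server), which sidesteps the symlink question entirely.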

My favorite is to mount a remote file system with sshfs as follows:

sshfs -o allow_other alastair@192.168.xxx.xxx:/ /mnt

Then you have access to all of the remote file system … rsync or whatever.

However, I just found out the hard way yesterday that the remote system needs to be running OpenSSH and not Dropbear as the ssh server :shame:. I was trying to get into an OpenWrt system!
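With the remote tree mounted via sshfs as above, the bulk copy becomes a purely local rsync run (a sketch, reusing the /mnt mount point from the example; -L follows symlinks on the source side):

```shell
# Mount the remote root (as in the post above)
sshfs -o allow_other alastair@192.168.xxx.xxx:/ /mnt

# Local-to-local copy through the mount; resumable with -P
rsync -avP -L /mnt/home/alastair/mastermedia/ /home/alastair/mastermedia/

# Unmount when finished
fusermount -u /mnt
```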

Hi JulinaB,
I have tried this and, having sorted my permissions, I have started by trying to copy files using Dolphin. Very easy but very, very slow. I have several TB of files to copy. There must be a faster way. I believe rsync is faster but am getting confused by the soft links. Will keep trying while copying is in progress.
Thanks for the info on sshfs.
Regards,
Budge.

Hi Budgie2,

I may have missed some details, but I didn’t quite get what speaks against NFS? It is very easy to set up via YaST as well as in the terminal. You can simply export the folder on the separate drive of the remote machine Server A. You can mount it to /home/alastair/yourmediafolder/ on your local machine Client B, or to any other location. Then you can simply copy to and fro with Dolphin, via the terminal and of course with rsync. With Dolphin on Client B you don’t even notice that it’s remote.

I’m using a very similar set-up, with an NFS server on my home server and three clients syncing full home directories, just Firefox profiles, or backups to internal/external drives. I’m using unison rather than rsync but it’s a minor difference (I fancy the optional GUI). As this is quite fast I even gave up on excluding temp files and folders, e.g. of the Kaffeine TV video recorder or its time-shift function.
Permission settings can be handled, depending on your requirements.
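The NFS route could look roughly like this (a sketch; the export path, subnet, and options are assumptions to be adjusted to the actual layout, and YaST's NFS Server module does the same thing graphically):

```shell
# --- On Server A (as root) ---
# Export the multimedia drive by adding a line to /etc/exports, e.g.:
#   /multimedia/multimedia  192.168.0.0/24(ro,root_squash)
exportfs -ra                          # re-read /etc/exports
systemctl enable --now nfs-server     # start NFS and enable it at boot

# --- On Client B (as root) ---
mount -t nfs 192.168.xxx.yyy:/multimedia/multimedia /home/alastair/mastermedia
```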

I was just going to suggest Dolphin’s fish protocol, but I see that somebody already has. I don’t understand what you mean by Dolphin being slower than other software. As far as I know, unless you’ve made some kind of QoS modifications to your network, any transfer software would just blast data down the pipe at the maximum rate available, right? You could maybe get a speed increase by transferring the data without encryption, or perhaps by compressing everything into a zipped-up tarball, but zipping and tarring multiple terabytes of data before you even start the transmission could still take a while (especially if you’re going to try for maximal compression). The only other thing that could give other software a speed advantage would be transmitting over UDP rather than TCP, but I can’t imagine any widely used file transfer software is going to transmit actual file data over UDP.
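Rather than building a tarball first, tar can be streamed straight over ssh, compressing on the wire without needing temporary space on either side (a sketch, run on Client B, with host and paths as in the earlier posts):

```shell
# Pack on Server A, stream over ssh, unpack on Client B in one pipeline;
# -z compresses on the fly, -h follows symlinks on the source side
ssh alastair@192.168.xxx.yyy 'tar -C /home/alastair -czhf - mastermedia' \
    | tar -C /home/alastair -xzf -
```

For multi-TB transfers rsync still has the advantage of being restartable, whereas an interrupted tar pipe has to start over.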

Network speeds are always going to be a limiting factor. Is it practical to move one of the hard drives to the other machine for the copy?