Trying to get SSH set up on a small cluster, but I can't get ssh-add to work. From my reading of the FAQ pages (https://www.open-mpi.org/faq/?category=rsh#ssh-run-prereqs), ssh-add should add my identity to the ssh-agent so that openmpi processes on my head node can launch processes on my second node. Nothing changes if I give rw-rw-rw- to ~/.ssh/*, so I don't think it's a permissions problem. I am able to ssh between the machines, but it always asks for my passphrase. I've been working at this for a while and feel bad, because this is just the first step in trying to get openmpi working… :\
Thank you for any help or insight.
:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/patti/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/patti/.ssh/id_rsa.
Your public key has been saved in /home/patti/.ssh/id_rsa.pub.
The key fingerprint is:
<...>
:~>
:~> ssh 111.111.0.102
Enter passphrase for key '/home/patti/.ssh/id_rsa':
Last login: Sun Jun 29 18:57:43 2014 from 111.111.0.100
Have a lot of fun...
AA:~> exit
logout
Connection to 111.111.0.102 closed.
:~>
:~> ssh 111.111.0.102
Enter passphrase for key '/home/patti/.ssh/id_rsa':
Last login: Sun Jun 29 19:09:52 2014 from 111.111.0.100
Have a lot of fun...
AA:~> exit
logout
Connection to 111.111.0.102 closed.
:~>
:~> eval 'ssh-agent'
SSH_AUTH_SOCK=/tmp/ssh-u1t0fiXc0WpD/agent.24032; export SSH_AUTH_SOCK;
SSH_AGENT_PID=24033; export SSH_AGENT_PID;
echo Agent pid 24033;
:~>
:~> ssh-add /home/patti/.ssh/id_rsa
Could not open a connection to your authentication agent.
:~>
…now trying to figure out how to have ssh-agent do its thing behind the scenes, i.e. without having to invoke it and ssh-add every time I run a shell.
I’m not completely clear on the question. But I’ll comment anyway.
I'm not sure what you think you are testing there. If the permissions on ~/.ssh are too generous, then ssh refuses to use that directory, at least in my experience - so opening things up to rw-rw-rw- makes a permissions problem more likely, not less.
These days, I am mainly using X, and ssh-agent is started as part of the X-session startup.
Back when I was doing a lot of command line logins, I had a script. I would invoke it with:
eval `path/to/script`
The script would output the magic needed to make ssh-agent available in the current shell. If there was already a running ssh-agent owned by me, the script would find it and generate the information to access it. Otherwise it would start ssh-agent, and use the information from the one it started.
Basically, the script would output the commands to put SSH_AUTH_SOCK and SSH_AGENT_PID in the environment, with appropriate values for the running agent. The agent would stay running in the background until the next boot or until I manually killed it.
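Something like this (a from-memory sketch, not my original script; the cache-file path is arbitrary):

    #!/bin/sh
    # Emit the shell commands that make a running ssh-agent available.
    # Invoke as:  eval `path/to/this/script`
    ENVFILE="$HOME/.ssh/agent.env"

    # Reuse a previously started agent if it is still alive.
    if [ -f "$ENVFILE" ]; then
        . "$ENVFILE" > /dev/null
        if [ -n "$SSH_AGENT_PID" ] && kill -0 "$SSH_AGENT_PID" 2>/dev/null; then
            cat "$ENVFILE"
            exit 0
        fi
    fi

    # Otherwise start a new agent, remember its settings, and emit them.
    ssh-agent -s > "$ENVFILE"
    chmod 600 "$ENVFILE"
    cat "$ENVFILE"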
Thank you - you nailed it - I'll ponder that. I suppose the simplest thing would be to always do eval `ssh-agent` whenever I open a shell for my parallel software; I've verified that works as long as I don't issue an exit command. Not being a scripter, that is the way I'll go for now, at least until I get openmpi working across two nodes. It may be that my software (not written by me, written by NASA) calls other shells, so I'm hoping all daughter shells of an ssh-agent shell have the same information…
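In other words, per shell I'm doing roughly this (a sketch; note the backticks, i.e. command substitution, so the agent's output is actually executed in the current shell rather than just printed, which is what happened in my first post):

    # start an agent for this shell and load its variables
    eval `ssh-agent`
    # add my key once (asks for the passphrase here, and only here)
    ssh-add /home/patti/.ssh/id_rsa
    # child processes inherit SSH_AUTH_SOCK, so this no longer prompts
    ssh 111.111.0.102 hostname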
Thank heaven ssh is automatic enough that it stores machines and identities automatically, or else I'd never figure it out. I just verified that I could ping my IP addresses (not hostnames) and then tried ssh <ip address>, and it worked. I also modded my openmpi default hostfile to contain IP addresses rather than hostnames.
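For what it's worth, the default hostfile now looks something like this (a sketch; the slot counts are made up, and on my install the file is openmpi-default-hostfile under the Open MPI etc/ directory):

    # one line per node, IP address instead of hostname
    111.111.0.100 slots=4
    111.111.0.102 slots=4

With that in place and the agent loaded, a quick "mpirun -np 8 hostname" should print a mix of both nodes' hostnames.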
I did notice what you say - ssh makes sure that nobody else can read your .ssh/* files, or else it simply quits, saying "ignoring [that keyfile]…"
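For anyone who finds this later, the permissions ssh is happy with are the standard ones (on both nodes):

    chmod 700 ~/.ssh                  # directory: owner only
    chmod 600 ~/.ssh/id_rsa           # private key: owner read/write only
    chmod 644 ~/.ssh/id_rsa.pub       # public key can be world-readable
    chmod 600 ~/.ssh/authorized_keys  # on the node you ssh into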