Migrating Kickstart from CentOS 6 to CentOS 7 + ESXi VMware Tools

In another post I described how to install CentOS 6 via a kickstart file (be sure to check out the “Kickstart Sample Configuration” section). CentOS 7 was recently released, and with it I again needed a kickstart configuration. However, reusing the previous kickstart configuration was not as simple as copy-and-paste (beyond updating the release version in the repository configuration).

Summary of Kickstart Changes

There were a few changes that needed to be made for a base/core installation of CentOS 7:

  • Include  eula --agreed (read the documentation)
  • Include  services --enabled=NetworkManager,sshd (read the documentation)
  • Update the install packages list ( %packages  section)
  • CentOS 7 is also a bit more strict with the kickstart file, so I had to explicitly include %end  where applicable
  • CentOS 7’s default file system is now xfs, in CentOS 6 it was ext4, so consider updating the automatic partitioning to use xfs
  • Package groups @scalable-file-systems, @server-platform, @server-policy, and @system-admin-tools no longer exist – I haven’t located suitable replacements yet
  • Things like ifconfig are no longer included by default (they are now deprecated), so if you need them be sure to include net-tools. You should be using ip by now anyway.

Kickstart Sample Configuration

And now, an updated kickstart config for CentOS 7, with consideration of the previously mentioned updates (compare it with the previous Kickstart Sample Configuration mentioned in the other post). I also chose to include some extra packages that don’t exist by default with a @core  installation.
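A minimal sketch along those lines (the root password hash, network device, and install source are placeholders, and %packages is just a taste; it is not a drop-in copy of my full config):

# CentOS 7 kickstart sketch -- placeholders, not a drop-in config
install
text
eula --agreed
reboot
lang en_US.UTF-8
keyboard us
timezone America/Detroit --isUtc
rootpw --iscrypted $6$REPLACE_WITH_YOUR_HASH
authconfig --enableshadow --passalgo=sha512
services --enabled=NetworkManager,sshd
network --bootproto=dhcp --device=eth0 --onboot=yes
# install source (url/nfs) goes here, pointed at a CentOS 7 tree

zerombr
clearpart --all --initlabel
part /boot --fstype=xfs --size=500
part pv.01 --size=1 --grow
volgroup vg_main pv.01
logvol swap --vgname=vg_main --name=lv_swap --size=2048
logvol / --fstype=xfs --vgname=vg_main --name=lv_root --size=1 --grow

%packages
@core
net-tools
wget
%end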

 

VMware Tools Change

If you’re like me and use ESXi (I’m currently on 5.5 and 5.5u1), the installable tools for integrating with ESXi are a nice treat. However, the repository location has changed specifically for RHEL7, and so have the packages.

Add VMware Tools to YUM

Put the following repo configuration in /etc/yum.repos.d/vmware-tools.repo:
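Something along these lines – the baseurl is my best guess at the layout under packages.vmware.com for an ESXi 5.5u1 / RHEL7 guest, so verify the exact path and GPG key URL against the repository index for your ESXi version:

[vmware-tools]
name=VMware Tools for RHEL7 / CentOS 7
baseurl=http://packages.vmware.com/tools/esx/5.5u1/rhel7/x86_64
enabled=1
gpgcheck=1
gpgkey=http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub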

Then update yum and install the tools:
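The package names in the RHEL7 repo differ from the RHEL6 ones, so list what’s actually available first; the install line below is an assumption, not gospel:

yum clean all
yum check-update
yum search vmware-tools
# e.g., for a headless guest (confirm the exact name from the search above):
yum install -y vmware-tools-esx-nox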

 

 

uberSVN: Cheap Backup Script Using hotcopy and rdiff-backup

SVN backups, and backing up in general, is critical for the purpose of disaster recovery. There are a few requirements to consider:

  1. Backups should mirror (or come close to) the current state of the system, i.e. a snapshot.
  2. Backups should be incremental. In the event a snapshot is malformed, or the system during the snapshot is corrupt, one can roll back to a previously working snapshot.
  3. Back up only what you need to survive. No sense in including temporary files, libs that are part of the distribution (easily downloadable), etc.

Lower-level note: I don’t have the resources for LVM snapshots, and the physical disks are RAID 6.

Considerations

Originally, I was backing up my uberSVN installation and all of the SVN repositories using a simple rdiff-backup command. This approach is shortsighted: an rsync of the internal repositories directory does not account for the commits, prop changes, hooks, etc. that occur while the rsync is happening, which could leave a non-restorable backup; using svnadmin hotcopy addresses this concern. However, the issue with hotcopy is that it does not perform an incremental backup for you, so I needed to couple it with rdiff-backup. It is worth noting that this type of copy includes props, hooks, commits and other information – it is more comprehensive than a normal svnadmin dump, but with the downside that it is only truly compatible with the version of SVN that generated it.

As if this wasn’t enough to think about, hotcopy does not preserve file system timestamps. This is problematic with rdiff-backup which relies on a timestamp + file size combination; even though it uses the same underlying algorithm as rsync, AFAIK it does not support the checksum feature for transfers. So after the svnadmin hotcopy is performed, file attributes should be synchronized as well (with slight risk I might add).

Lastly, uberSVN has a handful of configurations and metadata that must be backed up per installation. Its GUI has an easy backup feature, but there is no CLI access/equivalent that I could find. I’m sure I could dig through the source and figure out exactly what is included in the backup, but I decided to reverse-engineer the backup file (which uses ZIP compression, by the way) and infer the details. uberSVN includes (or should include) the following directories from its install directory (default is /opt/ubersvn): /opt/ubersvn/{conf,openssl,tomcat,ubersvn-db}. From this, a backup archive can be carefully constructed for later restoration within the GUI.

The Implementation (tl;dr)

This is a bash script that is configurable in the first two groupings of variables (the relevant lines are highlighted). It also uses a subsys lock mechanism (flock) so it cannot run in parallel, which is helpful when using it in a crontab. I haven’t extensively tested its reliability, but it does work… generally speaking. Here’s the hack:
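The full script is longer, but this sketch captures the approach: a flock subsys lock, svnadmin hotcopy per repository, an attribute-only sync so rdiff-backup isn’t fooled by fresh timestamps, a tarball of the uberSVN config directories, and the incremental rdiff-backup with pruning. All paths and the retention window are placeholders.

#!/bin/bash
# --- configuration (placeholders) ---
REPO_ROOT=/opt/ubersvn/repositories     # live SVN repositories (assumption)
UBERSVN_HOME=/opt/ubersvn               # uberSVN install directory
STAGING=/backup/staging                 # hotcopy target
BACKUP_DEST=/backup/rdiff               # rdiff-backup destination
RETENTION=4W                            # keep four weeks of increments
LOCKFILE=/var/lock/subsys/svn-backup
# ------------------------------------

exec 200>"$LOCKFILE"
flock -n 200 || { echo "backup already running"; exit 1; }

mkdir -p "$STAGING/repos" "$BACKUP_DEST"

# consistent copy of each repository (commits/hooks/props included)
for repo in "$REPO_ROOT"/*; do
    [ -d "$repo" ] || continue
    name=$(basename "$repo")
    rm -rf "$STAGING/repos/$name"
    svnadmin hotcopy "$repo" "$STAGING/repos/$name"
done

# hotcopy does not preserve timestamps; sync attributes from the live repos
# so rdiff-backup (mtime + size) skips unchanged files -- the "slight risk"
rsync -a --existing --size-only "$REPO_ROOT/" "$STAGING/repos/"

# uberSVN configuration/metadata (what the GUI backup appears to contain)
tar czf "$STAGING/ubersvn-conf.tar.gz" -C "$UBERSVN_HOME" conf openssl tomcat ubersvn-db

# incremental snapshot, then prune old increments
rdiff-backup "$STAGING" "$BACKUP_DEST"
rdiff-backup --remove-older-than "$RETENTION" --force "$BACKUP_DEST"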

 

Scripting Parallel Bash Commands (Jobs)

Sometimes commands take a long time to process (in my case importing huge SQL dumps into a series of sandboxed MySQL databases), in which case it may be favorable to take advantage of multiple CPUs/cores. This can be handled in shell/bash with the background control operator & in combination with wait. Got a little insight from this StackOverflow answer.

The Technique

The break-down is commented inline.
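The original snippet isn’t reproduced here, so below is a reconstruction of the technique as the notes describe it: read builds a multiline work list, each command is backgrounded with &, its PID is captured from $! into lastPid, and wait suspends the loop after each batch. The MySQL import and database names are just placeholders for my use case; substitute any long-running command.

#!/bin/bash
maxJobs=4      # how many commands run concurrently
count=0
pids=""

# read builds a multiline list of work items (SQL dumps in my case)
read -r -d '' dumps <<< 'db1.sql
db2.sql
db3.sql
db4.sql
db5.sql'

for dump in $dumps; do
    # long-running command, pushed to the background with &
    mysql "sandbox_${dump%.sql}" < "$dump" &
    lastPid=$!                 # PID of the command we just backgrounded
    pids="$pids $lastPid"
    count=$((count + 1))

    if [ $((count % maxJobs)) -eq 0 ]; then
        wait $pids             # suspend the loop until this batch finishes
        pids=""
    fi
done

wait    # catch any stragglers from the final, partial batch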

Notes

  • This does not function like a pool (task queue) where a freed-up “thread” immediately becomes eligible for the next task, although I don’t see why something like that couldn’t be implemented with a little work.
  • wait will pause script execution for all PID parameters it is provided to complete before moving on.
  • Once the if...fi control block is entered wait will cause the for...in loop to suspend.
  • $! (or ${!}) contains the PID of the most recently backgrounded command; make sure it is read directly after the operation used with &. Throw it into a variable (like lastPid) for future use.
  • This is not multi-threading, although this simplified concept is similar. Each command spawns a separate process.
  • read  is just creating a multiline list; in my specific case this was favorable over a bash array, but either will work.
[Screenshot: htop showing the parallel process tree.]

Reload DNS Zone with Bind9 and rndc

Was really confused when trying to reload a zone. I would receive the error rndc: 'reload' failed: dynamic zone. This was especially frustrating because in a similar installation, a simple service named reload would do just fine; not this time. Also, remember the point is to not completely reload named; doing so will clear its cache (if used as a local DNS resolver) and cause a small outage if clients do not have their current domain lookup cached.

Assuming the zone to update is called “THEZONE” perform the following:
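The sequence that worked (substitute your actual zone name for THEZONE):

rndc freeze THEZONE
# edit the zone file and bump the serial, then:
rndc reload THEZONE
rndc thaw THEZONE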

If necessary, don’t forget to update the reverse DNS records. I’m lazy and use PTR:
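For reference, a PTR entry in the matching in-addr.arpa zone looks something like this (the octet and hostname are placeholders):

; reverse zone, e.g. 0.168.192.in-addr.arpa
10    IN    PTR    somehost.example.com.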

So… the moral of the story is to  freeze, reload and thaw. Remember f-r-t (or fart?).

Install CentOS 6 with Anaconda/Kickstart (plus ESXi VMware Tools)

Synopsis

I’ve been getting my feet wet with ESXi, CentOS 6 VMs, and YUM/RPM. Well, the last two I have been using for years, just not as heavily as recently.

The goal is to be able to blindly install a controlled distribution of CentOS 6.x quickly and without error (maybe even install multiple at the same time). What I needed:

  • Anaconda Kickstart file (ks.cfg)
  • Local mirrored repository for CentOS 6.x (6.3 in my example)
  • Custom 3rd party repo
  • HTTP/NFS/RSYNC access to these
  • Variable disk/cpu/ram size – the partitions need to be dynamic

Without writing a book about all of this, I really want to just highlight some problems I ran into and how I solved them.

An example of my kickstart file is below for reference.

Automated Partition Schema

Since I’m in the world of virtualized hardware, it is important for the disk to scale easily without lost data. The prerequisite to this is of course Logical Volume Management (LVM). Now you may not agree with my LVM layout, and honestly, this isn’t my expertise (optimization of disk partitions), but at the least there must be “boot,” “root,” and “swap” partitions.

The goal here is to make the root partition grow to its maximum size without negating the swap. Also, the boot partition won’t be on the LVM, it will be fixed in the MBR. The kickstart section is as follows:
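A sketch of that section (sizes and volume group names are placeholders; swap stays fixed while root grows to fill the rest):

zerombr
clearpart --all --initlabel
part /boot --fstype=ext4 --size=500
part pv.01 --size=1 --grow
volgroup vg_main pv.01
logvol swap --vgname=vg_main --name=lv_swap --size=2048
logvol / --fstype=ext4 --vgname=vg_main --name=lv_root --size=1 --grow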

I do want to note that the logvol entries are not necessarily processed in the order they are declared, so it is perfectly fine for the swap logvol to be declared after the root (/) logical volume.

Bypassing “Storage Device Warning”

The only problem I had in regards to a prompt-less install was the “Storage Device Warning” asking if I was sure I wanted to write to the disk and lose all of my data. No matter what I put in the partition specification of kickstart, it would always prompt. The answer is to use zerombr yes. See the option “zerombr” as defined within the CentOS kickstart guide. This can be placed anywhere in the kickstart file (well, except in %packages, %post or similar); just put it up near the top.

Auto Reboot

After the installation is complete, automatically reboot the machine. This works perfectly in ESXi since it automatically unmounts the virtual cdrom after the first boot of the guest! Simply put  reboot anywhere in your kickstart – near the top is probably best.

VMware Tools RPM

In order for the vSphere Client to monitor and execute certain tasks on the guest vm, VMware Tools is required. This will show you things like IP addresses, hostnames and guest state as well as integrated shutdown/reboot tools.

Add VMware Tools to YUM

Put the following repo configuration in /etc/yum.repos.d/vmware-tools.repo:
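Something like this – the baseurl is the packages.vmware.com layout for an ESXi 5.x / RHEL6 guest as I remember it, so adjust the ESXi version in the path and double-check the GPG key URL:

[vmware-tools]
name=VMware Tools for RHEL6 / CentOS 6
baseurl=http://packages.vmware.com/tools/esx/5.1/rhel6/x86_64
enabled=1
gpgcheck=1
gpgkey=http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub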

Then execute the following shell in %post of kickstart:
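A minimal version of that %post block (logging to a file is optional but handy when debugging an unattended install):

%post
yum -y install vmware-tools-plugins-guestInfo >> /root/ks-post.log 2>&1
%end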

The important part to mention here is that the package is called vmware-tools-plugins-guestInfo. All the dependencies will come with it, so no worries there.

Mirroring a Repository for NFS Kickstart Installation

Create the Repo Mirror

Remember, my goal is to be able to quickly add a CentOS VM. With that, I don’t want to wait 30 minutes to pull down packages from a mirror in Iowa, New York or Cali. I want to pull it down once, keep it up-to-date and have my local install pull from my local mirror. For simplicity’s sake, I’ll put the mirror in /repo/centos.
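The initial pull looks something like this (the mirror host is a placeholder – pick one near you that speaks rsync):

rsync -avSHP --delete --exclude "local*" --exclude "isos" \
    mirror.example.com::centos/6.3/ /repo/centos/6.3/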

I am choosing to exclude any local files/directories (“local*”) and also the huge DVD ISOs (“isos”). Also note that the mirror format is host::path and that the mirror host must support the rsync protocol.

Keep the Local Mirror Updated

To keep the local repo copy up-to-date, run this script via cron (by the way, I stole this from somewhere, I just don’t remember). Please don’t forget to swap out the mirror hostname and path with something that makes more geographical sense to you.
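I won’t pretend this is the exact script, but it amounts to the rsync above wrapped in a lockfile check, plus a cron entry, along these lines:

#!/bin/bash
# /usr/local/bin/update-centos-mirror.sh (sketch; swap in a nearby mirror)
LOCK=/var/lock/centos-mirror.lock
[ -e "$LOCK" ] && exit 0
touch "$LOCK"
rsync -avSHP --delete --exclude "local*" --exclude "isos" \
    mirror.example.com::centos/6.3/ /repo/centos/6.3/ >> /var/log/centos-mirror.log 2>&1
rm -f "$LOCK"

# crontab entry, e.g. nightly at 02:30:
# 30 2 * * * /usr/local/bin/update-centos-mirror.sh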

Configure NFS for Kickstart Network Installations

NFS server support is built into CentOS and running by default, so this is pretty easy. Add the following to /etc/exports:
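Per the description below, the export line is:

/repo/centos    172.16.0.0/16(ro,sync,all_squash)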

This exports the directory “/repo/centos” for NFS. Only the subnet 172.16.0.0/16 is allowed access (no credentials required). It is mounted as read-only (ro), connections are synchronous as opposed to asynchronous (sync), and all connections are anonymous for security purposes (all_squash). See man exports(5) if you need more help.

Restart NFS via  service nfs restart.

I feel like I’m missing something with NFS, but I don’t recall; this was too easy. In my memory there was a struggle with rpc!

Update iptables for NFS

Edit /etc/sysconfig/iptables and throw these rules in there before -A INPUT -j REJECT --reject-with icmp-host-prohibited.
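The exact rule set depends on whether you pin the auxiliary NFS daemons to fixed ports (via /etc/sysconfig/nfs); at minimum, portmapper and nfsd need to be reachable from the kickstart subnet. A sketch:

-A INPUT -s 172.16.0.0/16 -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -p udp -m udp --dport 2049 -j ACCEPT
# mountd/statd/lockd use dynamic ports unless fixed in /etc/sysconfig/nfs
# (e.g. MOUNTD_PORT=892); pin them and open those ports the same way.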

And restart iptables via  service iptables restart.

Configure Kickstart to Use Local Repo via NFS

This is an easy one-line if everything is set up correctly. Add the following after the “install” option within the kickstart configuration.
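Assuming the NFS server lives at 172.16.0.1 (a placeholder) and the mirror tree from earlier, it looks like:

nfs --server=172.16.0.1 --dir=/repo/centos/6.3/os/x86_64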

Use Local Repo Post Install

So you want to keep using your new local repo beyond the kickstart installation? No worries. Install apache, configure the vhost and update ks.cfg.

Inside vhosts.conf:
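A minimal Apache 2.2 vhost sketch (ServerName is a placeholder; directory indexes make browsing the repo easier):

<VirtualHost *:80>
    ServerName repo.some.host.local
    DocumentRoot /repo/centos
    <Directory /repo/centos>
        Options +Indexes +FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from 172.16.0.0/16
    </Directory>
</VirtualHost>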

Add the following rule to iptables:
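Again, placed before the catch-all REJECT rule:

-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT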

Restart iptables via service iptables restart;.

Start httpd via  service httpd start; chkconfig httpd on;.

Update the kickstart configuration:
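One way to do it (a sketch, not necessarily what I originally ran) is to have %post drop a repo file such as /etc/yum.repos.d/local-mirror.repo pointing at the vhost above:

[local-base]
name=Local CentOS mirror
baseurl=http://repo.some.host.local/6.3/os/x86_64/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6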

Done!

Kickstart Sample Configuration

For the option “rootpw” use grub-crypt with the hash algorithm specified under authconfig --passalgo=X (to replace DEFAULT_SALTED_ROOT_PASSWORD). In the sample ks.cfg file, I have sha512, so:
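For example (the hash below is a placeholder for the real output):

# prompts twice, prints a SHA-512 crypt hash
grub-crypt --sha-512

# then in ks.cfg:
rootpw --iscrypted $6$...paste.the.grub-crypt.output.here...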

Using the Kickstart Configuration

The idea is to create a custom ISO with the kickstart configuration embedded, but I haven’t done this yet. So for now, I’m hosting the file as ks.cfg on an intranet HTTP server and booting a CentOS 6.3 netinstall (~200 MB). At the bootloader prompt, specify the extra parameters vmlinuz initrd=initrd.img ks=http://some.host.local/ks.cfg. This installs all the packages, updates as needed, partitions the disk, runs a custom script, and reboots the machine.

Brain dump complete.

Setting up BIND 9 on CentOS 6 and Securing a Private Nameserver on the Internet

Today I was setting up a brand new server over at LiquidWeb (I have been hosting with this Lansing, MI based company for years, although I’m stubborn and have never tried out their heroic support). I already had the IP addresses (2) and the box provisioned. It is a clean install of the latest CentOS 6 – that means no cPanel/WHM, Plesk or similar. The box will serve many purposes, but it also needs its own nameserver. For the sake of this tutorial, the example domain will be putthingsdown.com and the two IP addresses my host provided are 11.22.33.44 and 11.22.33.45.

Register Your Private Nameservers at Your Registrar

My one-and-only registrar is GoDaddy. They keep things simple and allow for flexibility as far as domain management goes. They are just my registrar: I do not host with them, use their mail servers, nor their nameservers.

This part is simple: when you register the domain name, navigate to the domain management tool and update the nameservers to “ns1.putthingsdown.com” and “ns2.putthingsdown.com” – these do not have to exist yet and will be created below.

Lastly, register the nameservers and a utility host at GoDaddy by adding the following three “Host” entries (not subdomain entries, but host entries – there is a difference):

  1. Hostname is ns1 and IP address is 11.22.33.44.
  2. Hostname is ns2 and IP address is 11.22.33.45.
  3. Hostname is host and IP address is 11.22.33.44.

Configuring named

  1. First, get the named service installed:  yum install bind-chroot
  2. Notice that bind-chroot will install under /var/named in its own chrooted directory. This is for security purposes. There should be hardlinks to the chrooted “data” and “slave” directories (this was updated with EL6 – yay!)
  3. Configure the rndc key: (Use the ampersand to send this process to the background, it will take 10-15 minutes to generate) rndc-confgen &
  4. Secure the newly generated rndc.key file:  chown named /etc/rndc.key; chmod 600 /etc/rndc.key;
  5. Get the rndc key name, which is encapsulated in double quotes in the generated file (it should be rndc-key by default):  grep key /etc/rndc.key
  6. Note on CentOS 6 and bind 9.7.3, there will not be a /etc/rndc.conf
  7. Create your first zone file in /var/named/data (example below):  vi /var/named/data/putthingsdown.com.zone
  8. Configure named (example below):  vi /etc/named.conf

Example Master/Authoritative Zone File
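A sketch of the zone matching the record-by-record notes below (serial and timers are examples):

$TTL 86400
@       IN  SOA ns1.putthingsdown.com. hostmaster.putthingsdown.com. (
                2012082001  ; serial (YYYYMMDDnn)
                7200        ; refresh
                3600        ; retry
                1209600     ; expire
                86400 )     ; minimum

        IN  NS      ns1.putthingsdown.com.
        IN  NS      ns2.putthingsdown.com.

ns1     IN  A       11.22.33.44
ns2     IN  A       11.22.33.45
@       IN  A       11.22.33.44
host    IN  A       11.22.33.44
www     IN  CNAME   putthingsdown.com.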

  •  The “serial” should be updated every single time this zone file is modified and reloaded into named. The format is as follows: YYYYMMDDnn (where nn is an incrementor for the same day, e.g. 01, 02, 03… 11, 12, 13)
  • refresh, retry, expire, and minimum are measured in seconds, just a note that these aren’t always followed, especially by residential DNS mirrors/servers.
  • The SOA (Start of Authority) is the… well… start of the zone record. This section (including the parens) is what kickstarts the zone file and defines the metadata.
  • After the SOA, the first two records are NS (NameServer) records: the TTL (time-to-live) is 86400 seconds, or 1 day, and they point to the (not-yet-existent) nameservers ns1 and ns2.
  • The next 2 records are A (Address) records that register the ns1 and ns2 subdomains and bind them to IP addresses – now the two NS records have something to point to.
  • The third A record is the actual domain itself which is bound to the primary IP address. This is proof that you really don’t need the “www” in front of the domain name, although this is also dependent on the web server configuration
  • “host” will serve as a utility subdomain which also points to the primary IP. This is helpful in the future if there is a secondary util server used for SSH to manage all the servers within the private network.
  • Lastly, the www subdomain acts as a CNAME (Canonical Name), or alias, to putthingsdown.com – essentially putthingsdown.com and www.putthingsdown.com will take you to the same place. Hint: try not to use CNAME records if you don’t have to, since they require a secondary lookup, although they do make your zone more flexible for future changes.

Example named Configuration

The lines highlighted below are the ones that changed from the default named.conf provided by the installation script. I am not going to go over this in detail, but do want to highlight a few pieces. Note: this is the insecure version of the named configuration; continue reading for security enhancements.
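A sketch of the relevant pieces (the key name and secret come from your generated /etc/rndc.key; logging and the default hint/localhost zones are omitted here):

include "/etc/rndc.key";

acl "trusted" {
    11.22.33.44;
    11.22.33.45;
};

controls {
    inet 127.0.0.1 port 953
        allow { 127.0.0.1; } keys { "rndc-key"; };
};

options {
    listen-on port 53 { 127.0.0.1; 11.22.33.44; 11.22.33.45; };
    directory       "/var/named";
    dump-file       "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    allow-query     { any; };
};

zone "putthingsdown.com" IN {
    type master;
    file "data/putthingsdown.com.zone";
};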

  • The first include is to the rndc.key file that was generated in step 3 above.
  • The “trusted” ACL has your two IP addresses in it.
  • Within the controls declaration, the key name found in step 5 above should be defined.
  • “listen-on” needs to have both IP addresses listed – named binds itself to port 53 on both of these addresses
  • Setting “allow-query” to any will allow any upstream DNS server the ability to query yours.
  • The section at the bottom for the zone is the inclusion of the zone file created in previous.

Almost Done: Test Stuff

Before we take the step to secure the named server, let’s make sure it works first. Restart the service (hopefully it doesn’t throw any errors). Once it restarts successfully let’s start it on boot-up (with its default run levels); make sure the chkconfig took:
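That is:

service named restart
chkconfig named on
chkconfig --list named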

I’ll assume port 53 is open for TCP and UDP. Honestly, this isn’t me “not knowing” which protocol: DNS primarily uses UDP, but falls back to TCP for zone transfers and responses too large for a single UDP packet.
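If iptables is in the way, rules along these lines (placed before the catch-all REJECT in /etc/sysconfig/iptables) will open it up:

-A INPUT -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT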

Lastly, use a public tool on the web to verify your DNS configuration. I like to use NsLookup by Network-Tools.com. Simply put “putthingsdown.com” in the domain field and hit GO. If everything is set up correctly, some records from the zone file will be listed, specifically the SOA and NS, as well as the primary A.

Securing named

After some super-fast searching, I found this nifty BSP (not sure what BSP stands for?) over at NIST.gov: How to Secure a Domain Name Server (DNS). Here’s the gist of §3.1.2 (most of these are snippets, they shouldn’t be interpreted verbatim):

  • override “version” number using  options { version "dunno kthxbye"; };
  • restrict zone transfers:  options { allow-transfer { localhost; trusted; }; };
  • restrict dynamic updates in each zone:  zone "putthingsdown.com" { allow-update { localhost; trusted; }; };
  • protect against DNS spoofing:  options { recursion no; };
  • restrict by default all queries:   options { allow-query { localhost; trusted; }; };
  • allow individual zone queries:  zone "putthingsdown.com" { allow-query { any; }; };
  • Verify with security tools: DNSWalk (online version; the zone transfer should fail with “REFUSED”), ZoneCheck, and dlint. Happy hunting.

Download & Install a SSL Cert into a Java keystore with keytool

Today I was notified that our notification mail server was changing hosts. So I made a list of the services that use the notify email address (e.g. notifications@domain.tld) – this address is responsible for sending info to our network users, covering everything from issue tracking updates to password recovery (and then some). With security in mind, all emails should be sent over SSL (the mail server supports SSLv3), but the problem is that the installed cert is self-signed. Now, I know it is a good cert – I generated it; we just don’t wanna fork over the $$ to have a root cert provider put their stamp of approval on it, and it’s used for internal purposes only.

Now, if you generated a self-signed SSL cert and want to import that, just skip the “Download the SSL Certificate” section.

There’s Always Something to Mess Your Day Up

Normally this isn’t a big deal: just update the SMTPS credentials and be on your way. However, most of our service applications are Java-based. You’re thinking, “no biggy, just turn on the flag to trust all certs.” Not that easy; the options within our apps don’t have this fancy little checkbox. So I guess I have to do it the hard way: download the cert from the mail server and add it to the Java keystore, restart the service (bah, I have down time), and cross my fingers that it works.

Download the SSL Certificate

This one is pretty easy, and really straightforward (with the help of Didier Stevens’ quickpost):
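Roughly as follows – the host and the SMTPS port (465) are placeholders for whatever your mail server uses, and -showcerts dumps the certificate chain:

openssl s_client -connect host.domain.tld:465 -showcerts
...
-----BEGIN CERTIFICATE-----
MIIC...lots of base64...
-----END CERTIFICATE-----
...
^C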

Obviously the above dump isn’t exactly what you’ll get, but you get the idea… Also, notice the Ctrl+C up there; this is important. The openssl command hangs and you don’t need all the extra stuff, so just wait for the initial dump and cancel the command.

Next, copy the base64 encoded certificate to a .pem file. Don’t forget to include “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----”. Just save it to something like host.domain.tld.pem. Really, just copy and paste from the terminal: in PuTTY, selecting text with the left mouse button copies it to the clipboard. Issue the command “nano host.domain.tld.pem”, right-click in nano to paste, then Ctrl+O to write out (save the file) and Ctrl+X to exit nano. Done.

Lastly, to figure out the host the certificate belongs to, run the following (this will also confirm if you’ve copied the PEM base64 over correctly):
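Something like:

openssl x509 -in host.domain.tld.pem -noout -subject -issuer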

This will show the lines which include “CN” (e.g. “…/CN=host.domain.tld/emailAddress…”). The CN parameter is the host/domain name that the certificate is registered under. This will be required information when importing with keytool.

Install the SSL Certificate with Java’s keytool into the keystore

There are a few things to accomplish in this section: find and locate the JRE you want to use, find the keytool script, find the trusted certificates file (cacerts), and execute a single-line command.

If there are several JREs on the system, figure out which one the application uses. For example, the standalone app I have installed has its own jre folder which contains ./bin/keytool; however, I also have a system-wide Java installation. To expose all the keytools on your system use find / -name "keytool" …don’t use whereis, since only registered applications appear with that command.

My setup looks something like this:
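Roughly like this (the JDK path and version are illustrative; the /opt/thirdpartyapp paths belong to the app’s embedded JRE mentioned below):

/usr/java/jdk1.6.0_45/bin/keytool                  # system-wide JDK
/usr/java/jdk1.6.0_45/jre/lib/security/cacerts
/opt/thirdpartyapp/jre/bin/keytool                 # app's embedded JRE
/opt/thirdpartyapp/jre/lib/security/cacerts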

Since I need the service app to have the certificate trusted, I’ll use its own embedded jre keytool, “/opt/thirdpartyapp/jre/bin/keytool”.

Also, the cert needs to be installed into the trusted certs file, so within the particular java/jre installation there should also be a ./lib/security directory with a cacerts file.

TL;DR Working with keytool

Now with the previously retrieved information, here’s the bread’n’butter:
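Using the embedded JRE paths from above (swap in your own alias, keystore path and pem file):

/opt/thirdpartyapp/jre/bin/keytool -import \
    -alias host.domain.tld \
    -keystore /opt/thirdpartyapp/jre/lib/security/cacerts \
    -file host.domain.tld.pem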

  • The import switch tells keytool we want to import the pem into the trusted certs file (cacerts).
  • The alias switch’s value should be the CN you found in the previous section (after running openssl x509…); this is the domain for which the cert was created
  • The keystore switch is the path to the Java keystore file, usually cacerts, which stores trusted certificates
  • The file switch’s value is the path to the pem that we created in the previous section.

keytool will now attempt to import the cert. First it’ll prompt for a password, unless you know otherwise try “changeit” or “changeme” – these are widely used defaults. Once you provide the correct password it’ll dump out a bunch of information about the certificate it is importing and lastly ask you if you want to “Trust this certificate” – type “yes” and hit return: “Certificate was added to keystore” is presented.

Now restart the Java application (however it is you do that) and it’ll recognize the SMTPS connection (or what ever else you’re working towards, e.g. HTTPS, SFTP, POP3S etc.)

Piece of cake, after you do it a few times. (Just realized I think I changed tenses and POV a couple times in this write-up… oh well.)

Further Reading

…like at the bottom of each chapter in your school text books, don’t worry kids, there isn’t a chapter review or quiz.

Password-less SSH with Public/Private Keys

From the POV of an advanced *nix user, setting up public and private keys for password-less SSH logins seems trivial. However, to a beginner/novice user this can be confusing. I’ll admit, as comfortable as I am with managing Linux servers, configuring RSA keys on the local and remote machines was messing with my head – until I got the hang of it. I still don’t have a complete top-to-bottom understanding of this when it comes to different versions of SSH and then some, but enough to jot down a note for future reference just in case. I’ll assume the reader has a basic knowledge of the CLI, connecting to a remote server with SSH, and that the two (or more) devices in question are both Linux machines.

By the way, there are a lot of these guides out there, but I couldn’t find one that helped. I still had to screw around a bit to really get the hang of it. They were either too in-depth, too brief, only covered certain parts, or simply didn’t fit my needs.

Before we get started, there is an easier and less intrusive way to do this if you’re running cPanel. I’ll write about that later.

Use Case

Close your eyes, find your power animal – slide… imagine you’re logged into some *nix box (we’ll call this one “foo.com” with user “jack”). Now you need to issue some commands on another remote machine (we’ll call this one “bar.net” with user “jill”), so what do you do?
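You SSH over, of course:

ssh jill@bar.net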

At this point, it will make the connection, and if it is the first time it will ask whether you want to permanently trust the host, to which you reply “yes” after careful consideration. SSH prompts you for a password, you type it in and you’re good to go. You start navigating the filesystem and ferociously executing commands.

Now this scenario is all fine and dandy, until you’re accessing the server so often that you get sick and tired of typing your 15 character password (with upper/lowercase letters, numbers and symbols). Perhaps a better excuse to not enter the password is that you have a non-interactive shell script that needs to make this connection temporarily, in which case you do not have the luxury of entering your password in the prompt; don’t even tell me you were thinking of embedding the password into the command with the p switch. Wish there was a better way? Let me show you the light.

Benefits of Using Keys with SSH

  • Doesn’t prompt for a password (duh)
  • Can be used with non-interactive/unmonitored scripts (play off of the previous bullet)
  • FAR more secure – this is probably the biggest reason you should be going with this approach regardless of the previous benefits
  • Establish a “trusted connection”
  • Help prevent brute force attacks
  • Read this article on “old” password-style authentication

TL;DR Lezzdo’t.

Remember jack and jill on foo.com and bar.net? Well, they didn’t fetch a pail of water; instead they generated an RSA key with ssh-keygen. We’re getting back to that now… There are two common key types: DSA and RSA. DSA is supposedly faster but not as compatible (it doesn’t work with Protocol 1). RSA supports larger key sizes (e.g. 4096-bit) and is more compatible, but at the cost of speed. Up to you, but I’ll be sticking with RSA.

Configure SSH and SSHD on foo.com and bar.net

  •  Make sure that ssh is configured correctly on foo.com and bar.net: Use Protocol version 2, not 1 (you can use 1, but it is a PITA especially when you’re configuring multiple keys)
  • On bar.net, in your sshd configuration (e.g., /etc/ssh/sshd_config), you’ll need public key authentication enabled (see the sketch after this list)
  • All other defaults (OOTB) should be good, maybe when you’re done you can set “PasswordAuthentication no” within bar.net’s sshd_config to force the usage of only key exchanges.
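The sketch referenced above – these are usually the defaults already, so this is mostly a sanity check of /etc/ssh/sshd_config on bar.net:

Protocol 2
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys
# once key logins work, optionally force them:
# PasswordAuthentication no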

Generate a Key Pair on foo.com for Jack to Use

  1.  Create a .ssh directory in jack’s home directory if it doesn’t exist yet, /home/jack or ~/ if you’re logged in as jack. Actually, from here on out, just assume you’re logged in as jack; if not, make sure you can sudo -u jack so it seems like jack is issuing the commands. mkdir /home/jack/.ssh
  2. chown jack:jack /home/jack/.ssh
  3. chmod 700 /home/jack/.ssh
  4. step 3 is important, make sure the perms took: stat .ssh
  5. ssh-keygen -t rsa -C "the key for jill on bar dot net"
  6. the t switch tells ssh-keygen what crypt to use (t = type) and the C switch is just a comment to leave in the public key string (c = comment)
  7. Hit return
  8. Now it starts to generate a public/private key pair and asks you where to save giving you a suggestion (default): /home/jack/.ssh/id_rsa
  9. id_rsa is the default and you can stick with that, but if you want to store multiple keys, you’ll have to outsmart the keygen. Remember, the computer is just a tool: it’s faster than you, but not nearly as smart, so you need to be smarter than the computer (a requirement unless you’re competing against an IQ of 150). Before you hit enter, read step 10
  10. save it to the file: /home/jack/.ssh/id_rsa.jillbarnet
  11. take a moment and think, who is Jill Barnet?
  12. when prompted for a passphrase just keep it blank and hit return (twice)… disclaimer: this is NOT really a good idea. you should always have a passphrase for security purposes, but i’m lazy right now and don’t wanna go through the extra steps, if you’re really worried RTFM.
  13. notice /home/jack/.ssh/id_rsa.jillbarnet and /home/jack/.ssh/id_rsa.jillbarnet.pub
  14. DONE.

Configure Jack’s Account to Use Multiple Key Pairs

This is important if you followed the last steps exactly. We generated a specific key file. By default, ssh looks for a file called id_rsa. If we hadn’t specified a file, ssh-keygen would’ve written to that default id_rsa file and ssh would load it by default every time. This sucks when it comes to managing multiple keys, so forget the default functionality; we’re gunna rock this bitch.

  1. Create a file called config: touch /home/jack/.ssh/config
  2. chown jack:jack /home/jack/.ssh/config
  3. chmod 600 /home/jack/.ssh/config
  4. edit the file (i like nano, sorry @ most of you vi snobs): nano /home/jack/.ssh/config
  5. For each id_rsa.* entry we’ll need a Host and IdentityFile directive, all directives separated by at least a new line…
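Something like this – six lines, matching the breakdown below:

Host bar.net
    IdentityFile ~/.ssh/id_rsa.jillbarnet
Host bobby.org
    IdentityFile ~/.ssh/id_rsa.rickybobbyorg
Host *
    IdentityFile ~/.ssh/id_rsa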

The Host directive means that only the following line (IdentityFile) applies when connecting to that specific host. The IdentityFile directive describes which key file to use in that situation. So lemmie pick this apart really quick. I indent because it looks pretty.

  • Lines 1 and 2 mean that ssh should use the file id_rsa.jillbarnet when connecting to host bar.net.
  • Lines 3 and 4 tell ssh to only use file id_rsa.rickybobbyorg when connecting to bobby.org (where did he come from? read the next section and find out)
  • Lines 5 and 6 are basically a catch-all. The asterisk is a wildcard; it means, for any host that hasn’t matched yet, use the default identity id_rsa.

Hint: If those files don’t exist, ssh won’t crap out; it fails the lookup gracefully and continues normal operations (and asks for a password). So no worries there.

In theory you could combine multiple identity files under the same Host directive, but I haven’t tried it, so don’t take my word for it. You can also wildcard partial host names, e.g. Host *.domain.tld – this I did try.

Understanding the Private Key and Public Key

So here’s the confusing part; really try to pay attention. Remember step 13 (the scrunched-together letter B) from the previous section? Why did it generate two files when you only specified one?

Well, this is where the private key and public key come into play. The private key is “id_rsa.jillbarnet” and the public key is “id_rsa.jillbarnet.pub”. The private key STAYS with Jack on foo.com; it should never be shared – that’s why it is “private”, you I-D TEN TEE. The public key is the one you can whore out to other accounts. I’ll squash the myth right now: the public key is not specifically for jill over at bar.net; it can be used for ricky over at bobby.org, or at both places (jill@bar.net and ricky@bobby.org). The public key simply marks a “trusted” or “authorized” user who presents the matching private key. Read that last sentence two or three more times.

The private key looks something like this:
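(Truncated, obviously – never paste a real one anywhere.)

-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA...
...many more lines of base64...
-----END RSA PRIVATE KEY-----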

And the public key looks something like this (the key type, a base64 encoded string, plus your custom comment):
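(Also truncated; note the comment from the -C switch at the end.)

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAB...== the key for jill on bar dot net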

Cool? Cool. Now let’s prep jill’s home dir.

Prep Jill’s Home Dir Over at bar.net for Jack’s Key

Let’s assume jill’s home directory is /home/jill. Again, assume you’re logged in as user jill and that jill belongs to group jill (previously I assumed jack belonged to group jack).

  1. Make sure jill has a .ssh dir in her home directory, if not make one: mkdir /home/jill/.ssh
  2. Verify jill is the sole owner: chown jill:jill /home/jill/.ssh
  3. Make sure jill is the only one allowed to access that .ssh directory: chmod 700 /home/jill/.ssh
  4. make a file called authorized_keys: touch authorized_keys
    don’t know what touch does? RTFM: man touch. (just wanted to type “man touch”)
  5. set restrictive perms (that’s permissions, not a hair-do) for authorized_keys: chmod 700 authorized_keys
  6. At this point, in some envs/distros they require an authorized_keys2 file, just make sure it is identical to authorized_keys in terms of content and permissions. Or in my case with CentOS5, you don’t need it. so skip this step. I experienced this issue on a Debian box, fyi.
  7. Copy the contents of id_rsa.jillbarnet.pub (the one with the content that starts “ssh-rsa” then has the base64 encoded string w/ the comment) and append it to authorized_keys. There are a million-n-a-half ways to do this. Figure it out.
  8. For multiple entries in authorized_keys, just delimit them by new lines, so each line will start with “ssh-rsa”
  9. I feel like i’m forgetting something, but that should do it.

Test From Jack’s Account on foo.com
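From jack’s shell on foo.com:

ssh jill@bar.net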

Provided you did everything correctly, it will log you in w/o a password prompt. If it asks for a password, you screwed up somewhere; try again, do not pass go, do not collect 200 bones. Note that restarting SSHD will NOT solve your problem – no service restart is required.

That’s All She Wrote

Well that’s it. So now you can connect to multiple hosts using multiple private keys for jack at foo.com (using multiple id_rsa files and the config file updates), and multiple people can connect to jill over at bar.net (by copying the public key to authorized_keys and delimiting by new lines). Any questions? Too bad, I closed my comments.