CentOS 5.8 + Apache 2.2 + PHP 5.3 + suPHP 0.7.1

So I’m a bit of a purist when it comes to CentOS administration. CentOS is built on the idea of stability and sustainability. Without the addition of extra 3rd-party repositories, it provides the bare necessities to run a reliable and secure server. Don’t get me wrong though, there are plenty of great packages out of the box (from OpenSSL, Apache, PHP to OpenLDAP, PostgreSQL and then some), but sometimes you need some heavy-duty next-gen power tools like ffmpeg, nginx, OpenVPN or suPHP. Most of these packages are not available from the “CentOS Certified” base, extras and updates repositories; in fact, you can’t get them via yum without adding a third-party repo like RPMForge.

With that said, I need suPHP for a PHP staging environment. I’m not going to talk about what suPHP is, you can read about it on your own time. Going back to me being a purist, I don’t use RPMForge repos or anything similar. I like to stick to base and extras only and since there isn’t a suPHP RPM available – I’ll have to build it myself. The proper way to do this is to build it as an RPM (Red Hat Package Manager) and install via yum from the locally built RPM, but for whatever reason I can never get myself to do it this way.

Reminder: suPHP can only use PHP CGI, not PHP CLI (so look for a php-cgi binary, not just a php one).

Downloading & Building suPHP from Source

Before we start, make sure you have dev tools:
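Something like this should cover it (gcc, make, autoconf and friends):

    yum groupinstall "Development Tools"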

We’ll also need development packages for httpd (Apache 2.2), php53 (PHP 5.3), and apr (Apache Runtime Libraries and Utilities):
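On a stock CentOS 5.8 box that boils down to roughly:

    yum install httpd-devel php53-devel apr-devel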

Now create a working directory, download the suPHP source, configure it and build (make). Note that you need to figure out where the apr config is located; mine is at /usr/bin/apr-1-config.
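Roughly what that looks like (adjust the version/URL to whatever suphp.org currently offers; the configure flags are the ones I'd expect to need, so treat them as a starting point):

    mkdir -p ~/build/suphp && cd ~/build/suphp
    # grab the 0.7.1 tarball from suphp.org and unpack it
    wget http://suphp.org/download/suphp-0.7.1.tar.gz
    tar -xzf suphp-0.7.1.tar.gz && cd suphp-0.7.1
    # point configure at apxs and the apr config script found above
    ./configure --with-apxs=/usr/sbin/apxs \
                --with-apr=/usr/bin/apr-1-config \
                --with-apache-user=apache \
                --with-setid-mode=paranoid
    make
    make install
    # if make install doesn't drop mod_suphp.so into Apache's modules dir,
    # copy it there yourself from src/apache2/.libs/mod_suphp.so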

Configure Apache + PHP to use suPHP

I'll admit, I relied heavily on the suPHP docs, but even they were not 100% complete. That, and sites like this one didn't provide any useful information – I'm mainly aggravated that they used RPMForge and did not use php53 packages. But after some re-reading, reinterpreting and trial & error, I'm up and running… and this is how it went (starting to get tired of writing this post, so this will be short and sweet):

Important Files

  • /usr/local/etc/suphp.conf (this is the core suPHP configuration)
  • /etc/httpd/conf.d/suphp.conf (this is the Apache mod_suphp configuration… needed to create this)
  • /etc/httpd/conf.d/php.conf (this is the php configuration that I had to disable)
  • /etc/httpd/conf/httpd.conf (for some of the primary virtual hosts… all my other vhosts are in separate files)

suPHP Core Configuration

For /usr/local/etc/suphp.conf, I based mine off of the suphp.conf-example file located in the source code's doc directory. This is an ini-style configuration:
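Trimmed down, mine ended up looking roughly like this (the handler points at the php-cgi binary from the reminder at the top; the user, docroot and limits are what I'd expect on CentOS, so double-check yours):

    [global]
    ;Path to logfile
    logfile=/var/log/suphp.log
    loglevel=info
    ;User Apache runs as
    webserver_user=apache
    ;Path all scripts have to be in
    docroot=/var/www
    ;Security checks
    allow_file_group_writeable=false
    allow_file_others_writeable=false
    allow_directory_group_writeable=false
    allow_directory_others_writeable=false
    check_vhost_docroot=true
    errors_to_browser=false
    env_path=/bin:/usr/bin
    umask=0077
    min_uid=100
    min_gid=100

    [handlers]
    ;Handler for php scripts
    x-httpd-php="php:/usr/bin/php-cgi"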

mod_suphp Configuration

/etc/httpd/conf.d/suphp.conf:
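Mine is short; something along these lines (the handler name has to match the one defined in suphp.conf above):

    # load the module produced by the build
    LoadModule suphp_module modules/mod_suphp.so

    # map .php to the x-httpd-php handler (must match the handler in suphp.conf)
    AddHandler x-httpd-php .php
    suPHP_AddHandler x-httpd-php

    # on globally; individual vhosts can override
    suPHP_Engine on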

PHP Configuration

/etc/httpd/conf.d/php.conf: just comment everything out; you don't need it now that mod_suphp is handling PHP.
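For reference, the handful of lines in mine once they were commented out (your stock file may differ slightly):

    # LoadModule php5_module modules/libphp5.so
    # AddHandler php5-script .php
    # AddType text/html .php
    # DirectoryIndex index.php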

Apache Virtual Host (vhost) Configuration

This can be set in each individual vhost if you want to override. For example:
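A minimal sketch; the server name, docroot and user/group are placeholders, and suPHP_UserGroup only applies if you built with a setid mode that uses it (force/paranoid):

    <VirtualHost *:80>
        ServerName staging.example.com
        DocumentRoot /var/www/staging

        suPHP_Engine on
        suPHP_AddHandler x-httpd-php
        suPHP_UserGroup staginguser staginggroup
    </VirtualHost>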

Almost Done…

Now restart httpd:
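Which on CentOS is just:

    service httpd restart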

Refresh a PHP page and check. If it didn't work, re-read this post or email me (contact info in my resume) and I won't help, but I'll refine this post and provide more information.

MySQL UDF: Perl Regular Expression Clauses

Currently working on migrating a database in MySQL. I needed to do some Perl-like regex find-and-replaces on cells. MySQL does not support this natively. It does support REGEXP/RLIKE, which is basically a LIKE clause with regular expression support – this is crap: it is only useful for lookup queries and not data manipulation. One may argue that relational databases should only be used to load and serve static data and any manipulation of data should be done outside of the database. Well I say, "Bollocks!" in this case. When I'm on a utility server doing one-time, one-way updates to row data I don't care if there's a performance hit – of course I'm not stupid enough to implement and utilize these types of queries in a production environment (in that case the data coming in should be prepared and optimized beforehand in order to maximize query performance).

So after some really brief searching I found this little library called “lib_mysqludf_preg.” I’ll just document my installation procedure. Remember, I’m on CentOS 5.8 (recently upgraded from 5.6, went pretty smooth), oh and per usual I’m doing this as root since I haven’t broken that habit yet.

Download and Build the Module

First, a pcre module is required, so I went ahead and grabbed that:
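On CentOS that's the pcre-devel package:

    yum install pcre-devel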

While you’re at it, make sure you have things like make:
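If you skipped the dev tools earlier, something like:

    yum groupinstall "Development Tools"    # or at minimum: yum install gcc make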

Create a compilation directory and grab the lib_mysqludf_preg source (double check the site for the latest stable build, at the time of this writing it is 1.0.1):
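Roughly like so (the exact download URL is from memory, so grab the real link from the project page):

    mkdir -p ~/build && cd ~/build
    wget http://www.mysqludf.org/lib_mysqludf_preg/lib_mysqludf_preg-1.0.1.tar.gz
    tar -xzf lib_mysqludf_preg-1.0.1.tar.gz
    cd lib_mysqludf_preg-1.0.1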

Now you should be in the directory full of source code. Go ahead and perform the preliminary configuration and checks for the upcoming build:
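Nothing fancy here:

    ./configure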

Everything went smoothly for me. It found mysqlbin but threw the notice "ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)" – don't be alarmed; it is just making sure mysqlbin is available. It also found mysql_config at /usr/bin/mysql_config and PCRE at /usr/bin/pcre-config (with pcre v6.6).

OK, so let’s “make” it:
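No flags needed:

    make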

OOPS! Looks like the initial make crapped out. I’m missing mysql dev files… soooo:
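On CentOS the headers live in mysql-devel:

    yum install mysql-devel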

Now reconfigure and try again:
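Same dance as before:

    ./configure && make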

Finally! Looks like everything went smooth. OK let’s install it:
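Again, a one-liner:

    make install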

Success! Now to see if I can load the so module within MySQL:
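My first attempt looked something like this (this is the step that bombed with the "can't find the module" complaint):

    mysql -u root -p
    mysql> CREATE FUNCTION preg_replace RETURNS STRING SONAME 'lib_mysqludf_preg.so';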

Uh-Oh, can’t find the .so module…. read on.

Register and Install the Module in MySQL

We're not done yet. So far all we've done is build the .so plugin, but we need MySQL to find it. By default, my distro put it in /usr/local/lib, but MySQL doesn't know that exists. Why? Well, my plugin_dir configuration for MySQL is blank, which means it falls back to the system's dynamic link resolver. So I go look that one up:
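On my 64-bit box that meant peeking at the ld.so.conf.d entry the mysql packages dropped in (an assumption on my part; the file name may vary on yours):

    cat /etc/ld.so.conf.d/mysql-x86_64.conf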

…which gives me "/usr/lib64/mysql". This is where I need to copy those modules. I'll be honest here, I don't know if I need just the .so or all 3 files that the build created, so I'll copy all 3 just to be safe and give them execute permissions:
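The wildcard keeps me from caring which of the three actually matters:

    cp /usr/local/lib/lib_mysqludf_preg.* /usr/lib64/mysql/
    chmod 755 /usr/lib64/mysql/lib_mysqludf_preg.*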

Now let's register them with ldconfig and restart mysql (again, tbh, not sure if the mysql restart is required; actually, I don't think it is, but better safe than sorry):
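Quick and painless:

    ldconfig
    service mysqld restart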

Now install the user-defined functions:
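The source tree ships an installdb.sql that registers all the PREG_* functions (if memory serves), so:

    mysql -u root -p < installdb.sql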

Finally, test it all to make sure the installed UDFs are working:
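A couple of quick sanity checks I ran by hand (expected results in the comments):

    mysql> SELECT PREG_REPLACE('/fo+/', 'bar', 'food');                  -- 'bard'
    mysql> SELECT PREG_CAPTURE('/b(..)wn/', 'the quick brown fox', 1);   -- 'ro'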

All tests came back OK. Should be done now. That is all. kthxbye.

Download & Install a SSL Cert into a Java keystore with keytool

Today I was notified that our notification mail server was changing hosts. So I made a list of the services that use the notify email address (e.g. notifications@domain.tld) – this email address is responsible for sending info to our network users, which includes updates for everything from issue tracking to password recovery (and then some). With security in mind all emails should be sent over SSL (the mail server supports SSLv3), but the problem is that the installed cert is self-signed; now I know it is a good cert (I generated it), we just don't wanna fork over the $$ to have a root cert provider put their stamp of approval on it, and it's used for internal purposes only.

Now, if you generated a self-signed SSL cert and want to import that, just skip the “Download the SSL Certificate” section.

There’s Always Something to Mess Your Day Up

Normally this isn't a big deal: just update the SMTPS credentials and be on your way. However, most of our service applications are Java-based. You're thinking, "no biggy, just turn on the flag to trust all certs." Not that easy; the options within our apps don't have this fancy little checkbox. So I guess I have to do it the hard way: download the cert from the mail server and add it to the Java keystore, restart the service (bah, I have downtime), and cross my fingers that it works.

Download the SSL Certificate

This one is pretty easy and really straightforward (with the help of Didier Stevens' quickpost):
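The gist of it (465 being SMTPS here; swap in whatever port your service actually listens on):

    openssl s_client -connect host.domain.tld:465 -showcerts
    ...
    -----BEGIN CERTIFICATE-----
    (base64 blob of the server certificate)
    -----END CERTIFICATE-----
    ...
    ^C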

Obviously the above dump isn't exactly what you'll get, but you get the idea… Also, notice the Ctrl+C up there; this is important. The openssl command hangs and you don't need all the extra output, so just wait for the initial dump and cancel the command.

Next, copy the base64 encoded certificate to a .pem file. Don't forget to include "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----". Just save it to something like host.domain.tld.pem. Really, just copy and paste from the terminal. For the idiots: in PuTTY just select it all with the left mouse button (selecting copies it to your clipboard). Issue the command "nano host.domain.tld.pem" and right-click in nano to paste. Ctrl+O to write out (write the file) and Ctrl+X to exit nano. Done.

Lastly, to figure out the host the certificate belongs to, run the following (this will also confirm if you’ve copied the PEM base64 over correctly):
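Something like this does the trick:

    openssl x509 -in host.domain.tld.pem -noout -subject -issuer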

This will show the lines which include “CN” (e.g. “…/CN=host.domain.tld/emailAddress…”). The CN parameter is the host/domain name that the certificate is registered under. This will be required information when importing with keytool.

Install the SSL Certificate with Java’s keytool into the keystore

There are a few things to accomplish in this section: locate the JRE you want to use, find the keytool binary, find the trusted certificates file (cacerts), and execute a single-line command.

If there are several JREs on the system, figure out which one the application uses. For example, the standalone app I have installed has its own jre folder which contains ./bin/keytool; however, I also have a system-wide Java installation. To expose all the keytools on your system use find / -name "keytool" …don't use whereis; only registered applications appear with that command.

My setup looks something like this:
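(Illustrative only; the /opt/thirdpartyapp path is the real one referenced below, the system-wide path is hypothetical:)

    /usr/java/jre1.6.0_31/bin/keytool     <- system-wide JRE (hypothetical path)
    /opt/thirdpartyapp/jre/bin/keytool    <- the standalone app's embedded JRE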

Since I need the service app to have the certificate trusted, I’ll use its own embedded jre keytool, “/opt/thirdpartyapp/jre/bin/keytool”.

Also, the cert needs to be installed into the trusted certs file, so within the particular java/jre installation there should also be a ./lib/security directory with a cacerts file.

TL;DR Working with keytool

Now with the previously retrieved information, here’s the bread’n’butter:
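Using the embedded JRE's keytool and the paths from above (alias and file names come from the earlier examples):

    /opt/thirdpartyapp/jre/bin/keytool -import \
        -alias host.domain.tld \
        -keystore /opt/thirdpartyapp/jre/lib/security/cacerts \
        -file host.domain.tld.pem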

  • The import switch tells keytool we want to import the pem into the trusted certs file (cacerts).
  • The alias switch's value should be the CN you found in the previous section (after running openssl x509…); this is the domain the cert was created for.
  • The keystore switch is the path to the Java keystore file, usually cacerts, which stores trusted certificates
  • The file switch’s value is the path to the pem that we created in the previous section.

keytool will now attempt to import the cert. First it'll prompt for a password; unless you know otherwise, try "changeit" or "changeme" – these are widely used defaults. Once you provide the correct password it'll dump out a bunch of information about the certificate it is importing and lastly ask if you want to "Trust this certificate" – type "yes" and hit return: "Certificate was added to keystore" is presented.

Now restart the Java application (however it is you do that) and it'll recognize the SMTPS connection (or whatever else you're working towards, e.g. HTTPS, SFTP, POP3S, etc.).

Piece of cake, after you do it a few times. (Just realized I think I changed tenses and POV a couple times in this write-up… oh well.)

Further Reading

…like at the bottom of each chapter in your school textbooks; don't worry kids, there isn't a chapter review or quiz.

Password-less SSH with Public/Private Keys

From the POV of an advanced *nix user, setting up public and private keys for password-less SSH logins seems trivial. However, to a beginner/novice user this can be confusing. I'll admit, as comfortable as I am with managing Linux servers, configuring RSA keys on the local and remote machines was messing with my head – until I got the hang of it. I still don't have a complete top-to-bottom understanding of this when it comes to different versions of SSH and then some, but enough to jot down a note for future reference just in case. I'll assume the reader has a basic knowledge of the CLI and of connecting to a remote server with SSH, and that the two (or more) devices in question are both Linux machines.

By the way, there are a lot of these guides out there, but I couldn't find one that helped. I still had to screw around a bit to really get the hang of it. They were either all too in-depth, too brief, only covered certain parts, or simply just didn't fit my needs.

Before we get started, there is an easier and less intrusive way to do this if you’re running cPanel. I’ll write about that later.

Use Case

Close your eyes, find your power animal – slide… imagine you're logged into some *nix box (we'll call this one "foo.com" with user "jack"). Now you need to issue some commands on another remote machine (we'll call this one "bar.net" with user "jill"), so what do you do?
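You do what everyone does:

    ssh jill@bar.net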

At this point, it will make the connection, and if it is the first time it will ask if you want to permanently trust the connection, to which you reply "yes" after careful consideration. SSH prompts you for a password, you type it in and you're good to go. You start navigating the filesystem and ferociously executing commands.

Now this scenario is all fine and dandy, until you’re accessing the server so often that you get sick and tired of typing your 15 character password (with upper/lowercase letters, numbers and symbols). Perhaps a better excuse to not enter the password is that you have a non-interactive shell script that needs to make this connection temporarily, in which case you do not have the luxury of entering your password in the prompt; don’t even tell me you were thinking of embedding the password into the command with the p switch. Wish there was a better way? Let me show you the light.

Benefits of Using Keys with SSH

  • Doesn’t prompt for a password (duh)
  • Can be used with non-interactive/unmonitored scripts (play off of the previous bullet)
  • FAR more secure – this is probably the biggest reason you should be going with this approach regardless of the previous benefits
  • Establish a “trusted connection”
  • Help prevent brute force attacks
  • Read this article on “old” password-style authentication

TL;DR Lezzdo’t.

Remember jack and jill on foo.com and bar.net? Well they didn't fetch a pail of water; instead they generated an RSA key with ssh-keygen. We're getting back to that now… There are two key types: DSA and RSA. DSA is supposedly faster but not as compatible (it isn't compatible with Protocol 1). RSA supports larger keys (up to 4096 bits) and is more compatible, but at the cost of speed. Up to you, but I'll be sticking with RSA.

Configure SSH and SSHD on foo.com and bar.net

  •  Make sure that ssh is configured correctly on foo.com and bar.net: Use Protocol version 2, not 1 (you can use 1, but it is a PITA especially when you’re configuring multiple keys)
  • On bar.net, in your sshd configuration (e.g., /etc/ssh/sshd_config), you'll need the directives shown in the snippet just after this list.
  • All other defaults (OOTB) should be good, maybe when you’re done you can set “PasswordAuthentication no” within bar.net’s sshd_config to force the usage of only key exchanges.
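The sshd_config bits I'm talking about (stock OpenSSH directives; the values shown are the usual ones, so treat them as a starting point):

    Protocol 2
    PubkeyAuthentication yes
    AuthorizedKeysFile      .ssh/authorized_keys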

Generate a Key Pair on foo.com for Jack to Use

  1. Create a .ssh directory in jack's home directory if it doesn't exist yet, /home/jack or ~/ if you're logged in as jack. Actually, from here on out just assume you're logged in as jack; if not, make sure you can sudo -u jack to make it seem like jack is issuing the commands. mkdir /home/jack/.ssh
  2. chown jack:jack /home/jack/.ssh
  3. chmod 700 /home/jack/.ssh
  4. step 3 is important, make sure the perms took: stat .ssh
  5. ssh-keygen -t rsa -C “the key for jill on bar dot net”
  6. the t switch tells ssh-keygen what key type to use (t = type) and the C switch is just a comment to leave in the public key string (C = comment)
  7. Hit return
  8. Now it starts to generate a public/private key pair and asks you where to save it, giving you a suggestion (default): /home/jack/.ssh/id_rsa
  9. id_rsa is the default and you can stick with that, but if you want to store multiple keys, you'll have to outsmart the keygen. Remember, the computer is just a tool; it's faster than you, but not nearly as smart, so you need to be smarter than the computer (this is a requirement, unless you're competing against an IQ of 150). Before you hit enter, read step 10.
  10. save it to the file: /home/jack/.ssh/id_rsa.jillbarnet
  11. take a moment and think, who is Jill Barnet?
  12. when prompted for a passphrase just keep it blank and hit return (twice)… disclaimer: this is NOT really a good idea. You should always have a passphrase for security purposes, but I'm lazy right now and don't wanna go through the extra steps; if you're really worried, RTFM.
  13. notice /home/jack/.ssh/id_rsa.jillbarnet and /home/jack/.ssh/id_rsa.jillbarnet.pub
  14. DONE.

Configure Jack’s Account to Use Multiple Key Pairs

This is important if you followed the last steps exactly. We generated a specific key file. By default, ssh looks for a file called id_rsa. If we hadn't specified a file, ssh-keygen would've written to that default id_rsa file and ssh would load it by default every time. This sucks when it comes to managing multiple keys, so forget the default functionality; we're gunna rock this bitch.

  1. Create a file called config: touch /home/jack/.ssh/config
  2. chown jack:jack /home/jack/.ssh/config
  3. chmod 600 /home/jack/.ssh/config
  4. edit the file (I like nano, sorry @ most of you vi snobs): nano /home/jack/.ssh/config
  5. For each id_rsa.* entry we’ll need a Host and IdentityFile directive, all directives separated by at least a new line…
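Mine ended up looking like this (hosts and file names from this example):

    Host bar.net
        IdentityFile ~/.ssh/id_rsa.jillbarnet
    Host bobby.org
        IdentityFile ~/.ssh/id_rsa.rickybobbyorg
    Host *
        IdentityFile ~/.ssh/id_rsa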

The Host directive means that only the next line (IdentityFile) is applicable when connecting to that specific host. The IdentityFile directive describes what key file to use in that situation. So lemme pick this apart really quick. I indent because it looks pretty.

  • Lines 1 and 2 mean that ssh should use the file id_rsa.jillbarnet when connecting to host bar.net.
  • Lines 3 and 4 tell ssh to only use file id_rsa.rickybobbyorg when connecting to bobby.org (where did he come from? read the next section and find out)
  • Lines 5 and 6 are basically a catch-all. The asterisk is a wildcard; it means: for any host that hasn't matched yet, use the default identity id_rsa.

Hint: If those files don't exist ssh won't crap out at all; it'll fail gracefully during the lookup and continue normal operations (and ask for a password). So no worries there.

In theory you could combine multiple identity files under the same Host directive, but I haven't tried it, so don't take my word for it. You can also wildcard partial host names, e.g. Host *.domain.tld – this I did try.

Understanding the Private Key and Public Key

So here's the confusing part, really try to pay attention. Remember step 13 (the scrunched-together letter B) in the previous section? Why did it generate two files when you only specified one?

Well this is where the private key and public key come into play. The private key is "id_rsa.jillbarnet" and the public key is "id_rsa.jillbarnet.pub". The private key STAYS with Jack on foo.com; it should never be shared, that's why it is "private", you I-D TEN TEE. The public key is the one you can whore out to other accounts; I'll squash the myth right now: the public key is not specifically for jill over at bar.net, it can be used for ricky over at bobby.org, or at both places (jill@bar.net and ricky@bobby.org). The public key simply marks whoever holds the matching private key as a "trusted" or "authorized" user. Read that last sentence two or three more times.

The private key looks something like this:
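(Shortened for obvious reasons:)

    -----BEGIN RSA PRIVATE KEY-----
    (a few dozen lines of base64 gibberish -- this part never leaves foo.com)
    -----END RSA PRIVATE KEY-----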

And the public key looks something like this (the key type, a base64 encoded string, plus your custom comment):
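(Again shortened; note the comment from the ssh-keygen C switch at the end:)

    ssh-rsa AAAAB3NzaC1yc2E...(one long base64 string)... the key for jill on bar dot net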

Cool? Cool. Now let’s prep jill’s home dir.

Prep Jill’s Home Dir Over at bar.net for Jack’s Key

Let's assume jill's home directory is /home/jill. Again, assume you're logged in as user jill and that jill belongs to group jill (previously I assumed jack belonged to group jack).

  1. Make sure jill has a .ssh dir in her home directory, if not make one: mkdir /home/jill/.ssh
  2. Verify jill is the sole owner: chown jill:jill /home/jill/.ssh
  3. Make sure jill is the only one allowed to access that .ssh directory: chmod 700 /home/jill/.ssh
  4. make a file called authorized_keys: touch authorized_keys
    don’t know what touch does? RTFM: man touch. (just wanted to type “man touch”)
  5. set restrictive perms (that’s permissions, not a hair-do) for authorized_keys: chmod 700 authorized_keys
  6. At this point, some envs/distros require an authorized_keys2 file; just make sure it is identical to authorized_keys in terms of content and permissions. In my case with CentOS 5 you don't need it, so skip this step. (I experienced this issue on a Debian box, FYI.)
  7. Copy the contents of id_rsa.jillbarnet.pub (the one with the content that starts “ssh-rsa” then has the base64 encoded string w/ the comment) and append it to authorized_keys. There are a million-n-a-half ways to do this. Figure it out.
  8. For multiple entries in authorized_keys, just delimit them by new lines, so each line will start with "ssh-rsa"
  9. I feel like i’m forgetting something, but that should do it.

Test From Jack’s Account on foo.com
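From jack's shell on foo.com, just try the connection again:

    ssh jill@bar.net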

Assuming you did everything correctly, it will log you in without a password prompt. If it asks for a password, you screwed up somewhere; try again, do not pass go, do not collect 200 bones. Note that restarting SSHD will NOT solve your problem; no service restart is required.

That’s All She Wrote

Well that’s it. So now you can connect to multiple hosts using multiple private keys for jack at foo.com (using multiple id_rsa files and the config file updates), and multiple people can connect to jill over at bar.net (by copying the public key to authorized_keys and delimiting by new lines). Any questions? Too bad, I closed my comments.

Adobe Shadow

This is probably the greatest thing since sliced bread: Adobe Shadow (Sd).

In summary, it synchronizes your desktop (PC and Mac) browser with your hand-held touch device (both Android and iOS) for streamlined mobile design, development, and troubleshooting. You can get up and running in 3 easy steps:

  1. Download Adobe Shadow from Adobe Labs
  2. Download the app on your mobile device
  3. Download the Google Chrome Browser extension

I haven’t used it yet, but will give it a whirl within the next few days… and probably post my experience.

AS3: BitmapData.draw with Video

When drawing video/camera data to a BitmapData object the original size is not respected. After a quick search, this page had the answer: "Size mismatch when getting BitmapData from Camera"

The solution:
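A minimal sketch of the idea (assuming you already have a camera attached to a Video object; the variable names are placeholders): draw through a Matrix scaled so the capture happens at the camera's native resolution rather than the Video object's display size.

    import flash.display.BitmapData;
    import flash.geom.Matrix;

    // capture at the camera's native resolution
    var snapshot:BitmapData = new BitmapData(camera.width, camera.height, false, 0x000000);

    // scale the draw from the Video's display size up/down to the camera size
    var m:Matrix = new Matrix();
    m.scale(camera.width / video.width, camera.height / video.height);

    snapshot.draw(video, m);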

AS3 Stage3D: Away3D

So Away3D has been around for some time now (they're currently in v4.0.0 beta) and I have not yet tasted this new hardware-accelerated library for Flash. In my previous years I worked with Papervision3D in AS2 (yes, version 2) and quickly moved to Papervision3D 2 for AS3. Papervision3D was pretty easy to use (I have a small Maya background) and I actually got a chance to teach it to my students a few times as an intro to high-level 3D in my Intermediate ActionScript class. The problem was performance: I couldn't get everything I wanted out of P3D, like advanced UV mapping and high-poly scenes. I should give it credit though: it did pioneer 3D for Flash (AFAIK) and eventually implemented shader support and did a great job at it (e.g. phongs). It eventually fell off of my radar due to random professional reasons (like working around the clock on projects that bring in $$) and other techy stuff.

Overview

I was looking for something with direct hardware acceleration and a few other clever native techniques. A colleague of mine told me about this thing called Stage3D, a.k.a. "Molehill," which I had totally forgotten about. This part of Flash renders separately from the DisplayList, just on top of the StageVideo layer, and utilizes the GPU (holy shit!). I was leery at first since my experience with Flash previously had been 2.5D (two-n-a-half-D) and I wasn't impressed, especially coming from a P3D background. Apparently in my absence it had come a long way (at the time of this writing the current Flash Player version is 11.2 beta). So I gave it a whirl.

I have a Gentoo mindset (even though my weapon of choice is RHEL), so I naturally downloaded the v4 beta, which I came to find had severely limited documentation (found some generated docs here, although not perfect). I later found the v4 documentation; I guess someone flipped a switch at a datacenter somewhere… Not good for entry-level development into this new lib! Not to mention, the examples were AMAZING… amazingly overdone – I just wanna see how the stupid thing works. That's great that you can generate trees and import 3DS models, congratu-fuckin-lations, but what about us n00bs? I need some practical and minimalist examples.

TL;DR: Right to the Code

After an hour of toying with it, skimming through some haphazard docs, watching for code hints and perusing the source code, I wrote this little ditty…

Hope that helps. Think of it as a kickstarter.

Development Setup

So, I completely forgot the installation steps for using Away3D. BTW, I use FlashDevelop (currently the latest version, 4.0.1 RTM) and am building against Flex SDK v4.6.0 with AIR 3.1 (which I think was auto-downloaded by FlashDevelop) for Flash Player 11.

  • For portability (but not flexibility or debugging) download the away3d core 4.0b SWC.
  • Create a new project in FlashDevelop
  • Put the SWC in the ./lib folder (or wherever… just do the lib folder)
  • In FlashDevelop right click on “away3d-core-fp11_4_0_0_beta.swc” and choose “Add to Library” – it should turn blue
  • That’s it, you’re ready to go.

I can see myself getting deeper and deeper into this sexy beast of a lib. And the fact that my CPU utilization is a big fat ZERO while I'm getting uber-sweet FPS means I'm hooked. About friggin' time, Flash, about time!

Working SWF

Away3D in the player window (screen shot)

Download the ZIP containing the demo (no source code): away3d-simple-demo

Resources

MaxMind GeoIP + PHP PECL

Overview

Getting around to writing a geo IP look-up script, and I needed to leverage MaxMind's GeoIP database. First things first: get the GeoIP lib installed and ready to go, or else the .dat file isn't going to do me any good.

Initial Setup: First Try

At first I went with the PEAR route just because it wouldn't require make install'ing any dynamic PHP extensions (.so). Went to do a pear update-channels for the first time and got a boatload of eregi errors – off to a great start. Turns out the CentOS 5.7 php53 (PHP 5.3.3) package is not compatible with php-pear – bastards! Red Hat has an update available but it is only available to their customers, which doesn't include me (since I go the free route of CentOS). Did some extra googling and found this: php-pear-1.4.9-8.el5.noarch.rpm.html (version 8.el5 versus CentOS 5.7's native version 6.el5). Did a little wget magic followed by a yum install <path to package> and ta-da, no more eregi errors. Did a pear install Net_GeoIP and it worked too.

After all that hard work and fidgeting around, I read this little line on MaxMind’s website:

PHP Extension on PECL
Download a PHP extension that allows you to embed the GeoIP C Library inside PHP for improved performance. There is a new fork with a more complete implementation.

Crap! I love good performance and I'd take a compiled lib over an interpreted one any day. PEAR headache: all for naught. Proceeded with a pear uninstall Net_GeoIP, then cried for half an hour since I just wasted 15 minutes (for a total of 45 man-minutes wasted, yes).

Next stop, PECL.

Initial Setup: Second Try (start tl;dr)

  1. Started with a pecl update-channels, and all went well. That’s a first.
  2. pecl search geoip – got results, sweet.
  3. pecl install geoip… got through maybe 40 lines of the ./configure and it bombed, can’t find a valid compiler
  4. no worries, do a little yum groupinstall “Development Tools” and now I’ve got a C compiler (as well as automake and a few other required tools)
  5. pecl install geoip… configure bombs again, this time I'm missing the GeoIP linux packages, and at this point I'm thinking, "how many friggin' dependencies are there?" Holy overhead, Batman.
  6. yum install geoip-devel… wait, no, yum install GeoIP-devel since yum is case sensitive – this gets me everything I need.
  7. 3rd time's a charm: pecl install geoip – the install completes and tells me to add "extension=geoip.so" to php.ini, so I do, like the robot I am, doing what a linux prompt tells me to do.
  8. service httpd restart – cycling apache allows the geoip section to appear in the phpinfo() call, not sure why this is (more mysteries!)

OK, I've got the libs installed and the functions are ready to be called.

Installing the MaxMind database

  1. mkdir /usr/share/geoip
  2. wget and extract geoip db to /usr/share/geoip (http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz)
  3. chmod -R 755 /usr/share/geoip
  4. nano /etc/php.ini
  5. add a [geoip] config section:
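Mine boils down to the custom_directory setting (which is what the phpinfo() check below looks for):

    [geoip]
    geoip.custom_directory = /usr/share/geoip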

  6. service httpd restart
  7. check the phpinfo() for custom_directory
Let the scripting begin. I’ll stop writing now; from this point forward it’s all about RTFM (link below…)

Resources