Playing Around with PHP Namespaces, Classes, and Variable Instantiation

Just needed to do some sanity checks, figure out some syntactical sugar (i.e. tricks), and test certain scenarios. I figured I’d share my findings with the world. Explanations are very light (brain dump) and embedded in the code; output follows directly. Oh, and this is a single stand-alone file.

php://stdin

Well, this really isn’t php://stdin, but the headline looks cool.
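Since the original stand-alone file isn’t embedded here, below is a minimal sketch in the same spirit (a namespace, a class, and variable-based instantiation); the Sandbox\Demo namespace and Widget class are just illustrative:

<?php
// Minimal sketch: namespaces, classes, and instantiation from a variable.
namespace Sandbox\Demo;

class Widget
{
    public $label;

    public function __construct($label = 'default')
    {
        $this->label = $label;
    }

    public function describe()
    {
        // get_class() returns the fully qualified (namespaced) class name.
        return get_class($this) . ': ' . $this->label;
    }
}

// Instantiating from a variable requires the fully qualified class name;
// the namespace context and "use" aliases are ignored for class names in strings.
$fqcn = 'Sandbox\Demo\Widget';

$a = new Widget('literal');
$b = new $fqcn('variable');

echo $a->describe(), PHP_EOL;
echo $b->describe(), PHP_EOL;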

php://stdout

Yes, this time it really is stdout:
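With the sketch above, that output would look like:

Sandbox\Demo\Widget: literal
Sandbox\Demo\Widget: variable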

 

uberSVN: Cheap Backup Script Using hotcopy and rdiff-backup

SVN backups, and backing up in general, are critical for disaster recovery. There are a few requirements to consider:

  1. Backups should mirror (or come close to) the current state of the system, i.e. a snapshot.
  2. Backups should be incremental. In the event a snapshot is malformed, or the system was in a bad state when the snapshot was taken, one can roll back to a previously working snapshot.
  3. Back up only what you need to survive. There is no sense in including temporary files, libs that are part of the distribution (easily downloadable), etc.

Lower-level note: I don’t have the resources for LVM snapshots, and the physical disks are RAID 6.

Considerations

Originally, I was backing up my uberSVN installation and all of the SVN repositories using a simple rdiff-backup command. This approach is shortsighted: an rsync of the internal repositories directory does not account for commits, prop changes, hooks, etc. that occur while the rsync is happening, which could result in a non-restorable backup; using svnadmin hotcopy addresses this concern. The issue with hotcopy, however, is that it does not perform an incremental backup for you, so I needed to couple it with rdiff-backup. It is worth noting that this type of copy includes props, hooks, commits, and other information – it is more comprehensive than a normal svnadmin dump, but with the downside that it is only truly compatible with the version of SVN that generated it.

As if this wasn’t enough to think about, hotcopy does not preserve file system timestamps. This is problematic with rdiff-backup, which relies on a timestamp + file size combination; even though it uses the same underlying algorithm as rsync, AFAIK it does not support a checksum-based comparison for transfers. So after the svnadmin hotcopy is performed, file attributes should be synchronized as well (with slight risk, I might add).

Lastly, uberSVN has a handful of configurations and metadata that must be backed up per installation. Its GUI has an easy backup feature, but there is no CLI access/equivalent that I could find. I’m sure I could dig through the source and figure out exactly what is included in the backup, but I decided to reverse-engineer the backup file (which uses ZIP compression, by the way) and infer the details. uberSVN includes (or should include) the following directories from its install directory (default is /opt/ubersvn): /opt/ubersvn/{conf,openssl,tomcat,ubersvn-db}. From this, a backup archive can be carefully constructed for later restoration within the GUI.

The Implementation (tl;dr)

This is a bash script that is configurable in the first two groupings of variables. It also uses a subsys-style lock mechanism (flock) so it cannot run in parallel, which is helpful when using it in a crontab. I haven’t extensively tested its reliability, but it does work… generally speaking. Here’s the hack:
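The original script isn’t embedded here, so below is a minimal sketch of the same approach (hotcopy each repo, sync file attributes, grab the uberSVN config directories, then rdiff-backup) – every path, the lock file location, and the retention window are assumptions:

#!/bin/bash
# Hedged sketch of the backup flow described above; adjust all paths.

REPO_PARENT="/opt/ubersvn/repositories"    # assumed uberSVN repository parent
UBERSVN_HOME="/opt/ubersvn"                # default install directory
STAGING="/var/tmp/ubersvn-backup-staging"  # hotcopy/snapshot target
BACKUP_DEST="/backup/ubersvn"              # rdiff-backup destination
LOCK_FILE="/var/lock/subsys/ubersvn-backup"

# Subsys-style lock so cron cannot run two copies in parallel.
exec 200>"$LOCK_FILE"
flock -n 200 || { echo "Backup already running, exiting."; exit 1; }

mkdir -p "$STAGING/repositories" "$STAGING/ubersvn" "$BACKUP_DEST"

# 1. Consistent snapshot of each repository via svnadmin hotcopy.
for repo in "$REPO_PARENT"/*; do
    [ -d "$repo/db" ] || continue
    name=$(basename "$repo")
    rm -rf "$STAGING/repositories/$name"
    svnadmin hotcopy "$repo" "$STAGING/repositories/$name"
done

# 2. hotcopy does not preserve timestamps, so copy attributes back from the live
#    repos for files whose size matches (the "slight risk" mentioned above),
#    keeping rdiff-backup's timestamp + size comparison useful.
rsync -a --existing --size-only "$REPO_PARENT/" "$STAGING/repositories/"

# 3. uberSVN configuration and metadata directories.
for dir in conf openssl tomcat ubersvn-db; do
    rsync -a --delete "$UBERSVN_HOME/$dir" "$STAGING/ubersvn/"
done

# 4. Incremental, roll-back-able backup of the staging area.
rdiff-backup "$STAGING" "$BACKUP_DEST"
rdiff-backup --remove-older-than 4W --force "$BACKUP_DEST"

Dropping it into a crontab entry gives requirement #2 for free: rdiff-backup keeps increments, so any previous snapshot within the retention window can be restored.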

 

Scripting Parallel Bash Commands (Jobs)

Sometimes commands take a long time to process (in my case importing huge SQL dumps into a series of sandboxed MySQL databases), in which case it may be favorable to take advantage of multiple CPUs/cores. This can be handled in shell/bash with the background control operator & in combination with wait. Got a little insight from this StackOverflow answer.

The Technique

The break-down is commented inline.
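The original snippet isn’t embedded here; the sketch below shows the same technique (a multiline list built with read, background jobs via &, $! captured into lastPid, and wait) – the MySQL import command and file names are placeholders:

#!/bin/bash
# Run long imports in parallel batches; tune maxJobs to the CPU/core count.

maxJobs=4   # how many background "threads" (really processes) per batch
pids=()     # PIDs of the currently running batch

# read -d '' builds a multiline list (a bash array would work just as well).
read -r -d '' dumps <<'EOF'
sandbox1.sql
sandbox2.sql
sandbox3.sql
sandbox4.sql
sandbox5.sql
sandbox6.sql
EOF

for dump in $dumps; do
    # Kick the long-running command into the background...
    mysql "sandbox_${dump%.sql}" < "$dump" &
    # ...and grab its PID immediately; $! holds the PID of the last & command.
    lastPid=$!
    pids+=("$lastPid")

    # Once a full batch is running, suspend the loop until it finishes.
    if [ "${#pids[@]}" -ge "$maxJobs" ]; then
        wait "${pids[@]}"
        pids=()
    fi
done

# Wait for whatever is left in the final, partial batch.
wait "${pids[@]}"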

Notes

  • This does not function like a pool (task queue) where a freed-up “thread” immediately becomes eligible for the next task, although I don’t see why something like that couldn’t be implemented with a little work.
  • wait pauses script execution until all of the PIDs it is given have completed before moving on.
  • Once the if...fi control block is entered, wait causes the for...in loop to suspend.
  • $! (or ${!}) contains the PID of the most recently executed background command; make sure it is read directly after the operation launched with &. Throw it into a variable (like lastPid) for future use.
  • This is not multi-threading, although this simplified concept is similar. Each command spawns a separate process.
  • read is just creating a multiline list; in my specific case this was favorable over a bash array, but either will work.

[Screenshot of htop showing the parallel process tree.]

JavaScript Inheritance and Method Overriding

I’m always finding myself wanting to use JavaScript like a classical OOP language with class-based inheritance, even though I know it is prototypal. Anyways, I’m jotting down how to do this and also allow for overriding (not overloading) methods. Note that if you’re using Node.js, you don’t need to define the inherits function; it is already available as require('util').inherits.

Check the console:
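The original snippet isn’t embedded here; a minimal sketch of the pattern (an inherits helper plus an override that calls back into the parent) could look like this, with Animal/Dog as throwaway names:

// In Node.js you could use require('util').inherits instead of this helper.
function inherits(Child, Parent) {
  Child.prototype = Object.create(Parent.prototype);
  Child.prototype.constructor = Child;
  Child.super_ = Parent;
}

function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a noise.';
};

function Dog(name) {
  Animal.call(this, name); // call the "parent constructor"
}
inherits(Dog, Animal);

// Override (not overload) the parent method, optionally reusing it.
Dog.prototype.speak = function () {
  return Animal.prototype.speak.call(this) + ' Specifically: woof!';
};

console.log(new Animal('Cat').speak()); // "Cat makes a noise."
console.log(new Dog('Rex').speak());    // "Rex makes a noise. Specifically: woof!"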

 

Node.js: Split/Count Number of Lines (Unicode Compatible)

The task was to read n number of lines from a file in Node.js. The assumption is that the read buffer is UTF-8. The caveat is that there is more to new lines in Unicode than “New Line (ASCII 10 or \n)” and “Carriage Return (ASCII 13 or \r)”. This SO comment put me in the right direction.

Unicode Technical Standard #18 § 1.6 describes the following rule regarding line boundaries (PS and LS are defined in Unicode 6.1 § 16.2 Layout Controls):

RL1.6 Line Boundaries
To meet this requirement, if an implementation provides for line-boundary testing, it shall recognize not only CRLF, LF, CR, but also NEL (U+0085), PS (U+2029) and LS (U+2028).

To put it into practice, the number of lines can be tallied while the ReadStream has incoming data:
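A sketch of that tallying, assuming a UTF-8 file named some-file.txt:

var fs = require('fs');

// RL1.6 line boundaries, with CRLF first so it is not counted twice.
var newline = /\r\n|[\n\r\u0085\u2028\u2029]/g;
var lineCount = 0;
var stream = fs.createReadStream('some-file.txt', { encoding: 'utf8' });

function onData(chunk) {
  // Note: a CRLF split across two chunks would be counted twice; buffering the
  // last character of each chunk would close that gap.
  var matches = chunk.match(newline);
  if (matches) {
    lineCount += matches.length;
  }
}

stream.on('data', onData);

stream.on('end', function () {
  stream.removeListener('data', onData);
  console.log('lines: ' + (lineCount + 1)); // +1 assumes no trailing newline at EOF
});

stream.on('error', function (err) {
  stream.removeListener('data', onData);
  console.error(err);
});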

The key is the regular expression, derived straight from RL1.6, /\r\n|[\n\r\u0085\u2028\u2029]/, which looks for any of the possible new line sequences, giving precedence to CRLF (greedy). Don’t forget to listen for the “end” event, and also remove the “data” listener on “end” or “error”!

In the context of splitting incoming data by line, simply calling chunk.toString('utf8').split(/\r\n|[\n\r\u0085\u2028\u2029]/g) will do the trick. Note that if working with CSV files, parsing will require a bit more effort if escaped new-lines need to be taken into consideration, and PS, LS and NEL are probably not characters that should act as line delimiters in that context anyway.

Node.js: Handling File Uploads without Express

I was working on a little project which does not utilize express. Part of the program needed to handle file uploads. In PHP this is easy with the $_FILES superglobal, and with express this is just as easy with req.files. In pure node.js the developer doesn’t have this luxury because they’re dealing with raw HTTP requests and responses (I’m surprised node.js even parses the headers; it is a miracle!). Before I begin: this whole process should optimally be streamed to a file so large file uploads do not consume more RAM than their chunk size, but I didn’t do this, shame on me.

First, capture the request body. Here I am assuming you have an HTTP server that is listening and responding to requests:
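A sketch of that capture step (buffered in memory for brevity – exactly the shortcut admitted above; the port is arbitrary):

var http = require('http');

var server = http.createServer(function (req, res) {
  if (req.method !== 'POST') {
    res.writeHead(404);
    return res.end();
  }

  var chunks = [];

  req.on('data', function (chunk) {
    chunks.push(chunk);
  });

  req.on('end', function () {
    // body now holds the raw multipart/form-data payload, ready for parsing.
    var body = Buffer.concat(chunks);
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Received ' + body.length + ' bytes\n');
  });
});

server.listen(8080);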

Here, I’ll give you the entire source to my MultipartParser Node.js module. It requires nodeproxy.
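The original module (and its nodeproxy dependency) isn’t reproduced here; as a rough stand-in, a stripped-down parser for an already-buffered multipart/form-data body could look like this (no streaming, minimal error handling, and definitely not the original MultipartParser):

// multipart-sketch.js -- simplified multipart/form-data parsing for a Buffer.
module.exports = function parseMultipart(body, contentType) {
  var match = /boundary=(?:"([^"]+)"|([^;]+))/i.exec(contentType || '');
  if (!match) return [];

  // Work in 'binary' (latin1) so byte values survive the string round trip.
  var boundary = '--' + (match[1] || match[2]);
  var parts = body.toString('binary').split(boundary).slice(1, -1);

  return parts.map(function (part) {
    var headerEnd = part.indexOf('\r\n\r\n');
    if (headerEnd === -1) return null;

    var rawHeaders = part.slice(0, headerEnd).split('\r\n').filter(Boolean);
    var data = part.slice(headerEnd + 4, part.length - 2); // strip trailing CRLF

    var headers = {};
    rawHeaders.forEach(function (line) {
      var idx = line.indexOf(':');
      headers[line.slice(0, idx).trim().toLowerCase()] = line.slice(idx + 1).trim();
    });

    var disposition = headers['content-disposition'] || '';
    return {
      name: (/\bname="([^"]*)"/.exec(disposition) || [])[1],
      filename: (/filename="([^"]*)"/.exec(disposition) || [])[1],
      headers: headers,
      data: new Buffer(data, 'binary')
    };
  }).filter(Boolean);
};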

And the implementation is as follows:
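And a rough equivalent of the implementation, wiring the sketch above (saved as multipart-sketch.js) into a bare HTTP server; the port and responses are arbitrary:

var http = require('http');
var parseMultipart = require('./multipart-sketch');

// A bare server in the spirit of "LittleDiddy": no routing, just accept a POST,
// buffer it, and hand it to the parser.
http.createServer(function (req, res) {
  if (req.method !== 'POST') {
    res.writeHead(405, { 'Content-Type': 'text/plain' });
    return res.end('POST a multipart/form-data body\n');
  }

  var chunks = [];
  req.on('data', function (chunk) { chunks.push(chunk); });
  req.on('end', function () {
    var parts = parseMultipart(Buffer.concat(chunks), req.headers['content-type']);
    var files = parts.filter(function (part) { return part.filename; });

    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Parsed ' + files.length + ' file upload(s)\n');
  });
}).listen(8080);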

LittleDiddy does not have any path routing (e.g. it won’t actually show a form that uses enctype="multipart/form-data"), it simply accepts and parses a POST with “file” inputs. This “little diddy” (implementation example) is untested by the way, so good luck.

This article’s source is released under the MIT license; I don’t care what you do with it.

Reload DNS Zone with Bind9 and rndc

I was really confused when trying to reload a zone. I kept receiving the error rndc: 'reload' failed: dynamic zone. This was especially frustrating because in a similar installation a simple service named reload would do just fine; not this time. Also, remember the point is to not completely restart named; doing so will clear its cache (if it is used as a local DNS resolver) and cause a small outage for clients that do not have their current domain lookups cached.

Assuming the zone to update is called “THEZONE”, perform the following:
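Something along these lines, where the zone name is a placeholder and the zone-file edit (including the serial bump) is up to you:

rndc freeze THEZONE    # suspend dynamic updates and flush the journal to the zone file
# ...edit the zone file and bump its serial number...
rndc reload THEZONE    # reload the zone from disk
rndc thaw THEZONE      # resume dynamic updates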

If necessary, don’t forget to update the reverse DNS records. I’m lazy and use PTR:
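For example (hypothetical host and a /24 reverse zone), the PTR record plus the same freeze/reload/thaw dance on the reverse zone:

; in the zone file for 0.16.172.in-addr.arpa (reverse of 172.16.0.x)
50    IN    PTR    somehost.example.local.

rndc freeze 0.16.172.in-addr.arpa
rndc reload 0.16.172.in-addr.arpa
rndc thaw 0.16.172.in-addr.arpa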

So… the moral of the story is to freeze, reload and thaw. Remember f-r-t (or fart?).

Install CentOS 6 with Anaconda/Kickstart (plus ESXi VMware Tools)

Synopsis

I’ve been getting my feet wet with ESXi, CentOS 6 VMs, and YUM/RPM. Well, the last two I have been using for years, just not like I have recently.

The goal is to be able to blindly install a controlled distribution of CentOS 6.x quickly and without error (maybe even install multiple at the same time). What I needed:

  • Anaconda Kickstart file (ks.cfg)
  • Local mirrored repository for CentOS 6.x (6.3 in my example)
  • Custom 3rd party repo
  • HTTP/NFS/RSYNC access to these
  • Variable disk/cpu/ram size – the partitions need to be dynamic

Without writing a book about all of this, I really want to just highlight some problems I ran into and how I solved them.

An example of my kickstart file is below for reference.

Automated Partition Schema

Since I’m in the world of virtualized hardware, it is important for the disk to scale easily without losing data. The prerequisite to this is of course Logical Volume Management (LVM). Now you may not agree with my LVM layout, and honestly, this isn’t my expertise (optimizing disk partitions), but at the least there must be “boot,” “root,” and “swap” partitions.

The goal here is to make the root partition grow to its maximum size without crowding out the swap. Also, the boot partition won’t be on the LVM; it will be a fixed-size standard partition (with the bootloader in the MBR). The kickstart section is as follows:
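A sketch of that section – the volume group name and sizes are my own placeholders:

# /boot stays outside the LVM as a fixed-size partition; root grows to fill
# whatever the (virtual) disk provides after swap takes its fixed share.
part /boot --fstype=ext4 --size=500
part pv.01 --size=1 --grow
volgroup vg_system pv.01
logvol swap --vgname=vg_system --name=lv_swap --size=2048
logvol /    --vgname=vg_system --name=lv_root --fstype=ext4 --size=1024 --grow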

I do want to note that the logvol entries are not interpreted in declaration order, so it is perfectly fine for the swap logvol to be declared after the root (/) logical volume.

Bypassing “Storage Device Warning”

The only problem I had in regards to a prompt-less install was the “Storage Device Warning” asking if I was sure I wanted to write to the disk and lose all of my data. No matter what I put in the partition specification of kickstart, it would always prompt. The answer is to use zerombr yes. See the option “zerombr” as defined within the CentOS kickstart guide. This can be placed anywhere in the kickstart file (well, except in %packages, %post or similar); just put it up near the top.

Auto Reboot

After the installation is complete, automatically reboot the machine. This works perfectly in ESXi since it automatically unmounts the virtual cdrom after the first boot of the guest! Simply put reboot anywhere in your kickstart – near the top is probably best.

VMware Tools RPM

In order for the vSphere Client to monitor and execute certain tasks on the guest VM, VMware Tools is required. This will show you things like IP addresses, hostnames, and guest state, as well as provide integrated shutdown/reboot tools.

Add VMware Tools to YUM

Put the following repo configuration in /etc/yum.repos.d/vmware-tools.repo:
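Something like the following – the ESX version in the path (5.0 here) is an assumption, so adjust it for your ESXi release:

[vmware-tools]
name=VMware Tools
baseurl=http://packages.vmware.com/tools/esx/5.0/rhel6/x86_64
enabled=1
gpgcheck=1
gpgkey=http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-DSA-KEY.pub
       http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub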

Then execute the following shell in %post of kickstart:
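A sketch of that step; it assumes the repo file above has already been dropped onto the installed system (e.g. earlier in %post):

%post
# Pull in the guestInfo plugin; its dependencies come along automatically.
yum -y install vmware-tools-plugins-guestInfo
%end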

The important part to mention here is that the package is called vmware-tools-plugins-guestInfo. All the dependencies will come with it, so no worries there.

Mirroring a Repository for NFS Kickstart Installation

Create the Repo Mirror

Remember, my goal is to be able to quickly add a CentOS VM. With that, I don’t want to wait 30 minutes to pull down packages from a mirror in Iowa, New York or Cali. I want to pull it down once, keep it up-to-date and have my local installs pull from my local mirror. For simplicity’s sake, I’ll put the mirror in /repo/centos.
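The initial pull might look like this – the mirror host is a placeholder, so pick a nearby mirror that speaks rsync:

mkdir -p /repo/centos
rsync -avSHP --delete --exclude "local*" --exclude "isos" \
    rsync.mirror.example.com::centos/6.3/ /repo/centos/6.3/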

I am choosing to exclude any local files/directories (“local*”) and also the huge DVD ISOs (“isos”). Also note that the mirror format is host::path and that the mirror host must support the rsync protocol.

Keep the Local Mirror Updated

To keep the local repo copy up-to-date, run this script via cron (by the way, I stole this from somewhere, I just don’t remember). Please don’t forget to swap out the mirror hostname and path with something that makes more geographical sense to you.
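A minimal stand-in for that script (again, the host and module are placeholders):

#!/bin/bash
# Cron-driven refresh of the local CentOS mirror; output is logged for review.
rsync -avSHP --delete --exclude "local*" --exclude "isos" \
    rsync.mirror.example.com::centos/6.3/ /repo/centos/6.3/ \
    >> /var/log/centos-mirror-sync.log 2>&1

A crontab entry such as 30 2 * * * root /usr/local/bin/centos-mirror-sync.sh runs it nightly.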

Configure NFS for Kickstart Network Installations

NFS server support is built into CentOS and running by default, so this is pretty easy. Add the following to /etc/exports:
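The export line, matching the description below:

/repo/centos    172.16.0.0/16(ro,sync,all_squash)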

This exports the directory “/repo/centos” over NFS. Only the subnet 172.16.0.0/16 is allowed access (no credentials required). It is mounted read-only (ro), connections are synchronous as opposed to asynchronous (sync), and all connections are mapped to the anonymous user for security purposes (all_squash). See man exports(5) if you need more help.

Restart NFS via service nfs restart.

I feel like I’m missing something with NFS, but I don’t recall; this was too easy. In my memory there was a struggle with rpc!

Update iptables for NFS

Edit /etc/sysconfig/iptables and throw these rules in there before -A INPUT -j REJECT --reject-with icmp-host-prohibited.
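Roughly the following – portmapper (111) and nfsd (2049) at minimum; the 892 entries assume mountd has been pinned to that port in /etc/sysconfig/nfs:

-A INPUT -s 172.16.0.0/16 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -m state --state NEW -p udp --dport 2049 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -m state --state NEW -p udp --dport 892 -j ACCEPT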

And restart iptables via service iptables restart.

Configure Kickstart to Use Local Repo via NFS

This is an easy one-liner if everything is set up correctly. Add the following after the “install” option within the kickstart configuration.
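With my layout it ends up looking like this, where the server IP and path are assumptions:

nfs --server=172.16.0.10 --dir=/repo/centos/6.3/os/x86_64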

Use Local Repo Post Install

So you want to keep using your new local repo beyond the kickstart installation? No worries. Install apache, configure the vhost and update ks.cfg.

Inside vhosts.conf:
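A sketch, with repo.local as a placeholder hostname:

<VirtualHost *:80>
    ServerName repo.local
    DocumentRoot /repo/centos

    <Directory /repo/centos>
        Options Indexes FollowSymLinks
        Order allow,deny
        Allow from 172.16.0.0/16
    </Directory>
</VirtualHost>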

Add the following rule to iptables:
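The usual port 80 rule, again ahead of the REJECT line:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT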

Restart iptables via service iptables restart.

Start httpd via service httpd start; chkconfig httpd on.

Update the kickstart configuration:
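One way to do it in %post (repo.local is a hypothetical hostname, and this assumes the stock CentOS-Base.repo layout): comment out the mirrorlist entries and point baseurl at the local mirror.

%post
# Point yum at the local mirror for day-to-day use.
sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
       -e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=http://repo.local|' \
    /etc/yum.repos.d/CentOS-Base.repo
%end

Depending on how much of the mirror tree was pulled down, a 6 -> 6.3 symlink under /repo/centos may be needed so $releasever resolves correctly.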

Done!

Kickstart Sample Configuration

For the option “rootpw” use grub-crypt with the hash algorithm specified by authconfig --passalgo=X (to replace DEFAULT_SALTED_ROOT_PASSWORD). In the sample ks.cfg file, I have sha512, so:
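For sha512 that is:

# Paste the resulting hash into: rootpw --iscrypted <hash>
grub-crypt --sha-512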

Using the Kickstart Configuration

The idea is to create a custom ISO with the kickstart configuration embedded, but I haven’t done this yet. So for now, I’m hosting the file as ks.cfg on an intranet HTTP server and booting a CentOS 6.3 netinstall (~200 MB). At the bootloader prompt, specify the extra parameters vmlinuz initrd=initrd.img ks=http://some.host.local/ks.cfg. This installs all the packages, updates as needed, partitions the disk, runs a custom script, and reboots the machine.

Brain dump complete.

PHP + OpenSSL = RSA Shared Key Generation

OK, so I’m super excited and wanted to jot down a note about RSA shared key generation.

Drama Mama

First, I wanted to generate some PEM formatted RSA public and private key pairs. I was going to do it the risky/hard way and exec openssl commands. Using exec is not a good idea if avoidable, but I was desperate. Just to put my mind at ease, I did some quick googling and found out it is possible to do this natively in PHP if you have the openssl extension installed and configured. FANTASTIC!

Then I got the curveball. Using openssl_pkey_get_public doesn’t work as expected: it won’t take a pkey resource handle and return the public key – how deceiving! I thought the thrill was over and I’d have to resort back to exec, until the almighty Brad shed his light upon us… four years ago (doh).

TL;DR

Here’s the code that generates the RSA key pair and fetches the public and private keys.
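A sketch of it – the key size and options are assumptions, and the part Brad pointed out is that openssl_pkey_get_details() (not openssl_pkey_get_public()) is what hands back the public key:

<?php
$config = array(
    'private_key_bits' => 2048,
    'private_key_type' => OPENSSL_KEYTYPE_RSA,
);

// Generate the key pair.
$resource = openssl_pkey_new($config);

// Export the private key as a PEM string (a passphrase can be passed as well).
openssl_pkey_export($resource, $privateKeyPem);

// Fetch the public key PEM via the details array.
$details = openssl_pkey_get_details($resource);
$publicKeyPem = $details['key'];

echo $privateKeyPem, PHP_EOL, $publicKeyPem, PHP_EOL;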

 

Alternatively, one could call the openssl commands directly:
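For example, with placeholder file names:

# Generate a 2048-bit private key, then derive the public key from it.
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem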

 

Easy Custom Styles in Flex 4 Spark

I needed to be able to style the width and height (and percentWidth and percentHeight) of a Spark Group. These styles do not exist, but they are valid properties. After searching for a quick way to do this, and finding nothing about Flex 4 that worked and wasn’t overly complicated, I wrote this little ditty (I am assuming the file is called “ExtStyleGroup” within the package “com.andrewzmmit.flex”):
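The original source isn’t embedded here; a simplified sketch of such a component (not the original) might look like this:

// com/andrewzmmit/flex/ExtStyleGroup.as -- hedged sketch, not the original source.
package com.andrewzmmit.flex {
    import spark.components.Group;

    [Style(name="width", type="Number")]
    [Style(name="height", type="Number")]
    [Style(name="percentWidth", type="Number")]
    [Style(name="percentHeight", type="Number")]
    public class ExtStyleGroup extends Group {

        // Style names to copy onto same-named properties; subclasses or
        // individual instances can add to this at run time.
        protected var styleMap:Array = ["width", "height", "percentWidth", "percentHeight"];

        override public function styleChanged(styleProp:String):void {
            super.styleChanged(styleProp);
            applyMappedStyles();
        }

        override protected function createChildren():void {
            super.createChildren();
            applyMappedStyles();
        }

        protected function applyMappedStyles():void {
            for each (var prop:String in styleMap) {
                var value:* = getStyle(prop);
                if (value !== undefined) {
                    this[prop] = Number(value);
                }
            }
        }
    }
}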

This component, “ExtStyleGroup”, extends Group, which provides the default properties of width, height, percentWidth and percentHeight, but they are now configurable as style attributes within an external CSS stylesheet. It also allows the styleMap to be updated at run time by any class that extends ExtStyleGroup on a case-by-case (per-instance) basis. Next time you go to create another <s:Group />, if you prefer the flexibility over the performance overhead, just use the custom MXML component:
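Usage might look like this – the namespace prefix and style values are arbitrary:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:az="com.andrewzmmit.flex.*">
    <fx:Style>
        @namespace az "com.andrewzmmit.flex.*";
        az|ExtStyleGroup { width: 320; height: 240; }
    </fx:Style>

    <az:ExtStyleGroup>
        <s:Label text="Sized via CSS styles rather than properties"/>
    </az:ExtStyleGroup>
</s:Application>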

In that example, I could’ve imported a style sheet instead with <fx:Style source="style.css" />, but you get the idea…