Talking about Gearman at Etsy Labs

Fri, Aug 5, 2011 08:11 AM
I find myself flying to New York on Monday for some dealnews related business. Anytime I travel I try and find something fun to do at night. (Watching a movie by myself in Provo, Utah was kinda not that fun.) So, this week I asked on Twitter if anything was happening while I would be in town. Anything would do. A meetup of PHP/MySQL users or some design/css/js related stuff for example. Pretty much anything interesting. Well, later that day I received an IM from the brilliant John Allspaw, Senior VP of Technical Operations at Etsy. He wanted me to swing by the Etsy offices and say hi. Turns out it is only a block away from where I would be. Awesome! He also mentioned that he would like to have me come and speak at their offices some time. That would be neat too. I will have to plan better next time I am traveling up there.

Fast forward another day. I get an email from Kellan Elliott-McCrea, CTO of Etsy, wanting to know if I would come to the Etsy offices and talk about Gearman. At first I thought "That is short notice, man. I don't know that I can pull that off." Then I remembered the last time I was asked to speak at an event on short notice based on a recommendation from John Allspaw.

It was in 2008 for some new conference called Velocity. That only turned out to be the best conference I have ever attended. I have been to Velocity every year since and this year took our whole team. In addition, I spoke again in 2009 at Velocity, wrote a chapter for John's book Web Operations that was released at Velocity in 2010 and was invited to take part in the Velocity Summit this year (2011) which helps kick off the planning for the actual conference. The moral of that story for me is: when John Allspaw wants you to take part in something, you do it.

In reality, it was not that tough a decision. Even without John's involvement, I love the chance to talk about geeky stuff. The Etsy and dealnews engineering teams are like twins separated at birth. Every time we compare notes, we are doing the same stuff. For example, we have been trading Open Source code lately. They are using my GearmanManager and we just started using their statistics collection daemon, statsd. So, speaking to their people about what we do seems like a great opportunity to share and get input.

The event is open to the public. So, if you use Gearman, want to use Gearman, or just want to hear how we use Gearman at dealnews, come hear me ramble on about how awesome it is Tuesday night in Dumbo at Etsy Labs. You can RSVP on the event page.

Best Practices for Gearman by Brian Moon
Etsy Labs
55 Washington St. Ste 712
NY 11222

Tuesday, August 09, 2011 from 7:00 PM - 10:00 PM (ET)

MySQL Conference Review

Sat, Apr 17, 2010 10:54 AM
I am back home from a good week at the 2010 O'Reilly MySQL Conference & Expo. I had a great time and got to see some old friends I had not seen in a while.

Oracle gave the opening keynote and it went pretty much like I thought it would. Oracle said they will keep MySQL alive. They talked about the new 5.5 release. It was pretty much the same keynote Sun gave last year. Time will tell what Oracle does with MySQL.

The expo hall was sparse. Really sparse. There was only a fraction of the booths compared to years past. I don't know why the vendors did not come. Maybe because they don't want to compete with Oracle/Sun? In the past you would see HP or Intel have a booth at the conference. But, with Oracle/Sun owning MySQL, why even try? Or maybe they are not allowed? I don't know. It was just sad.

I did stop by the Maatkit booth and was embarrassed to tell Baron (its creator) I was not already using it. I had heard people talk about it in the past, but never stopped to see what it does. It would have only saved me hours and hours of work over the last few years. Needless to say it is now being installed on our servers. If you use MySQL, just go install Maatkit now and start using it. Don't be like me. Don't wait for years, writing the same code over and over to do simple maintenance tasks.

Gearman had a good deal of coverage at the conference. There were three talks and a BoF. All were well attended. Some people seemed to have an AHA! moment where they saw how Gearman could help their architecture. I also got to sit down with the PECL/gearman maintainers and discuss the recent bug I found that is keeping me from using it.

I spoke about Memcached as did others. Again, there was a BoF. It was well attended and people had good questions about it. There seemed to be some FUD going around that memcached is somehow inefficient or not keeping up with technology. However, I have yet to see numbers or anything that proves any of this. They are just wild claims by people that have something to sell. Everyone wants to be the caching company since there is no "Memcached, Inc.". There is no company in charge. That is a good thing, IMO.

That brings me to my favorite topic for the conference, Drizzle. I wrote about Drizzle here on this blog when it was first announced. At the time MySQL looked like it was moving forward at a good pace. So, I had said that it would only replace MySQL in one part of our stack. However, after what, in my opinion, has been a lack of real change in MySQL, I think I may have changed my mind. Brian Aker echoed this sentiment in his keynote address about Drizzle. He talked about how MySQL AB and later Sun had stopped focusing on the things that made MySQL popular and started trying to be a cheap version of Oracle. That is my interpretation of what he said, not his words.

Why is Drizzle different? Like Memcached and Gearman, there is no "Drizzle, Inc.". It is an Open Source project that is supported by the community. It is being supported by companies like Rackspace who hired five developers to work on it. The code is kept on Launchpad and is completely open. Anyone can create a branch and work on the code. If your patches are good, they will be merged into the main branch. But, you can keep your own branch going if you want to. Unlike the other forks, Drizzle has started over in both the code and the community. I personally see it as the only way forward. It is not ready today, but my money is on Drizzle five or ten years from now.

Logging with MySQL

Wed, Mar 24, 2010 12:20 AM
I was reading a post by Dathan Pattishall titled "Cassandra is my NoSQL solution but..". In the post, Dathan explains that he uses Cassandra to store clicks because it can write a lot faster than MySQL. However, he runs into problems with the read speed when he needs to get a range of data back from Cassandra. This is the number one problem I have with NoSQL solutions.

SQL is really good at retrieving a set of data based on a key or range of keys, whereas NoSQL products are really good at writing things and retrieving one item from storage. When looking at redoing our architecture a few years ago to be more scalable, I had to consider these two issues. For what it is worth, the NoSQL market was not nearly as mature then as it is now, so my choices were much more limited. In the end, we decided to stick with MySQL. It turns out that a primary or unique key lookup on a MySQL/InnoDB table is really fast. It is sort of like having a key/value storage system. And, I can still do range based queries against it.

But, back to Dathan's problem: clicks. We store clicks at dealnews. Lots of clicks. We also store views. We store more views than we do clicks. So, lots of views and lots of clicks. (Sorry for the vague numbers, company secrets and all. We are a top 1,000 Compete.com site during peak shopping season.) And we do it all in MySQL. And we do it all with one server. I should disclose we are deploying a second server, but it is more for high availability than processing power. Like Dathan, we only use about the last 24 hours of data at any given time. There are three keys to doing logging like this in MySQL.

Use MyISAM

MyISAM supports concurrent inserts. With concurrent inserts, new rows can be added to the end of a table while SELECTs are running against other parts of the data set. This is exactly the use case for our logging. There are caveats with range queries, as pointed out by the MySQL Performance Blog.

Rotating tables

MySQL (and InnoDB in particular) really sucks at deleting rows. Like, really sucks. Deletes cause locks. Bleh. So, we never delete rows from our logging tables. Instead, we rotate the tables nightly. RENAME TABLE is a (nearly) atomic operation in MySQL. So, we just create a new table.
create table clicks_new like clicks;
rename table clicks to clicks_2010032500001, clicks_new to clicks;

Tada! We now have an empty table for today's clicks. We then drop any table with a date stamp that is more than x days old. Drops are fast; we like drops.
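A minimal PHP sketch of that nightly job (the table name, date format and retention period are illustrative, not our exact code):

```php
<?php

// Build the SQL for one nightly rotation of $table.
function rotation_sql(string $table, int $ts): array {
    $stamp = date("Ymd", $ts);
    return [
        "CREATE TABLE {$table}_new LIKE {$table}",
        "RENAME TABLE {$table} TO {$table}_{$stamp}, {$table}_new TO {$table}",
    ];
}

// Given the output of SHOW TABLES LIKE 'clicks%', pick the datestamped
// tables that are more than $days old and should be dropped.
function tables_to_drop(array $tables, string $base, int $days, int $now): array {
    $cutoff = date("Ymd", $now - $days * 86400);
    $drop = [];
    foreach ($tables as $t) {
        if (preg_match("/^{$base}_(\d{8})/", $t, $m) && $m[1] < $cutoff) {
            $drop[] = $t;
        }
    }
    return $drop;
}
```

A cron job would run the rotation statements right after midnight, then issue a DROP TABLE for each name `tables_to_drop()` returns.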

For querying these tables, we use UNION. It works really well. We just issue a SHOW TABLES LIKE 'clicks%' and union the query across all the tables. Works like a charm.
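Building that query is simple enough to sketch; this assumes the table list came from SHOW TABLES LIKE 'clicks%', and uses UNION ALL so MySQL skips the duplicate-removal pass a plain UNION does (log rows are never duplicated across tables anyway):

```php
<?php

// Glue one SELECT per table together with UNION ALL.
// The column list and WHERE clause here are illustrative.
function union_query(array $tables, string $where): string {
    $selects = [];
    foreach ($tables as $t) {
        $selects[] = "SELECT * FROM {$t} WHERE {$where}";
    }
    return implode("\nUNION ALL\n", $selects);
}

echo union_query(["clicks", "clicks_20100324"], "ts > 1269302400");
```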

Gearman

So, I get a lot of flak at work for my outright lust for Gearman. It is my new duct tape. When you have a scalability problem, there is a good chance you can solve it with Gearman. So, how does this help with logging to MySQL? Well, sometimes MySQL can become backed up with inserts. It happens to the best of us. So, instead of letting that pile up in our web requests, we let it pile up in Gearman. Instead of having our web scripts write to MySQL directly, we have them fire Gearman background jobs with the logging data in them. The Gearman workers can then write to the MySQL server when it is available. Under normal operation, that is in near real time. But, if the MySQL server does get backed up, the jobs just queue up in Gearman and are processed when the MySQL server is available.
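A sketch of both sides using the pecl/gearman extension; the job name, table columns and payload shape are illustrative, not our actual code:

```php
<?php

// Web side: fire-and-forget. doBackground() returns as soon as gearmand
// has accepted the job, so the web request never waits on MySQL.
function log_click_async(GearmanClient $client, array $click): void {
    $client->doBackground("log_click", json_encode($click));
}

// Worker side: a long-lived process that writes when MySQL is available.
function run_log_worker(GearmanWorker $worker, mysqli $db): void {
    $worker->addFunction("log_click", function (GearmanJob $job) use ($db) {
        $click = json_decode($job->workload(), true);
        $stmt  = $db->prepare("INSERT INTO clicks (url, ts) VALUES (?, ?)");
        $stmt->bind_param("si", $click["url"], $click["ts"]);
        $stmt->execute();
    });
    while ($worker->work()) {
        // Loop forever. If MySQL backs up, jobs simply queue in gearmand.
    }
}
```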

BONUS! Insert Delayed

This is our old trick from before we used Gearman. MySQL (MyISAM) has a neat feature where you can have inserts delayed until the table is available. The query is sent to the MySQL server and it answers the client with success immediately. This means your web script can continue on and not get blocked waiting for the insert. But, MySQL will only queue up so many before it starts erroring out. So, it is not as foolproof as a job processing system like Gearman.
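The only change from a normal insert is the DELAYED keyword; a sketch (table and columns are illustrative):

```php
<?php

// DELAYED works on MyISAM tables only.
$sql = "INSERT DELAYED INTO clicks (url, ts) VALUES ('/deal/123', UNIX_TIMESTAMP())";

// The server reports success as soon as the row is queued, so this call
// returns without waiting for the actual write:
// $db->query($sql);
```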

Summary

To log with MySQL:
  • Use MyISAM with concurrent inserts
  • Rotate tables daily and use UNION to query
  • Use delayed inserts with MySQL or a job processing agent like Gearman
Happy logging!

PS: You may be asking, "Brian, what about Partitioned Tables?" I asked myself that before deploying this solution. More importantly, in IRC I asked Brian Aker about MySQL partitioned tables. I am paraphrasing, but he said that if I ever think I might alter that table, I would not trust it with the partitions in MySQL. So, that kind of turned me off of them.

Using ini files for PHP application settings

Tue, Jan 19, 2010 05:24 PM
At dealnews we have three tiers of servers. First is our development servers, then staging and finally production. The complexity of the environment increases at each level. On a development server, everything runs on the localhost: mysql, memcached, etc. At the staging level, there is a dedicated MySQL server. In production, it gets quite wild with redundant services and two data centers.

One of the challenges of this is where and how to store the connection information for all these services. We have done several things in the past. The most common thing is to store this information in a PHP file. It may be per server or there could be one big file like:

<?php

if(DEV){
    $server = "localhost";
} else {
    $server = "10.1.1.25";
}

?>


This gets messy quickly. Option two is to deploy a single file that has the settings in a PHP array. And that is a good option. But, we have taken that one step further using some PHP ini trickeration. We use ini files that are loaded at PHP's startup and therefore the information is kept in PHP's memory at all times.

When compiling PHP, you can use the --with-config-file-scan-dir option to tell PHP to look in that directory for additional ini files. Any it finds will be parsed when PHP starts up. Some distros (Gentoo, I know) use this for enabling/disabling PHP extensions via configuration. For our uses, we put our custom configuration files in this directory. FWIW, you could just put these settings into php.ini, but that is quite messy, IMO.

To get to this information, you can't use ini_get() as you might think. No, you have to use get_cfg_var() instead. get_cfg_var() returns any setting that was in php.ini, or any other .ini file, when PHP started. ini_get() will only return values that are registered by an extension or the PHP core. Likewise, you can't use ini_set() on these variables; get_cfg_var() will always reflect the initial value from the ini file, not anything changed with ini_set().
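A quick illustration, assuming the db.ini example below has been loaded from the scan dir:

```php
<?php

// get_cfg_var() sees anything parsed from the ini files at startup:
$db = get_cfg_var("myconfig.db.mydb.db");  // "mydb" once db.ini is loaded

// ini_get() does not, because the setting is not registered
// by the PHP core or any extension:
$ig = ini_get("myconfig.db.mydb.db");      // false
```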

So, let's look at an example.

; db.ini
[myconfig]
myconfig.db.mydb.db     = mydb
myconfig.db.mydb.user   = user
myconfig.db.mydb.pass   = pass
myconfig.db.mydb.server = host


This is our ini file. The group in the brackets is just for looks; it has no impact on our usage. Because this is parsed along with the rest of our php.ini, it needs a unique namespace within the ini scope. That is what myconfig is for. We could have used a DSN style here, but it would have required more parsing in our PHP code.

<?php

/**
 * Creates a MySQLi instance using the settings from ini files
 *
 * @author     Brian Moon <brianm@dealnews.com>
 * @copyright  1997-Present dealnews.com, Inc.
 *
 */

class MyDB {

    /**
     * Namespace for my settings in the ini file
     */
    const INI_NAMESPACE = "myconfig";

    /**
     * Creates a MySQLi instance using the settings from ini files
     *
     * @param   string  $group  The group of settings to load.
     * @return  object
     *
     */
    public static function init($group) {

        static $dbs = array();

        if(!is_string($group)) {
            throw new Exception("Invalid group requested");
        }

        if(empty($dbs[$group])){

            $prefix = MyDB::INI_NAMESPACE.".db.$group";

            $db   = get_cfg_var("$prefix.db");
            $host = get_cfg_var("$prefix.server");
            $user = get_cfg_var("$prefix.user");
            $pass = get_cfg_var("$prefix.pass");

            $port = get_cfg_var("$prefix.port");
            if(empty($port)){
                $port = null;
            }

            $sock = get_cfg_var("$prefix.socket");
            if(empty($sock)){
                $sock = null;
            }

            $dbs[$group] = new MySQLi($host, $user, $pass, $db, $port, $sock);

            if(!$dbs[$group] || $dbs[$group]->connect_errno){
                throw new Exception("Invalid MySQL parameters for $group");
            }
        }

        return $dbs[$group];

    }

}

?>


We can now call MyDB::init("mydb") and get a mysqli object that is connected to the database we want. No file IO was needed to load these settings except when the PHP process started initially. They are truly constant and will not change while this process is running.

Once this was working, we created separate ini files for our different datacenters. That is now simply configuration information just like routing or networking configuration. No more worrying in code about where we are.

We extended this to all our services: memcached, gearman, whatever. We keep all our configuration in one file rather than having lots of them; it just makes administration easier. Each location has its own unique settings, but every server in that location has the same configuration.

Here is a more real example of how we set up our files.

[myconfig.db]
myconfig.db.db1.db         = db1
myconfig.db.db1.server     = db1hostname
myconfig.db.db1.user       = db1username
myconfig.db.db1.pass       = db1password

myconfig.db.db2.db         = db2
myconfig.db.db2.server     = db2hostname
myconfig.db.db2.user       = db2username
myconfig.db.db2.pass       = db2password

[myconfig.memcache]
myconfig.memcache.app.servers    = 10.1.20.1,10.1.20.2,10.1.20.3
myconfig.memcache.proxy.servers  = 10.1.20.4,10.1.20.5,10.1.20.6

[myconfig.gearman]
myconfig.gearman.workload1.servers = 10.1.20.20
myconfig.gearman.workload2.servers = 10.1.20.21

Forking PHP!

Thu, Jul 23, 2009 01:17 PM
We use PHP everywhere in our stack. For us, it makes sense because we have hired a great staff of PHP developers. So, we leverage that talent by using PHP everywhere we can.

One place where people seem to stumble with PHP is with long running PHP processes or parallel processing. The pcntl extension gives you the ability to fork PHP processes and run lots of children like many other unix daemons might. We use this for various things. Most notably, we use it to run Gearman worker processes. While at the O'Reilly Open Source Convention in 2009, we were asked about how we pulled this off. So, we are releasing the two scripts that handle the forking and some instructions on how we use them.

This is not a detailed post about long running PHP scripts.  Maybe I can get to the dos and don'ts of that another time.  But, these are the scripts we use to manage long running processes.  They work great for us on Linux.  They will not run on Windows at all.  We also never had any trouble running them on Mac OS X.

The first script, prefork.php, is for forking a given function from a given file and running n children that will execute that function. There can be a startup function that is run before any forking begins and a shutdown function to run when all the children have died.
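The core of the technique is just pcntl_fork() in a loop; here is a minimal sketch (not the released script) of forking n children and waiting for them all to exit:

```php
<?php

// Minimal prefork sketch; assumes the pcntl extension on Linux or Mac OS X.
function prefork(int $count, callable $child_fn): void {
    $children = [];
    for ($i = 0; $i < $count; $i++) {
        $pid = pcntl_fork();
        if ($pid === -1) {
            die("fork failed\n");
        } elseif ($pid === 0) {
            $child_fn($i);  // Child: do the work...
            exit(0);        // ...and exit so it never falls back into this loop.
        }
        $children[$pid] = true;  // Parent: remember the child's pid.
    }
    // Parent: block until every child has exited.
    while ($children) {
        $pid = pcntl_wait($status);
        unset($children[$pid]);
    }
}
```

The released script layers the startup and shutdown hooks described above on top of this basic pattern.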

The second script, prefork_class.php, uses a class with defined methods instead of relying on the command line for function names. This script has the added benefit of having functions that can be run just before each fork and after each fork. This allows the parent process to farm work out to each child by changing the variables that will be present when the child starts up. This is the script we use for managing our Gearman workers. We have a class that controls how many workers are started and what functions they provide. I may release a generic class that does that soon. Right now it is tied to our code library structure pretty tightly.

We have also included two examples. They are simple, but do work to show you how the scripts work.

You can download the code from the dealnews.com developers' page.

UPDATE: I have released a Gearman Worker Manager on Github.

The rise of the GLAMMP stack

Fri, May 22, 2009 11:34 AM
First there was LAMP.  But are you using GLAMMP?  You have probably not heard of it because we just coined the term while chatting at work.  You know LAMP (Linux, Apache, MySQL and PHP or Perl and sometimes Python). So, what are the extra letters for?

The G is for Gearman - Gearman is a system to farm out work to other machines, dispatching function calls to machines that are better suited to do work, to do work in parallel, to load balance lots of function calls, or to call functions between languages.

The extra M is for Memcached - memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.

More and more these days, you can't run a web site on just LAMP.  You need these extra tools (or ones like them) to do all the cool things you want to do.  What other tools do we need to work into the acronym?  PostgreSQL replaces MySQL in lots of stacks to form LAPP.  I guess Drizzle may replace MySQL in some stacks soon.  For us, it will likely be added to the stack.  Will that make it GLAMMPD?  We need more vowels!  If you are starting the next must use tool for running web sites on open source software, please use a vowel for the first letter.

Net::Gearman and PHP 5.2.9

Tue, Apr 21, 2009 12:48 PM
I just discovered an incompatibility between Net_Gearman and PHP 5.2.9+.  json_decode() was changed in 5.2.9 to return NULL on invalid JSON strings.  Previously, the bare string had been returned if it was not valid JSON.  That was nice in a way, as you could pass a plain string to json_decode() and not worry about it.  But, in reality, the old behavior could make debugging invalid JSON a nightmare.
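The change is easy to demonstrate:

```php
<?php

// On PHP 5.2.9+, invalid JSON (like a bare, unquoted string) decodes to NULL:
var_dump(json_decode('foo'));    // NULL (pre-5.2.9 returned the string itself)

// Properly quoted JSON strings are unaffected:
var_dump(json_decode('"foo"'));  // string(3) "foo"
```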

I have updated my github fork and requested a pull into the main branch.  Once that is done a new PEAR release can be done.