What is next for message board software?

When I was hired at dealnews.com in 1998, my primary focus was to get our message board (Phorum) up to speed. I had written the first version as a side project for the site. Message boards were a lot simpler back then. Matt's WWWBoard was the gold standard of the time. And really, the functionality has been only evolutionary since. We added attachments to Phorum in 2003 or something. That was a major new feature. In Phorum 5 we added a module system that was awesome. But, that was just about the admin and not the user. From the user's perspective, message boards have not changed much since 1997. I saw this tweet from Amy Hoy and it got me to thinking about how message boards work. Here is the typical user experience:
  1. Go to the message board
  2. See a list of categories
  3. Drill down to the category they want to read
  4. Scroll through a list of messages that are in reverse chronological order by original post date or most recent post date
  5. Click a message and read it.
  6. Go to #3, repeat
Every message board software package pretty much works like that and has for over 10 years. And it kind of sucks. What a user would probably rather experience is:
  1. Go to the message board
  2. The most interesting things (to this user) are listed right there on the page. No drill down needed.
  3. Click one and read it.
  4. Goto #2, repeat.
Sounds easy? That #2 is easy to type but very hard to accomplish. I think it is conceivably doable if you are running a site that has all the data. Stackoverflow comes close. When you land on the site, they default the page to the "interesting" posts. However, they are not always interesting to me. They are making general assumptions about their audience. For example, right now, the first one is tagged "delphi". I couldn't care less about that language or any posts about it. It's a good try, but misses by oh so far. This is not a Stackoverflow hate post. They are doing a good job. So, what do I do when I land there? I ignore the front page and click Tags (#2 in the first list), then pick a tag I want to read about (#3 in the first list). Lo and behold, the page I get is "newest". So, I end up doing exactly what is in the first list I mentioned. They do offer other sort options. But, they chose newest as the default. And from years of watching user behavior, 80% - 90% of people go with the good ol' default. This brings me to another point, though, about the types of message boards out there.

Stackoverflow is a classic example of a help message board. People come there and ask a question. Other people come along and answer the question. Then more people come along and vote on whether the answers (and questions) are any good. This is one really nice feature that I think will have to become a core feature in any message board of the future. The signal to noise ratio can get so out of whack, you need human input to help decide what is good and what is noise. I think the core of the application has to rely on that if we are ever going to achieve the desired experience.

The second type of message board is a conversational system. It is almost like a delayed chat room. People come to a message board and post about their cat or ask who watched a TV show, that kind of thing. This has a completely different dynamic than the help message board. You can't really vote on whether a post is good or bad. The obvious exception is spam, which of course needs to be recognized and dealt with.

So, how do you know what content is desirable for the user that is entering the site right now? This concept has already been laid out for us: the social graph. You have to give users a way to associate with other users. If Bob really likes Tom's posts, he is probably more interested in reading Tom's post from 30 minutes ago than one from some new guy who just joined the site and posted 1 minute ago. The challenge here is getting people to interconnect...but not too much. Everyone has that aunt on Facebook that follows you, your roommate and anybody else she can. She would follow your dog if he had a Facebook account. So, those people would still get a crappy experience if the whole system relied on the social graph. The other side is the people that will never "follow", "like" or whatever you call it another person. Their experience would suffer as well. One key ingredient here is that you need to own this data. You can't just throw Like buttons and Facebook Connect on your message board and think you can leverage that data. That data is for Facebook, not you. I think the help message boards could benefit from the social graph as well.

Another aspect of what is most important to a user is discussions they are involved in. That could mean ones they started, ones they have replied to or simply ones they have read. Which of those aspects are more important than the others? Clearly if you started a discussion and someone has replied, that is going to interest you. If you posted a reply, you may be done with the topic or you may be waiting on a response. It would take some serious natural language algorithms to decide which is the case. For things you have read, I think you have to consider how many times the user has read the discussion. If every time it is updated they read it, they probably will want to read it again the next time it is updated. If they have only read it once, maybe they are not as interested.
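
To make that concrete, here is a toy scoring sketch. To be clear, this is not code from Phorum or any other package; the signal names and the weights are all made up to illustrate the idea of combining the social graph with a user's own involvement:

<?php
// Hypothetical inputs. $affinity is a map of user => weight built from your
// own social graph data. $discussion holds made-up per-user signals.
function score_discussion(array $discussion, array $affinity) {
    $score = 0;

    // Social graph: boost threads last touched by people this user follows.
    if (isset($affinity[$discussion["last_post_by"]])) {
        $score += 10 * $affinity[$discussion["last_post_by"]];
    }

    // Involvement: threads I started or replied to matter more.
    if (!empty($discussion["started_by_me"])) {
        $score += 20;
    } elseif (!empty($discussion["i_replied"])) {
        $score += 10;
    }

    // Repeat readers probably want the next update too.
    if (!empty($discussion["updated_since_last_read"])) {
        $score += min($discussion["times_read"], 5) * 2;
    }

    return $score;
}
?>
Sort the candidate discussions by that score and the front page becomes personal instead of just reverse chronological.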

The last aspect of message boards is grouping things. This is the part I actually struggle with the most. The easy first answer is tagging. Don't force the user down a single trail; let them tag posts instead of only posting them in one neat contained area. That gets you halfway there. Let's use Stackoverflow (I really do like the site) as an example again. The first thing I do is go to Tags and click on PHP. I like helping people with PHP problems. So, is that really any different from categorization? Sure, there could be someone out there that really likes helping with Javascript. And if the same post was tagged with both tags then their coverage of potential help is larger. But, some of the time those tags are wrong when they tag it with more than one tag. The problem they need help with is either PHP or Javascript, most likely not both. They just don't know what they are doing. For example, there is this post on Stackoverflow. The user tagged it PHP and database-design. There is no PHP in the question. I am guessing he is using PHP for the app. But, it really never comes up and he is only talking about database design. So, who did the PHP tag help there? I don't think it helped him. And it only wasted my time. Having written all that, a free-for-all approach where there is no filtering sucks too. ARGH! It just all sucks. That brings us back to what Amy said in a way. Perhaps moderated tagging is an answer. I have not seen a way on Stackoverflow to untag a post. That would let people correct others. I am gonna write that down. If you work at Stackoverflow and are reading this, you can use that idea. Just put a comment in the code about how brilliant I am or something that aliens will find one day.

So, I am done. I know exactly what to do right? I just have to make code that does everything I put in the previous paragraphs. Man I wish it were that easy. When you want to write a distributed application to do it, the task is even more daunting. If I controlled the data and the servers and the code, I could do crazy things that would make great conference talks. But, it kind of falls apart when I want to give this code to a 60 year old retired guy that is starting a hobby site for watching humming birds on a crappy GoDaddy account. Yeah, he is not installing Sphinx or HandlerSocket or Gearman. Those are all things I would want to use to solve this problem in a scalable fashion. At that point you have two choices. Aim for the small time or the big time. If you aim for the small time, you may get lots of installs, but, you will be hamstrung. If you aim for the big time, you may be the only guy that ever uses the code. That is a tough decision.

What have I missed? I know I missed something. Are there other types of message boards? I can definitely see some sub-types. Perhaps a board where ideas instead of help messages are posted. Or maybe the conversations are more show off based as in a user posting pictures or videos for comment. Is there already something out there doing this and I have just missed it? Let me know what I have missed please.

Sharing gotchas on Facebook and Twitter

I have been working on adding some sharing features to dealnews.com. Dealing with Facebook and Twitter has been nothing if not frustrating. Neither one seems to understand how to properly deal with escaping a URL. At best they do it one way, but not all ways. At worst, they flat out don't do it right. I thought I would share what we found out so that someone else may be helped by our research.

Facebook

Facebook has two main ways to encourage sharing of your site on Facebook. The older way is to "Share" a page. The second, newer, cooler way to promote your page/site on Facebook is with Facebook's Like Button. Both have the same bug. I will focus on Share as it is easier to show examples of sharing. To do this, you make a link and send it to a special landing page on Facebook's site. But, let's say my URL has a comma in it. If it does, Facebook just blows up in horrible fashion. The users of Phorum have run into this problem too. In Phorum, we dealt with register_globals in a unique way long ago. We just don't use traditional query strings on our URLs. Instead of the traditional var1=1&var2=2 format, we decided to use a comma-delimited query string. 1,2,3,var4=4 is a valid Phorum URL query string.

According to RFC 3986, a query string is made up of:
query = *( pchar / "/" / "?" )
where pchar is defined as:
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
and finally, sub-delims is defined as:
sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / 
"*" / "+" / "," / ";" / "="
That is RFC talk for "A query string can have an un-encoded comma in it as a delimiter." So, in Phorum we have URLs like http://www.phorum.org/phorum5/read.php?61,145041,145045. That is the post in Phorum talking about Facebook's problem. It is a valid URL. The commas do not need to be escaped. They are delimiters much like an & would be in a traditional URL. So, what happens when you share this URL on Facebook? Well, a share link would look like http://www.facebook.com/share.php?u=http%3A%2F%2Fwww.phorum.org%2Fphorum5%2Fread.php%3F61%2C146887%2C146887. If I go to that share page and then look in my Apache logs I see this:
66.220.149.247 - - [18/Nov/2010:00:47:51 -0600] "GET /phorum5/read.php?61%2C146887%2C146887 HTTP/1.1" 302 26 "-" "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)"
Facebook sent %2C instead of a comma? It decoded the other stuff in the URL. The slashes, the question mark, all of it. So, what is their deal with commas? Well, maybe I can get around it and not send an encoded URL to the share page at all. Nope, same thing. So, they are proactively encoding commas in URL query strings.

This has two effects. The first is that the share app attempts to pull in the title, description, etc. from the page. In this case, we redirect the request because the query string is invalid for a Phorum message page. So, they end up getting the main Phorum page. In the case of dealnews, we usually throw a 400 HTTP error when we get invalid query strings. Neither of these gets the user what he wanted. The second problem is that the URL that is clickable once the user has shared it is not valid. So, the whole thing was just a huge waste of time.

I have submitted this to the Facebook Bugzilla. The only workaround is to use a URL shortener or to not use commas in your URLs. Just make sure the shortener does not use commas. I guess you could use special URLs for Facebook that use something besides a comma and are then redirected to the real URL with commas. I don't know what that character would be, I am just guessing.
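
One server-side band-aid, and this is just a sketch and not something Phorum or dealnews actually ships, is to catch the mangled request and bounce it back to the canonical URL instead of treating the query string as invalid:

<?php
// Hypothetical front-controller snippet: if a crawler or share widget sent us
// %2C where a comma belongs, decode it and redirect to the canonical URL.
$qs = isset($_SERVER["QUERY_STRING"]) ? $_SERVER["QUERY_STRING"] : "";

if (stripos($qs, "%2C") !== false) {
    $fixed = str_ireplace("%2C", ",", $qs);
    header("Location: " . $_SERVER["SCRIPT_NAME"] . "?" . $fixed, true, 301);
    exit;
}
?>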

Twitter

Twitter's issues deal with their transition from their old interface to their new interface. Twitter is in the process of (or is done with) rolling a new UI on their site. The link in the old site to share something on Twitter was something like: http://twitter.com/home?status=[URL encoded text here]. This worked pretty darn well. You could put any valid URL encoded text in there and it worked. However, that now redirects you to their new interface's way of updating your status and they don't encode things right.

If I want to tweet "I love to eat pork & beans" I would make the URL http://twitter.com/home?status=I+love+to+eat+pork+%26+beans. Twitter then takes that, decodes the query string and redirects me to http://twitter.com/?status=I%20love%20to%20eat%20pork%20&%20beans. The problem is that they did not re-encode the &. It is in the bare URL. So, when I land on my Twitter page, my status box just says "I love to eat pork ". Which, while true, is not what I meant to tweet. This bug has been submitted to Twitter, but has yet to be fixed.

The second problem is with the new site and how it deals with validly encoded spaces. Spaces can be escaped two ways in a URL. The first, older way (which the PHP function urlencode uses) is to encode spaces as a plus (+) sign. This comes from the standard for how forms submit (or used to submit) data. It is understood by all browsers. The second way comes from the later RFCs written about URLs. They state that spaces in a URL should be escaped like other characters, by replacing a space with %20. The old Twitter UI would accept either one just fine. And, if you send that to the old status update URL it will redirect you (see above) with %20 in the URL instead of +. However, if you send + to the new Twitter UI, as above, you get "I+love+to+eat+pork+&+beans" in your status box. The only solution is to not send + as an encoding for space to Twitter. In PHP you can use the function rawurlencode to do this. It conforms to the RFC(s) on URL encoding. Doing so with the new linking pattern generates the URL http://twitter.com/?status=I%20love%20to%20eat%20pork%20%26%20beans which works great. This was also reported to Twitter as a bug by our team.
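
Here is the difference in PHP, using the same status text as above:

<?php
$status = "I love to eat pork & beans";

// urlencode() uses + for spaces (form-style encoding). The new Twitter UI
// shows the + signs literally in the status box.
echo "http://twitter.com/?status=" . urlencode($status) . "\n";
// http://twitter.com/?status=I+love+to+eat+pork+%26+beans

// rawurlencode() uses %20 per RFC 3986. This one works.
echo "http://twitter.com/?status=" . rawurlencode($status) . "\n";
// http://twitter.com/?status=I%20love%20to%20eat%20pork%20%26%20beans
?>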

So, maybe that will help someone out that is having issues with sharing your site on the two largest social networks. Good luck with your social media development.

Monitoring PHP Errors

PHP errors are just part of the language. Some internal functions throw warnings or notices and they seem unavoidable. A good case is parse_url. The point of parse_url is to take apart a URL and tell me the parts. Until recently, the only way to validate a URL was a regex. You can now use filter_var with the FILTER_VALIDATE_URL filter. But, in the past, I would use parse_url to validate the URL. It worked because the function returns false if the value is not a URL. But, if you give parse_url something that is not a URL, it throws a PHP warning. The result was that I would use the evil @ to suppress errors from parse_url. Long story short, you get errors on PHP systems. And you should not just ignore them.
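
For comparison, here is what the two approaches look like side by side (a minimal sketch; the URL is made up):

<?php
$url = "http://www.example.com/path?foo=bar";

// The old habit: suppress the warning parse_url can throw on garbage input.
$parts = @parse_url($url);

// The quieter way: validate first, then parse (FILTER_VALIDATE_URL, PHP 5.2+).
if (filter_var($url, FILTER_VALIDATE_URL) !== false) {
    $parts = parse_url($url);
} else {
    $parts = false;
}
?>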

In the past we just logged them and then had the log emailed daily. On some irregular schedule we would look through them and fix things to make the code more solid. But, when you start having lots of traffic, one notice on one line can cause 1,000 error messages in a short time. So, we had to find a better way to deal with them.

Step 1: Managing the aggregation

The first thing we did was modify our error handler to log a human readable entry to the defined PHP error log and to write a serialized (json_encoded, actually) version of the error to a second file. This second file makes parsing of the error data super easy. Now we could aggregate the logs on a schedule, parse them and send reports as needed. We have fatal errors monitored more aggressively than non-fatal errors. We also get daily summaries. The one gotcha is that PHP fatal errors do not get sent to the error handler. So, we still have to parse the human readable file for fatal errors. A next step may be to have the logs aggregated to a central server via syslog or something. Not sure where I want to go with that yet.
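
A stripped-down sketch of that kind of handler follows. The file path and field names are placeholders, and our real handler does quite a bit more, but the shape is the same:

<?php
// Log a human readable line for people and a JSON line for scripts.
// Remember: fatal errors never reach a handler registered this way.
function dual_log_error_handler($errno, $errstr, $errfile, $errline) {
    // Human readable log (goes wherever error_log/php.ini points).
    error_log("PHP error [$errno]: $errstr in $errfile on line $errline");

    // Machine readable log for the aggregation scripts.
    $entry = array(
        "time"    => date("c"),
        "errno"   => $errno,
        "message" => $errstr,
        "file"    => $errfile,
        "line"    => $errline,
    );
    file_put_contents("/var/log/php/errors.json.log", json_encode($entry) . "\n", FILE_APPEND);

    return false; // let PHP's normal error handling continue
}

set_error_handler("dual_log_error_handler");
?>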

Step 2: Monitoring

Even with the logging, I had no visibility into how often we were logging errors. I mean, I could grep logs, do some scripting to calculate dates, etc. Bleh, lots of work. We recently started using Circonus for all sorts of metric tracking. So, why not PHP errors? Circonus has a data format that allows us to send any random metrics we want to track to them. They will then graph them for us. So, in my error handler, I just increment a counter in memcached every time I log an error. When Circonus polls me for data, I just grab that value out of memcached and give it to them. I can then have really cool graphs that show me my error rate. In addition I can set up alerts. If we start seeing more than 1 error per second, we get an email. If we start seeing more than 5 per second, we get paged. You get the idea.
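
Roughly, the two pieces look like this. The key name and the output format are placeholders; check the Circonus docs for what their checks actually expect:

<?php
// In the error handler: bump a counter every time we log an error.
$mc = new Memcached();
$mc->addServer("localhost", 11211);

if ($mc->increment("php_error_count") === false) {
    $mc->add("php_error_count", 1); // key did not exist yet, seed it
}
?>
And a tiny endpoint for Circonus to poll over HTTP:

<?php
// Report the counter and reset it for the next polling interval.
$mc = new Memcached();
$mc->addServer("localhost", 11211);

$count = (int) $mc->get("php_error_count");
$mc->set("php_error_count", 0);

header("Content-Type: application/json");
echo json_encode(array("php_errors" => $count));
?>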

The Twitterverse:

I polled my friends on Twitter and two different services were mentioned. I don't know that I would actually send my errors to a service, but it sounds like an interesting idea. One was HopToad. It appears to be specifically for application error handling. The other was Loggly, which appears to be a more generic log service. Both are very interesting concepts. The control freak in me would have issues with it. But, that is a personal problem.

Most others either manually reviewed things or had some similar notification system in place. Anyone else have a brilliant solution to monitoring PHP errors? Or any application level errors for that matter?

P.S. For what it is worth, our PHP error rate is about .004 per second. This includes some notices. Way too high for me. I want to approach absolute zero. Let's see if all these new tools can help us get there.

Boycott NuSphere. They are spammy spammers

It took a lot for me to finally write this post. I tweeted about it a while back. It seems NuSphere needs more business. They have resorted to spamming people to promote their PhpED product. I have gotten emails to email addresses that:
  1. I know are not on any mailing list
  2. Are on web pages as plain mailto: anchor tags for good reasons.
In one case, it was the security@phorum.org address we have on the site to make it easy for people to report any security related issues. We get all kinds of spam because of this, but it is worth it to have an easy access address for security issues.

In the other case, the email addresses used were on the dealnews.com jobs page. It was the addresses that are used to accept resumes.

Now, today, I started getting them to my personal inboxes.



Apparently, they sent it out the first time with a misspelling, so they had to send it out again!?!?

Now, the NuSphere people posted a tweet that claims the emails are not coming from them. That it's an independent marketer that "have permission to sell our products". First, the links in the email go to the NuSphere site, not a 3rd party. I would think an independent marketer would want to take me to their site. Second, if that is the case, revoke the permission you have given them to sell your products. Don't use a marketing/sales agreement as a shield to allow people in the PHP Community to be spammed in your name.

PHP Community, please do not support NuSphere. They are spammers, directly or indirectly. If they do this to promote their product, how will they find a way to hose their user base later on?

PHP 5.3 and mysqlnd - Unexpected results

I have started seriously using PHP 5.3 recently due to it finally making it into Portage. (Gentoo really isn't full of bleeding edge packages people.) I have used mysqlnd a little here and there in the past, but until it was really coming to my servers I did not put too much time into it.

What is mysqlnd?

mysqlnd is short for MySQL Native Driver. In short, it is a driver for MySQL for PHP that uses internal functions of the PHP engine rather than the externally linked libmysqlclient that has been used in the past. There are two reasons for this. The first reason is licensing. MySQL is a GPL project. The GPL and the PHP License don't play well together. The second is better memory management and hopefully more performance. Being a performance junkie, this is what piqued my interest. Enabling mysqlnd means it is used by the older MySQL extension, the newer MySQLi extension and the MySQL PDO driver.

New Key Feature - fetch_all

One new feature of mysqlnd is the fetch_all method on MySQLi result objects. At both dealnews.com and in Phorum I have written a function to simply run a query, fetch all the results into an array and return it. It is a common operation when writing API or ORM layers. mysqlnd introduces a native fetch_all method that does this all in the extension. No PHP code needed. PDO already offers a fetchAll method, but PDO comes with a little more overhead than the native extensions, and I have been using the mysql functions for 14 years. I am very happy using them.
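
As a quick illustration (the table and connection here are made up), the usual loop versus the new method looks like this:

<?php
// The old way: loop in PHP userland.
$res  = $mysqli->query("select id, name from some_table");
$rows = array();
while ($row = $res->fetch_assoc()) {
    $rows[] = $row;
}

// With mysqlnd: one call, and the loop happens inside the extension.
$res  = $mysqli->query("select id, name from some_table");
$rows = $res->fetch_all(MYSQLI_ASSOC);
?>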

Store Result vs. Use Result

I have spoken in the past (see my slides and interview: MySQL Tips and Tricks) about using mysql_unbuffered_query or using mysqli_query with the MYSQLI_USE_RESULT flag. Without going into a whole post about that topic, it basically allows you to stream the results from MySQL back into your PHP code rather than having them buffered in memory. In the case of libmysqlclient, they could be buffered twice. So, my natural thought was that using MYSQLI_USE_RESULT with fetch_all would yield the most awesome performance ever. The data would not be buffered and it would get put into a PHP array in C instead of in PHP code. The code I had hoped to use would look like:
$res = $db->query($sql, MYSQLI_USE_RESULT);
$rows = $res->fetch_all(MYSQLI_ASSOC);
But, I quickly found out that this does not work. For some reason, it is not supported. fetch_all only works with the default, which is MYSQLI_STORE_RESULT. I filed a bug, which was marked bogus and which I put back to new, because I really don't see a reason this should not work other than an oversight by the mysqlnd developers. So, I started doing some tests in hopes I could show the developers how much faster using MYSQLI_USE_RESULT could be. What happened next was not expected. I ended up benchmarking several different options for fetching all the rows of a result into an array.

Test Data

I tested using PHP 5.3.3 and MySQL 5.1.44 using InnoDB tables. For test data I made a table that has one varchar(255) column. I filled that table with 30k rows of random lengths between 10 and 255 characters. I then selected all rows and fetched them using 4 different methods.
  1. mysqli_result::fetch_all*
  2. PDOStatement::fetchAll
  3. mysqli_query with MYSQLI_STORE_RESULT followed by a loop
  4. mysqli_query with MYSQLI_USE_RESULT followed by a loop
In addition, I ran this test with mysqlnd enabled and disabled. For mysqli_result::fetch_all, only mysqlnd was tested as it is only available with mysqlnd. I ran each test 6 times and threw out the worst and best result for each test. FWIW, the best and worst did not show any major deviation for any of the tests. For measuring memory usage, I read the VmRSS value from Linux's /proc data. memory_get_usage() does not show the hidden memory used by libmysqlclient and does not seem to show all the memory used by mysqlnd either.
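
The rough shape of the benchmark looked something like the sketch below. It is not the exact script that produced the graphs; the table name and credentials are made up, and each fetch method was swapped in where noted:

<?php
// VmRSS in kB for the current process (Linux only), as described above.
function peak_rss() {
    $status = file_get_contents("/proc/self/status");
    preg_match("/VmRSS:\s+(\d+) kB/", $status, $m);
    return (int) $m[1];
}

$mysqli = new mysqli("localhost", "test", "test", "test");

$start = microtime(true);

$res  = $mysqli->query("select col from bench_rows", MYSQLI_STORE_RESULT);
$rows = $res->fetch_all(MYSQLI_ASSOC); // swap in the other fetch methods here

printf("time: %.4f sec, rss: %d kB, rows: %d\n",
    microtime(true) - $start, peak_rss(), count($rows));
?>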



So, that is what I found. The memory usage graphs are all what I thought they would be. PDO has more overhead by its nature. Storing the result always uses more memory than using it. mysqli_result::fetch_all uses less memory than the loop, but more than directly using the results.

There are some very surprising things in the timing graphs, however. First, the tried and true method of using the result followed by a loop is clearly still the right choice with libmysqlclient. However, it is a horrible choice for mysqlnd. I don't really see why this is so. It is nearly twice as slow. There is something really, really wrong with MYSQLI_USE_RESULT in mysqlnd. There is no reason it should ever be slower than storing the result and then reading it again. This is also evidenced in the poor performance of PDO (since even PDO uses mysqlnd when it is enabled). PDO uses an unbuffered query for its fetchAll method and it too got slower. It is noticeably slower than libmysqlclient. The good news, I guess, is that if you are using mysqlnd, the fetch_all method is the best option for getting all the data back.

Next Steps

My next steps from here will be to find some real workloads that I can test this on. Phorum has several places where I can apply real world page loads to these different methods and see how they perform. Perhaps the test data is too small. Perhaps the number of columns would have a different effect. I am not sure.

If you are reading this and have worked on or looked at the mysqlnd code and can explain any of it, please feel free to comment.

PHP and Memcached: The state of things

Memcached is the de facto standard for caching in dynamic web sites. PHP is one of the most widely used languages on the web. So, naturally there is lots of interest in using the two together. There are two choices for using memcached with PHP: PECL/memcache and PECL/memcached. Great names, huh? But as of this writing there are issues with the two most popular memcached libraries for PHP. This is a summary of those issues that I hope will help people being hurt by them and may bring about some change.

PECL/memcache

This is the older of the two and was the first C based extension for using memcached with PHP. Before this all the options were native PHP code. While they worked, they were slower of course. C > PHP. That is just fact. However, there has not been much active development on this code in some time. Yes, they have fixed bugs, but support for new features in the memcached server has not been added to this extension. Newer versions of the server support a more efficient binary protocol and many new features. In addition, there are some parts of the extension that simply don't work anymore.

The most glaring one is the delete() function. It takes a second parameter that is documented as: "the item will expire after timeout seconds". In fact that was never a feature of memcached. It was completely misunderstood by the original extension authors. When that parameter was supported, it locked the key for timeout seconds and would not allow a new add operation on that key. On top of that, the feature was completely removed in memcached 1.4.0. So, if you send a timeout to the delete function, you simply get a failure. This is creating a huge support issue in the memcached community (not the PHP/PECL community) with people not being able to delete keys because of bad documentation and unsupported behavior. I sent a documentation patch for this function to the PHP Docs list. I then modified it based on feedback. But since then I have heard nothing about it getting merged. I have a PHP svn account, but even if I do have karma on the docs repository, I don't want to hijack the docs without the support of the people that work on them all the time. If you are reading this and can change the docs, please make the documentation for that function say "DON'T USE THIS PARAMETER, IT HAS BEEN DEPRECATED IN THE MEMCACHED SERVER!" or something.
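
If you are stuck on PECL/memcache today, the short version is: never pass that second argument. A minimal sketch (host and key are made up):

<?php
$memcache = new Memcache();
$memcache->connect("localhost", 11211);

// Wrong: the timeout argument maps to a server behavior that no longer
// exists, so against memcached 1.4+ this simply fails.
$memcache->delete("some_key", 30);

// Right: just delete the key.
$memcache->delete("some_key");
?>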

Not too long ago the extension was officially abandoned by its original maintainers. Some people stepped up and claimed they wanted to see it continue. But, since that time, there have been no releases. There are bugs being closed, though, so maybe there are good things coming.

A very problematic issue with this extension is the 3.0 beta release. It needs to just die. It has huge bugs that, IMO, were introduced by the previous maintainers in an effort to bring it up to speed, but they never saw them through to make sure the new code worked. In their defense, it is marked as beta on PECL. But, thanks to Google, people don't see the word beta anymore. There are lots of people using this version and when they get bad results they blame the whole memcached world. Really, the new maintainers would do the world a favor if they just removed the 3.x releases from the PECL site.

PECL/memcached

This extension was started by Andrei Zmievski while working at Digg.com as an open source developer. It uses libmemcached, a C/C++ memcached client library that does all the memcached work. This made it quite easy to support the new features in the memcached daemon, as it was being developed at the same time as the new server. However, it has not had a stable release on PECL in nearly a year, except for releases to make it compatible with new versions of libmemcached. No bug fixes and no new features. There are currently 28 open bugs on PECL for this extension. Not all of them are bugs. Some are feature requests. The ironic thing is that the GitHub repository for this extension has seen a lot of development. But, none of these bug fixes have made it into the official PECL channel. And some of these bugs are major and others are just a huge WTF for a developer.

The most major bug is one that I found. If you use persistent connections with this extension, it basically leaks those connections, not reusing an existing connection but also not closing the ones already made. This uses up the memory in your processes until they crash, and it creates an ever-growing number of connections to your memcached server until it has no more connections available. Andrei does report in the bug that it is fixed in his GitHub. But, for now, you can't use persistent connections.

The big WTF for a developer is that some functions don't handle a numeric key properly.
<?php
$mc = new Memcached();
$mc->addServer("localhost", 11211);

$mc->set(123, "yes!");

var_dump($mc->getMulti(array(123)));
?>
The above code should generate:
array(1) {
  [123]=>
  string(4) "yes!"
}
But, in reality, it generates:
bool(false)
The set succeeds, but the getMulti fails. Again, this is being fixed in GitHub, but is not available for release on PECL.

Compatibility

One big issue with these two extensions is that they are not drop in replacements for each other. If you want to move from one to the other, you have to take into consideration some time to convert your code. For instance, the older extension's set() function takes flags as the third parameter. The newer extension does not take a flags parameter at all. The older uses get() with an array to do a multi-get and the newer extension has a separate method for that called getMulti(). There are others as well.
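
A small sketch of those differences (hosts, keys and values here are made up):

<?php
// PECL/memcache: flags (e.g. compression) go in the third argument of set(),
// and get() with an array of keys does a multi-get.
$old = new Memcache();
$old->connect("localhost", 11211);
$old->set("foo", "bar", MEMCACHE_COMPRESSED, 300);
$values = $old->get(array("foo", "baz"));

// PECL/memcached: no flags argument on set(), and multi-get is its own method.
$new = new Memcached();
$new->addServer("localhost", 11211);
$new->set("foo", "bar", 300);
$values = $new->getMulti(array("foo", "baz"));
?>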

Summary

So, what should you do as a PHP developer? If you are deploying memcached today, I would use the 2.2.x branch of PECL/memcache. It is the most stable. Just avoid the delete bug. It is not as fast and does not have the newer features. But, it is very reliable for set, get, add.... the basics of memcached. For the long term, it is a bit unclear. PECL/memcached looks to fix a lot of things in 2.0. But, I have not used it yet. In addition, the long term growth of the project is a bit in question. Will there be a 2.1, 2.2, etc.? I hope so. The other unknown is whether the people fixing the PECL/memcache bugs will keep it up and release a good stable product that supports new features of the server. Again, I hope so. The best scenario would be to have a choice between two fully compatible and feature rich extensions. Keep your fingers crossed.

PHP generated code tricks

Something that is great about PHP is that you can write code that generates more PHP code to be used later. Now, I am not saying this is a best practice. I am sure it violates some rule in some book somewhere. But, sometimes you need to be a rule breaker.

A simple example is taking a database of configuration information and dumping it to an array. We do this for each publication we operate. We have a publication table. It contains the name, base URL and other stuff that is specific to that publication. But, why query the database for something that only changes once in a blue moon? We could cache it, but that would still mean an on demand database hit whenever the cache was empty. The easy solution is to just dump the data to a PHP array and put it on disk.
<?php
$sql = "select * from publications";
$res = $mysqli->query($sql);

$pubs = array();
while ($row = $res->fetch_assoc()) {
    $pubs[$row["publication_id"]] = $row;
}

// Escape backslashes and single quotes so the serialized blob survives being
// embedded in a single-quoted PHP string.
$pubs_ser = str_replace(array("\\", "'"), array("\\\\", "\\'"), serialize($pubs));

$php_code = "<?php global \$PUBLICATIONS; \$PUBLICATIONS = unserialize('$pubs_ser'); ?>";

file_put_contents("/some/path/publications.php", $php_code);
?>
Now you can include the publications.php file and have a global variable named $PUBLICATIONS that holds the publication settings. But, how do we load a single publication without knowing numeric ids? Well, you could make some constants.
<?php
$sql = "select * from publications";
$res = $mysqli->query($sql);

$pubs = array();
$constants = array();
while ($row = $res->fetch_assoc()) {
    $pubs[$row["publication_id"]] = $row;
    $constants[$row["publication_id"]] = strtoupper($row["name"]);
}

// Escape backslashes and single quotes so the serialized blob survives being
// embedded in a single-quoted PHP string.
$pubs_ser = str_replace(array("\\", "'"), array("\\\\", "\\'"), serialize($pubs));

$php_code = "<?php\n";
$php_code .= "global \$PUBLICATIONS;\n";
$php_code .= "\$PUBLICATIONS = unserialize('$pubs_ser');\n";

// One constant per publication, e.g. define('DEALNEWS', 1);
foreach ($constants as $id => $const) {
    $php_code .= "define('$const', $id);\n";
}

$php_code .= "?>";

file_put_contents("/some/path/publications.php", $php_code);
?>

So, now, we have constants. We can do stuff like:
<?php
//load a publication
require_once "publications.php";

echo $PUBLICATIONS[DEALNEWS]["name"];
?>
But, how about autoloading? It would be nice if I could just autoload the constants.
<?php
$sql = "select * from publications";
$res = $mysqli->query($sql);

$pubs = array();
$constants = array();
while ($row = $res->fetch_assoc()) {
    $pubs[$row["publication_id"]] = $row;
    $constants[$row["publication_id"]] = strtoupper($row["name"]);
}

// Escape backslashes and single quotes so the serialized blob survives being
// embedded in a single-quoted PHP string.
$pubs_ser = str_replace(array("\\", "'"), array("\\\\", "\\'"), serialize($pubs));

$php_code = "<?php\n";
$php_code .= "class PUB_DATA {\n";

// One class constant per publication, e.g. const DEALNEWS = 1;
foreach ($constants as $id => $const) {
    $php_code .= "    const $const = $id;\n";
}

$php_code .= "    protected \$pubs_ser = '$pubs_ser';\n";
$php_code .= "}\n";
$php_code .= "?>";

file_put_contents("/some/path/pub_data.php", $php_code);
?>
Then we create a class in our autoloading directory that extends that generated class.
<?php
require_once "pub_data.php";

class Publication extends PUB_DATA {

    private $pub;

    public function __construct($pub_id) {
        $pubs = unserialize($this->pubs_ser);
        $this->pub = $pubs[$pub_id];
    }

    public function __get($var) {
        if (isset($this->pub[$var])) {
            return $this->pub[$var];
        } else {
            // Exception
        }
    }

}
?>
Great, now we can do things like:
$pub = new Publication(Publication::DEALNEWS);
echo $pub->name;
The only problem that remains is getting the generated code to all your servers. We use rsync. It works quite well. You may have a different solution for your team. Back when we ran our own in-house ad server, we did all the ad work this way. None of the ad calls ever hit the database to get ads. We stored stats on disk in logs and processed them on a schedule. It was a very solid solution.

One more benefit of using generated files on disk is that they can be cached by APC or XCache. This means you don't have to actually hit disk for them all the time.

MySQL Conference Review

I am back home from a good week at the 2010 O'Reilly MySQL Conference & Expo. I had a great time and got to see some old friends I had not seen in a while.

Oracle gave the opening keynote and it went pretty much like I thought it would. Oracle said they will keep MySQL alive. They talked about the new 5.5 release. It was pretty much the same keynote Sun gave last year. Time will tell what Oracle does with MySQL.

The expo hall was sparse. Really sparse. There was a fraction of the booths compared to years past. I don't know why the vendors did not come. Maybe because they don't want to compete with Oracle/Sun? In the past you would see HP or Intel have a booth at the conference. But, with Oracle/Sun owning MySQL, why even try? Or maybe they are not allowed? I don't know. It was just sad.

I did stop by the Maatkit booth and was embarrassed to tell Baron (its creator) I was not already using it. I had heard people talk about it in the past, but never stopped to see what it does. It would have only saved me hours and hours of work over the last few years. Needless to say it is now being installed on our servers. If you use MySQL, just go install Maatkit now and start using it. Don't be like me. Don't wait for years, writing the same code over and over to do simple maintenance tasks.

Gearman had a good deal of coverage at the conference. There were three talks and a BoF. All were well attended. Some people seemed to have an AHA! moment where they saw how Gearman could help their architecture. I also got to sit down with the PECL/gearman maintainers and discuss the recent bug I found that is keeping me from using it.

I spoke about Memcached as did others. Again, there was a BoF. It was well attended and people had good questions about it. There seemed to be some FUD going around that memcached is somehow inefficient or not keeping up with technology. However, I have yet to see numbers or anything that proves any of this. They are just wild claims by people that have something to sell. Everyone wants to be the caching company since there is no "Memcached, Inc.". There is no company in charge. That is a good thing, IMO.

That brings me to my favorite topic for the conference, Drizzle. I wrote about Drizzle here on this blog when it was first announced. At the time MySQL looked like it was moving forward at a good pace. So, I had said that it would only replace MySQL in one part of our stack. However, after what, in my opinion, has been a lack of real change in MySQL, I think I may have changed my mind. Brian Aker echoed this sentiment in his keynote address about Drizzle. He talked about how MySQL AB and later Sun had stopped focusing on the things that made MySQL popular and started trying to be a cheap version of Oracle. That is my interpretation of what he said, not his words.

Why is Drizzle different? Like Memcached and Gearman, there is no "Drizzle, Inc.". It is an Open Source project that is supported by the community. It is being supported by companies like Rackspace who hired five developers to work on it. The code is kept on Launchpad and is completely open. Anyone can create a branch and work on the code. If your patches are good, they will be merged into the main branch. But, you can keep your own branch going if you want to. Unlike the other forks, Drizzle has started over in both the code and the community. I personally see it as the only way forward. It is not ready today, but my money is on Drizzle five or ten years from now.

DevOps at dealnews.com

I was telling someone how we roll changes to production at dealnews and they seemed really amazed by it. I have never really thought it was that impressive. It just made sense. It has kind of happened organically here over the years. Anyhow, I thought I would share.

Version Control

So, to start with, everything is in SVN. PHP code, Apache configs, DNS and even the scripts we use to deploy code. That is huge. We even have a misc directory in SVN where we put any useful scripts we use on our laptops for managing our code base. Everyone can share that way. Everyone can see what changed when. We can roll things back, branch if we need to, etc. I don't know how anyone lives without it. We did, way back when. It was bad. People were stepping on each other. It was a mess. We quickly decided it did not work.

For our PHP code, we have trunk and a production branch. There are also a couple of developers (me) that like to have their own branch because they break things for weeks at a time. But, everything goes into trunk from my branch before going into production. We have a PHP script that can merge from a developer branch into trunk with conflict resolution assistance built in. It is also capable of merging changes from trunk back into a branch. Once it is in trunk we use our staging environment to put it into production.

Staging/Testing

Everything has a staging point. For our PHP code, it is a set of test staging servers in our home office that have a checkout of the production branch. To roll code, the developer working on the project logs in via ssh to a staging server as a restricted user and uses a tool we created that is similar to the Python based svnmerge.py. Ours is written in PHP and tailored for our directory structure and roll out procedures. It also runs php -l on all .php and .html files as a last check for any errors. Once the merge is clean, the developer(s) use the staging servers just as they would our public web site. The database on the staging server is updated nightly from production. It is as close to a production view of our site as you can get without being on production. Assuming the application performs as expected, the developer uses the merge tool to commit the changes to the production branch. They then use the production staging servers to deploy.
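
The php -l pass is nothing fancy. Conceptually it is just something like this sketch (not our actual merge tool; the checkout path is made up):

<?php
// Lint every .php and .html file in the checkout before allowing the commit.
$failed = false;

$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator("/path/to/checkout")
);

foreach ($it as $file) {
    if (!preg_match('/\.(php|html)$/', $file->getFilename())) {
        continue;
    }
    $output = array();
    exec("php -l " . escapeshellarg($file->getPathname()), $output, $rc);
    if ($rc !== 0) {
        echo implode("\n", $output) . "\n";
        $failed = true;
    }
}

if ($failed) {
    exit(1); // stop the roll if anything does not parse
}
?>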

Rolling to Production

For deploying code and hands-on configuration changes into our production systems, we have a staging server in our primary data center. The developer (that is key, IMO) logs in to the production staging servers, as a restricted user, and uses our Makefile to update the checkout and rsync the changes to the servers. Each configuration environment has an accompanying nodes file that lists the servers that are to receive code from the checkout. This ensures that code is rolled to servers in the correct order. If an application server gets new markup before the supporting CSS or images are loaded onto the CDN source servers, you can get an ugly page. The Makefile is also capable of copying files to a single node. We will often do this for big changes. We can remove a node from service, check code out to it, and via VPN access that server directly to review how the changes worked.

For some services (cron, syslog, ssh, snmp and ntp) we use Puppet to manage configuration and to ensure the packages are installed. Puppet and Gentoo get along great. If someone mistakenly uninstalls cron, Puppet will put it back for us. (I don't know how that could happen, but ya never know). We hope to deploy more and more Puppet as we get comfortable with it.

Keeping Everyone in the Loop

Having everyone know what is going on is important. To do that, we start with Trac for ticketing. Secondly, we use the OpenFire XMPP server throughout the company. The devops team has a channel that everyone is in all day. When someone rolls code to production, the scripts mentioned above that sync code out to the servers send a message via an XMPP bot that we wrote using Ruby (Ruby has the best multi-user chat libraries for XMPP). It interfaces with Trac via HTTP and tells everyone what changesets were just rolled and who committed them. So, if something breaks 5 minutes later, we can go back and look at what just rolled.

In addition to bots telling us things, there is a cultural requirement. Often before a big roll out, we will discuss it in chat. That is the part that cannot be scripted or programmed. You have to get your developers and operations talking to each other about things.

Final Thoughts

There are some subtle concepts in this post that may not be clear. One is that the code that is written on a development server is the exact same code that is used on a production server. It is not massaged in any way. Things like database server names, passwords, etc. are all kept in configuration files on each node. They are tailored for the data center that server lives in. Another I want to point out again is that the person that wrote the code is responsible all the way through to production. While at first this may make some developers nervous, it eventually gives them a sense of ownership. Of course, we don't hire someone off the street and give them that access.  But it is expected that all developers will have that responsibility eventually.

Logging with MySQL

I was reading a post by Dathan titled "Cassandra is my NoSQL solution but..". In the post, Dathan explains that he uses Cassandra to store clicks because it can write a lot faster than MySQL. However, he runs into problems with the read speed when he needs to get a range of data back from Cassandra. This is the number one problem I have with NoSQL solutions.

SQL is really good at retrieving a set of data based on a key or range of keys, whereas NoSQL products are really good at writing things and retrieving one item from storage. When looking at redoing our architecture a few years ago to be more scalable, I had to consider these two issues. For what it is worth, the NoSQL market was not nearly as mature as it is now. So, my choices were much more limited. In the end, we decided to stick with MySQL. It turns out that a primary or unique key lookup on a MySQL/InnoDB table is really fast. It is sort of like having a key/value storage system. And, I can still do range based queries against it.

But, back to Dathan's problem: clicks. We store clicks at dealnews. Lots of clicks. We also store views. We store more views than we do clicks. So, lots of views and lots of clicks. (Sorry for the vague numbers, company secrets and all. We are a top 1,000 Compete.com site during peak shopping season.) And we do it all in MySQL. And we do it all with one server. I should disclose we are deploying a second server, but it is more for high availability than processing power. Like Dathan, we only use about the last 24 hours of data at any given time. There are three keys for us doing logging like this in MySQL.

Use MyISAM

MyISAM supports concurrent inserts. Concurrent inserts means that inserts can add rows to the end of a table while selects are being performed on other parts of the data set. This is exactly the use case for our logging. There are caveats with range queries as pointed out by the MySQL Performance Blog.

Rotating tables

MySQL (and InnoDB in particular) really sucks at deleting rows. Like, really sucks. Deleting causes locks. Bleh. So, we never delete rows from our logging tables. Instead, nightly we rotate the tables. RENAME TABLE is a (nearly) atomic operation in MySQL. So, we just create a new table.
create table clicks_new like clicks;
rename table clicks to clicks_2010032500001, clicks_new to clicks;

Tada! We now have an empty table for today's clicks. We then drop any table with a date stamp older than x days. Drops are fast, we like drops.

For querying these tables, we use UNION. It works really well. We just issue a SHOW TABLES LIKE 'clicks%' and union the query across all the tables. Works like a charm.
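
In PHP that looks roughly like this (the column names and the WHERE clause are made up for the example):

<?php
// Build one UNION query across all the rotated click tables.
$tables = array();
$res = $mysqli->query("SHOW TABLES LIKE 'clicks%'");
while ($row = $res->fetch_row()) {
    $tables[] = $row[0];
}

$parts = array();
foreach ($tables as $table) {
    $parts[] = "select item_id, click_time from `$table` where item_id = 12345";
}

$res = $mysqli->query(implode(" UNION ALL ", $parts));
?>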

Gearman

So, I get a lot of flak at work for my outright lust for Gearman. It is my new duct tape. When you have a scalability problem, there is a good chance you can solve it with Gearman. So, how does this help with logging to MySQL? Well, sometimes MySQL can become backed up with inserts. It happens to the best of us. So, instead of letting that pile up in our web requests, we let it pile up in Gearman. Instead of having our web scripts write to MySQL directly, we have them fire Gearman background jobs with the logging data in them. The Gearman workers can then write to the MySQL server when it is available. Under normal operation, that happens in near real time. But, if the MySQL server does get backed up, the jobs just queue up in Gearman and are processed when the MySQL server is available.
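
A bare-bones sketch of the two sides using PECL/gearman (the function name, payload, connection credentials and table columns are all made up):

<?php
// Web request side: fire and forget.
$client = new GearmanClient();
$client->addServer("127.0.0.1", 4730);
$client->doBackground("log_click", json_encode(array(
    "item_id" => 12345,
    "time"    => time(),
)));
?>
And the worker that drains the queue into MySQL:

<?php
$mysqli = new mysqli("localhost", "logger", "secret", "logs");

function log_click(GearmanJob $job) {
    global $mysqli;
    $data = json_decode($job->workload(), true);
    $mysqli->query(
        "insert into clicks (item_id, click_time) values (" .
        (int) $data["item_id"] . ", from_unixtime(" . (int) $data["time"] . "))"
    );
}

$worker = new GearmanWorker();
$worker->addServer("127.0.0.1", 4730);
$worker->addFunction("log_click", "log_click");

while ($worker->work());
?>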

BONUS! Insert Delayed

This is our old trick from before we used Gearman. MySQL (MyISAM) has a neat feature where you can have inserts delayed until the table is available. The query is sent to the MySQL server and it answers the client with success immediately. This means your web script can continue on and not get blocked waiting for the insert. But, MySQL will only queue up so many before it starts erroring out. So, it is not as foolproof as a job processing system like Gearman.
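
In its simplest form it is just a different insert statement (same made-up columns as above):

<?php
// MyISAM only: the server queues the row and answers the client immediately.
$mysqli->query("insert delayed into clicks (item_id, click_time) values (12345, now())");
?>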

Summary

To log with MySQL:
  • Use MyISAM with concurrent inserts
  • Rotate tables daily and use UNION to query
  • Use delayed inserts with MySQL or a job processing agent like Gearman
Happy logging!

PS: You may be asking, "Brian, what about Partitioned Tables?" I asked myself that before deploying this solution. More importantly, in IRC I asked Brian Aker about MySQL partitioned tables. I am paraphrasing, but he said that if I ever thought I might alter that table, he would not trust MySQL's partitioning with it. So, that kind of turned me off of them.