Most epic ticket of the day

UPDATE: I should clarify. This ticket is an internal ticket at DealNews. It is about what the defaults on our servers should be. It is not about what the defaults should be in MySQL. The frustration that MySQL's utf8 charset only supports 3 bytes per character is quite real.

 This epic ticket of the day is brought to you by Joe Hopkinson.

#7940: Default charset should be utf8mb4
------------------------------------------------------------------------
 The RFC for UTF-8 states, AND I QUOTE:

 > In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16
 accessible range) are encoded using sequences of 1 to 4 octets.

 What's that? You don't believe me?! Well, you can read it for yourself
 here!

 What is an octet, you ask? It's a unit of digital information in computing
 and telecommunications that consists of eight bits. (Hence, __oct__et.)

 "So what?", said the neck bearded MySQL developer dressed as Neo from the
 Matrix, as he smugly quaffed a Surge and settled down to play Virtua
 Fighter 4 on his dusty PS2.

 So, if you recall from your Pre-Intro to Programming, 8 bits = 1 byte.
 Thus, the RFC states that the maximum storage requirement for a multibyte
 character is 4 bytes.

 I know that RFCs are more of a GUIDELINE, right? It's not like they could be
 considered a standard or anything! It's not like there should be an
 implicit contract when an implementor decides to use a label like "UTF-8",
 right?

 Because of you, we have to strip our readers' carefully crafted emoji.
 Because of you, our search term data will never be exact. Because of you,
 we have to spend COUNTLESS HOURS altering every table that we have (which
 is a lot, by the way) to make sure that we can support a standard that was
 written in 2003!

 A cursory search shows that shortly after 2003, MySQL release quality
 started to tank. I can only assume that was because of you.

 Jerk.

 * The default charset should be utf8mb4.
 * Alter and test critical business processes.
 * Change OrderedFunctionSet to generate the appropriate tables.
 * Generate ptosc or propagator scripts to update everything else, as needed.
 * Curse the MySQL developer who caused this.

Keeping your data work on the server using UNION

I have found myself using UNION in MySQL more and more lately. In this example, I am using it to speed up queries that use IN clauses. MySQL handles the IN clause like a big OR operation. Recently, I created what looks like a very crazy query using UNION that in fact helped our MySQL servers perform much better.

With any technology you use, you have to ask yourself, "What is this tech good at doing?" For me, MySQL has always been excellent at running lots of small queries that use primary, unique, or well-defined covering indexes. I guess most databases are good at that. Perhaps that is the bare minimum for any database. MySQL seems to excel at it, however. We had a query that looked like this:

select category_id, count(*) from some_table
where
    article_id in (1,2,3,4,5,6,7,8,9) and
    category_id in (11,22,33,44,55,66,77,88,99) and
    some_date_time > now() - interval 30 day
group by
    category_id

There were more things in the where clause. I am not including them all in these examples. MySQL does not have a lot it can do with that query. Maybe there is a key on the date field it can use. And if the date field limits the possible rows, a scan of those rows will be quick. That was not the case here. We were asking for a lot of data to be scanned. Depending on how many items were in the IN clauses, this query could take as much as 800 milliseconds to return. Our goal at DealNews is to have all pages generate in under 300 milliseconds. So, this one query was 2.5x our total page time.

In case you were wondering what this query is used for, it is used to calculate the counts of items in subcategories on our category navigation pages. On this page it's the box on the left-hand side labeled "Category". Those numbers next to each category are what we are asking this query to return to us.

Because I know how my data is stored and structured, I can fix this slow query. I happen to know that there are many fewer rows for each article_id than there are for each category_id. There is also a key on this table on article_id and some_date_time. That means that, for a single article_id, MySQL can find the rows it wants very quickly. Without using a UNION, the only solution would be to query all this data in a loop in code, get all the results back, and reassemble them in code. That is a lot of wasted round-trip work for the application, however. You see this pattern a fair amount in PHP code. It is one of my pet peeves. I have written before about keeping the data on the server. The same idea applies here. I turned the above query into this:

select category_id, sum(count) as count from 
(
    (
        select category_id, count(*) as count from some_table
        where
            article_id=1 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=2 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=3 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=4 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=5 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=6 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=7 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=8 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=9 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
) derived_table
group by
    category_id

Pretty gnarly looking, huh? The run time of that query is 8ms. Yes, MySQL has to perform 9 subqueries and then the outer query. But because it can use good keys for the subqueries, the total execution time is only 8ms. The data comes back from the database ready to use in one trip to the server. The page generation time for those pages went from a mean of 213ms with a standard deviation of 136ms to a mean of 196ms and a standard deviation of 81ms. That may not sound like a lot. Take a look at how much less work the MySQL servers are doing now.

 

[Graph: MySQL rows read, showing the decrease after the change was rolled out.]

The arrow in the image is when I rolled the change out. Several other graphs show the change in server performance as well.
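For what it is worth, I do not write that query by hand. Here is a rough sketch, in PHP, of how you could generate it from the two ID lists. The names follow the example above; everything else about it is illustrative.

<?php
// Build one UNION ALL block per article_id so MySQL can use the
// (article_id, some_date_time) key for each subquery.
$article_ids  = array(1, 2, 3, 4, 5, 6, 7, 8, 9);
$category_ids = array(11, 22, 33, 44, 55, 66, 77, 88, 99);
$cat_list     = implode(",", array_map("intval", $category_ids));

$parts = array();
foreach ($article_ids as $id) {
    $parts[] = "(select category_id, count(*) as count from some_table" .
               " where article_id=" . (int)$id .
               " and category_id in ($cat_list)" .
               " and some_date_time > now() - interval 30 day" .
               " group by category_id)";
}

$sql = "select category_id, sum(count) as count from (" .
       implode(" union all ", $parts) .
       ") derived_table group by category_id";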

The UNION is a great way to keep your data on the server until it's ready to come back to your application. Do you think it can be of use to you in your application?

Lock Wait Timeout Errors or Leave Your Data on the Server

If you use MySQL with InnoDB (most everyone) then you will likely see this error at some point. There is some confusion sometimes about what this means. Let me try and explain it.

Let's say we have a connection called A to the database. Connection A tries to update a row. But, it receives a lock wait timeout error. That does not mean that connection A did anything wrong. It means that another connection, call it B, is also updating a row that connection A wants to update. But, connection B has an open transaction that has not been committed yet. So, MySQL won't let you update that row from connection A. Make sense?
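If you want to see the error for yourself, it is easy to reproduce. This is a minimal sketch, not production code: the table, row and credentials are made up, and the wait is whatever innodb_lock_wait_timeout is set to on your server.

<?php
// Connection B opens a transaction, updates a row and does not commit.
$b = new mysqli("localhost", "user", "pass", "test");
$b->query("START TRANSACTION");
$b->query("UPDATE accounts SET balance = balance - 10 WHERE id = 1");

// Connection A tries to update the same row. It blocks until
// innodb_lock_wait_timeout passes and then fails with error 1205.
$a = new mysqli("localhost", "user", "pass", "test");
if (!$a->query("UPDATE accounts SET balance = balance + 10 WHERE id = 1")) {
    // 1205: Lock wait timeout exceeded; try restarting transaction
    echo $a->errno . ": " . $a->error . "\n";
}

$b->query("COMMIT");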

The first mistake people may make is looking at the code that throws the error to find a solution. It is hardly ever the code that throws the error that is the problem. In our case, it was code that was doing a simple insert into a table. I had a look at our processing logs around the time that the errors were thrown and I found a job that was running during that time. I then looked for code in that job that updates the table that was locked. That was where the problem lay.

So, why does this happen? Well, there can be very legitimate reasons. There can also be very careless reasons. The genesis of this blog post was some code that appeared to be legitimate at first, but upon further inspection was careless. This is basically what the code did.

  1. Start Transaction on database1
  2. Clear out some old data from the table
  3. Select a bunch of data from database2.table
  4. Loop in PHP, updating each row in its own query to update one column
  5. Select a bunch of data from database2.other_table
  6. Loop in PHP, updating each row in its own query to update another column
  7. Commit database1

This code ran in about 20 minutes on the data set we had. It kept a transaction open the whole time. It appeared legit at first because you can't join the data: there are sums and counts with a one-to-many relationship, which would duplicate values in the sums and counts. It also looks legit because you are having to pull data from one database into another. However, there is a solution for this. We need to stop pulling all this data into PHP land and let it stay on the server where it lives. So, I changed it to this.

  1. Create temp table on database2 to hold mydata
  2. Select data from database2.table into my temp table
  3. Select data from database2.other_table into my temp table
  4. Move my temp table using extended inserts via PHP from database2 to database1
  5. Start Transaction on database1
  6. Clear out some old data from the table
  7. Do a multi-table bulk update of my real table using the temp table
  8. Commit database1

This runs in 3 minutes and only requires a 90-second transaction lock. Our lock wait timeout on this server is 50 seconds, though. However, we have a three-try retry rule for any lock wait timeout in our DB code. So, this should allow our current workload to be processed without any data loss.

So, why did this help so much? We are not moving data from MySQL to PHP over and over. This applies to any language, not just PHP. The extended inserts for moving the temp table from one db to another really help. That is the fastest part of the whole thing. It moves about 2 million records from one to the other in about 1.5 seconds.
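If "extended inserts" is not a familiar term, it just means multi-row INSERT statements. Step 4 above looks roughly like this sketch; the hosts, table and columns are all made up for illustration.

<?php
// Stream rows from database2 and write them to database1 in batches of
// 1,000 rows per INSERT instead of one INSERT per row.
$src = new mysqli("db2.example.com", "user", "pass", "database2");
$dst = new mysqli("db1.example.com", "user", "pass", "database1");

$res = $src->query("SELECT id, col1, col2 FROM temp_mydata", MYSQLI_USE_RESULT);

$values = array();
while ($row = $res->fetch_row()) {
    $values[] = "(" . (int)$row[0] . ",'" .
                $dst->real_escape_string($row[1]) . "','" .
                $dst->real_escape_string($row[2]) . "')";
    if (count($values) == 1000) {
        $dst->query("INSERT INTO temp_mydata (id, col1, col2) VALUES " . implode(",", $values));
        $values = array();
    }
}
if ($values) {
    $dst->query("INSERT INTO temp_mydata (id, col1, col2) VALUES " . implode(",", $values));
}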

So, if you see a lock wait timeout, don't think you should sleep longer between retries. And don't dissect the code that is throwing the error. You have to dig in and find what else is running when it happens. Good luck.

Bonus: If you have memory issues in your application code, these techniques can help with those too.

Living in the Prove It Culture

Engineering cultures differ from shop to shop. I have been in the same culture for 13 years, so I am not an expert on what all the different types are. Before that I was living in Dilbert world. The culture there was really weird. The ideas were never yours. It was always some need some far-off person had. A DBA, a UI "expert" and some product manager would dictate what code you wrote. Creativity was stifled and met with resistance.

I then moved to the early (1998) days of the web. It was a start-up environment. In the beginning there were just two of us writing code. So, we thought everything we did was awesome. Then we added some more guys. Lucky for us, we mostly hired well. The good hires were type A personalities that had skills we didn't have. They challenged us and we challenged them. On top of that, we had a CEO who had been a computer hacker in his teens. So, he had just enough knowledge to challenge us as well. Over the years we kept hiring more and more people. We always asked in the interview if the person could take criticism and if they felt comfortable defending their ideas. We decided to always have a white board session. We would ask them questions and have them work it out on a white board or talk it out with us in a group setting. The point of this was not to see if they always knew the answer. The point was to see how they worked in that setting. Looking back, the hires that did not work out also did not excel in that phase of the interview. The ones that have worked out always questioned our methods in the interview. They did not belittle our methods or dismiss them. They just asked questions. They would ask if we had tried this or that. Even if we could quickly explain why our method was right for us, they still questioned it. They challenged us.

When dealing with people outside the engineering team, we subconsciously applied these same tactics. The philosophy came to be that if you came to us with an idea, you had to throw it up on the proverbial wall. We would then try to knock it down. If it stuck, it was probably a good idea. Some people could handle this and some could not. The ones that could not handle that did not always get their ideas pushed through. It may not mean they were bad ideas. And that is maybe the down side of this culture. But, it has worked pretty well for us.

We apply this to technology too. My first experience on Linux was with RedHat. The mail agent I cut my teeth on was qmail. I used djbdns. When Daniel Beckham, our now director of operations, came on, he had used sendmail and bind. He immediately challenged qmail. I went through some of the reasons I preferred it. He took more shots. In the end, he agreed that qmail was better than sendmail. However, his first DNS setup for us was bind. It took a few more years of bind hell for him to come around to djbdns.

When RedHat splintered into RedHat Enterprise and Fedora, we tried out Fedora on one server. We found it to be horribly unstable. It got the axe. We looked around for other distros. We found a not very well known distro called Gentoo, known as the ricer distro of the Linux world. We installed it on one server to see what it was all about. I don't remember now whose idea it was. Probably not mine. We eventually found it to be the perfect distro for us. It let us compile our core tools like Apache, PHP and MySQL while at the same time using a package system. We never trusted RPMs for those things on RedHat. Sure, bringing a server online took longer, but it was so worth it. Eventually we bought in and it is now the only distro in use here.

We have done this over and over and over. From the fact that we all use Macs now thanks to Daniel and his willingness to try it out at our CEO's prodding to things like memcached, Gearman, etc. We even keep evaluating the tools we already have. When we decided to write our own proxy we discounted everything we knew and evaluated all the options. In the end, Apache was known and good at handling web requests and PHP could do all we needed in a timely, sane manner. But, we looked at and tested everything we could think of. Apache/PHP had to prove itself again.

Now, you might think that a culture of skepticism like this would lead to new employees having a hard time getting any traction. Quite the opposite. Because we hire people that fit the culture, they can have a near immediate impact. We have a problem I want solved, and a developer that has been here less than a year suggested that Hadoop may be a solution, but he was not sure we would use it. I recently sent this in an email to the whole team in response to that:
> The only thing that is *never* on the table is using a Windows server. If you can get me unique visitors for an arbitrary date range in milliseconds and it requires Hadoop, go for it.
You see, we don't currently use Hadoop here. But, if that is what it takes to solve my problem and you can prove it and it will work, we will use it.

Recently we had a newish team member suggest we use a SAN for our development servers to use as a data store. Specifically he suggested we could use it to house our MySQL data for our development servers. We told him he was insane. SANs are magical boxes of pain. He kept pushing. He got Dell to come in and give us a test unit. Turns out it is amazing. We can have a hot copy of our production database on our dev slices in about 3 minutes. A full, complete copy of our production database in 3 minutes. Do you know how amazing that is? Had we not had the culture we do and had not hired the right person that was smart enough to pull it off and confident enough to fight for the solution, we would not have that. He has been here less than a year and has had a huge impact to our productivity. There is talk of using this in production too. I am still in the "prove it" mode on this. We will see.
I know you will ask how our dev db works, so here you go:
1. Replicate production over VPN to home office
2. Write MySQL data on SAN
3. Stop replication, flush tables, snapshot FS
4. Copy snapshot to a new location
5. On second dev server, umount SAN, mount new snapshot
6. Restart MySQL all around
7. Talk in dev chat how bad ass that is

We had a similar thing happen with our phone system. We had hired a web developer that previously worked for a company that created custom Asterisk solutions. When our proprietary PBX died, he stepped up and proved that Asterisk would work for us. Not a job for a web developer. But he was confident he could make it work. It now supports 3 offices and several home-based users worldwide. He also had only been here a short time when that happened.

Perhaps it sounds like a contradiction. It may sound like we just hop on any bandwagon technology out there. But no. We still use MySQL. We are still on 5.0 in fact. It works. We are evaluating Percona 5.5 now. We tried MySQL 5.1. We found no advantage and the Gentoo package maintainer found it to be buggy. So, we did not switch. We still use Apache. It works. Damn well. We do use Apache with the worker MPM with PHP, which is supposedly bad. But it works great for us. We had to prove it would work, though. We ran a single node with worker for months before trusting it. Gearman was begrudgingly accepted. The idea of daemonized PHP code was not a comforting one. But once you write a worker and use it, you feel like a god. And then you see the power. Next thing you know, it is a core, mission critical part of your infrastructure. That is how it is with us now. In fact, Gearman has gone from untrusted to the go-to tech. When someone proposes a solution that does not involve Gearman, someone will ask if part of the problem can be solved using Gearman instead of whatever idea they have. There is then a discussion about why it is or is not a good fit. Likewise, if you want to build a daemon to listen on a port and answer requests, the question is "Why can't you just use Apache and a web service?" And it is a valid question. If you can solve your problem with a web service on already proven tech, why build something new?

This culture is not new. We are not unique. But, in a world of "brogramming" where "engineers" rage on code that is awesome before it is even working and people are said to be "killing it" all the time, I am glad I live in a world where I have to prove myself everyday. I am the most senior engineer on the team. And even still I get shot down. I often pitch an idea in dev chat and someone will shoot it down or point out an obvious flaw. Anyone, and I mean anyone, on the team can question my code, ideas or decisions and I will listen to them and consider their opinion. Heck, people outside the team can question me too. And regularly do. And that is fine. I don't mind the questions. I once wrote here that I like to be made to feel dumb. It is how I get smarter. I have met people that thought they were smarter than everyone else. They were annoying. I have interviewed them. It is hard to even get through those interviews.

Is it for everyone? Probably not. It works for us. And it has gotten us this far. You can't get comfortable though. If you do foster this type of culture, there is a risk of getting comfortable. If you start thinking you have solved all the hard problems, you will look up one day and realize that you are suffering. Keep pushing forward and questioning your past decisions. But before you adopt the latest and greatest new idea, prove that the decisions your team makes are the right ones at every step. Sometimes that will take a five minute discussion and sometimes it will take a month of testing. And other times, everyone in the room will look at something and think "Wow that is so obvious how did we not see it?" When it works, it is an awesome world to live in.

PHP Frameworks

Last week I spoke at and attended the first ever PHP Community Conference. It was very good. It was also very different from my normal conference. I usually go for very technical stuff. I don't often stop and smell the roses of the community or history of my chosen (by me or it I am not sure sometimes) profession. There was a lot of community at this one.

One thing that seemed to be a hot topic at the conference was frameworks. CakeDC, the money behind CakePHP, was the platinum sponsor. I chatted with Larry Masters, the owner of CakeDC, for a bit while walking one night. Great guy. Joël Perras gave a tutorial about Lithium. I attended most of this one. He did very well. Joël was frank and honest about the benefits and problems with using frameworks, including having to deal with more than one at a time. There was also a tutorial about Zend Framework patterns by Matthew Weier O'Phinney. I missed this one. On the second day, things were different. Rasmus Lerdorf warned about the bloat in most of the PHP frameworks and expressed hope that they would do better in the newer versions. I received several questions about frameworks in my session. I also spoke out about them a bit. Terry Chay wrapped up the day with his closing keynote and touched on them again. More on that later. I want to kind of summarize what I said (or meant to say).

PHP is a framework

In my session, I talked about the history of Phorum. One of the things I covered was the early days of PHP. Back in the 90s, before PHP, most dynamic web work was done in C or Perl. At that time, in those worlds, you had to do all the HTTP work yourself. If you wanted a content type of text/html, you had to set it, in code, on every single response. Errors in CGI scripts would often result in Apache internal error pages and made debugging very hard. All HTML work had to be done by writing to output. There was no embedding code with HTML even as a templating language. PHP changed all that. You had a default content type of text/html. You had automatic handling of request variables. Cookies were easily ingested and output. You could template your HTML with script instead of having to write everything out via print or stdout. It was amazing. Who could ask for more?
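To put that in perspective, here is everything a CGI program had to do by hand, handled in a few lines of PHP. This is just an illustration I made up, not code from anything real.

<?php
// No Content-Type boilerplate: text/html is the default.
// Request variables are parsed for you.
$name = isset($_GET['name']) ? $_GET['name'] : 'world';
// Cookies are one function call to set, and come back in $_COOKIE.
setcookie('last_visit', date('c'));
?>
<html>
  <body>
    <!-- Code embedded right in the HTML instead of print statements. -->
    <p>Hello, <?php echo htmlspecialchars($name); ?>!</p>
  </body>
</html>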

Frameworks as a tool

Well, apparently a world that I honestly don't work in could ask for more. There are three major segments of web developers these days. There are developers that work for a company that has a web site, but its business is not the web site. Maybe it is a network hardware company or some other industry where their business merits having a staff to run their site, but it is not their core business. Then there are developers like myself that work for a company where the web site is the business. Everything about the business goes through the web. We have the public web site and the internal web site our team uses. It is everything. The last type are those developers that are constantly building new sites or updating existing sites for clients. I will be honest, this is not a segment I have considered much in the past when writing code or blog posts. But, I met more of those people at this conference than any of the other two types. They seem to be the ones that are motivated and interested. Or at least, because PHP and the web are their business, they sent their people to the conference.

You see, I have spoken out about frameworks. Not very publicly, but those that know me have heard me comment about them. I have never really seen the point. Why start with something generic that will most likely not fit your ultimate need when you need to scale or expand beyond its abilities? Well, for thousands of web sites, that are likely being built by agencies, that time never occurs. Most likely, before that happens, the site will be redesigned and completely replaced. So, if you spend every day building a new site, why do all that groundwork every time?

In addition, why have to deal with every different client's needs? I often say that Apache is my controller. I don't like to use PHP as my controller. But, if I was deploying a site every week to a different stack, I can't rely on Apache with mod_rewrite or whatever things I rely on in my job today. So, you need to have full control in the application. What database will the client this week use? I don't care, the framework abstracts that for me. These are all very good reasons to use a framework.

Framework Trade-Off

There are some trade-offs though. The biggest one I see is the myriad of choices. Several of the pro-framework people even mentioned that there are a lot out there. And it seems that someone is making a new one every day. With all these choices, it is likely that some of the benefit you get from a framework could be lost. If a client already has a site based on CakePHP and your agency uses Lithium, what do you do? Say no to the work or deal with the differences? Some of them are big enough to be a real issue. Some are so small, you may not notice them until it's too late. That is a tough place to be.

The other issue is performance. Frameworks are notoriously inefficient. It has just been their nature. The more you abstract away from the core, the less efficient you are. This is even true with PHP. Terry Chay pointed out in his keynote that PHP is less efficient than Java or C. But you gain power with PHP in the way of quicker development cycles. Frameworks have that same benefit. But they have not solved the efficiency issue any better than PHP has over C. They abstract away the low level (for PHP at least) stuff that is going on. And that means a loss of efficiency. This can be solved, or at least worked on, however, and I hope it is.

Frameworks as a Commodity

So, this gets me back to something Terry Chay said. He talked about the motivation of companies to open source their technology. He used Facebook's Open Compute Project as an example. He pointed out that a major reason Facebook would open up this information would be in hopes that others would do the same in their data centers. If that happened, it would be easier for Facebook to move to a new data center because it was already mostly setup the way they like it.

Transitioning this same thought to frameworks, the commoditization here, as I see it, is in the interest of developers. If the framework you support becomes the de facto standard, then all those developers working in agencies using it are now ready to come to work for you. Plus, if you are the company behind it, there are opportunities for books, conferences, training, support, and all the other peripherals that come from the commercial/open source interaction. Need proof of that? Look no further than the "PHP Company", Zend. They could have committed developers to PEAR, but instead created Zend Framework. I see job listings very often for Zend Framework experience. Originally Zend tried to monetize the server with their optimizers and Zend Server. They had moderate success. The community came up with APC and XCache, which sort of stole their thunder. I feel they have had much better success with Zend Framework in terms of market penetration. The money is with the people that write the code, not the people that run the servers.

Frameworks are EVERYWHERE

I will close with something else that Terry Chay said. This was kind of an aha! moment for me. Terry pointed out that frameworks are everywhere. WordPress, Drupal and even my project, Phorum, are frameworks. You can build a successful site using just those applications. It is not just the new breed of code libraries that can be viewed as frameworks. In fact, Phorum's very own Maurice Makaay is building his new web site using only Phorum 5.3 (the current development branch). Phorum offers easier database interaction, output handling, templating, a pluggable module system and even authentication and group-based permissions. Wow, I have always kind of shunned this idea. In fact, when Maurice first showed me his site, I kind of grimaced. Why would you want to do that? You know why? Because the main thing that drives his site is Phorum. His users come to the site for Phorum. So, why would he want to install Phorum, invest in making it all it can be, and then have to start from scratch for all the other parts of the site that are not part of the message board? Duh, I kind of feel stupid for never looking at things from this perspective before. Feeling dumb is ok. I get smarter when I feel dumb. New ideas make me a better developer. And I hope that is what comes out of this experience for me. You never know, I may throw my name in this hat and see how Phorum's groundwork could be useful outside of the application itself.

What is next for message board software?

When I was hired at dealnews.com in 1998, my primary focus was to get our message board (Phorum) up to speed. I had written the first version as a side project for the site. Message boards were a lot simpler back then. Matt's WWWBoard was the gold standard of the time. And really, the functionality has been only evolutionary since. We added attachments to Phorum in 2003 or something. That was a major new feature. In Phorum 5 we added a module system that was awesome. But, that was just about the admin and not the user. From the user's perspective, message boards have not changed much since 1997. I saw this tweet from Amy Hoy and it got me to thinking about how message boards work. Here is the typical user experience:
  1. Go to the message board
  2. See a list of categories
  3. Drill down to the category they want to read
  4. Scroll through a list of messages that are in reverse chronological order by original post date or most recent post date
  5. Click a message and read it.
  6. Go to #3, repeat
Every message board software package pretty much works like that and has for over 10 years. And it kind of sucks. What a user would probably rather experience is:
  1. Go to the message board
  2. The most interesting things (to this user) are listed right there on the page. No drill down needed.
  3. Click one and read it.
  4. Goto #2, repeat.
Sounds easy? That #2 is easy to type but very hard to accomplish. I think it is conceivably doable if you are running a site that has all the data. Stackoverflow comes close. When you land on the site, they default the page to the "interesting" posts. However, they are not always interesting to me. They are making general assumptions about their audience. For example, right now, the first one is tagged "delphi". I couldn't care less about that language and any posts about it. It's a good try, but misses by oh so far. This is not a Stackoverflow hate post. They are doing a good job. So, what do I do when I land there? I ignore the front page and click Tags (#2 in the first list), then pick a tag I want to read about (#3 in the first list). Lo and behold, the page I get is "newest". So, I end up doing exactly what is in the first list I mentioned. They do offer other sort options. But, they chose newest as the default. And from years of watching user behavior, 80% - 90% of people go with the good ol' default. This kind of brings me to another point about the types of message boards there are.

Stackoverflow is a classic example of a help message board. People come there and ask a question. Other people come along and answer the question. Then more people come along and vote on whether the answers (and questions) are any good. This is one really nice feature that I think will have to become a core feature in any message board of the future. The signal to noise ratio can get so out of whack, you need human input to help decide what is good and what is noise. I think the core of the application has to rely on that if we are ever going to achieve the desired experience.

The second type of message board is a conversational system. It is almost like a delayed chat room. People come to a message board and post about their cat or ask who watched a TV show, that kind of thing. This has a completely different dynamic than the help message board. You can't really vote on whether a post is good or bad. The obvious exception is spam, which you would of course want recognized and dealt with.

So, how do you know what content is desirable for the user that is entering the site right now? This concept has already been laid out for us: the social graph. You have to give users a way to associate with other users. If Bob really likes Tom's posts, he is probably more interested in reading Tom's post from 30 minutes ago than one from some new guy that just joined the site and posted 1 minute ago. The challenge here is getting people to interconnect... but not too much. Everyone has that aunt on Facebook that follows you, your roommate and anybody else she can. She would follow your dog if he had a Facebook account. So, those people would still get a crappy experience if the whole system relied on the social graph. The other side is the people that will never "follow", "like" or whatever you call it another person. Their experience would suffer as well. One key ingredient here is that you need to own this data. You can't just throw Like buttons and Facebook Connect on your message board and think you can leverage that data. That data is for Facebook, not you. I think the help message boards could benefit from the social graph as well.

Another aspect of what is most important to a user is discussions they are involved in. That could mean ones they started, ones they have replied to or simply ones they have read. Which of those aspects is more important than the others? Clearly, if you started a discussion and someone has replied, that is going to interest you. If you posted a reply, you may be done with the topic or you may be waiting on a response. It would take some serious natural language algorithms to decide which is the case. For things you have read, I think you have to consider how many times the user has read the discussion. If they read it every time it is updated, they probably will want to read it again the next time it is updated. If they have only read it once, maybe they are not as interested.
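To make that concrete, here is one purely illustrative way you might weight a discussion for a user, combining the signals above. Every field name and weight in it is made up.

<?php
// Hypothetical scoring: started > awaiting a reply > re-read, with age decay.
function discussion_score(array $d) {
    $score = 0;
    if ($d['user_started']) {
        $score += 10; // you started it and someone replied
    }
    if ($d['user_replied'] && $d['updated_since_reply']) {
        $score += 5;  // you may be waiting on a response
    }
    // Reading a thread every time it updates signals continued interest.
    $score += min($d['read_count'], 5) * 2;
    // Older discussions sink.
    $score -= $d['days_since_update'];
    return $score;
}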

The last aspect of message boards is grouping things. This is the part I actually struggle with the most. The easy first answer is tagging. Don't force the user down a single trail; let them tag posts instead of only posting them in one neat contained area. That gets you half way there. Let's use Stackoverflow (I really do like the site) as an example again. The first thing I do is go to Tags and click on PHP. I like helping people with PHP problems. So, is that really any different from categorization? Sure, there could be someone out there that really likes helping with Javascript. And if the same post was tagged with both tags, then their coverage of potential help is larger. But, some of the time those tags are wrong when they tag it with more than one tag. The problem they need help with is either PHP or Javascript, most likely not both. They just don't know what they are doing. For example, there is this post on Stackoverflow. The user tagged it PHP and database-design. There is no PHP in the question. I am guessing he is using PHP for the app. But, it really never comes up and he is only talking about database design. So, who did the PHP tag help there? I don't think it helped him. And it only wasted my time. Having written all that, a free-for-all approach where there is no filtering sucks too. ARGH! It just all sucks. That brings us back to what Amy said in a way. Perhaps moderated tagging is an answer. I have not seen a way on Stackoverflow to untag a post. That would let people correct others. I am gonna write that down. If you work at Stackoverflow and are reading this, you can use that idea. Just put a comment in the code about how brilliant I am or something that aliens will find one day.

So, I am done. I know exactly what to do right? I just have to make code that does everything I put in the previous paragraphs. Man I wish it were that easy. When you want to write a distributed application to do it, the task is even more daunting. If I controlled the data and the servers and the code, I could do crazy things that would make great conference talks. But, it kind of falls apart when I want to give this code to a 60 year old retired guy that is starting a hobby site for watching humming birds on a crappy GoDaddy account. Yeah, he is not installing Sphinx or HandlerSocket or Gearman. Those are all things I would want to use to solve this problem in a scalable fashion. At that point you have two choices. Aim for the small time or the big time. If you aim for the small time, you may get lots of installs, but, you will be hamstrung. If you aim for the big time, you may be the only guy that ever uses the code. That is a tough decision.

What have I missed? I know I missed something. Are there other types of message boards? I can definitely see some sub-types. Perhaps a board where ideas instead of help messages are posted. Or maybe the conversations are more show off based as in a user posting pictures or videos for comment. Is there already something out there doing this and I have just missed it? Let me know what I have missed please.

PHP 5.3 and mysqlnd - Unexpected results

I have started seriously using PHP 5.3 recently due to it finally making it into Portage. (Gentoo really isn't full of bleeding edge packages, people.) I have used mysqlnd a little here and there in the past, but until it was really coming to my servers I did not put too much time into it.

What is mysqlnd?

mysqlnd is short for MySQL Native Driver. In short, it is a driver for MySQL for PHP that uses internal functions of the PHP engine rather than the externally linked libmysqlclient that has been used in the past. There are two reasons for this. The first reason is licensing. MySQL is a GPL project. The GPL and the PHP License don't play well together. The second is better memory management and hopefully more performance. Being a performance junky, this is what piqued my interest. Enabling mysqlnd means it is used by the older MySQL extension, the newer MySQLi extension and the MySQL PDO driver.

New Key Feature - fetch_all

One new feature of mysqlnd was the fetch_all method on MySQLi Result objects. At both dealnews.com and in Phorum I have written a function to simply run a query and fetch all the results into an array and return it. It is a common operation when writing API or ORM layers. mysqlnd introduces a native fetch_all method that does this all in the extension. No PHP code needed. PDO already offers a fetchAll method, but PDO comes with a little more overhead than the native extensions and I have been using mysql functions for 14 years. I am very happy using them.
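The helper I am talking about is just the usual query-and-loop function. Something like this sketch, next to the mysqlnd one-liner that replaces it (connection details are made up):

<?php
// The hand-rolled helper most of us have written somewhere.
function fetch_all_rows(mysqli $db, $sql) {
    $rows = array();
    $res  = $db->query($sql);
    while ($row = $res->fetch_assoc()) {
        $rows[] = $row;
    }
    $res->free();
    return $rows;
}

// With mysqlnd, the loop happens inside the extension instead:
$db   = new mysqli("localhost", "user", "pass", "test");
$rows = $db->query("SELECT * FROM some_table")->fetch_all(MYSQLI_ASSOC);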

Store Result vs. Use Result

I have spoken in the past (see my slides and interview: MySQL Tips and Tricks) about using mysql_unbuffered_query or using mysqli_query with the MYSQLI_USE_RESULT flag. Without going into a whole post about that topic, it basically allows you to stream the results from MySQL back into your PHP code rather than having them buffered in memory. In the case of libmysqlclient, they could be buffered twice. So, my natural thought was that using MYSQLI_USE_RESULT with fetch_all would yield the most awesome performance ever. The data would not be buffered, and it would get put into a PHP array in C instead of in PHP code. The code I had hoped to use would look like:
$res = $db->query($sql, MYSQLI_USE_RESULT);
$rows = $res->fetch_all(MYSQLI_ASSOC);
But, I quickly found out that this does not work. For some reason, it is not supported. fetch_all only works with the default, which is MYSQLI_STORE_RESULT. I filed a bug, which was marked bogus and which I put back to new, because I really don't see a reason this should not work other than a complete oversight by the mysqlnd developers. So, I started doing some tests in hopes I could show the developers how much faster using MYSQLI_USE_RESULT could be. What happened next was not expected. I ended up benchmarking several different options for fetching all the rows of a result into an array.

Test Data

I tested using PHP 5.3.3 and MySQL 5.1.44 using InnoDB tables. For test data I made a table that has one varchar(255) column. I filled that table with 30k rows of random lengths between 10 and 255 characters. I then selected all rows and fetched them using 4 different methods.
  1. mysqli_result::fetch_all*
  2. PDOStatement::fetchAll
  3. mysqli_query with MYSQLI_STORE_RESULT followed by a loop
  4. mysqli_query with MYSQLI_USE_RESULT followed by a loop
In addition, I ran this test with mysqlnd enabled and disabled. For mysqli_result::fetch_all, only mysqlnd was tested as it is only available with mysqlnd. I ran each test 6 times and threw out the worst and best result for each test. FWIW, the best and worst did not show any major deviation for any of the tests. For measuring memory usage, I read the VmRSS value from Linux's /proc data. memory_get_usage() does not show the hidden memory used by libmysqlclient and does not seem to show all the memory used by mysqlnd either.
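For reference, the shape of the harness was roughly this. It is a reconstruction from the description above, not the original script: connection details are made up, each numbered method was timed in its own run (the alternatives are commented out here), and memory came from /proc, not from this code.

<?php
$db  = new mysqli("localhost", "user", "pass", "test");
$pdo = new PDO("mysql:host=localhost;dbname=test", "user", "pass");
$sql = "SELECT data FROM test_table";

$start = microtime(true);

// 1. mysqli_result::fetch_all (mysqlnd only)
$rows = $db->query($sql)->fetch_all(MYSQLI_ASSOC);

// 2. PDOStatement::fetchAll
// $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);

// 3. mysqli_query with MYSQLI_STORE_RESULT (the default) followed by a loop
// $res  = $db->query($sql, MYSQLI_STORE_RESULT);
// $rows = array();
// while ($row = $res->fetch_assoc()) { $rows[] = $row; }

// 4. mysqli_query with MYSQLI_USE_RESULT followed by a loop (streams rows)
// $res  = $db->query($sql, MYSQLI_USE_RESULT);
// $rows = array();
// while ($row = $res->fetch_assoc()) { $rows[] = $row; }

printf("%0.4f seconds\n", microtime(true) - $start);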



[The graphs went here: memory usage and query time for each fetch method, with mysqlnd and with libmysqlclient.]

So, that is what I found. The memory usage graphs are all what I thought they would be. PDO has more overhead by its nature. Storing the result always uses more memory than using it. mysqli_result::fetch_all uses less memory than the loop, but more than directly using the results.

There are some very surprising things in the timing graphs, however. First, the tried and true method of using the result followed by a loop is clearly still the right choice in libmysqlclient. However, it is a horrible choice for mysqlnd. I don't really see why this is so. It is nearly twice as slow. There is something really, really wrong with MYSQLI_USE_RESULT in mysqlnd. There is no reason it should ever be slower than storing the result and then reading it again. This is also evidenced in the poor performance of PDO (since even PDO uses mysqlnd when enabled). PDO uses an unbuffered query for its fetchAll method, and it too got slower. It is noticeably slower than with libmysqlclient. The good news, I guess, is that if you are using mysqlnd, the fetch_all method is the best option for getting all the data back.

Next Steps

My next steps from here will be to find some real workloads that I can test this on. Phorum has several places where I can apply real world page loads to these different methods and see how they perform. Perhaps the test data is too small. Perhaps the number of columns would have a different effect. I am not sure.

If you are reading this and have worked on or looked at the mysqlnd code and can explain any of it, please feel free to comment.

Want to understand MySQL indexes?

I have introduced some new people to MySQL recently and had to backtrack through the years to figure out how I learned what I learned about MySQL indexes. A quick way to get up to speed on MySQL indexes is these three podcasts by Sheeri Cabral:
  1. OurSQL Episode 13: The Nitty Gritty of Indexes
  2. OurSQL Episode 17: Hashing it out
  3. OurSQL Episode 18: De-myth-tifying Indexes
Those three episodes do a good job of explaining how indexes work so that you have a better understanding of how MySQL indexes find your data.

MySQL Conference Review

I am back home from a good week at the 2010 O'Reilly MySQL Conference & Expo. I had a great time and got to see some old friends I had not seen in a while.

Oracle gave the opening keynote and it went pretty much like I thought it would. Oracle said they will keep MySQL alive. They talked about the new 5.5 release. It was pretty much the same keynote Sun gave last year. Time will tell what Oracle does with MySQL.

The expo hall was sparse. Really sparse. There were a fraction of the booths compared to the past. I don't know why the vendors did not come. Maybe because they don't want to compete with Oracle/Sun? In the past you would see HP or Intel have a booth at the conference. But, with Oracle/Sun owning MySQL, why even try. Or maybe they are not allowed? I don't know. It was just sad.

I did stop by the Maatkit booth and was embarrassed to tell Baron (its creator) I was not already using it. I had heard people talk about it in the past, but never stopped to see what it does. It would have only saved me hours and hours of work over the last few years. Needless to say it is now being installed on our servers. If you use MySQL, just go install Maatkit now and start using it. Don't be like me. Don't wait for years, writing the same code over and over to do simple maintenance tasks.

Gearman had a good deal of coverage at the conference. There were three talks and a BoF. All were well attended. Some people seemed to have an AHA! moment where they saw how Gearman could help their architecture. I also got to sit down with the PECL/gearman maintainers and discuss the recent bug I found that is keeping me from using it.

I spoke about Memcached as did others. Again, there was a BoF. It was well attended and people had good questions about it. There seemed to be some FUD going around that memcached is somehow inefficient or not keeping up with technology. However, I have yet to see numbers or anything that proves any of this. They are just wild claims by people that have something to sell. Everyone wants to be the caching company since there is no "Memcached, Inc.". There is no company in charge. That is a good thing, IMO.

That brings me to my favorite topic for the conference, Drizzle. I wrote about Drizzle here on this blog when it was first announced. At the time MySQL looked like it was moving forward at a good pace. So, I had said that it would only replace MySQL in one part of our stack. However, after what, in my opinion, has been a lack of real change in MySQL, I think I may have changed my mind. Brian Aker echoed this sentiment in his keynote address about Drizzle. He talked about how MySQL AB and later Sun had stopped focusing on the things that made MySQL popular and started trying to be a cheap version of Oracle. That is my interpretation of what he said, not his words.

Why is Drizzle different? Like Memcached and Gearman, there is no "Drizzle, Inc.". It is an Open Source project that is supported by the community. It is being supported by companies like Rackspace who hired five developers to work on it. The code is kept on Launchpad and is completely open. Anyone can create a branch and work on the code. If your patches are good, they will be merged into the main branch. But, you can keep your own branch going if you want to. Unlike the other forks, Drizzle has started over in both the code and the community. I personally see it as the only way forward. It is not ready today, but my money is on Drizzle five or ten years from now.

Logging with MySQL

I was reading the post "Cassandra is my NoSQL solution but..". In it, Dathan explains that he uses Cassandra to store clicks because it can write a lot faster than MySQL. However, he runs into problems with the read speed when he needs to get a range of data back from Cassandra. This is the number one problem I have with NoSQL solutions.

SQL is really good at retrieving a set of data based on a key or range of keys, whereas NoSQL products are really good at writing things and retrieving one item from storage. When looking at redoing our architecture a few years ago to be more scalable, I had to consider these two issues. For what it is worth, the NoSQL market was not nearly as mature then as it is now. So, my choices were much more limited. In the end, we decided to stick with MySQL. It turns out that a primary or unique key lookup on a MySQL/InnoDB table is really fast. It is sort of like having a key/value storage system. And I can still do range based queries against it.

But, back to Dathan's problem: clicks. We store clicks at dealnews. Lots of clicks. We also store views. We store more views than we do clicks. So, lots of views and lots of clicks. (Sorry for the vague numbers, company secrets and all. We are a top 1,000 Compete.com site during peak shopping season.) And we do it all in MySQL. And we do it all with one server. I should disclose we are deploying a second server, but it is more for high availability than processing power. Like Dathan, we only use about the last 24 hours of data at any given time. There are three keys to doing logging like this in MySQL.

Use MyISAM

MyISAM supports concurrent inserts. Concurrent inserts means that inserts can add rows to the end of a table while selects are being performed on other parts of the data set. This is exactly the use case for our logging. There are caveats with range queries as pointed out by the MySQL Performance Blog.

Rotating tables

MySQL (and InnoDB in particular) really sucks at deleting rows. Like, really sucks. Deleting causes locks. Bleh. So, we never delete rows from our logging tables. Instead, we rotate the tables nightly. RENAME TABLE is a (near) atomic operation in MySQL. So, we just create a new table.
create table clicks_new like clicks;
rename table clicks to clicks_2010032500001, clicks_new to clicks;

Tada! We now have an empty table for today's clicks. We then drop any table with a date stamp more than x days old. Drops are fast; we like drops.

For querying these tables, we use UNION. It works really well. We just issue a SHOW TABLES LIKE 'clicks%' and union the query across all the tables. Works like a charm.
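That query-building step is simple enough to sketch in PHP. The item_id column and connection details here are made up; the pattern is the point.

<?php
// Build one subquery per daily clicks table and UNION them together.
$db = new mysqli("localhost", "user", "pass", "logs");

$parts = array();
$res = $db->query("SHOW TABLES LIKE 'clicks%'");
while ($row = $res->fetch_row()) {
    $parts[] = "(select item_id, count(*) as clicks from `" . $row[0] . "`" .
               " group by item_id)";
}

$sql = "select item_id, sum(clicks) as clicks from (" .
       implode(" union all ", $parts) .
       ") derived_table group by item_id";

$result = $db->query($sql);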

Gearman

So, I get a lot of flack at work for my outright lust for Gearman. It is my new duct tape. When you have a scalability problem, there is a good chance you can solve it with Gearman. So, how does this help with logging to MySQL? Well, sometimes, MySQL can become backed up with inserts. It happens to the best of us. So, instead of letting that pile up in our web requests, we let it pile up in Gearman. Instead of having our web scripts write to MySQL directly, we have them fire Gearman background jobs with the logging data in them. The Gearman workers can then write to the MySQL server when it is available. Under normal operating procedure, that is in near real time. But, if the MySQL server does get backed up, the jobs just queue up in Gearman and are processed when the MySQL server is available.
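Here is a rough sketch of that pattern using the pecl/gearman extension. The job name, table, columns and ports are all illustrative, not our actual code.

<?php
// In the web request: fire a background job and move on immediately.
$client = new GearmanClient();
$client->addServer("127.0.0.1", 4730);
$client->doBackground("log_click", json_encode(array(
    "item_id" => 1234,
    "ts"      => time(),
)));

// In the worker process: write to MySQL whenever it is available.
$worker = new GearmanWorker();
$worker->addServer("127.0.0.1", 4730);
$worker->addFunction("log_click", function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    $db = new mysqli("localhost", "user", "pass", "logs");
    $db->query(sprintf(
        "insert into clicks (item_id, click_time) values (%d, from_unixtime(%d))",
        $data["item_id"],
        $data["ts"]
    ));
});
while ($worker->work());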

BONUS! Insert Delayed

This is our old trick from before we used Gearman. MySQL (MyISAM) has a neat feature where you can have inserts delayed until the table is available. The query is sent to the MySQL server and it immediately answers the client with success. This means your web script can continue on and not get blocked waiting for the insert. But, MySQL will only queue up so many before it starts erroring out. So, it is not as foolproof as a job processing system like Gearman.
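For completeness, using it is just a one-word change to the query. MyISAM only; the table and values here are made up.

<?php
$db = new mysqli("localhost", "user", "pass", "logs");
// MySQL queues the row and answers OK immediately; the web script moves on.
$db->query("INSERT DELAYED INTO clicks (item_id, click_time) VALUES (1234, NOW())");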

Summary

To log with MySQL:
  • Use MyISAM with concurrent inserts
  • Rotate tables daily and use UNION to query
  • Use delayed inserts with MySQL or a job processing agent like Gearman
Happy logging!

PS: You may be asking, "Brian, what about partitioned tables?" I asked myself that before deploying this solution. More importantly, I asked Brian Aker about MySQL partitioned tables in IRC. I am paraphrasing, but he said that if I ever thought I might alter that table, he would not trust it to MySQL's partitions. So, that kind of turned me off of them.