The rise of the GLAMMP stack

First there was LAMP.  But are you using GLAMMP?  You have probably not heard of it because we just coined the term while chatting at work.  You know LAMP: Linux, Apache, MySQL and PHP (or Perl, or sometimes Python).  So, what are the extra letters for?

The G is for Gearman.  Gearman is a system that farms work out to other machines: it can dispatch function calls to machines better suited for the job, run work in parallel, load balance lots of function calls, or call functions between languages.

The extra M is for Memcached.  memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
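
To make that concrete, here is a minimal PHP sketch of what the G and the extra M look like in practice, assuming the pecl memcache and gearman extensions are installed.  The server addresses, the resize_image worker and the fetch_deals_from_mysql() helper are all made up for illustration (doNormal() is the gearman extension's synchronous call; older releases named it do()).

<?php
// Memcached: read-through cache to take load off MySQL.
$cache = new Memcache();
$cache->connect('127.0.0.1', 11211);

$deals = $cache->get('front_page_deals');
if ($deals === false) {
    // Cache miss: query the database, then cache the result for 5 minutes.
    $deals = fetch_deals_from_mysql(); // hypothetical query function
    $cache->set('front_page_deals', $deals, 0, 300);
}

// Gearman: hand heavy work to whatever machine registered the function.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$image_data = file_get_contents('big_product_shot.jpg'); // any blob of work
$thumbnail  = $client->doNormal('resize_image', $image_data);
?>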

More and more these days, you can't run a web site on just LAMP.  You need these extra tools (or ones like them) to do all the cool things you want to do.  What other tools do we need to work into the acronym?  PostgreSQL replaces MySQL in lots of stacks to form LAPP.  I guess Drizzle may replace MySQL in some stacks soon.  For us, it will likely be added to the stack.  Will that make it GLAMMPD?  We need more vowels!  If you are starting the next must-use tool for running web sites on open source software, please use a vowel for the first letter.

Replication is much better than cold backups

So, I wrote about the beginning of our wild database issues.  Since then, I have been fighting a cold, coaching little league football and trying to help get our backup solutions into top shape.  That does not leave much time for blogging.

Never again will we have ONLY a cold backup of anything.  We had been moving nightly full database dumps and hourly backups of critical tables over to that box all day long.  But when the filesystem fails on both the primary database server and your cold backup server, you question everything.  A day after my marathon drive to fix the backup server and get it up and running, the backup MySQL server died again with RAID errors.  I guess that was the problem all along.  In the end, we had to put a whole new RAID subsystem in our backup database server.  So, my coworker headed over to the data center to pull the all-nighter and get the original, main database server up and running.  The filesystem was completely shot.  ReiserFS failed us miserably.  It is no longer to be used at dealnews.

Well, today at 6:12PM, the main database server stops responding again.  ARGH!!  Input/Output errors.  Based on last week's experience, that means RAID.  We reboot it.  It reports memory or battery errors on the RAID card.  So, I call Dell.  Our warranty on these servers includes 4-hour, onsite service.  They are important.  While on the phone with Dell, I run the Dell diagnostic tool on the box.  During the diagnostic test, the box shuts down.  Luckily, the Dell service tech had heard enough.  He orders a whole new RAID subsystem for this one as well.

There is one cool thing about the PERC4 (a.k.a. LSI MegaRAID) RAID cards in these boxes.  They write the RAID configuration to the drives as well as to the card.  So, when a new, blank RAID card is installed, it finds the RAID config on the drives and boots the box up.  Neato.  I am sure all the latest cards do it.  It was just nice to see it work.

So, the box came up, but this time we had InnoDB corruption.  XFS did a fine job of keeping the filesystem intact.  So, we had to restore from backups.  But this time, we had a live replicated database that we could just dump and restore.  We should have had it all along, but in the past (i.e. before widespread InnoDB) we were gun-shy about replication.  We had large MyISAM tables that would constantly get corrupted on the master or slave and would halt replication on a weekly basis.  It was just not worth the hassle.  But we have used it for over a year now on our front end database servers with an all-InnoDB data set.  As of now, only two tables in our main database are not InnoDB.  And I am trying to drop the need for a full-text index on those right now.
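
An all-InnoDB data set also makes the backups themselves nicer.  Here is a sketch of the kind of dump we can now take from the slave (the host name is made up), relying on --single-transaction to get a consistent snapshot of InnoDB tables without locking anything:

mysqldump --host=replica-db.example.com --all-databases \
    --single-transaction > nightly_dump.sql

The master never sees the load, and the dump reflects a single point in time.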

So, here is to hoping our database problems are behind us.  We have replaced almost everything in one server except the chassis; the other has had every internal part replaced except the motherboard.  Kudos to Dell's service.  The tech was done with the repair in under 4 hours.  Glad to have that service.  I recommend it to anyone that needs it.

An Introduction to MySQL - Birmingham, AL

I am giving a talk titled "An Introduction to MySQL" here in Birmingham, AL on June 21, 2008 at 3PM.

I love living in Alabama.  I was born and raised in Huntsville.  However, Birmingham has always seemed a bit behind in technology compared to what I do for a living.  There is a good reason: the industry here is medical, banking, industrial and utilities.  I don't really want my doctors keeping my medical records in an alpha release of anything.  Same goes for my banking and utilities.  But, as this page shows, the companies here are catching up.  So, I am happy to present MySQL to as many people as I can in this town.  Hopefully, I will help some folks that have not been exposed to MySQL, or any open source for that matter.

The talk is part of our local Linux user group's (BALU) planned events.

Example my.cnf files

NEW UPDATE: MySQL Forge is being end-of-lifed. And honestly, there were never that many examples there; it just never took off. But there is good news. Percona, the company that, in my opinion, is leading the way for binary-compatible MySQL progress, has an online tool for creating a my.cnf file based on your input about your server, your data and your workload. It seems to work very well. I recommend using it to create your my.cnf file.

UPDATE: There are some examples being added at the MySQL Forge now.

When I first started installing MySQL for myself, it was quite handy to have the example my.cnf files in the source package. I was a noob at MySQL configuration. Even after I became more experienced, I would use them as a starting point. However, I now find that they are so behind the times that they are not as useful. Here are some of the comments from the files.

my-small.cnf

# This is for a system with little memory (<= 64M) where MySQL is only used
# from time to time and it's important that the mysqld daemon
# doesn't use much resources.

my-medium.cnf

# This is for a system with little memory (32M - 64M) where MySQL plays
# an important part, or systems up to 128M where MySQL is used together with
# other programs (such as a web server)

my-large.cnf

# This is for a large system with memory = 512M where the system runs mainly
# MySQL.

my-huge.cnf

# This is for a large system with memory of 1G-2G where the system runs mainly
# MySQL.

I end up using the large or huge file as a starting point for every server I set up by hand. The small and medium files should be renamed underpowered and teeny-tiny. Who has less than 64MB of RAM in a server now? Can you even buy sticks of memory that small for any modern system? Most come with 256MB sticks minimum. And they never come with just one stick.

I will use the large example as a starting point for a server that has 2GB of RAM and will be running an entire site on one server. I use huge for any server that runs only MySQL. And even then, most of them have 4GB of RAM or more.

I don't know if anyone at MySQL has plans to tweak these files or not. Perhaps those good guys at the MySQL Performance Blog or Percona could create some example my.cnf files. I could put some out there, but I fear their sole purpose would be for someone to point out what I am doing wrong. =P Hey, they work for me. Hmm, maybe this would make a good MySQL Forge section: a whole area of user-contributed my.cnf files. They could be architecture specific and everything. What runs best on Solaris? Linux? BSD? Windows? 32-bit? 64-bit?

One thing I would for sure like to see is example files for InnoDB-dominant servers. Most of our servers run primarily InnoDB tables. None of the examples above covers InnoDB. They have comments, but no preconfigured values. I have seen more than one server using InnoDB tables without any custom configuration in its my.cnf. In the end, that is the fault of the server admin/owner, no doubt.
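
Just to show the kind of thing I mean, here is a sketch of what the InnoDB section of such an example file might look like. The numbers are illustrative, sized for a dedicated server with 4GB of RAM, and are not tuning advice:

[mysqld]
# Make new tables InnoDB unless told otherwise
default-storage-engine = InnoDB
# Give InnoDB the bulk of the RAM on a dedicated server
innodb_buffer_pool_size = 2048M
# Larger logs mean fewer checkpoints under heavy writes
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
# 1 = flush the log to disk at every commit (safest)
innodb_flush_log_at_trx_commit = 1
# One tablespace file per table instead of one giant ibdata file
innodb_file_per_table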

What do you say? Anyone up for a MySQL Forge section for my.cnf files?

Local: Best practices for SQL backed web applications

When
Tuesday, March 11, 2008 at 12:00 PM

Where
BizTech
515 Sparkman Drive
Huntsville, AL 35816

Details
Brian Moon of dealnews.com will be discussing best practices for writing database-backed web applications. Many users teach themselves SQL and programming on the web. Other developers may have experience with enterprise desktop applications. No matter what your background, there are common mistakes made when deploying web-based applications that use a database.

Also, at this event, we will be giving away two copies of NuSphere's PhpED. Plus, everyone who attends can purchase any NuSphere product at 50% off.

Lunch will be served at this event.

Speaking at MySQL Conference 2008

I had mentioned a while back that I submitted three proposals for the 2008 MySQL Conference.  Well, two were accepted.

From one server to a cluster

In the last 10 years, dealnews.com has grown from a single shared hosting account to an entire rack of equipment. Luckily, we started using PHP and MySQL very early in the company's history.

From the early days of growing a forum to surviving Slashdotting, Digging and even a Yahoo! front page mention, we have had to adapt both our hardware and software many times to keep up with the growth.

I will discuss the traps, bottlenecks, and even some big wins we have encountered along the way using PHP and MySQL, from the small scale up to replication and even some MySQL Cluster. We have done many interesting things to give our readers (and our content team) a good experience when using our web site.

MySQL hacks and tricks to make Phorum fast

Phorum is the message board software used by MySQL. One reason they chose Phorum was its speed. We have to use some tricks and fancy SQL to make that happen. Things we will talk about in this session include:

  • Putting temporary tables to good use.

  • Why PHP and MySQL can be a bad mix with large data sets.

  • What mysqlnd will bring to the table for the future of PHP and MySQL.

  • How Phorum uses full text indexing and some fancy SQL to make our search engine fast.

  • Forcing MySQL to use indexes to ensure proper query performance (a small illustration follows this list).
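
To give a flavor of the temporary table and index-forcing items, here is an illustrative bit of SQL. The table and index names are made up, not Phorum's actual schema: narrow the working set into a temporary table first, then force the index MySQL sometimes refuses to pick on its own.

CREATE TEMPORARY TABLE recent_messages
    SELECT message_id
    FROM messages
    WHERE datestamp > UNIX_TIMESTAMP() - 86400;

SELECT m.*
FROM messages m FORCE INDEX (PRIMARY)
JOIN recent_messages r ON r.message_id = m.message_id;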


You can find my conference page here.  (as Terry would say, me, me, me!)

Apache Worker and PHP

The PHP manual basically tells you not to use Apache 2 with a threaded MPM and PHP as an Apache module. In general, that may be good advice. But at dealnews.com, we have found it very valuable.

Apache threaded MPMs

Well, first, what is an MPM? It stands for Multi-Processing Module. It is the process model that Apache uses for its child processes. Each request that comes in is handed to a child. Apache 1 used only one model for this, the prefork model, which uses one single-threaded process per Apache child. The most commonly used threaded MPM is the worker MPM. In this MPM, you have several processes that each run multiple threads. This is the one I will be talking about. You can read more about Apache MPMs at the Apache web site.

Huge memory savings

With the Apache prefork MPM, or even FastCGI, each apache/php process allocates its own memory. Most healthy sites I have worked on use about 15MB of memory per Apache process. Code that has problems will use even more than this; I have seen some use as much as 50MB of RAM. But, let's stick with healthy. So, a server with 1GB of RAM can realistically run only about 50 Apache processes (or 50 PHP children for FastCGI) if each uses 15MB of RAM. That is 750MB total, leaving only about 250MB for the OS and other applications. Now, if you are Yahoo! or someone else with lots of money and lots of equipment, you can just keep adding hardware. But most of us can't do that.

As I wrote above, the worker MPM uses children (processes) and threads. If you configure it to use 10 child processes, each with 10 threads, you have 100 total threads, or clients, to answer requests. The good news is that because 10 threads live in one process, they can reuse memory allocated by other threads in the same process. At dealnews, our application servers use 25 threads per child. In our experience, each child process uses about 35MB of RAM. So, that works out to about 1.4MB per thread. That is 10% of the per-client usage of a prefork server.
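
For reference, here is a sketch of the relevant Apache 2.x worker directives for the 10 x 10 example above. The values are illustrative, not a recommendation:

<IfModule mpm_worker_module>
    ServerLimit         10
    StartServers         3
    ThreadsPerChild     10
    MaxClients         100    # 10 children x 10 threads
    MinSpareThreads     10
    MaxSpareThreads     30
</IfModule>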

Some say that you will run out of CPU way before RAM. That was not what we experienced before switching to worker. Machines with 2GB of RAM were running out of memory before we hit CPU as a bottleneck, with just 100 Apache clients running. Now, with worker, I am happy to say that we don't have that problem.

Building PHP for best success with Worker

This is an important part. You can't use just any PHP extension when you are using worker; the extensions need to be thread safe. I don't have a list of extensions that will and won't work. We stick with the ones we need to do our core job. Mainly, most pages use the mysql and memcached extensions. I would not do any fancy stuff on a worker-based server. Keep a prefork server around for that. Or better yet, do funky memory-sucking stuff in a cron job and push that data somewhere your web servers can get to it.

Other benefits like static content

Another big issue you hear about with Apache and PHP is running a separate server for static content to save resources. Worker allows you to do this without running two different server products. Having a prefork Apache/PHP process with 15MB of RAM allocated serve a 10k JPEG image or a CSS file is a waste of resources. With worker, as I wrote above, the memory savings negate this issue. And, from my benchmarks (someone prove me wrong), Apache 2 can keep up with the lighttpds and litespeeds of the world in terms of requests per second for this type of content. This was actually the first place we used the worker MPM. It may still be a good idea to have dedicated Apache daemons running just for that content if you get lots of requests for it. That will keep your static content requests from overrunning your dynamic content requests.

Some issues we have seen

OK, it is not without problems (but neither was prefork). There are some unknown (meaning undiagnosed by us) things that will occasionally cause CPU spikes on the servers running worker. For example, we took two memcached nodes offline and the servers that were connected to them spiked their CPU. We restarted Apache and all was fine. It was odd. We had another issue where a bug in my PHP code, calling fsockopen() without a valid host name and with a long timeout, would cause a CPU spike that would not let go. So, it does seem that bad PHP code makes the server more sensitive. Your mileage may vary.

As with any new technology, you need to test a lot before you jump in with both feet. Anyone else have experience with worker and want to share?

One last tip

We have adopted a technique that Rasmus Lerdorf mentioned. We decide how many clients (MaxClients) a server can handle, and we configure that number to always be running: we set the min and max settings of the Apache configuration to the same values. Of course, we are running service-specific servers. If you only have one or two servers and they run Apache and MySQL and mail and DNS and... etc., you probably don't want to do that. But then again, you need to make sure MaxClients will not kill your RAM/CPU as well. I see lots of servers where, if MaxClients were actually reached, they would be using 20GB of RAM. And these servers only have 2GB of RAM. So, check those settings. If you can, configure Apache to start up more processes (all of them, if you can) rather than a few, and make sure you won't blow out your RAM. A sketch of what this looks like with worker follows.
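
Here is a sketch of that idea with the worker MPM on a dedicated web server (illustrative numbers again): everything is sized so the full MaxClients worth of threads starts at boot and idle children are never reaped.

<IfModule mpm_worker_module>
    ServerLimit          4
    ThreadsPerChild     25
    StartServers         4    # start every child immediately
    MaxClients         100    # 4 children x 25 threads
    MinSpareThreads     75
    MaxSpareThreads    100    # spares can never exceed the total,
                              # so children are never killed off
</IfModule>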

Managing two data centers

Call it paranoia.  Call it being prepared.  Whatever your stance, we are considering using more than one data center for dealnews.com.  It is not a capacity issue.  We can keep growing our current data center without a problem.  But stories of power outages, and the power outages we have experienced ourselves, have us wanting to explore the idea.

Here is the problem.  No one in our company has experience with this.  And there do not seem to be many resources on the internet talking about it.  Our problem is not so much managing the data between the two.  The problem is failover and how to deal with one data center being out.  Here are some of the ideas that have been thrown at the wall.

Round Robin DNS

This was the first idea.  It seems simple enough.  We have two data centers.  We publish different DNS records for each data center and traffic goes to each one.  The problem here is that it is, well, random.
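
For illustration, round robin DNS is nothing more than multiple A records for one name (example.com and the addresses below are placeholders):

; Resolvers rotate between these answers on each lookup
www.example.com.  300  IN  A  192.0.2.10      ; data center A
www.example.com.  300  IN  A  198.51.100.10   ; data center B

Nothing in that rotation knows whether a data center is up, and cached answers keep pointing at a dead one until the TTL expires.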

Global Traffic Management

There are devices that "balance" traffic across multiple locations.  But I am unsure how those deal with outages at one of the locations.  It seems like there is still one point of failure.

BGP Routing

This is the biggest mystery to me.  I know what it is.  I know what it means.  I have no idea how to deploy this type of solution.  I understand that you can "move" your IP addresses with routing changes.  But that means running routers.  Where are these routers?  Does this happen at some provider?  Is there a provider that handles this?  Does that mean that all of our data centers are with one provider?  I think one more peace-of-mind feature of this is that we would not be tied to just one vendor.  So, if one vendor had major issues or there was some legal trouble (we lived through the dot-com boom and bust), we would have the security of knowing we had other equipment that was not affected.

Is there something else?  Are we being way too paranoid?  Maybe it is not cost effective in the end.  I/we really have no idea.  Anyone out there have knowledge on this subject?

GoDaddy support is awful

Luckily, I don't have personal experience with them. But, based on the 2 to 3 users per week that come to the Phorum support forums and IRC channel, they have the worst support of any host on the internet.

Example 1

A user comes to the forums having trouble with his Phorum install. In the user's words, GoDaddy told him "they couldnt help me costumize my scripts because it wasnt their job". In this case, "customize" meant filling in the MySQL permissions in the Phorum config file. In the end, GoDaddy had to move him to a Linux hosting account. They claimed that the Windows hosting accounts do not support PHP. However, they are clearly wrong about their own hosting, as this all started because the user received a PHP error about not being able to connect to MySQL.

Example 2

This user found that GoDaddy is running MySQL 4 on their servers. Their web site does not mention a version anywhere. So, users are locked into a hosting plan before knowing this.

Example 3

This happened today in IRC. It was much like the first example. In this case, GoDaddy support told the user "permissions are set via ftp". Um, MySQL permissions are set via FTP? The user had a MySQL server name. For some reason, it did not exist. So, either he typed it wrong or they gave him the wrong server name. Either way, their support should recognize this and be able to help their clients.

Example 4

This is less a support issue and more just plain crappy of them, IMO. GoDaddy does not allow the creation of temporary tables. The Phorum search engine makes use of them to save lots of CPU and memory on the PHP side. Luckily for their users, Thomas felt sorry enough for them to make a module that uses good old-fashioned slow LIKE queries (a rough illustration of the difference follows). So, that will work until their accounts are shut off because their search queries are clogging up the database servers.
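
The contrast, with hypothetical table and column names rather than Phorum's real schema, looks something like this. The normal search leans on a full-text index (and stages results in a temporary table); the fallback has to scan every row, because LIKE with a leading wildcard cannot use an index.

-- Normal Phorum-style search: fast lookup via the full-text index
SELECT message_id
FROM search_index
WHERE MATCH (search_text) AGAINST ('+mysql +backup' IN BOOLEAN MODE);

-- GoDaddy fallback: full table scan for every search
SELECT message_id
FROM search_index
WHERE search_text LIKE '%mysql%'
  AND search_text LIKE '%backup%';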

So, if you are on GoDaddy, I feel for you. From where we sit, it really seems like they do not provide very good support. We end up having to support their users on things that GoDaddy should be able to answer.