Developers and Entropy

This is a selfish blog post. I recently read a great blog post titled "Why You Need To Hire Great Developers" but I could not find it again in my browser history or chat history. It talks about entropy creators versus entropy reducers and how bad we are at knowing which one someone is during the hiring process. I wanted to mention it here so my followers could read it and so I could find it the next time I go looking for it.

Living in the Prove It Culture

Engineering cultures differ from shop to shop. I have been in the same culture for 13 years, so I am not an expert on all the different types. Before that I was living in Dilbert world. The culture there was really weird. The ideas were never yours. It was always some need that some far-removed person had. A DBA, a UI "expert" and some product manager would dictate what code you wrote. Creativity was stifled and met with resistance.

I then moved to the early (1998) days of the web. It was a startup environment. In the beginning there were just two of us writing code, so we thought everything we did was awesome. Then we added some more guys. Lucky for us, we mostly hired well. The good hires were type A personalities with skills we didn't have. They challenged us and we challenged them. On top of that, we had a CEO who had been a computer hacker in his teens, so he had just enough knowledge to challenge us as well. Over the years we kept hiring more and more people. We always asked in the interview if the person could take criticism and if they felt comfortable defending their ideas. We decided to always have a whiteboard session. We would ask them questions and have them work it out on a whiteboard or talk it out with us in a group setting. The point was not to see if they always knew the answer. The point was to see how they worked in that setting. Looking back, the hires that did not work out also did not excel in that phase of the interview. The ones that have worked out always questioned our methods in the interview. They did not belittle our methods or dismiss them. They just asked questions. They would ask if we had tried this or that. Even if we could quickly explain why our method was right for us, they still questioned it. They challenged us.

When dealing with people outside the engineering team, we subconsciously applied these same tactics. The philosophy came to be that if you came to us with an idea, you had to throw it up on the proverbial wall. We would then try to knock it down. If it stuck, it was probably a good idea. Some people could handle this and some could not. The ones that could not handle it did not always get their ideas pushed through, which does not necessarily mean they were bad ideas. That is maybe the downside of this culture. But it has worked pretty well for us.

We apply this to technology too. My first experience on Linux was with RedHat. The mail agent I cut my teeth on was qmail. I used djbdns. When Daniel Beckham, our now director of operations, came on, he had used sendmail and bind. He immediately challenged qmail. I went through some of the reasons I preferred it. He took more shots. In the end, he agreed that qmail was better than sendmail. However, his first DNS setup for us was bind. It took a few more years of bind hell for him to come around to djbdns.

When RedHat splintered into RedHat Enterprise and Fedora, we tried out Fedora on one server. We found it to be horribly unstable. It got the axe. We looked around for other distros and found a not very well known one called Gentoo, regarded as the ricer distro of the Linux world. We installed it on one server to see what it was all about. I don't remember now whose idea it was. Probably not mine. We eventually found it to be the perfect distro for us. It let us compile our core tools like Apache, PHP and MySQL while still using a package system. We never trusted RPMs for those things on RedHat. Sure, bringing a server online took longer, but it was so worth it. Eventually we bought in, and it is now the only distro in use here.

We have done this over and over and over. From the fact that we all use Macs now, thanks to Daniel and his willingness to try them out at our CEO's prodding, to things like memcached, Gearman, etc. We even keep evaluating the tools we already have. When we decided to write our own proxy, we set aside everything we knew and evaluated all the options. In the end, Apache was known and good at handling web requests, and PHP could do all we needed in a timely, sane manner. But we looked at and tested everything we could think of. Apache/PHP had to prove itself again.

Now, you might think that a culture of skepticism like this would lead to new employees having a hard time getting any traction. Quite the opposite. Because we hire people that fit the culture, they can have a near immediate impact. There is a problem I want solved, and a developer who has been here less than a year suggested that Hadoop might be a solution, but he was not sure we would be willing to use it. I recently sent this in an email to the whole team in response:
The only thing that is *never* on the table is using a Windows server. If you can get me unique visitors for an arbitrary date range in milliseconds and it requires Hadoop, go for it.
You see, we don't currently use Hadoop here. But if that is what it takes to solve my problem, and you can prove it will work, we will use it.

Recently we had a newish team member suggest we use a SAN as a data store for our development servers. Specifically, he suggested we could use it to house the MySQL data for our development servers. We told him he was insane. SANs are magical boxes of pain. He kept pushing. He got Dell to come in and give us a test unit. Turns out it is amazing. We can have a hot copy of our production database on our dev slices in about 3 minutes. A full, complete copy of our production database in 3 minutes. Do you know how amazing that is? Had we not had the culture we do, and had we not hired the right person, one smart enough to pull it off and confident enough to fight for the solution, we would not have that. He has been here less than a year and has had a huge impact on our productivity. There is talk of using this in production too. I am still in "prove it" mode on that. We will see.
I know you will ask how our dev db setup works, so here you go (a rough sketch of the process in code follows the list):
1. Replicate production over VPN to home office
2. Write MySQL data on SAN
3. Stop replication, flush tables, snapshot FS
4. Copy snapshot to a new location
5. On second dev server, umount SAN, mount new snapshot
6. Restart MySQL all around
7. Talk in dev chat how bad ass that is
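
To make those steps concrete, here is a minimal sketch of how steps 2 through 6 might be scripted from PHP. Everything here is a hypothetical stand-in: the hostnames, credentials, paths and the lvcreate call are placeholders (our real snapshots are taken with the SAN's own tooling), but the stop-replication / flush-and-lock / snapshot / unlock sequence is the standard way to get a consistent copy.

<?php
// Hypothetical dev db snapshot helper. All names below are placeholders.
$db = new mysqli('dev-replica.example.internal', 'snapuser', 'secret');

$db->query('STOP SLAVE');                  // step 3: stop replication
$db->query('FLUSH TABLES WITH READ LOCK'); // quiesce MySQL; the lock holds while this connection stays open

// steps 3-4: snapshot the filesystem holding the MySQL data directory,
// then expose the snapshot in a new location (stand-in command).
shell_exec('lvcreate --snapshot --size 10G --name mysql_snap /dev/san/mysql_data');

$db->query('UNLOCK TABLES');
$db->query('START SLAVE');                 // replication catches back up

// steps 5-6: on the second dev server, swap in the new snapshot and restart MySQL.
shell_exec('ssh dev2.example.internal "umount /var/lib/mysql && mount /dev/san/mysql_snap /var/lib/mysql && /etc/init.d/mysql restart"');

// step 7 is left as an exercise for dev chat.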

We had a similar thing happen with our phone system. We had hired a web developer who previously worked for a company that created custom Asterisk solutions. When our proprietary PBX died, he stepped up and proved that Asterisk would work for us. Not a job for a web developer. But he was confident he could make it work. It now supports 3 offices and several home-based users worldwide. He, too, had only been here a short time when that happened.

Perhaps it sounds like a contradiction. It may sound like we just hop on any bandwagon technology out there. But no. We still use MySQL. We are still on 5.0 in fact. It works. We are evaluating Percona 5.5 now. We tried MySQL 5.1. We found no advantage, and the Gentoo package maintainer found it to be buggy. So, we did not switch. We still use Apache. It works. Damn well. We do use Apache with the worker MPM with PHP, which is supposedly bad. But it works great for us. We had to prove it would work, though. We ran a single node with worker for months before trusting it. Gearman was begrudgingly accepted. The idea of daemonized PHP code was not a comforting one. But once you write a worker and use it, you feel like a god. And then you see the power. Next thing you know, it is a core, mission critical part of your infrastructure. That is how it is with us now. In fact, Gearman has gone from untrusted to the go-to tech. When someone proposes a solution that does not involve Gearman, someone will ask if part of the problem can be solved using Gearman instead of whatever idea they have. There is then a discussion about why it is or is not a good fit. Likewise, if you want to build a daemon to listen on a port and answer requests, the question is "Why can't you just use Apache and a web service?" And it is a valid question. If you can solve your problem with a web service on already proven tech, why build something new?
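
For anyone who has not written one, here is roughly what a daemonized PHP Gearman worker and its client look like using the pecl/gearman extension. This is only a sketch; the job name, payload and server address are made up for illustration and are not from our code base.

<?php
// worker.php - start from the CLI and leave it running
function send_alert_email(GearmanJob $job)
{
    $payload = json_decode($job->workload(), true);
    // ... do the real work here: send the mail, resize the image, etc. ...
    return 'done';
}

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);               // hypothetical gearmand host/port
$worker->addFunction('send_alert_email', 'send_alert_email');
while ($worker->work());                             // block forever, handling one job at a time

<?php
// client.php - from a web request, queue the job in the background and move on
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('send_alert_email', json_encode(array('to' => 'ops@example.com')));

Once you have a few of those running, you start seeing background jobs everywhere you look.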

This culture is not new. We are not unique. But, in a world of "brogramming" where "engineers" rage on code that is awesome before it is even working and people are said to be "killing it" all the time, I am glad I live in a world where I have to prove myself everyday. I am the most senior engineer on the team. And even still I get shot down. I often pitch an idea in dev chat and someone will shoot it down or point out an obvious flaw. Anyone, and I mean anyone, on the team can question my code, ideas or decisions and I will listen to them and consider their opinion. Heck, people outside the team can question me too. And regularly do. And that is fine. I don't mind the questions. I once wrote here that I like to be made to feel dumb. It is how I get smarter. I have met people that thought they were smarter than everyone else. They were annoying. I have interviewed them. It is hard to even get through those interviews.

Is it for everyone? Probably not. It works for us, and it has gotten us this far. You can't get comfortable, though. If you do foster this type of culture, there is a risk of getting comfortable. If you start thinking you have solved all the hard problems, you will look up one day and realize that you are suffering for it. Keep pushing forward and questioning your past decisions. But before you adopt the latest and greatest new idea, prove that the decisions your team makes are the right ones, at every step. Sometimes that will take a five minute discussion and sometimes it will take a month of testing. And other times, everyone in the room will look at something and think, "Wow, that is so obvious. How did we not see it?" When it works, it is an awesome world to live in.

Engineers Work In Their Sleep

My grandmother worked at the Marshall Space Flight Center. She worked with a lot of engineers. I have always remembered when she told my cousin and me that we should be engineers. Her reasoning was that if you saw a janitor sitting in a chair with his arms crossed and his eyes closed, you pretty much knew he was not working. But if you see an engineer in the same position, you cannot prove he is not working. Now, was she saying she wanted us to sleep on the job? No, of course not. The message was to have a job where you could use your mind. Although, I think watching our grandfather work hard every day as a construction foreman may have shaped her opinion. You know what is funny about that? He was the healthiest man I have ever known. Maybe I don't need to be sitting behind a desk with my arms crossed and eyes closed?

For what it is worth, I don't believe that all jobs that use your mind are desk jobs. Some of the smartest people I know have jobs in fields that would be considered "manual" labor. I believe that smart people will always rise to the top, no matter the profession.

PHP Appalachia Corrections

Just got home finally from PHP Appalachia.  I enjoyed meeting all the great people.

I presented about what I learned and how we deal with importing large amounts of CSV data into MySQL.  I threw my idea onto the wiki at the last minute and made the slides while everyone ate breakfast.  I had planned on researching it all (it has been a few years since I wrote it), but we had no reliable internet.  Here are some claims I made and their corrections.

  1. I said our largest file is about 1.8 million lines.  WRONG.  Actually it is about 4.6 million.  I was correct however that it does finish importing and indexing in about 5 minutes.

  2. I claimed that I use LOAD DATA INFILE into MyISAM first and then "insert into ... select from" into an InnoDB table for speed reasons.  WRONG.  In fact, I do that because I sometimes need to merge fields from the file into one field in the database.  I could not find a way to do that with LOAD DATA INFILE.  As to speed, I can't say either way, as I have no solid data.  Sounds like a good test.  MyISAM probably still wins on a LOAD DATA INFILE into a blank, fresh table, based on my experience.  (A sketch of the two-step import follows this list.)

  3. Total rows currently indexed is 7.2 million.  I did not make a claim about this, but I wanted to include it in the talk and could not look it up without Internet.  (Damn you, Hughes.)
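
For the curious, the two-step import from item 2 looks roughly like this.  The table and column names are invented for the example; the real feeds and the merge logic are messier.

<?php
// Hypothetical two-step import: bulk load into a blank MyISAM staging table,
// then merge fields while copying into the real InnoDB table.
$db = new mysqli('localhost', 'import', 'secret', 'importdb');

// Step 1: LOAD DATA INFILE into MyISAM (fast on a fresh table).
$db->query("
    LOAD DATA INFILE '/tmp/feed.csv'
    INTO TABLE staging_items
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
    LINES TERMINATED BY '\\n'
    (sku, name_part1, name_part2, price)
");

// Step 2: INSERT ... SELECT into InnoDB, merging two file fields into one column.
$db->query("
    INSERT INTO items (sku, name, price)
    SELECT sku, CONCAT(name_part1, ' ', name_part2), price
    FROM staging_items
");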

Deploying Scalable Websites with Memcached

I spoke at the MySQL Conference and Expo this year about the architecture we have here at dealnews.com.  After my talk, Jimmy Guerrero of Sun/MySQL invited me to give a webinar on how dealnews uses memcached.  That is taking place next week, Thursday, October 09, 2008.  It is a free webinar.  We have used memcached in a variety of ways as we have grown.  So, I will be talking about how dealnews has used memcached in the past and how we use it today.

For more information, visit the MySQL web site.

strtotime() - The PHP date Swiss Army knife

Man, what did I do before strtotime()?  Oh, I know, I had a 482 line function to parse date formats and return timestamps.  And I still could not do really cool stuff.  Like tonight, I needed to figure out when Thanksgiving is in the US.  I knew it was the 4th Thursday in November.  So, I started with some math and checking what day of the week Nov. 1 would fall on.  All that was making my head hurt.  So, I just tried this for fun:
strtotime("thursday, november ".date("Y")." + 3 weeks")

That gives me Thanksgiving.  Awesome.  It is cool for other stuff too.  At its most basic, it can take a MySQL datetime field and turn it into a timestamp.  Very handy for date calculations.  It also understands RFC 2822 and ISO 8601 date formats.  These are common in HTTP headers and some XML documents like RSS and Atom feeds.  Also, PHP can output those two standard formats with the date() function.  So, this makes them a good, standards compliant way to pass full, timezone specific dates around.
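
A few quick examples of what I mean (the date values are just samples):

<?php
// MySQL DATETIME string to a Unix timestamp
$ts = strtotime('2008-11-27 14:30:00');

// RFC 2822, the format you see in email and HTTP headers; date('r') produces it
$ts = strtotime('Thu, 27 Nov 2008 14:30:00 -0600');
echo date('r', $ts);   // round-trips back to RFC 2822

// ISO 8601, common in Atom feeds; date('c') produces it
$ts = strtotime('2008-11-27T14:30:00-06:00');
echo date('c', $ts);   // round-trips back to ISO 8601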

Caching and TTL behavior

So, I am working on MemProxy some.  Mainly, I am trying to implement more of the Cache-Control header's many options.  The one that has me a bit perplexed is s-maxage, particularly when combined with max-age.

s-maxage is the maximum time, in seconds, an item should remain in a shared cache.  So, if s-maxage is set by the application server, my proxy should keep the item for at most that amount of time.  Up until now, I have just been looking at max-age.  But s-maxage is the proper one for a proxy to use if it is present.  I do not send the s-maxage through because this is a reverse proxy and, IMO, stripping it is proper behavior for an application accelerating proxy.  However, I do forward the max-age value that is set by the application servers.  If no max-age is set, I send a default as defined in the script.  Also, if no-cache or no-store is set, I send those and a max-age of 0.

My problem arises when max-age is less than s-maxage.  Up until now, I have sent a max-age back to the client that represents the time left for the cached item in my proxy's cache.  So, if the app server sent back max-age=300, and a request comes in and the cached item is found and was created 100 seconds ago, I send max-age=200 back to the client.  That worked because I was only using max-age before.  Now, in cases where s-maxage is longer than max-age, I would come up with negative numbers.  That is not cool.  The easiest solution would be to always send the original max-age back to the client.  But that seems kind of lame.

So, my question is, if you are using an application (HTTP or otherwise) accelerator, what would you expect?  If your application set a max-age of 300, would you always expect the end client to receive a max-age of 300?  Or should it count down over time?  The only experience I have is with a CDN.  If you watch CDN traffic, the max-age gets smaller and smaller over time until it hits 0.  I have not tried sending an s-maxage to my CDN.  I don't know what they would do with that.  Maybe that is a good test.

UPDATE: Writing this gave me an idea.  If the item will be in the proxy cache longer than the max-age TTL, send the full max-age TTL.  Otherwise, send the time left in the proxy cache.  Thoughts on that?
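
In code, the idea from the update looks something like this.  It is only a sketch of the logic, not the actual MemProxy code, and the variable names are made up.

<?php
// $max_age       - the max-age the app server sent (the most a client should cache)
// $proxy_expires - Unix timestamp when the item falls out of the proxy's cache
//                  (driven by s-maxage when present, otherwise max-age)
function client_max_age($max_age, $proxy_expires)
{
    $proxy_ttl_left = $proxy_expires - time();

    if ($proxy_ttl_left >= $max_age) {
        // The proxy will hold it longer than the client should cache it,
        // so send the full max-age.
        return $max_age;
    }

    // Otherwise count down with what is left in the proxy cache, never below zero.
    return max(0, $proxy_ttl_left);
}

// e.g. header('Cache-Control: max-age=' . client_max_age($max_age, $proxy_expires));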

(thanks for being my teddy bear blogosphere)

Velocity Conference Roundup

As I said before, I was invited to be on a panel at Velocity Conference.  I was delighted to go.  I had never been to San Francisco, though I have been to Portland and Santa Clara several times.  The panel was great.  It was the Brian and photo sharing sites show.  Seriously, it was me (dealnews.com), John Allspaw of Flickr, Don MacAskill of SmugMug and Farhan Mashraqi of Fotolog.  Oh, there was also Shayan Zadeh of Zoosk, a social dating network, and Michael Halligan, a consultant from BitPusher.  We all had similar ideas.  I told my Yahoo story.  I told everyone that they should denormalize (or optimize, as Farhan preferred) their data to improve performance.  Others agreed.  I have written about my methods for denormalizing normalized data before.  (See pushed cache)  Fun was had by all.

I mentioned John Allspaw above.  He gave a talk on his own as well.  It was good.  The slides are on SlideShare.  He and I see eye to eye on a lot of things.  One thing he says in there that may shock a lot of people is to test using production.  I agree fully.  We could never have been sure our infrastructure was ready last year without testing the production servers.

I also learned about Varnish at the conference.  It is a super fast reverse proxy.  It uses the virtual memory system of recent kernels to store its cache.  The OS worries about moving things from memory to disk based on usage.  The claim is that the OS is better at this than any programmer could be (without copying its approach, of course).  It is fast.  The developers are proud.  And by proud I mean cocky.  I have been playing with it.  As you know, I have my own little caching proxy solution.  Varnish is much faster, as I expected.  However, storing cache in memcached is very attractive to me.  Varnish can't do that.  It would likely slow it down a great deal.  MemProxy does do that.  Also, because MemProxy is written in PHP and my application layer is PHP, I can do things at the proxy layer to inspect the request and take action.  That works well for my use.  But if you are using squid or mod_cache or something, you may want to give Varnish a look.

There was a good bit of information about the client side of performance.  There were folks from Microsoft there talking about IE8.  It looks like IE8 will catch up with the other browsers in a lot of ways.  Yahoo talked about image optimization.  Good stuff in there.  I use Fireworks, and it does a pretty good job of making small images.  I am looking more into combining images and using CSS to show just the piece I need.  We use a CDN, but fewer connections is better for users.

There was also a lot of great debate.  SANs rock!  SANs suck!  Rails Scales!  Rails Sucks!  The Cloud is awesome!  The Cloud is a lie!  (lots of cloud)

I had dinner both nights with guys from Six Apart.  Good conversations were had.  I don't know if I am a big fan of vegan food, though.  I mean, the food was good, but it all kind of tasted the same.  Perhaps I ordered poorly.  At dinner on Tuesday, I met a guy going to work for Twitter soon.  He is an engineer who hopefully will be another step toward getting them back to 100% again.  Let's keep our fingers crossed.

They did announce that the conference would be held again next year.  I am definitely going back.  Probably two of us from dealnews will go.  OSCON is fun.  MySQL conference is too.  But, more and more, capacity planning and scaling is what I do.  And this conference is all about those topics.

Did you know I am going to be at Velocity?

Well, neither did I until today. HA!

Velocity is a new O'Reilly conference dedicated to "Optimizing Web Performance and Scalability".  It starts next Monday.  Yesterday I was contacted by Adam Jacobs of HJK Solutions about taking part in a panel discussion about what happens when success comes suddenly to a web site.  I think he thought I was in the bay area.  Little did he know I am in Alabama.  But, amazingly, I was able to work it all out so I can be there.  I wish I had known about this conference ahead of time.  It sounds really awesome.  Performance has always been something I focus on.  I hope to share some and learn at the same time.

So, if you are going to be there, come see our panel.

P.S. Thanks to John Allspaw of Flickr for recommending me to Adam.

An Introduction to MySQL - Birmingham, AL

I am giving a talk titled "An Introduction to MySQL" here in Birmingham, AL on June 21, 2008 at 3PM.

I love living in Alabama.  I was born and raised in Huntsville.  However, Birmingham has always seemed a bit behind in technology compared to what I do for a living.  There is good reason.  The industry here is medical, banking, industrial and utilities.  I don't really want my doctors keeping my medical records in an alpha release of anything.  Same goes for my banking and utilities.  But, as this page shows, the companies here are catching up.  So, I am happy to present MySQL to as many people as I can in this town.  Hopefully, I will help some folks who have not been exposed to MySQL, or any open source for that matter.

The event is part of our local Linux user group's (BALU) planned events.