Masks in Design and How They Relate to HTML/CSS

I have always struggled to understand the use of masks in design tools like Photoshop, Illustrator, and Sketch. I recently had to translate some creative from Sketch into a responsive design, and I realized that a mask in these tools is like a containing element in HTML with overflow hidden and absolutely positioned contents.

This is not something you would want to do often, as a lot of the image that gets loaded is never shown. But it is useful in responsive designs.

See the Pen How masks in design tools translate to HTML by Brian Moon (@brianlmoon) on CodePen.

New Responsive Design

I have not blogged in over a year. Shame on me. I think part of the reason was because my blog template had become quite dated. I tweaked it a bit to make it semi-responsive a while back. But, I have never been happy with it.

So, I decided to build a new, mobile first template from scratch and use lots of modern (relative to my old template) CSS and responsive web design techniques. Things like rem for sizing things, flexbox, text-shadow, and background-size (for the header image). I know it is nothing ground-breaking. And, at the same time, it was nice to shed all the old compatibility layers and work with the code without worrying about fall backs.

For the font, I chose the super-popular Open Sans after reading "Serif or Sans Serif?" by Danielle Stone. I looked at many sans serif fonts. Open Sans just looked the cleanest to me.

I opted to go with an easy to read black text on white background after reading "How the Web Became Unreadable" by Kevin Marks. One great nugget in that article is where Kevin quotes Adam Schwartz:
A color is a color isn’t a color……not to computers…and not to the human eye.
I found this very interesting. I often look at my "black" SUV and think "my car looks kind of brown today".

With the help of the love of my life, Deedra, I found the header graphic, bought it from iStockPhoto and then played with the hue a bit to get it just right.

While testing, I found myself reading my old blog posts. It felt good. I really need to do this more.

HTML5 Experiment

I was looking through dealnews.com browser stats the other day to see how many of our visitors had browsers that could use CSS3. This was how it broke down.



58% of dealnews visitors have browsers that support some CSS3 properties like border-radius and box-shadow. This includes recent WebKit, Firefox 3.5+, Opera 10+ and IE9. Awesome! Another 37% fully support CSS2. This includes IE7, IE8 and Firefox 2 - 3.0.x.

We have started to sneak some CSS3 properties into the CSS on dealnews.com. In places where rounded corners are optional (like the lightbox I created), we use border-radius instead of a lot of images. The same goes for shadows. We have started using box-shadow instead of, well, nothing. We just don't have shadows if the browser does not support box-shadow. In our recent redesign of our mobile site, we used all CSS3 for shadows, corners and gradients. But these were all places where things were optional.

The header of dealnews.com, on the other hand, requires a certain consistency. It is the first thing you see. If it looks largely different on your work computer running IE7 than it does on your Mac at home using Safari, that may be confusing. So, we have stuck with images for rounded corners, shadows and gradients. The images have problems though. On the iPad, for instance, the page starts zoomed out a bit. The elements holding the rounded corner images don't always line up well at different zoom levels. It's a math thing. When zooming, you end up with an odd number of pixels at times, which causes pixels to shift. So, we get gaps in our tabs or buttons. Not pretty. This has been bugging me, but it's really just the iPad. It's not mission critical to fix a pixel on the iPad.

Armed with the numbers above, I decided to try to reproduce the dealnews.com header using the most modern techniques possible and see how gracefully I could degrade the experience in older browsers.

Here is a screen shot in Firefox 4 of the current dealnews header. (You can click any of these images to see them full size.)



Now, here is the HTML5/CSS3 header in all the browsers I tried it in. I developed in Firefox 4 and tested/tweaked in others.

Firefox 4 (Mac)


Firefox 3.6 (Windows)


Chrome 10


Opera 11


Internet Explorer 9


Internet Explorer 8 (via IE9 Dev Tools)


Internet Explorer 7 (via IE9 Dev Tools)


Internet Explorer 6 (ZOMG!!1!)


Wow! This turned out way better than I expected. Even IE6 renders nicely. I did have to degrade in some older versions of Internet Explorer. This gets me 97% coverage for all the dealnews.com visitors. And, IMO, degrading this way is not all that bad. I am seeing it more and more around the internet. CNET is using border-radius and CSS3 gradients in their header. In Internet Explorer you see square corners and no gradient. Let's look a little deeper into what I used here.

Rounded corners

For all the rounded corners, I used a border-radius of 5px. I used CSS that would be most compatible. For border-radius that means a few lines just to get the one effect across browsers. The CSS for the tabs looks like this.

-moz-border-radius-topleft: 5px;
-moz-border-radius-topright: 5px;
border-top-left-radius: 5px;
border-top-right-radius: 5px;


The -moz is to support Firefox 3.5+. Everything else that supports border-radius recognizes the non-prefixed style.

For the older browsers, they just use square corners. I think they still look nice. The one thing we lose with this solution is the styled gradient border on the brown tabs. It fades to a silver near the bottom of the tabs. There is no solution for that in CSS at this time. That is a small price to pay to skip loading all those images IMO.

Shadows

For the shadows on the left and right side of the page (sorry, not really visible in the screen shots), I used box-shadow. This requires CSS such as:

-webkit-box-shadow: 0 0 0 transparent, 0 2px 2px #B2B1A6;
-moz-box-shadow: 0 0 0 transparent, 0 2px 2px #B2B1A6;
box-shadow: 0 0 0 transparent, 0 2px 2px #B2B1A6;


Again, -moz is for Mozilla browsers and -webkit for WebKit-based browsers. Now, there are some gotchas with box-shadow. The first is that Firefox requires a color. The other supporting browsers will simply use an alpha-blended darker gradient. This makes it a little more work to get it all right in Firefox. The other tricky part was getting the vertical shadow to work. The shadow you create actually curves around the entire element. It has the same z-index as the element itself, so a shadow on an element will appear on top of other elements around it with a lower z-index. I had issues keeping the shadow from the lower area (the vertical shadow) from appearing above the element and on top of the tabs. If you look really, really, really close where the vertical and horizontal shadows meet, you will see a tiny gap in the color. Luckily for me, it works due to all the blue around it. That helps mask that small flaw.

For the older IE browsers, I used conditional IE HTML blocks and added a light gray border to the elements. It is a bit more degraded than I like, but as time passes, those browsers will stop being used.
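The conditional-comment approach mentioned above looks roughly like this; the class name is hypothetical, and the actual border color and selectors on dealnews.com may differ:

```html
<!--[if lt IE 9]>
<style>
  /* Fallback for IE 6-8: a plain light gray border instead of box-shadow */
  .page-body {
    border-left: 1px solid #ccc;
    border-right: 1px solid #ccc;
  }
</style>
<![endif]-->
```

Only older IE parses the conditional comment; every other browser treats it as a plain HTML comment and skips the fallback styles.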

Gradient Backgrounds

Completing the holy trinity of CSS3 features that make life easier is gradient backgrounds. This is the least unified and most complex of the three features I have used in this experiment. For starters, no two browsers use the same syntax for gradient backgrounds. Firefox does use the W3C recommended syntax. WebKit uses something they came up with, and Internet Explorer uses its super-proprietary filter CSS property and not the background property. The biggest problem with the filter property is that it keeps IE9 from working with border-radius. The tabs could not use a gradient in IE9 because it applied a rectangular gradient to the element that exceeded the bounds of the rounded top corners. Bummer. Once the standard is set, this should all clear itself up. As things stand today, I had to use the following syntax for the gradients.

background: #4b4ba8; /* old browsers */
background: -moz-linear-gradient(top, #4b4ba8 0%, #3F3F9A 50%, #303089 93%); /* firefox */
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#4b4ba8), color-stop(50%,#3F3F9A), color-stop(93%,#303089)); /* webkit */
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#4b4ba8', endColorstr='#303089',GradientType=0 ); /* ie */


As you can see, this is pretty complex. See the next section for a quick way to configure that block of CSS.

Another thing that makes working with these gradients more complex than images is a lack of rendering control. When you are making images, you can control the exact RGB values of the colors in the images. When you leave it up to the browser to render the gradient, sometimes they don't agree. I had to do a lot of fiddling with the RGB values to get all the browsers to render the gradient just right. Some of this had to do with the short vertical area I am using in the elements. That limits the number of colors that can be used to make the transition. So, just be careful when you are lining up two elements with gradients that transition from one element to the other like I have here with the blue tab and the blue navigation bar. 

CSS3 Online tools

Remembering all the CSS3 syntax is a little daunting. Luckily, there are some cool online tools to generate some of this stuff. Here are a few I have used.


HTML5 Elements

To make this truly an HTML5 page, I wanted to use the new doctype and some of the new elements. I make use of the <header> and <nav> tags in this page. The <header> tag is just what you would think it is. It surrounds your header. This is all part of the new semantic logic behind HTML5. The <nav> tag surrounds major site navigation. Not just any navigation, major navigation. The HTML5 Doctor has more on that.

To make IE support them, I used a bit of JavaScript I found on the internet along with some CSS to set them to display as block-level elements. The CSS is actually used by older Firefox versions as well.
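The widely circulated trick works roughly like this (a sketch, not the exact code on the page): calling document.createElement for each unknown tag makes older IE willing to style it, and the CSS declares the new tags as block-level for browsers that don't know them:

```html
<!--[if lt IE 9]>
<script>
  // Teach older IE about the new elements so CSS can target them
  document.createElement("header");
  document.createElement("nav");
</script>
<![endif]-->
<style>
  /* Older Firefox (and IE, once the script runs) needs these declared block-level */
  header, nav { display: block; }
</style>
```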

A couple of non-CSS3 techniques

I did a couple of things that are not CSS3 or HTML5 in this page. One is that I put the CSS into the page and not in its own file. With modern broadband, the biggest issue in delivering pages fast is the number of HTTP requests, not (always) the total size of the data. The more HTTP requests required to start rendering your page, the longer it will take. Your optimization goals will determine if this technique is right for you. I currently include all CSS needed to render the header in the page so that the rendering can start without any additional HTTP requests. The CSS for the rest of the page is included via a link tag.
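In outline, the head of the page ends up looking something like this; the file name and the sample rule are placeholders, not the actual dealnews CSS:

```html
<head>
  <style>
    /* CSS needed to render the header, inlined so rendering can
       start without waiting on another HTTP request */
    header { background: #303089; }
  </style>
  <!-- CSS for the rest of the page loads separately and can be cached -->
  <link rel="stylesheet" href="/css/site.css">
</head>
```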

The other technique I used in this page is not new, but it is not widely used. Again, depending on your needs, it may be a possible win when trying to reduce HTTP requests. I used embedded image data URIs in the CSS for the three images I still needed for this page. Basically, you base64 encode the actual image file and put it into the CSS. The benefit is that this page is fully rendered with just one HTTP request. The downside is that the base size of the page (or CSS, if it is external) that has to be downloaded every time is much larger. Probably a good compromise would be to put the CSS into an external file. This would mean just two HTTP requests would be needed and the CSS could be cached. For IE6, I just used conditional HTML to include an actual URL to the background images.
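Generating the embedded value is a one-liner: base64-encode the image file and drop the result into a url() with a data: prefix. A minimal sketch (the file name is a stand-in, and here a tiny text file substitutes for a real PNG so the example is self-contained):

```shell
# Create a stand-in file; in practice this would be a real image
printf 'logo' > logo.png
# base64-encode it, stripping the line wraps some base64 tools add
uri="data:image/png;base64,$(base64 < logo.png | tr -d '\n')"
echo "background-image: url(\"$uri\");"
```

The emitted line can be pasted straight into a CSS rule in place of a normal image URL.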

The data URI technique is a little mysterious in that it does make for a larger page, but can help with render time on first load. This really comes down to what you are optimizing for. If you are optimizing for repeat visitors, it may be that the images are better off being separate requests. If you are optimizing for new visitors, this technique will yield a faster-rendering page. In Chrome and IE9, the onLoad event fired much sooner (in as little as half the time) using this technique than having the images as a separate request. In Firefox, something else is going on. I am not sure what. The onLoad event still fires sooner, but not a whole lot sooner. The DOMContentLoaded event in Firefox, however, fires later with this technique than with the images in a separate request. Firefox was the only browser that showed this pattern.

OOCSS

It is not really HTML5 or CSS3 related, but I do want to give some credit to OOCSS. I used it for much of the layout. It makes laying out elements in different browsers very easy. I am using it in the current site as well as the HTML5 experiment. You should use it. It is awesome.

Conclusion

HTML5 and CSS3 have a lot to offer. And if you have a user base that is fairly modern, you can start using things now. While I may not redo the current dealnews web site using HTML5 and CSS3, our next redesign and any upcoming new designs will definitely include aspects of HTML5 and CSS3 where we can. It can save time and resources when you use these new techniques.

Here is a link to the HTML5 source. I also created a stripped down version of the HTML4 in use on the site now.

Tables for layout

I just read A case for table-based design and was thrilled to know I am not the only one that drowns in div soup from time to time. I do not, for the most part, use tables for layout, but there are some cases where I just can't make a set of divs do my bidding.

The classic example is having a two column layout where the left column LOADS FIRST and is elastic and the right column is a fixed size. The "loads first" is important in a world where the rendering time of pages has become important. Ideally with any page, the most important content would render first for the user. In my case, this fixed column is an ad. As a web developer, I don't care when the ad loads. The ads are a necessary evil in my page layout. I must ensure that they load in an acceptable time frame, but certainly not the first thing on the page.

The specific layout I am talking about is that of the top of dealnews.com. It has a fixed size 300x250 ad on the right of the page and the left side is elastic. I fiddled with divs for hours to get that to act the way I wanted it to act. We use the grid CSS from OOCSS.org. Wonderful piece of CSS that is. But, even with that in hand, I could not get the elements to behave, in all browsers, the way I could with a simple two column table where the left column's width is set to 100% and the right column contains a div of width 300 pixels. It was so easy to pull that off. Maybe CSS3 is going to solve this problem? I don't know. If you have the magic CSS that can do what this page does, let me know.
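The table layout described above can be sketched like this (the markup is illustrative, not the actual dealnews source):

```html
<!-- Two columns: elastic left column appears first in source and renders
     first; right column is held to a fixed 300px width by the inner div -->
<table width="100%" cellspacing="0" cellpadding="0">
  <tr>
    <td width="100%">
      <!-- main content: important, loads and renders first -->
    </td>
    <td valign="top">
      <div style="width: 300px;"><!-- 300x250 ad slot --></div>
    </td>
  </tr>
</table>
```

The 100% width on the left cell makes it absorb all remaining space, while the fixed-width div forces the right cell to exactly the ad's size, in every browser.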

Custom carousel with YUI and OOCSS.org

YUI has a built-in carousel widget. However, it requires fixed widths for all the parts. That does not fit well in a liquid CSS layout. In particular, we have to support people using large fonts at the OS level and large font settings in their browsers. I did not want our carousel to break down in these cases. The built-in YUI widget does. See this normal sized screen shot versus one with large fonts. Now look at dealnews with normal fonts and large fonts.

So, I decided to make my own using YUI and the grid CSS from Object-Oriented CSS. You can see my working example with some explanation here.

mod_substitute is cool. But, be careful with mod_proxy

For our development servers, we have always used output buffering to replace the URLs (dealnews.com) with the URL for that development environment. Where we run into problems is with CSS and JavaScript. If those files contain URLs for images (CSS) or AJAX (JS), the URLs would not get replaced. Our solution has been to parse those files as PHP (on the dev boxes only) and have some output buffering replace the URLs in those files. That has caused various problems over the years and even some confusion for new developers. So, I got to looking for a different solution. Enter mod_substitute for Apache 2.2.
mod_substitute provides a mechanism to perform both regular expression and fixed string substitutions on response bodies. - Apache Documentation
Cool!  I put in the URL mappings and voilà!  All was right in the world.

Fast forward a day.  Another developer is testing some new code and finds that his XML is getting munged.  At first we blamed libxml, because we had just been through an ordeal with a bad combination of a libxml compile option and PHP a while back.  Maybe we missed that box when we fixed it.  We recompiled everything on the dev box, but there was no change.  So I started to think about what had recently changed on the dev boxes.  I turned off mod_substitute.  Dang, that fixed it.  I looked at my substitution strings and everything looked fine.  After cursing and being depressed that such a cool tool was not working, I took a break to let it settle in my mind.

I came back to the computer and decided to try a virgin Apache 2.2 build.  I downloaded the source from the web site instead of building from Gentoo's Portage.  Sure enough, a simple test worked fine.  No munging.  So, I loaded up the dev box Apache configuration into the newly compiled Apache.  Sure enough, munged XML.  ARGH!!

Up until this point, I had configured the substitutions globally and not in a particular virtual host.  So, I moved it all into one virtual host configuration.  Still broken.

A little more background on our config.  We use mod_proxy to emulate some features that we get in production with our F5 BIG-IP load balancers.  So, all requests to a dev box hit a mod_proxy virtual host and are then directed to the appropriate virtual host via a proxied request. 

So, I got the idea to hit the virtual host directly on its port and skip mod_proxy.  Dang, what do you know.  It worked fine.  So, something about the output of the backend request and mod_proxy was not playing nice.  So, hmm.  I got the idea to move the mod_substitute directives into the mod_proxy virtual hosts configuration.  Tested and working fine.  So, basically, this ensures that the substitution filtering is done only after the proxy and all other requests have been processed.  I am no Apache developer, so I have not dug any deeper.  I have a working solution and maybe this blog post will reach someone that can explain it.  As for mod_substitute, here is the way my config looks.

In the VirtualHost that is our global proxy, I have this:

FilterDeclare DN_REPLACE_URLS
FilterProvider DN_REPLACE_URLS SUBSTITUTE resp=Content-Type $text/
FilterProvider DN_REPLACE_URLS SUBSTITUTE resp=Content-Type $/xml
FilterProvider DN_REPLACE_URLS SUBSTITUTE resp=Content-Type $/json
FilterProvider DN_REPLACE_URLS SUBSTITUTE resp=Content-Type $/javascript
FilterChain DN_REPLACE_URLS


Elsewhere, in a file that is local to each dev host, I keep the actual mappings for that particular host:

Substitute "s|http://dealnews.com|http://somedevbox.dealnews.com|in"
Substitute "s|http://dealmac.com|http://somedevbox.dealmac.com|in"
# etc....


I am trying to think of other really cool uses for this.  Any ideas?

Best practices for escaping HTML

I am working on Wordcraft, trying to get the last annoying HTML validation errors worked out.  Things like ampersands in URLs.  In doing so, I am asking myself where the escaping should take place. In the case of Wordcraft, there are several parts to it.
  1. The code that pulls data from the database.  Obviously not the right place.
  2. The code that formats data like dates and such.  It also organizes data from several data sources into one nice tidy array.  Hmm, maybe.
  3. The parts of the code that set up the output data for the templates.
  4. The templates themselves.
Now, I am sure 1 is not the place.  And I really would not want 4 to be the place.  That would make for some ugly templating.  Plus, the templates, IMO, should assume the data is ready to be output.  So, that leaves the code that does the formatting and the code that does the data setup.

Of those two, I guess the place to do this job is in the data setup.  Wordcraft has a $WCDATA array that is available in the scope of the templates.  I suppose anything that goes into that array should be escaped as appropriate.
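As a sketch, escaping at the data-setup layer might look like this; the array keys and the $post source are hypothetical, not Wordcraft's actual structure:

```php
<?php
// Hypothetical sketch: everything placed into $WCDATA is escaped at
// setup time, so the templates can assume it is ready for output.
$WCDATA["title"] = htmlspecialchars($post["title"], ENT_QUOTES);
$WCDATA["url"]   = htmlspecialchars($post["url"], ENT_QUOTES);
?>
```

The templates then echo $WCDATA values directly, with no escaping logic mixed into the markup.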

I largely wrote this blog post as a teddy bear exercise.  But, I am curious.  Where and when do you escape your data for use in HTML documents?

HTML vs. XHTML and validation

There is no shortage of pages on the internet that talk about HTML vs. XHTML.  The vast majority of these (in the first few pages of Google) seem to favor XHTML.  I don't really have an agenda, so I thought I would post my thoughts on the topic.

I have stated on this blog that I use HTML 4.01 Transitional.  I do so because it is easiest for me.  Some people argue that XHTML is easier because there are set rules, and if you violate those rules, the documents will not render.  Is that a good thing?  Perhaps my time in the late 90's has made my mind work differently than newcomers to the World Wide Web.

The browser wars were ugly.  And I mean literally ugly.  If you wanted to do anything fancy, it required lots of images or compromise.  I learned early on that it was ok that the spacing in IE on my PC was larger than IE on the Mac.  The fonts were all different sizes from browser to browser and OS to OS.  I learned that graceful fallback was part of the web.  Even now, dealnews.com looks "adequate" in IE 6.  I could make it look perfect.  But, the declining traffic from IE6 does not merit my time to fix the errors in IE 6.

So, when I start thinking about HTML vs. XHTML, I want the more flexible of the two.  I find syntax like nowrap='nowrap' very annoying in XHTML.  Especially since I can't say nowrap='yeswrap' and have it mean anything.  nowrap=1 I could handle.  But, no, it has to be nowrap='nowrap'.  Geez.

Ok, ok, this is turning into an XHTML hate post.  I don't want to do that.  There are some things about XHTML that I do like.  I like the self-closing tags.  My OCD (which I have brought up before) has never liked having an open tag without a closing tag.  So, the <br /> format is appealing to me in that sense.  I love that XHTML elements should always be lower case.  I hate upper case HTML.  It just reads funny.  Like camel case function names.  Some folks on our content team used to use Adobe PageMaker to write up deals.  They would copy and paste the HTML from there into our CMS.  The output would be pretty ugly.

So, I like parts of both.  What is interesting to me is the fact that the "big sites" on the internet don't seem concerned with document types or validation.

Site                  | DocType                 | Validates
Google                | None                    | No
Yahoo                 | HTML 4.01 Strict        | No
Live.com (Microsoft)  | XHTML 1.0 Transitional  | No
MSN.com               | XHTML 1.0 Strict        | Yes
Facebook              | XHTML 1.0 Strict        | No
eBay                  | HTML 4.01 Transitional  | No
YouTube               | HTML 4.01 Transitional  | No
Amazon.com            | None                    | No
Wikipedia             | XHTML 1.0 Strict        | Yes
MySpace               | XHTML 1.0 Transitional  | No

So, of the 10 most popular sites on the internet (according to Compete.com), two don't include a document type in their front page at all.  Only two of the sites validate according to the W3C.  MSN and Wikipedia both validated on their front page with XHTML 1.0 Strict.  However, neither is sending a Content-Type of application/xhtml+xml.  According to this page, that is a bad thing.  And the search results page for XHTML on MSN.com did not validate.  Kudos to Wikipedia.  Their page on XHTML does validate.  Interestingly, they switch to XHTML 1.0 Transitional for that page.

So, is the internet broken?  No.  The most important validation is that of your users.  Can they use the site?  Does the site look right in their browser?  Most sites have much bigger navigation and content issues than they do document structure.

So, my idea of validation is this: does it render the same (or damn near) in the browsers that cover 90% of internet users?  If so, then your page validates.  The only way to check that (short of SkyNet, most likely) is the human eye.

Open Source Web Design

So, my wife told me that my site design was boring.  Yeah, she was right.  I am no designer.  I just don't have that gene.  But, during my work on Wordcraft, I came across some cool places to find designs that are released under Open Source licenses.
  • Open Designs - This is arguably the prettiest of the three. The search, however, is painfully slow because all results return on one page.  I guess if you can wait, this is a plus, as browsing is easier.  Also, you can pick multiple colors and choose by license.  They only list XHTML templates (at least as search options).  That could be a turn-off if you like HTML 4 like me.
  • Open Web Design - The site itself could use a design overhaul.  But, the content is good.  The search lets you choose primary and secondary color, a unique feature among these sites.  Thumbnails are a bit small though.
  • Open Source Web Design - Their search is not as powerful as the others, but it does return very fast.  The thumbnails are a nice size.
You will find the same content on all three sometimes.  But, it comes down to browsing and searching.

I found my new design at one of those.  Not sure which, I looked at a lot of them.  I did not use the template's HTML exactly, as I like HTML 4.01 and wanted a different sidebar than the original author.  But, the design is the hard part.  So, thanks for Deep Red.

Google Chrome and privacy

So, Google Chrome is out. If you don't know, it's Google's new browser. I downloaded it on my Windows XP machine and tried it out. I found this curious thing in the options.

Google Chrome Spying on you?

So, I thought, I will click "Learn more" to see what they are watching. I get this.

Uh OH! 404!

So, I unchecked the box. Let's hope the premature launch is the reason there is no more information out there.

UPDATE: The page comes up now and says:
Information that's sent to Google includes crash reports and statistics on how often you use Google Chrome features. When you choose to accept a suggested query or URL in the address bar, the text you typed and the corresponding suggestion is sent to Google. Google Chrome doesn't send other personal information, such as name, email address, or Google Account information.

So, if you use their suggestions, they know it.  And it tracks what features you use.  Hmm, I think I will disable it.