EW Resource

Newsfeeds

There are a huge number of newsfeeds on the Net bringing up-to-date information and content on a wide range of subjects.

Here are just a few relating to web development.



A List Apart: The Full Feed
  • This week's sponsor: Asana 

    Asana’s new iOS 8 app now available! Use Asana to organize team tasks and conversations on web and mobile. Sign up free.

  • Getting Started With CSS Audits 

    This week I wrote about conducting CSS audits to organize your code, keeping it clean and performant—resulting in faster sites that are easier to maintain. Now that you understand the hows and whys of auditing, let’s take a look at some more resources that will help you maintain your CSS architecture. Here are some I’ve recently discovered and find helpful.

    Organizing CSS

    • Harry Roberts has put together a fantastic resource for thinking about how to write large CSS systems, CSS Guidelines.
    • Interested in making the style guide part of the audit easier? This GitHub repo includes a whole bunch of info on different generators.

    Help from task runners

    Do you like task runners such as Grunt or Gulp? Addy Osmani’s tutorial walks through using all kinds of task runners to find unused CSS selectors: Spring Cleaning Unused CSS Selectors.

    Accessibility

    Are you interested in auditing for accessibility as well (hopefully you are!)? There are tools for that, too. This article helps you audit your site for accessibility—it’s a great outline of exactly how to do it.

    Performance

    • SitePoint takes a look at trimming down overall page weight, which can speed up your site considerably.
    • Google Chrome’s dev tools include a built-in audit tool, which suggests ways you could improve performance. A great article on HTML5 Rocks goes through this tool in depth.

    With these tools, you’ll be better prepared to clean up your CSS, optimize your site, and make the entire experience better for users. When talking about auditing code, many people focus on performance, which is a great benefit for all involved, but don’t forget that maintainability and speedier development time come along with a faster site.

  • Client Education and Post-Launch Success 

    What our clients do with their websites is just as important as the websites themselves. We may pride ourselves on building a great product, but it’s ultimately up to the client to see it succeed or fail. Even the best website can become neglected, underused, or messy without a little education and training.

    Too often, my company used to create amazing tools for clients and then send them out into the world without enough guidance. We’d watch our sites slowly become stale, and we’d see our strategic content overwritten with fluffy filler.

    It was no one’s fault but our own.

    As passionate and knowledgeable web enthusiasts, it’s literally our job to help our clients succeed in any way we can, even after launch. Every project is an opportunity to educate clients and build a mutually beneficial learning experience.

    Meeting in the middle

    If we want our clients to use our products to their full potential, we have to meet them in the middle. We have to balance our technical expertise with their existing processes and skills.

    At my company, Brolik, we learned this the hard way.

    We had a financial client whose main revenue came from selling in-depth PDF reports. Customers would select a report, generating an email to an employee who would manually create and email an unprotected PDF to the customer. The whole process would take about two days.

    To make the process faster and more secure, we built an advanced, password-protected portal where their customers could purchase and access only the reports they’d paid for. The PDFs themselves were generated on the fly from the content management system. They were protected even after they were downloaded and only viewable with a unique username and password generated with the PDF.

    The system itself was technically advanced and thoroughly solved our client’s needs. When the job was done, we patted ourselves on the back, added the project to our portfolio, and moved on to the next thing.

    The client, however, was generally confused by the system we’d built. They didn’t quite know how to explain it to their customers. Processes had been automated to the point where they seemed untrustworthy. After about a month, they asked us if we’d revert to their previous system.

    We had created too large a process change for our client. We upended a large part of their business model without really considering whether they were ready for a new approach.

    From that experience, we learned not only to create online tools that complement our clients’ existing business processes, but also that we can be instrumental in helping clients embrace new processes. We now see it as part of our job to educate our clients and explain the technical and strategic thought behind all of our decisions.

    Leading by example

    We put this lesson to work on a more recent project, developing a site-wide content tagging system where images, video, and other media could be displayed in different ways based on how they were tagged.

    We could have left our clients to figure out this new system on their own, but we wanted to help them adopt it. So we pre-populated content and tags to demonstrate functionality. We walked through the tagging process with as many stakeholders as we could. We even created a PDF guide to explain the how and why behind the new system.

    In this case, our approach worked, and the client’s cumbersome media management time was significantly reduced. The difference between the outcome of the two projects was simply education and support.

    Education and support can, and usually does, take the form of setting an example. Some clients may not fully understand the benefits of a content strategy, for instance, so you have to show them results. Create relevant and well-written sample blog posts for them, and show how they can drive website traffic. Share articles and case studies that relate to the new tools you’re building for them. Show them that you’re excited, because excitement is contagious. If you’re lucky and smart enough to follow Geoff Dimasi’s advice and work with clients who align with your values, this process will be automatic, because you’ll already be invested in their success.

    We should be teaching our clients to use their website, app, content management system, or social media correctly and wisely. The more adept they are at putting our products to use, the better our products perform.

    Dealing with budgets

    Client education means new deliverables, which have to be prepared by those directly involved in the project. Developers, designers, project managers, and other team members are responsible for creating the PDFs, training workshops, interactive guides, and other educational material.

    That means more organizing, writing, designing, planning, and coding—all things we normally bill for, but now we have to bill in the name of client education.

    Take this into account at the beginning of a project. The amount of education a client needs can be a consideration for taking a job at all, but it should at least factor into pricing. Hours spent helping your client use your product are billable time that you shouldn’t give away for free.

    At Brolik, we’ve helped a range of clients—from those who have “just accepted that the Web isn’t a fad” (that’s an actual quote from 2013), to businesses that have a team of in-house developers. We consider this information and price accordingly, because it directly affects the success of the entire product and partnership. If they need a lot of education but they’re not willing to pay for it, it may be smart to pass on the job.

    Most clients actually understand this. Those who are interested in improving their business are interested in improving themselves as well. This is the foundation for a truly fulfilling and mutually beneficial client relationship. Seek out these relationships.

    It’s sometimes challenging to justify a “client education” line item in your proposals, however. If you can’t, try to at least work some wiggle room into your price. More specifically, try adding a 10 percent contingency for “Support and Training” or “Onboarding.”

    If you can’t justify a price increase at all, but you still want the job, consider factoring in a few client education hours and their opportunity cost as part of your company’s overall marketing budget. Teaching your client to use your product is your responsibility as a digital business.

    This never ends (hopefully)

    What’s better than arming your clients with knowledge and tools, pumping them up, and then sending them out into the world to succeed? Venturing out with them!

    At Brolik, we’ve started signing clients onto digital strategy retainers once their websites are completed. Digital strategy is an overarching term that covers anything and everything to grow a business online. Specifically for us, it includes audience research, content creation, SEO, search and display advertising, website maintenance, social media, and all kinds of analysis and reporting.

    This allows us to continue to educate (and learn) on an ongoing basis. It keeps things interesting—and as a bonus, we usually upsell more work.

    We’ve found that by fostering collaboration post-launch, we not only help our clients use our product more effectively and grow their business, but we also alleviate a lot of the panic that kicks in right before a site goes live. They know we’ll still be there to fix, tweak, analyze, and even experiment.

    This ongoing digital strategy concept was so natural for our business that it’s surprising it took us so long to implement it. After 10 years making websites, we’ve only offered digital strategy for the last two, and it’s already driving 50 percent of our revenue.

    It pays to be along for the ride

    The extra effort required for client education is worth it. By giving our clients the tools, knowledge, and passion they need to be successful with what we’ve built for them, we help them improve their business.

    Anything that drives their success ultimately drives ours. When the tools we build work well for our clients, they return to us for more work. When their websites perform well, our portfolios look better and live longer. Overall, when their business improves, it reflects well on us.

    A fulfilling and mutually beneficial client relationship is good for the client and good for future business. It’s an area where we can follow our passion and do what’s right, because we get back as much as we put in.

  • CSS Audits: Taking Stock of Your Code 

    Most people aren’t excited at the prospect of auditing code, but it’s become one of my favorite types of projects. A CSS audit is really detective work. You start with a site’s code and dig deeper: you look at how many stylesheets are being called, how that affects site performance, and how the CSS itself is written. Your goal is to look for ways to improve on what’s there—to sleuth out fixes to make your codebase better and your site faster.

    I’ll share tips on how to approach your own audit, the advantages of taking a full inventory of your CSS, and various tools to help you along the way.

    Benefits of an audit

    An audit helps you to organize your code and eliminate repetition. You don’t write any code during an audit; you simply take stock of what’s there and document recommendations to pass off to a client or discuss with your team. These recommendations ensure new code won’t repeat past mistakes. Let’s take a closer look at other benefits:

    • Reduce file sizes. A complete overview of the CSS lets you take the time to find ways to refactor the code: to clean it up and perhaps cut down on the number of properties. You can also hunt for any odds and ends, such as outdated versions of browser prefixes, that aren’t in use anymore. Getting rid of unused or unnecessary code trims down the file people have to download when they visit your site.
    • Ensure consistency with guidelines. As you audit, create documentation regarding your styles and what’s happening with the site or application. You could make a formal style guide, or you could just write out recommendations to note how different pieces of your code are used. Whatever form your documentation takes, it’ll save anyone coming onto your team a lot of time and trouble, as they can easily familiarize themselves with your site’s CSS and architecture.
    • Standardize your code. Code organization—which certainly attracts differing opinions—is essential to keeping your codebase more maintainable into the future. For instance, if you choose to alphabetize your properties, you can readily spot duplicates, because you’d end up with two sets of margin properties right next to each other. Or you may prefer to group properties according to their function: positioning, box model-related, etc. Having a system in place helps you guard against repetition.
    • Increase performance. I’ve saved the best for last. Auditing code, along with combining and zipping up stylesheets, leads to markedly faster site speeds. For example, Harry Roberts, a front-end architect in the UK who conducts regular audits, told me about a site he recently worked on:
      I rebuilt Fasetto.com with a view to improving its performance; it went from 27 separate stylesheets for a single-page site (mainly UI toolkits like Bootstrap, etc.) down to just one stylesheet (which is actually minified and inlined, to save on the HTTP request), which weighs in at just 5.4 kB post-gzip.

      This is a huge win, especially for people on slower connections—but everyone gains when sites load quickly.

    How to audit: take inventory

    Now that audits have won you over, how do you go about doing one? I like to start with a few tools that provide an overview of the site’s current codebase. You may approach your own audit differently, based on your site’s problem areas or your philosophy of how you write code (whether OOCSS or BEM). The important thing is to keep in mind what will be most useful to you and your own site.

    Once I’ve diagnosed my code through tools, I examine it line by line.

    Tools

    The first tool I reach for is Nicole Sullivan’s invaluable Type-o-matic, an add-on for Firebug that generates a JSON report of all the type styles in use across a site. As an added bonus, Type-o-matic creates a visual report as it runs. By looking at both reports, you know at a glance when to combine type styles that are too similar, eliminating unnecessary styles. I’ve found that the detail of the JSON report makes it easy to see how to create a more reusable type system.

    In addition to Type-o-matic, I run CSS Lint, an extremely flexible tool that flags a wide range of potential problems, from missing fallback colors to places where shorthand properties would improve performance. To use CSS Lint, click the arrow next to the word “Lint” and choose the options you want. I like to check for repeated properties or too many font sizes, so I always run Maintainability & Duplication along with Performance. CSS Lint then returns recommendations for changes; some relate to known issues that will break in older browsers, and others are best practices (as the tool sees them). CSS Lint isn’t perfect: if you run it with every option checked, you are bound to see things in the final report that you may not agree with, like warnings for IE6. That said, it’s a quick way to get a handle on the overall state of your CSS.
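
    If you’d rather run the same checks from the command line, CSS Lint is also distributed as a Node package. The sketch below assumes the package name and rule IDs are still current; csslint --help lists the options available in your version.

    # install the CSS Lint command-line tool (assumes Node and npm are present)
    $ npm install -g csslint
    # warn only on the rules I care about: repeated properties and font-size sprawl
    $ csslint --warnings=duplicate-properties,font-sizes styles/styles.css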

    Next, I search through the CSS to review how often I repeat common properties, like float or margin. (If you’re comfortable with the command line, grep does the job: run something like grep "float" styles/styles.scss to find all instances of “float”.) Note any properties you may cut or bundle into other modules. Trimming your properties is a balancing act: to reduce the number of repeated properties, you may need to add more classes to your HTML, so that’s something you’ll need to gauge according to your project.
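
    A couple of grep variations speed up this inventory; the stylesheet paths here are just examples:

    # count how many lines in one stylesheet mention "float"
    $ grep -c "float" styles/styles.scss
    # search every file under styles/, showing file names and line numbers
    $ grep -rn "float" styles/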

    I like to do this step by hand, as it forces me to walk through the CSS on my own, which in turn helps me better understand what’s going on. But if you’re short on time, or if you’re not yet comfortable with the command line, tools can smooth the way:

    • CSS Dig is an automated script that runs through all of your code to help you see it visually. A similar tool is StyleStats, where you type in a URL to survey its CSS.
    • CSS Colorguard is a brand-new tool that runs on Node and outputs a report based on your colors, so you know if any colors are too alike. This helps limit your color palette, making it easier to maintain in the future.
    • Dust-Me Selectors is an add-on for Firebug in Firefox that finds unused selectors.

    Line by line

    After you run your tools, take the time to read through the CSS; it’s worth it to get a real sense of what’s happening. For instance, comments in the code—that tools miss—may explain why some quirk persists.

    One big thing I double-check is the depth of applicability, or how far down an attribute string applies. Does your CSS rely on a lot of specificity? Are you seeing long strings of selectors, either in the style files themselves or in the output from a preprocessor? A high depth of applicability means your code will require a very specific HTML structure for styles to work. If you can scale it back, you’ll get more reusable code and speedier performance.

    Review and recommend

    Now to the fun part. Once you have all your data, you can figure out how to improve the CSS and make some recommendations.

    The recommendation document doesn’t have to be heavily designed or formatted, but it should be easy to read. Splitting it into two parts is a good idea. The first consists of your review, listing the things you’ve found. If you refer to the results of CSS Lint or Type-o-matic, be sure to include either screenshots or the JSON report itself as an attachment. The second half contains your actionable recommendations to improve the code. This can be as simple as a list, with items like “Consolidate type styles that are closely related and create mixins for use sitewide.”

    As you analyze all the information you’ve collected, look for areas where you can:

    • Tighten code. Do you have four different sets of styles for a call-out box, several similar link styles, or way too many exceptions to your standard grid? These are great candidates for repeatable modular styles. To make consolidation even easier, you could use a preprocessor like Sass to turn them into mixins or extend, allowing styles to be applied when you call them on a class. (Just check that the outputted code is sensible too.)
    • Keep code consistent. A good audit makes sure the code adheres to its own philosophy. If your CSS is written based on a particular approach, such as BEM or OOCSS, is it consistent? Or do styles veer from time to time, and are there acceptable deviations? Make sure you document these exceptions, so others on your team are aware.

    If you’re working with a client, it’s also important to explain the approaches you favor, so they understand where you’re coming from—and what things you may consider as issues with the code. For example, I prefer OOCSS, so I tend to push for more modularity and reusability; a few classes stacked up (if you aren’t using a preprocessor) don’t bother me. Making sure your client understands the context of your work is particularly crucial when you’re not on the implementation team.

    Hand off to the client

    You did it! Once you’ve written your recommendations (and taken some time to think on them and ensure they’re solid), you can hand them off to the client—be prepared for any questions they may have. If this is for your team, congratulations: get cracking on your list.

    But wait—an audit has even more rewards. Now that you’ve got this prime documentation, take it a step further: use it as the springboard to talk about how to maintain your CSS going forward. If the same issues kept popping up throughout your code, document how you solved them, so everyone knows how to proceed in the future when creating new features or sections. You may turn this document into a style guide. Another thing to consider is how often to revisit your audit to ensure your codebase stays squeaky clean. The timing will vary by team and project, but set a realistic, regular schedule—this is a key part of the auditing process.

    Conducting an audit is a vital first step to keeping your CSS lean and mean. It also helps your documentation stay up to date, allowing your team to have a good handle on how to move forward with new features. When your code is structured well, it’s more performant—and everyone benefits. So find the time, grab your best sleuthing hat, and get started.

  • This week's sponsor: Stack 

    Stack is a simple task management system for devs and designers. Fully customizable and flexible to suit your workflow.

  • Rian van der Merwe on A View from a Different Valley: Work Life Imbalance 

    I’m old enough to remember when laptops entered the workforce. It was an amazing thing. At first only the select few could be seen walking around with their giant black IBMs and silver Dells. It took a few years, but eventually every new job came with the question we all loved to hear: “desktop or laptop?”

    I was so happy when I got my first laptop at work. “Man,” I thought, “now I can work anywhere, any time!” It was fun for a while, until I realized that now I could work anywhere, any time. Slowly our office started to reflect this newfound freedom. Work looked less and less like work, and more and more like home. Home offices became a big thing, and it’s now almost impossible to distinguish between home offices of famous designers and the workspaces (I don’t think we even call them “offices” any more) of most startups.

    Work and life: does it blend?

    There is a blending of work and life that woos us with its promise of barbecues at work and daytime team celebrations at movie theaters, but we’re paying for it in another way: a complete eradication of the line between home life and work life. “Love what you do,” we say. “Get a job you don’t want to take a vacation from,” we say—and we sit back and watch the retweets stream in.

    I don’t like it.

    I don’t like it for two reasons.

    It makes us worse at our jobs

    There’s plenty of research that shows when employers place strict limits on messaging, employees are happier and enjoy their work more. And productivity isn’t affected negatively at all. Clive Thompson’s article about this for Mother Jones is a great overview of what we know about the handful of experiments that have been done to research the effects of messaging limits.

    But that’s not even the whole story. It’s not just that constantly thinking about work makes us more stressed, it’s also that our fear of doing nothing—of not being productive every second of the day—is hurting us as well (we’ll talk about side projects another time). There’s plenty of research about this as well, but let’s stick with Jessica Stillman’s Bored at Work? Good. It’s a good overview of what scientists have found on the topic of giving your mind time to rest. In short, being idle tells your brain that it’s in need of something different, which stimulates creative thinking. So it’s something to be sought out and cherished—not something to be shunned.

    Sometimes when things clear away and you’re not watching anything and you’re in your car and you start going, oh no, here it comes, that I’m alone, and it starts to visit on you, just this sadness. And that’s why we text and drive. People are willing to risk taking a life and ruining their own because they don’t want to be alone for a second because it’s so hard.

    Louis C. K.

    It teaches that boundaries are bad

    The second problem I have with our constant pursuit of the productivity train is that it teaches us that setting boundaries to spend time with our friends and family equals laziness. I got some raised eyebrows at work recently when I declined an invitation to watch a World Cup game in a conference room. But here’s the thing. If I watch the World Cup game with a bunch of people at work today, guess what I have to do tonight? I have to work to catch up, instead of spending time with my family. And that is not OK with me.

    I have a weird rule about this. Work has me—completely—between the hours of 8:30 a.m. and 6:00 p.m. It has 100 percent of my attention. But outside of those hours I consider it part of being a sane and good human to give my kids a bath, chat to my wife, read, and reflect on the day that’s past and the one that’s coming—without the pressure of having to be online all the time. I swear it makes me a better (and more productive) employee, but I can’t shake the feeling that I shouldn’t be writing this down because you’re just going to think I’m lazy.

    But hey, I’m going to face my fear and just come right out and say it: I try not to work nights. There. That felt good.

    It doesn’t always work out, and of course there are times when a need is pressing and I take care of it at night. I don’t have a problem with that. But I don’t sit and do email for hours every night. See, the time I spend with people is what gives my work meaning. I do what I do for them—for the people in my life, the people I know, and the people I don’t. If we never spend time away from our work, how can we understand the world and the people we make things for?

    Of course, the remaking of the contemporary tech office into a mixed work-cum-leisure space is not actually meant to promote leisure. Instead, the work/leisure mixing that takes place in the office mirrors what happens across digital, social and professional spaces. Work has seeped into our leisure hours, making the two tough to distinguish.

    Kate Losse, Tech aesthetics

    Permission to veg out

    So I guess this column is my attempt to give you permission to do nothing every once in a while. Not to be lazy, or not do your job. But to take the time you need to get better at what you do, and enjoy it a lot more.

    As this column evolves, I think this is what I’ll be talking about a lot. How to make the hours we have at work count more. How to think of what we do not as the tech business but the people business. How to give ourselves permission to experience the world around us and get inspiration for our work from that. How to be flâneurs: wandering around with eyes wide open to inspiration.

  • Awkward Cousins 

    As an industry, we’re historically terrible at drawing lines between things. We try to segment devices based on screen size, but that doesn’t take into account hardware functionality, form factor, and usage context, for starters. The laptop I’m writing this on has the same resolution as a 1080p television. They’d be lumped into the same screen-size–dependent groups, but they are two totally different device classes, so how do we determine what goes together?

    That’s a simple example, but it points to a larger issue. We so desperately want to draw lines between things, but there are often too many variables to make those lines clean.

    Why, then, do we draw such strict lines between our roles on projects? What does the area of overlap between a designer and front-end developer look like? A front- and back-end developer? A designer and back-end developer? The old thinking of defined roles is certainly loosening up, but we still have a long way to go.

    The chasm between roles that is most concerning is the one between web designers/developers and native application designers/developers. We often choose a camp early on and stick to it, which is a mindset that may have been fueled by the false “native vs. web” battle a few years ago. It was positioned as an either-or decision, and hybrid approaches were looked down upon.

    The two camps of creators are drifting farther and farther apart, even as the products are getting closer and closer. John Gruber best described the overlap that users see:

    When I’m using Tweetbot, for example, much of my time in the app is spent reading web pages rendered in a web browser. Surely that’s true of mobile Facebook users, as well. What should that count as, “app” or “web”?

    I publish a website, but tens of thousands of my most loyal readers consume it using RSS apps. What should they count as, “app” or “web”?

    The people using the things we build don’t see the divide as harshly as we do, if at all. More importantly, the development environments are becoming more similar, as well. Swift, Apple’s brand new programming language for iOS and Mac development, has a strong resemblance to the languages we know and love on the web, and that’s no accident. One of Apple’s top targets for Swift, if not the top target, is the web development community. It’s a massive, passionate, and talented pool of developers who, largely, have not done iOS or Mac work—yet.

    As someone who spans the divide regularly, it’s sad to watch these two communities keep at arm’s length like awkward cousins at a family reunion. We have so much in common—interests, skills, core values, and a ton of technological ancestry. The difference between the things we build is shrinking in the minds of our shared users, and the ways we build those things are aligning. I dream of the day when we get over our poorly drawn lines and become the big, happy community I know we can be.

    At the very least, please start reading each other’s blogs.

  • Watch: A New Documentary About Jeffrey Zeldman 
    You keep it by giving it away.
    Jeffrey Zeldman

    It’s a philosophy that’s always guided us at A List Apart: that we all learn more—and are more successful—when we share what we know with anyone who wants to listen. And it comes straight from our publisher, Jeffrey Zeldman.

    For 20 years, he’s been sharing everything he can with us, the people who make websites—from advice on table layouts in the ’90s to Designing With Web Standards in the 2000s to educating the next generation of designers today.

    Our friends at Lynda.com just released a documentary highlighting Jeffrey’s two decades of designing, organizing, and most of all sharing on the web. You should watch it.

    Jeffrey Zeldman: 20 years of Web Design and Community from lynda.com.

  • Git: The Safety Net for Your Projects 

    I remember January 10, 2010, rather well: it was the day we lost a project’s complete history. We were using Subversion as our version control system, which kept the project’s history in a central repository on a server. And we were backing up this server on a regular basis—at least, we thought we were. The server broke down, and then the backup failed. Our project wasn’t completely lost, but all the historic versions were gone.

    Shortly after the server broke down, we switched to Git. I had always seen version control as torturous; it was too complex and not useful enough for me to see its value, though I used it as a matter of duty. But once we’d spent some time on the new system, I began to understand just how helpful Git could be. Since then, it has saved my neck in many situations.

    During the course of this article, I’ll walk through how Git can help you avoid mistakes—and how to recover if they’ve already happened.

    Every teammate is a backup

    Since Git is a distributed version control system, every member of our team that has a project cloned (or “checked out,” if you’re coming from Subversion) automatically has a backup on his or her disk. This backup contains the latest version of the project, as well as its complete history.

    This means that should a developer’s local machine or even our central server ever break down again (and the backup not work for any reason), we’re up and running again in minutes: any local repository from a teammate’s disk is all we need to get a fully functional replacement.
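
    To make that concrete, here is a minimal sketch of such a recovery, with hypothetical paths: a bare clone of a teammate’s repository can serve as the new central one.

    # create a bare repository (history only, no working directory) from a teammate's copy
    $ git clone --bare ~/projects/website website.git

    The resulting website.git folder contains the project’s complete history and can be moved to a new server for the whole team to clone from.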

    Branches keep separate things separate

    When my more technical colleagues told me about how “cool” branching in Git was, I wasn’t bursting with joy right away. First, I have to admit that I didn’t really understand the advantages of branching. And second, coming from Subversion, I vividly remembered it being a complex and error-prone procedure. With those bad memories, I was anxious about working with branches and tried to avoid them whenever I could.

    It took me quite a while to understand that branching and merging work completely differently in Git than in most other systems—especially regarding its ease of use! So if you learned the concept of branches from another version control system (like Subversion), I recommend you forget your prior knowledge and start fresh. Let’s start by understanding why branches are so important in the first place.

    Why branches are essential

    Back in the days when I didn’t use branches, working on a new feature was a mess. Essentially, I had the choice between two equally bad workflows:

    (a) I already knew that creating small, granular commits with only a few changes was a good version control habit. However, if I did this while developing a new feature, every commit would mingle my half-done feature with the main code base until I was done. It wasn’t very pleasant for my teammates to have my unfinished feature introduce bugs into the project.

    (b) To avoid getting my work-in-progress mixed up with other topics (from colleagues or myself), I’d work on a feature in my separate space. I would create a copy of the project folder that I could work with quietly—and only commit my feature once it was complete. But committing my changes only at the end produced a single, giant, bloated commit that contained all the changes. Neither my teammates nor I could understand what exactly had happened in this commit when looking at it later.

    I slowly understood that I had to make myself familiar with branches if I wanted to improve my coding.

    Working in contexts

    Any project has multiple contexts where work happens; each feature, bug fix, experiment, or alternative of your product is actually a context of its own. It can be seen as its own “topic,” clearly separated from other topics.

    If you don’t separate these topics from each other with branching, you will inevitably increase the risk of problems. Mixing different topics in the same context:

    • makes it hard to keep an overview—and with a lot of topics, it becomes almost impossible;
    • makes it hard to undo something that proved to contain a bug, because it’s already mingled with so much other stuff;
    • doesn’t encourage people to experiment and try things out, because they’ll have a hard time getting experimental code out of the repository once it’s mixed with stable code.

    Using branches gave me the confidence that I couldn’t mess up. In case things went wrong, I could always go back, undo, start fresh, or switch contexts.

    Branching basics

    Branching in Git actually only involves a handful of commands. Let’s look at a basic workflow to get you started.

    To create a new branch based on your current state, all you have to do is pick a name and execute a single command on your command line. We’ll assume we want to start working on a new version of our contact form, and therefore create a new branch called “contact-form”:

    $ git branch contact-form
    

    Using the git branch command without a name specified will list all of the branches we currently have (and the “-v” flag provides us with a little more data than usual):

    $ git branch -v
    
    Git screen showing the current branches of contact-form.

    You might notice the little asterisk on the branch named “master.” This means it’s the currently active branch. So, before we start working on our contact form, we need to make this our active context:

    $ git checkout contact-form
    

    Git has now made this branch our current working context. (In Git lingo, this is called the “HEAD branch”). All the changes and every commit that we make from now on will only affect this single context—other contexts will remain untouched. If we want to switch the context to a different branch, we’ll simply use the git checkout command again.
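
    As an aside, Git offers a shorthand that creates a new branch and switches to it in a single step:

    # create the branch and make it the current working context at once
    $ git checkout -b contact-form

    This is equivalent to running git branch contact-form followed by git checkout contact-form.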

    In case we want to integrate changes from one branch into another, we can “merge” them into the current working context. Imagine we’ve worked on our “contact-form” feature for a while, and now want to integrate these changes into our “master” branch. All we have to do is switch back to this branch and call git merge:

    $ git checkout master
    $ git merge contact-form
    

    Using branches

    I would strongly suggest that you use branches extensively in your day-to-day workflow. Branches are one of the core concepts that Git was built around. They are extremely cheap and easy to create, and simple to manage—and there are plenty of resources out there if you’re ready to learn more about using them.

    Undoing things

    There’s one thing that I’ve learned as a programmer over the years: mistakes happen, no matter how experienced people are. You can’t avoid them, but you can have tools at hand that help you recover from them.

    One of Git’s greatest features is that you can undo almost anything. This gives me the confidence to try out things without fear—because, so far, I haven’t managed to really break something beyond recovery.

    Amending the last commit

    Even if you craft your commits very carefully, it’s all too easy to forget to add a change or mistype the message. With the --amend flag of the git commit command, Git allows you to change the very last commit, and it’s a very simple fix to execute. For example, if you forgot to add a certain change and also made a typo in the commit subject, you can easily correct this:

    $ git add some/changed/files
    $ git commit --amend -m "The message, this time without typos"
    

    There’s only one thing you should keep in mind: you should never amend a commit that has already been pushed to a remote repository. Respecting this rule, the “amend” option is a great little helper to fix the last commit.

    (For more detail about the amend option, I recommend Nick Quaranto’s excellent walkthrough.)

    Undoing local changes

    Changes that haven’t been committed are called “local.” All the modifications that are currently present in your working directory are “local” uncommitted changes.

    Discarding these changes can make sense when your current work is… well… worse than what you had before. With Git, you can easily undo local changes and start over with the last committed version of your project.

    If it’s only a single file that you want to restore, you can use the git checkout command:

    $ git checkout -- file/to/restore
    

    Don’t confuse this use of the checkout command with switching branches (see above). If you use it with two dashes and (separated with a space!) the path to a file, it will discard the uncommitted changes in a given file.

    On a bad day, however, you might even want to discard all your local changes and restore the complete project:

    $ git reset --hard HEAD
    

    This will replace all of the files in your working directory with the last committed revision. Just as with using the checkout command above, this will discard the local changes.

    Be careful with these operations: since local changes haven’t been checked into the repository, there is no way to get them back once they are discarded!
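
    If you suspect you might want those changes back someday, a gentler alternative is git stash, which tucks your uncommitted modifications away instead of destroying them:

    # set local changes aside and restore a clean working directory
    $ git stash
    # later: reapply the most recent stash and drop it from the stash list
    $ git stash pop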

    Undoing committed changes

    Of course, undoing things is not limited to local changes. You can also undo certain commits when necessary—for example, if you’ve introduced a bug.

    Basically, there are two main commands to undo a commit:

    (a) git reset

    Illustration showing how the `git reset` command works.

    The git reset command really turns back time. You tell it which version you want to return to and it restores exactly this state—undoing all the changes that happened after this point in time. Just provide it with the hash ID of the commit you want to return to:

    $ git reset --hard 2be18d9
    

    The --hard option is the easiest and cleanest approach, but it also wipes away all local changes that you might still have in your working directory. So, before doing this, make sure there aren’t any local changes you’ve set your heart on.

    (b) git revert

    Illustration showing how the `git revert` command works.

    The git revert command is used in a different scenario. Imagine you have a commit that you don’t want anymore—but the commits that came afterwards still make sense to you. In that case, you wouldn’t use the git reset command because it would undo all those later commits, too!

    The revert command, however, only reverts the effects of a certain commit. It doesn’t remove any commits, like git reset does. Instead, it even creates a new commit; this new commit introduces changes that are just the opposite of the commit to be reverted. For example, if you deleted a certain line of code, revert will create a new commit that introduces exactly this line, again.

    To use it, simply provide it with the hash ID of the commit you want reverted:

    $ git revert 2be18d9
    

    Finding bugs

    When it comes to finding bugs, I must admit that I’ve wasted quite some time stumbling in the dark. I often knew that it used to work a couple of days ago—but I had no idea where exactly things went wrong. It was only when I found out about git bisect that I could speed up this process a bit. With the bisect command, Git provides a tool that helps you find the commit that introduced a problem.

    Imagine the following situation: we know that our current version (tagged “2.0”) is broken. We also know that a couple of commits ago (our version “1.9”), everything was fine. The problem must have occurred somewhere in between.

    Illustration showing the commits between working and broken versions.

    This is already enough information to start our bug hunt with git bisect:

    $ git bisect start
    $ git bisect bad
    $ git bisect good v1.9
    

    After starting the process, we told Git that our current commit contains the bug and therefore is “bad.” We then also informed Git which previous commit is definitely working (as a parameter to git bisect good).

    Git then restores our project in the middle between the known good and known bad conditions:

    Illustration showing that the bisect begins between the versions.

    We now test this version (for example, by running unit tests, building the app, deploying it to a test system, etc.) to find out if this state works—or already contains the bug. As soon as we know, we tell Git again—either with git bisect bad or git bisect good.

    Let’s assume we said that this commit was still “bad.” This effectively means that the bug must have been introduced even earlier—and Git will again narrow down the commits in question:

    Illustration showing how additional bisects will narrow the commits further.

    This way, you’ll find out very quickly where exactly the problem occurred. Once you know this, you need to call git bisect reset to finish your bug hunt and restore the project’s original state.
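
    As a side note, if the bug can be detected by a script (one that exits with 0 when a revision is good and non-zero when it’s bad), git bisect can drive the whole hunt automatically. The test script named here is hypothetical:

    $ git bisect start
    $ git bisect bad
    $ git bisect good v1.9
    # let Git check out and test each candidate revision on its own
    $ git bisect run ./run-tests.sh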

    A tool that can save your neck

    I must confess that my first encounter with Git wasn’t love at first sight. In the beginning, it felt just like my other experiences with version control: tedious and unhelpful. But with time, the practice became intuitive, and the tool gained my trust and confidence.

    After all, mistakes happen, no matter how much experience we have or how hard we try to avoid them. What separates the pro from the beginner is preparation: having a system in place that you can trust in case of problems. It helps you stay on top of things, especially in complex projects. And, ultimately, it helps you become a better professional.

  • Running Code Reviews with Confidence 

    Growing up, I learned there were two kinds of reviews I could seek out from my parents. One parent gave reviews in the form of a shower of praise. The other parent, the one with a degree from the Royal College of Art, would put me through a design crit. Today the reviews I seek are for my code, not my horse drawings, but it continues to be a process I both dread and crave.

    In this article, I’ll describe my battle-tested process for conducting code reviews, highlighting the questions you should ask during the review process as well as the necessary version control commands to download and review someone’s work. I’ll assume your team uses Git to store its code, but the process works much the same if you’re using any other source control system.

    Completing a peer review is time-consuming. In the last project where I introduced mandatory peer reviews, the senior developer and I estimated that it doubled the time to complete each ticket. The reviews introduced more context-switching for the developers, and were a source of increased frustration when it came to keeping the branches up to date while waiting for a code review.

    The benefits, however, were huge. Coders gained a greater understanding of the whole project through their reviews, reducing silos and making onboarding easier for new people. Senior developers had better opportunities to ask why decisions were being made in the codebase that could potentially affect future work. And by adopting an ongoing peer review process, we reduced the amount of time needed for human quality assurance testing at the end of each sprint.

    Let’s walk through the process. Our first step is to figure out exactly what we’re looking for.

    Determine the purpose of the proposed change

    Our code review should always begin in a ticketing system, such as Jira or GitHub. It doesn’t matter if the proposed change is a new feature, a bug fix, a security fix, or a typo: every change should start with a description of why the change is necessary, and what the desired outcome will be once the change has been applied. This allows us to accurately assess when the proposed change is complete.

    The ticketing system is where you’ll track the discussion about the changes that need to be made after reviewing the proposed work. From the ticketing system, you’ll determine which branch contains the proposed code. Let’s pretend the ticket we’re reviewing today is 61524—it was created to fix a broken link in our website. It could just as easily be a refactoring or a new feature, but I’ve chosen a bug fix for the example. No matter the nature of the proposed change, having each ticket correspond to only one branch in the repository will make it easier to review, and close, tickets.

    Set up your local environment and ensure that you can reproduce what is currently the live site—complete with the broken link that needs fixing. When you apply the new code locally, you want to catch any regressions or problems it might introduce. You can only do this if you know, for sure, the difference between what is old and what is new.

    Review the proposed changes

    At this point you’re ready to dive into the code. I’m going to assume you’re working with Git repositories, on a branch-per-issue setup, and that the proposed change is part of a remote team repository. Working directly from the command line is a good universal approach, and allows me to create copy-paste instructions for teams regardless of platform.

    To begin, update your local list of branches.

    git fetch
    

    Then list all available branches.

    git branch -a
    

    A list of branches will be displayed to your terminal window. It may appear something like this:

    * master
    remotes/origin/master
    remotes/origin/HEAD -> origin/master
    remotes/origin/61524-broken-link
    

    The * denotes the name of the branch you are currently viewing (or have “checked out”). Lines beginning with remotes/origin are references to branches we’ve downloaded. We are going to work with a new, local copy of branch 61524-broken-link.

    When you clone your project, you’ll have a connection to the remote repository as a whole, but you won’t have a read-write relationship with each of the individual branches in the remote repository. You’ll make an explicit connection as you switch to the branch. This means if you need to run the command git push to upload your changes, Git will know which remote repository you want to publish your changes to.

    git checkout --track origin/61524-broken-link
    

    Ta-da! You now have your own copy of the branch for ticket 61524, which is connected (“tracked”) to the origin copy in the remote repository. You can now begin your review!

    First, let’s take a look at the commit history for this branch with the log command.

    git log master..
    

    Sample output:

    Author: emmajane 
    Date: Mon Jun 30 17:23:09 2014 -0400
    
    Link to resources page was incorrectly spelled. Fixed.
    
    Resolves #61524.
    

    This gives you the full log message of all the commits that are in the branch 61524-broken-link, but are not also in the master branch. Skim through the messages to get a sense of what’s happening.

    Next, take a brief gander through the commit itself using the diff command. This command shows the difference between two snapshots in your repository. You want to compare the code on your checked-out branch to the branch you’ll be merging “to”—which conventionally is the master branch.

    git diff master
    

    How to read patch files

    When you run the command to output the difference, the information will be presented as a patch file. Patch files are ugly to read. You’re looking for lines beginning with + or -. These are lines that have been added or removed, respectively. Scroll through the changes using the up and down arrows, and press q to quit when you’ve finished reviewing. If you need an even more concise comparison of what’s happened in the patch, consider modifying the diff command to list the changed files, and then look at the changed files one at a time:

    git diff master --name-only
    git diff master <filename>
    

    Let’s take a look at the format of a patch file.

    diff --git a/about.html b/about.html
    index a3aa100..a660181 100644
    --- a/about.html
    +++ b/about.html
    @@ -48,5 +48,5 @@
     (2004-05)
    
    -A full list of <a href="emmajane.net/events">public 
    +A full list of <a href="http://emmajane.net/events">public 
     presentations and workshops</a> Emma has given is available

    I tend to skim past the metadata when reading patches and just focus on the lines that start with - or +. This means I start reading at the line immediately following @@. There are a few lines of context provided leading up to the changes. These lines are indented by one space each. The changed lines of code are then displayed with a preceding - (line removed) or + (line added).

    Going beyond the command line

    Using a Git repository browser, such as gitk, allows you to get a slightly better visual summary of the information we’ve looked at to date. The version of Git that Apple ships with does not include gitk—I used Homebrew to re-install Git and get this utility. Any repository browser will suffice, though, and there are many GUI clients available on the Git website.

    gitk
    

    When you run the command gitk, a graphical tool will launch from the command line. An example of the output is given in the following screenshot. Click on each of the commits to get more information about it. Many ticket systems will also allow you to look at the changes in a merge proposal side-by-side, so if you’re finding this cumbersome, click around in your ticketing system to find the comparison tools they might have—I know for sure GitHub offers this feature.

    Screenshot of the gitk repository browser.
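
    If you prefer to stay in the terminal, git log can draw a similar, if simpler, graph of the commit history:

    # one line per commit, with an ASCII graph of all branches
    git log --graph --oneline --all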

    Now that you’ve had a good look at the code, jot down your answers to the following questions:

    1. Does the code comply with your project’s identified coding standards?
    2. Does the code limit itself to the scope identified in the ticket?
    3. Does the code follow industry best practices in the most efficient way possible?
    4. Has the code been implemented in the best possible way according to all of your internal specifications? It’s important to separate your preferences and stylistic differences from actual problems with the code.

    Apply the proposed changes

    Now is the time to start up your testing environment and view the proposed change in context. How does it look? Does your solution match what the coder thinks they’ve built? If it doesn’t look right, do you need to clear the cache, or perhaps rebuild the Sass output to update the CSS for the project?

    Now is the time to also test the code against whatever test suite you use.

    1. Does the code introduce any regressions?
    2. Does the new code perform as well as the old code? Does it still fall within your project’s performance budget for download and page rendering times?
    3. Are the words all spelled correctly, and do they follow any brand-specific guidelines you have?

    Depending on the context for this particular code change, there may be other obvious questions you need to address as part of your code review.

    Do your best to create the most comprehensive list of everything you can find wrong (and right) with the code. It’s annoying to get dribbles of feedback from someone as part of the review process, so we’ll try to avoid “just one more thing” wherever we can.

    Prepare your feedback

    Let’s assume you’ve now got a big juicy list of feedback. Maybe you have no feedback, but I doubt it. If you’ve made it this far in the article, it’s because you love to comb through code as much as I do. Let your freak flag fly and let’s get your review structured in a usable manner for your teammates.

    For all the notes you’ve assembled to date, sort them into the following categories:

    1. The code is broken. It doesn’t compile, introduces a regression, it doesn’t pass the testing suite, or in some way actually fails demonstrably. These are problems which absolutely must be fixed.
    2. The code does not follow best practices. You have some conventions, the web industry has some guidelines. These fixes are pretty important to make, but they may have some nuances which the developer might not be aware of.
    3. The code isn’t how you would have written it. You’re a developer with battle-tested opinions, and you know you’re right, you just haven’t had the chance to update the Wikipedia page yet to prove it.

    Submit your evaluation

    Based on this new categorization, you are ready to engage in passive-aggressive coding. If the problem is clearly a typo and falls into one of the first two categories, go ahead and fix it. Obvious typos don’t really need to go back to the original author, do they? Sure, your teammate will be a little embarrassed, but they’ll appreciate you having saved them a bit of time, and you’ll increase the efficiency of the team by reducing the number of round trips the code needs to take between the developer and the reviewer.

    If the change you are itching to make falls into the third category: stop. Do not touch the code. Instead, go back to your colleague and get them to describe their approach. Asking “why” might lead to a really interesting conversation about the merits of the approach taken. It may also reveal limitations of the approach to the original developer. By starting the conversation, you open yourself to the possibility that just maybe your way of doing things isn’t the only viable solution.

    If you needed to make any changes to the code, they should be absolutely tiny and minor. You should not be making substantive edits in a peer review process. Make the tiny edits, and then add the changes to your local repository as follows:

    git add .
    git commit -m "[#61524] Correcting <list problem> identified in peer review."
    

    You can keep the message brief, as your changes should be minor. At this point you should push the reviewed code back up to the server for the original developer to double-check and review. Assuming you’ve set up the branch as a tracking branch, it should just be a matter of running the command as follows:

    git push
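
    If the branch isn’t yet tracking the central repository, one common way to establish that while pushing (standard Git, not specific to this workflow) is:

    git push -u origin 61524-broken-link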
    

    Update the issue in your ticketing system as is appropriate for your review. Perhaps the code needs more work, or perhaps it was good as written and it is now time to close the issue.

    Repeat the steps in this section until the proposed change is complete and ready to be merged into the main branch.

    Merge the approved change into the trunk

    Up to this point you’ve been comparing a ticket branch to the master branch in the repository. This main branch is referred to as the “trunk” of your project. (It’s a tree thing, not an elephant thing.) The final step in the review process will be to merge the ticket branch into the trunk, and clean up the corresponding ticket branches.

    Begin by updating your master branch to ensure you can publish your changes after the merge.

    git checkout master
    git pull origin master
    

    Take a deep breath, and merge your ticket branch back into the main repository. As written, the following command will not create a new commit in your repository history. The commits will simply shuffle into line on the master branch, making git log --graph appear as though a separate branch never existed. If you would like to maintain the record of a past branch, simply add the parameter --no-ff to the merge command, which will make it clear, via the graph history and a new commit message, that you have merged a branch at this point. Check with your team to see what’s preferred.

    git merge 61524-broken-link
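
    If your team prefers the explicit record described above, the same merge with the flag applied looks like this:

    git merge --no-ff 61524-broken-link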
    

    The merge will either succeed or fail. If there are no merge errors, you are ready to share the revised master branch by uploading it to the central repository.

    git push
    

    If there are merge errors, the original coders are often better equipped to figure out how to fix them, so you may need to ask them to resolve the conflicts for you.
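
    In that case, you can back the half-finished merge out and return your working copy to its pre-merge state before handing the conflict over; this is a standard Git command rather than anything specific to this workflow:

    git merge --abort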

    Once the new commits have been successfully integrated into the master branch, you can delete the old copies of the ticket branches, both from your local repository and from the central repository. It’s just basic housekeeping at this point.

    git branch -d 61524-broken-link
    git push origin --delete 61524-broken-link
    

    Conclusion

    This is the process that has worked for the teams I’ve been a part of. Without a peer review process, it can be difficult to address problems in a codebase without blame. With it, the code becomes much more collaborative; when a mistake gets in, it’s because we both missed it. And when a mistake is found before it’s committed, we both breathe a sigh of relief that it was found when it was.

    Regardless of whether you’re using Git or another source control system, the peer review process can help your team. Peer-reviewed code might take more time to develop, but it contains fewer mistakes and has a stronger, more diverse team supporting it. And, yes, I’ve been known to learn the habits of my reviewers and choose the most appropriate review style for my work, just like I did as a kid.

  • Rachel Andrew on the Business of Web Dev: Getting to the Action 

    Freelancers and self-employed business owners can choose from a huge number of conferences to attend in any given year. There are hundreds of industry podcasts, a constant stream of published books, and a never-ending supply of sites all giving advice. It is very easy to spend a lot of valuable time and money just attending, watching, reading, listening and hoping that somehow all of this good advice will take root and make our business a success.

    However, all the good advice in the world won’t help you if you don’t act on it. While you might leave that expensive conference feeling great, did your attendance create a lasting change to your business? I was thinking about this subject while listening to episode 14 of the Working Out podcast, hosted by Ashley Baxter and Paddy Donnelly. They were talking about following through, and how it is possible to “nod along” to good advice but never do anything with it.

    If you have ever been sent to a conference by an employer, you may have been expected to report back. You might even have been asked to present to your team on the takeaway points from the event. As freelancers and business owners, we don’t have anyone making us consolidate our thoughts in that way. It turns out that the way I work gives me a fairly good method of knowing which things are bringing me value.

    Tracking actionable advice

    I’m a fan of the Getting Things Done technique, and live by my to-do lists. I maintain a Someday/Maybe list in OmniFocus into which I add items that I want to do or at least investigate, but that aren’t a project yet.

    If a podcast is worth keeping on my playlist, there will be items entered linking back to certain episodes. Conference takeaways might be a link to a site with information that I want to read. It might be an idea for an article to write, or instructions on something very practical such as setting up an analytics dashboard to better understand some data. The first indicator of a valuable conference is how many items I add during or just after the event.

    Having a big list of things to do is all well and good, but it’s only one half of the story. The real value comes when I do the things on that list, and can see whether they were useful to my business. Once again, my GTD lists can be mined for that information.

    When tickets go on sale for that conference again, do I have most of those to-do items still sat in Someday/Maybe? Is that because, while they sounded like good ideas, they weren’t all that relevant? Or, have I written a number of blog posts or had several articles published on themes that I started considering off the back of that conference? Did I create that dashboard, and find it useful every day? Did that speaker I was introduced to go on to become a friend or mentor, or someone I’ve exchanged emails with to clarify a topic I’ve been thinking about?

    By looking back over my lists and completed items, I can start to make decisions about the real value to my business and life of the things I attend, read, and listen to. I’m able to justify the ticket price, time, and travel costs by making that assessment. I can feel confident that I’m not spending time and money just to feel as if I’m moving forward, yet gaining nothing tangible to show for it.

    A final thought on value

    As entrepreneurs, we have to make sure we are spending our time and money on things that will give us the best return. All that said, it is important to make time in our schedules for those things that we just enjoy, and in particular those things that do motivate and inspire us. I don’t think that every book you read or event you attend needs to result in a to-do list of actionable items.

    What we need as business owners, and as people, is balance. We need to be able to see that the things we are doing are moving our businesses forward, while also making time to be inspired and refreshed to get that actionable work done.

    Footnotes

    1. Have any favorite hacks for getting maximum value from conferences, workshops, and books? Tell us in the comments!
  • 10 Years Ago in ALA: Pocket Sized Design 

    The web doesn’t do “age” especially well. Any blog post or design article more than a few years old gets a raised eyebrow—heck, most people I meet haven’t read John Allsopp’s “A Dao of Web Design” or Jeffrey Zeldman’s “To Hell With Bad Browsers,” both as relevant to the web today as when they were first written. Meanwhile, I’ve got books on my shelves older than I am; most of my favorite films came out before I was born; and my iTunes library is riddled with music that’s decades, if not centuries, old.

    (No, I don’t get invited to many parties. Why do you ask oh I get it)

    So! It’s probably easy to look at “Pocket-Sized Design,” a lovely article by Jorunn Newth and Elika Etemad that just turned 10 years old, and immediately notice where it’s beginning to show its age. Writing at a time when few sites were standards-compliant, and fewer still were mobile-friendly, Newth and Etemad urged us to think about life beyond the desktop. Re-reading it now, it’s easy to chuckle at the points that feel like they’re from another age: there’s plenty of talk of screens that are “only 120-pixels wide”; of inputs driven by stylus, rather than touch; and of using the now-basically-defunct handheld media type for your CSS. Seems a bit quaint, right?

    And yet.

    Looking past a few of the details, it’s remarkable how well the article’s aged. Modern users may (or may not) manually “turn off in-line image loading,” but they may choose to use a mobile browser that dramatically compresses your images. We may scoff at the idea of someone browsing with a stylus, but handheld video game consoles are impossibly popular when it comes to browsing the web. And while there’s plenty of excitement in our industry for the latest versions of iOS and Android, running on the latest hardware, most of the web’s growth is happening on cheaper hardware, over slower networks (PDF), and via slim data plans—so yes, 10 years on, it’s still true that “downloading to the device is likely to be [expensive], the processors are slow, and the memory is limited.”

    In the face of all of that, what I love about Newth and Etemad’s article is just how sensible their solutions are. Rather than suggesting slimmed-down mobile sites, or investing in some device detection library, they take a decidedly standards-focused approach:

    Linearizing the page into one column works best when the underlying document structure has been designed for it. Structuring the document according to this logic ensures that the page organization makes sense not only in Opera for handhelds, but also in non-CSS browsers on both small devices and the desktop, in voice browsers, and in terminal-window browsers like Lynx.

    In other words, by thinking about the needs of the small screen first, you can layer on more complexity from there. And if you’re hearing shades of mobile first and progressive enhancement here, you’d be right: they’re treating their markup—their content—as a foundation, and gently layering styles atop it to make it accessible to more devices, more places than ever before.

    So, no: we aren’t using @media handheld or display: none for our small screen-friendly styles—but I don’t think that’s really the point of Newth and Etemad’s essay. Instead, they’re putting forward a process, a framework for designing beyond the desktop. What they’re arguing is for a truly device-agnostic approach to designing for the web, one that’s as relevant today as it was a decade ago.

    Plus ça change, plus c’est la même chose.

  • Dependence Day: The Power and Peril of Third-Party Solutions 

    “Why don’t we just use this plugin?” That’s a question I started hearing a lot in the heady days of the 2000s, when open-source CMSes were becoming really popular. We asked it optimistically, full of hope about the myriad solutions only a download away. As the years passed, we gained trustworthy libraries and powerful communities, but the graveyard of crufty code and abandoned services grew deep. Many solutions were easy to install, but difficult to debug. Some providers were eager to sell, but loath to support.

    Years later, we’re still asking that same question—only now we’re less optimistic and even more dependent, and I’m scared to engage with anyone smart enough to build something I can’t. The emerging challenge for today’s dev shop is knowing how to take control of third-party relationships—and when to avoid them. I’ll show you my approach, which is to ask a different set of questions entirely.

    A web of third parties

    I should start with a broad definition of what it is to be third party: If it’s a person and I don’t compensate them for the bulk of their workload, they’re third party. If it’s a company or service and I don’t control it, it’s third party. If it’s code and my team doesn’t grasp every line of it, it’s third party.

    The third-party landscape is rapidly expanding. GitHub has grown to almost 7 million users, and the WordPress plugin repo is approaching 1 billion downloads. Many of these solutions are easy for clients and competitors to implement; meanwhile, I’m still in the lab debugging my custom code. The idea of selling original work seems oddly…old-fashioned.

    Yet with so many third-party options to choose from, there are more chances than ever to veer off-course.

    What could go wrong?

    At a meeting a couple of years ago, I argued against using an external service to power a search widget on a client project. “We should do things ourselves,” I said. Not long after this, on the very same project, I argued in favor of using a third party to consolidate RSS feeds into a single document. “Why do all this work ourselves,” I said, “when this problem has already been solved?” My inconsistency was obvious to everyone. Being dogmatic about not using a third party is no better than flippantly jumping in with one, and I had managed to do both at once!

    But in one case, I believed the third party was worth the risk. In the other, it wasn’t. I just didn’t know how to communicate those thoughts to my team.

    I needed, in the parlance of our times, a decision-making framework. To that end, I’ve been maintaining a collection of points to think through at various stages of engagement with third parties. I’ll tour through these ideas using the search widget and the RSS digest as examples.

    The difference between a request and a goal

    This point often reveals false assumptions about what a client or stakeholder wants. In the case of the search widget, we began researching a service that our client specifically requested. Fitted with ajax navigation, full-text searching, and automated crawls to index content, it seemed like a lot to live up to. But when we asked our clients what exactly they were trying to do, we were surprised: they were entirely taken by the typeahead functionality; the other features were of very little perceived value.

    In the case of the RSS “smusher,” we already had an in-house tool that took an array of feed URLs and looped through them in order, outputting x posts per feed in some bespoke format. Were they too good for our beloved multi-feed widget? Actually, the client had a distinctly different and worthwhile vision: they wanted x results from their array of sites in total, ordered by publication date rather than grouped by site. I conceded.

    It might seem like an obvious first step, but I have seen projects set off in the wrong direction because the end goal is unknown. In both our examples now, we’re clear about that and we’re ready to evaluate solutions.

    To dev or to download

    Before deciding to use a third party, I find that I first need to examine my own organization, often in four particular ways: strengths, weaknesses, betterment, and mission.

    Strengths and weaknesses

    The search task aligned well with our strengths because we had good front-end developers and were skilled at extending our CMS. So when asked to make a typeahead search, we felt comfortable betting on ourselves. Had we done it before? Not exactly, but we could think through it.

    At the same time, backend infrastructure was a weakness for our team. We’d had a lot of turnover among our sysadmins, and at times it felt like we weren’t equipped to hire that sort of talent. As I was thinking through how we might build a feed-smusher of our own, I felt like I was tempting a weak underbelly. Maybe we’d have to set up a cron job to poll the desired URLs, grab feed content, and store it on our servers. Not rocket science, but cron tasks in particular were an albatross for us.

    Betterment of the team

    When we set out to achieve a goal for a client, it’s more than us doing work: it’s an opportunity for our team to better themselves by learning new skills. The best opportunities for this are the ones that present challenging but attainable tasks, which create incremental rewards. Some researchers cite this effect as a factor in gaming addiction. I’ve felt this myself when learning new things on a project, and those are some of my favorite work moments ever. Teams appreciate this and there is an organizational cost in missing a chance to pay them to learn. The typeahead search project looked like it could be a perfect opportunity to boost our skill level.

    Organizational mission

    If a new project aligns well with our mission, we’re going to resell it many times. It’s likely that we’ll want our in-house dev team to iterate on it, tailoring it to our needs. Indeed, we’ll have the budget to do so if we’re selling it a lot. No one had asked us for a feed-smusher before, so it didn’t seem reasonable to dedicate an R&D budget to it. In contrast, several other clients were interested in more powerful site search, so it looked like it would be time well spent.

    We’ve now clarified our end goals and we’ve looked at how these projects align with our team. Based on that, we’re doing the search widget ourselves, and we’re outsourcing the feed-smusher. Now let’s look more closely at what happens next for both cases.

    Evaluating the unknown

    The frustrating thing about working with third parties is that the most important decisions take place when we have the least information. But there are some things we can determine before committing. Familiarity, vitality, extensibility, branding, and Service Level Agreements (SLAs) are all observable from afar.

    Familiarity: is there a provider we already work with?

    Although we’re going to increase the number of third-party dependencies, we’ll try to avoid increasing the number of third-party relationships.

    Working with a known vendor has several potential benefits: they may give us volume pricing. Markup and style are likely to be consistent between solutions. And we just know them better than we’d know a new service.

    Vitality: will this service stick around?

    The worst thing we could do is get behind a service, only to have it shut down next month. A service with high vitality will likely (and rightfully) brag about enterprise clients by name. If it’s open source, it will have a passionate community of contributors. On the other hand, it could be advertising a shutdown. More often, it’s somewhere in the middle. Noting how often the service is updated is a good starting point in determining vitality.

    Extensibility: can this service adapt as our needs change?

    Not only do we have to evaluate the core service, we have to see how extensible it is by digging into its API. If a service is extensible, it’s more likely to fit for the long haul.

    APIs can also present new opportunities. For example, imagine selecting an email-marketing provider with an API that exposes campaign data. This might allow us to build a dashboard for campaign performance in our CMS—a unique value-add for our clients, and a chance to keep our in-house developers invested and excited about the service.

    Branding: is theirs strong, or can you use your own?

    White-labeling is the practice of reselling a service with your branding instead of that of the original provider. For some companies, this might make good sense for marketing. I tend to dislike white-labeling. Our clients trust us to make choices, and we should be proud to display what those choices are. Either way, you want to ensure you’re comfortable with the brand you’ll be using.

    SLAs: what are you getting, beyond uptime?

    For client-side products, browser support is a factor: every external dependency represents another layer that could abandon older browsers before we’re ready. There’s also accessibility. Does this new third party support users with accessibility needs to the degree that we require? Perhaps most important of all is support. Can we purchase a priority support plan that offers fast and in-depth help?

    In the case of our feed-smusher service, there was no solution that ran the table. The most popular solution actually had a shutdown notice! There were a couple of smaller providers available, but we hadn’t worked with either before. Browser support and accessibility were moot since we’d be parsing the data and displaying it ourselves. The uptime concern was also diminished because we’d be sure to cache the results locally. Anyway, with viable candidates in hand, we can move on to more productive concerns than dithering between two similar solutions.

    Relationship maintenance

    If someone else is going to do the heavy lifting, I want to assume as much of the remaining burden as possible. Piloting, data collection, documentation, and in-house support are all valuable opportunities to buttress this new relationship.

    As exciting as this new relationship is, we don’t want to go dashing out of the gates just yet. Instead, we’ll pilot the service with a few selected clients and quarantine it there before unleashing it any further. Gather suggestions from team members to determine good candidates for piloting, aiming for a mix of edge cases and the norm.

    If the third party happens to collect data of any kind, we should also have an automated way to import a copy of it—not just as a backup, but also as a cached version we can serve to minimize latency. If we are serving a popular dependency from a CDN, we want to send a local version if that call should fail.
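
    As a minimal sketch of that CDN fallback, the classic pattern is to test for the library’s global object and write out a local copy if the CDN call failed; the library name and file paths here are placeholders, not a real service:

    <script src="http://cdn.example.com/widget.min.js"></script>
    <script>window.Widget || document.write('<script src="/js/widget.min.js"><\/script>');</script>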

    If our team doesn’t have a well-traveled directory of provider relationships, the backstory can get lost. Let a few months pass, throw in some personnel turnover, and we might forget why we even use a service, or why we opted for a particular package. Everyone on our team should know where and how to learn about our third-party relationships.

    We don’t need every team member to be an expert on the service, yet we don’t want to wait for a third-party support staff to respond to simple questions. Therefore, we should elect an in-house subject-matter expert. It doesn’t have to be a developer. We just need somebody tasked with monitoring the service at regular intervals for API changes, shutdown notices, or new features. They should be able to train new employees and route more complex support requests to the third party.

    In our RSS feed example, we knew we’d read their output into our database. We documented this relationship in our team’s most active bulletin, our CRM software. And we made managing external dependencies a primary part of one team member’s job.

    DIY: a third party waiting to happen?

    Stop me if you’ve heard this one before: a prideful developer assures the team that they can do something themselves. It’s a complex project. They make something and the company comes to rely on it. Time goes by and the in-house product is doing fine, though there is a maintenance burden. Eventually, the developer leaves the company. Their old product needs maintenance, no one knows what to do, and since it’s totally custom, there is no such thing as a community for it.

    Once you decide to build something in-house, how can you prevent that work from devolving into a resented, alien dependency? 

    • Consider pair-programming. What better way to ensure that multiple people understand a product, than to have multiple people build it?
    • “Job-switch Tuesdays.” When feasible, we have developers switch roles for an entire day. Literally, in our ticketing system, it’s as though one person is another. It’s a way to force cross-training without doubling the hours needed for a task.
    • Hold code reviews before new code is pushed. This might feel slightly intrusive at first, but that passes. If it’s not readable, it’s not deployable. If you have project managers with a technical bent, empower them to ask questions about the code, too.
    • Bring moldy code into the light by documenting it with phpDoc, JSDoc, or similar.
    • Beware the big. Create hourly estimates in Fibonacci increments. As a project gets bigger, so does its level of uncertainty. The Fibonacci steps are biased against under-budgeting, and also provide a cue to opt out of projects that are too difficult to estimate. In that case, it’s likely better to toe-in with a third party instead of blazing into the unknown by yourself.

    All of these considerations apply to our earlier example, the typeahead search widget. Most germane is the provision to “beware the big.” When I say “big,” I mean that relative to what usually works for a given team. In this case, it was a deliverable that felt very familiar in size and scope: we were being asked to extend an open-source CMS. If instead we had been asked to make a CMS, alarms would have gone off.

    Look before you leap, and after you land

    It’s not that third parties are bad per se. It’s just that the modern web team strikes me as a strange place: not only do we stand on the shoulders of giants, we do so without getting to know them first—and we hoist our organizations and clients up there, too.

    Granted, there are many things you shouldn’t do yourself, and it’s possible to hurt your company by trying to do them—not-invented-here syndrome (NIH) is a problem, not a goal. But when teams err too far in the other direction, developers become disenfranchised, components start to look like spare parts, and clients pay for solutions that aren’t quite right. Using a third party versus staying in-house is a big decision, and we need to think hard before we make it. Use my line of questions, or come up with one that fits your team better. After all, you’re your own best dependency.

  • One Step Ahead: Improving Performance with Prebrowsing 

    We all want our websites to be fast. We optimize images, create CSS sprites, use CDNs, cache aggressively, and gzip and minify static content. We use every trick in the book.

    But we can still do more. If we want faster outcomes, we have to think differently. What if, instead of leaving our users to stare at a spinning wheel, waiting for content to be delivered, we could predict where they wanted to go next? What if we could have that content ready for them before they even ask for it?

    We tend to see the web as a reactive model, where every action causes a reaction. Users click, then we take them to a new page. They click again, and we open another page. But we can do better. We can be proactive with prebrowsing.

    The three big techniques

    Steve Souders coined the term prebrowsing (from predictive browsing) in one of his articles late last year. Prebrowsing is all about anticipating where users want to go and preparing the content ahead of time. It’s a big step toward a faster and less visible internet.

    Browsers can analyze patterns to predict where users are going to go next, and start DNS resolution and TCP handshakes as soon as users hover over links. But to get the most out of these improvements, we can enable prebrowsing on our web pages, with three techniques at our disposal:

    • DNS prefetching
    • Resource prefetching
    • Prerendering

    Now let’s dive into each of these separately.

    DNS prefetching

    Whenever we know our users are likely to request a resource from a different domain than our site, we can use DNS prefetching to warm the machinery for opening the new URL. The browser can pre-resolve the DNS for the new domain ahead of time, saving several milliseconds when the user actually requests it. We are anticipating, and preparing for an action.

    Modern browsers are very good at parsing our pages, looking ahead to pre-resolve all necessary domains. Chrome goes as far as keeping an internal list of all related domains every time a user visits a site, pre-resolving them when the user returns (you can see this list by navigating to chrome://dns/ in your Chrome browser). However, sometimes access to new URLs may be hidden behind redirects or embedded in JavaScript, and that’s our opportunity to help the browser.

    Let’s say we are downloading a set of resources from the domain cdn.example.com using a JavaScript call after a user clicks a button. Normally, the browser would have to resolve the DNS at the time of the click, but we can speed up the process by including a dns-prefetch directive in the head section of our page:

    <link rel="dns-prefetch" href="http://cdn.example.com">
    

    Doing this informs the browser of the existence of the new domain, and it will combine this hint with its own pre-resolution algorithm to start a DNS resolution as soon as possible. The entire process will be faster for the user, since we are shaving off the time for DNS resolution from the operation. (Note that browsers do not guarantee that DNS resolution will occur ahead of time; they simply use our hint as a signal for their own internal pre-resolution algorithm.)

    But exactly how much faster will pre-resolving the DNS make things? In your Chrome browser, open chrome://histograms/DNS and search for DNS.PrefetchResolution. You’ll see a table like this:

    Histogram for DNS.PrefetchResolution

    This histogram shows my personal distribution of latencies for DNS prefetch requests. On my computer, for 335 samples, the average time is 88 milliseconds, with a median of approximately 60 milliseconds. Shaving 88 milliseconds off every request our website makes to an external domain? That’s something to celebrate.

    But what happens if the user never clicks the button to access the cdn.example.com domain? Aren’t we pre-resolving a domain in vain? We are, but luckily for us, DNS prefetching is a very low-cost operation; the browser will need to send only a few hundred bytes over the network, so the risk incurred by a preemptive DNS lookup is very low. That being said, don’t go overboard when using this feature; prefetch only domains that you are confident the user will access, and let the browser handle the rest.

    Look for situations that might be good candidates to introduce DNS prefetching on your site:

    • Resources on different domains hidden behind 301 redirects
    • Resources accessed from JavaScript code
    • Resources for analytics and social sharing (which usually come from different domains)
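
    For instance, a page that loads analytics and social-sharing scripts from other domains might add hints like these (the domains are placeholders):

    <link rel="dns-prefetch" href="http://analytics.example.com">
    <link rel="dns-prefetch" href="http://social.example.net">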

    DNS prefetching is currently supported on IE11, Chrome, Chrome Mobile, Safari, Firefox, and Firefox Mobile, which makes this feature widespread among current browsers. Browsers that don’t currently support DNS prefetching will simply ignore the hint, and DNS resolution will happen in a regular fashion.

    Resource prefetching

    We can go a little bit further and predict that our users will open a specific page in our own site. If we know some of the critical resources used by this page, we can instruct the browser to prefetch them ahead of time:

    <link rel="prefetch" href="http://cdn.example.com/library.js">
    

    The browser will use this instruction to prefetch the indicated resources and store them in the local cache. This way, as soon as the resources are actually needed, the browser will have them ready to serve.

    Unlike DNS prefetching, resource prefetching is a more expensive operation; be mindful of how and when to use it. Prefetching resources can speed up our websites in ways we would never get by merely prefetching new domains—but if we abuse it, our users will pay for the unused overhead.

    Let’s take a look at the average response size of some of the most popular resources on a web page, courtesy of the HTTP Archive:

    Chart of average response size of web page resources

    On average, prefetching a script file (as we are doing in the example above) will cause 16kB to be transmitted over the network (not including the size of the request itself). This means we save the time needed to download those 16kB, plus the server response time, which is amazing—provided the user later accesses the file. If the user never accesses the file, we actually make the entire workflow slower by introducing an unnecessary delay.

    If you decide to use this technique, prefetch only the most important resources, and make sure they are cacheable by the browser. Images, CSS, JavaScript, and font files are usually good candidates for prefetching, but HTML responses are not since they aren’t cacheable.

    Here are some situations where, due to the likelihood of the user visiting a specific page, you can prefetch resources ahead of time:

    • On a login page, since users are usually redirected to a welcome or dashboard page after logging in
    • On each page of a linear questionnaire or survey workflow, where users are visiting subsequent pages in a specific order
    • On a multi-step animation, since you know ahead of time which images are needed on subsequent scenes
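
    For the first of those scenarios, a login page might hint at the assets its post-login dashboard will need (the paths here are hypothetical):

    <link rel="prefetch" href="http://example.com/css/dashboard.css">
    <link rel="prefetch" href="http://example.com/js/dashboard.js">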

    Resource prefetching is currently supported on IE11, Chrome, Chrome Mobile, Firefox, and Firefox Mobile. (To determine browser compatibility, you can run a quick browser test on prebrowsing.com.)

    Prerendering

    What about going even further and asking for an entire page? Let’s say we are absolutely sure that our users are going to visit the about.html page in our site. We can give the browser a hint:

    <link rel="prerender" href="http://example.com/about.html">
    

    This time the browser will download and render the page in the background ahead of time, and have it ready for the user as soon as they ask for it. The transition from the current page to the prerendered one would be instantaneous.

    Needless to say, prerendering is the most risky and costly of these three techniques. Misusing it can cause major bandwidth waste—especially harmful for users on mobile devices. To illustrate this, let’s take a look at this chart, also courtesy of the HTTP Archive:

    Graph of total transfer size and total requests to render a web page

    In June of this year, the average number of requests to render a web page was 96, with a total size of 1,808kB. So if your user ends up accessing your prerendered page, then you’ve hit the jackpot: you’ll save the time of downloading almost 2,000kB, plus server response time. But if you’re wrong and your user never accesses the prerendered page, you’ll make them pay a very high cost.

    When deciding whether to prerender entire pages ahead of time, consider that Google prerenders the top results on its search page, and Chrome prerenders pages based on the historical navigation patterns of users. Using the same principle, you can detect common usage patterns and prerender target pages accordingly. You can also use it, just like resource prefetching, on questionnaires or surveys where you know users will complete the workflow in a particular order.

    At this time, prerendering is only supported on IE11, Chrome, and Chrome Mobile. Neither Firefox nor Safari has added support for this technique yet. (And as with resource prefetching, you can check prebrowsing.com to test whether this technique is supported in your browser.)

    A final word

    Sites like Google and Bing are using these techniques extensively to make search instant for their users. Now it’s time for us to go back to our own sites and take another look. Can we make our experiences better and faster with prefetching and prerendering?

    Browsers are already working behind the scenes, looking for patterns in our sites to make navigation as fast as possible. Prebrowsing builds on that: we can combine the insight we have on our own pages with further analysis of user patterns. By helping browsers do a better job, we speed up and improve the experience for our users.

  • Valediction 

    When I first met Kevin Cornell in the early 2000s, he was employing his illustration talent mainly to draw caricatures of his fellow designers at a small Philadelphia design studio. Even in that rough, dashed-off state, his work floored me. It was as if Charles Addams and my favorite Mad Magazine illustrators from the 1960s had blended their DNA to spawn the perfect artist.

    Kevin would deny that label, but artist he is. For there is a vision in his mind, a way of seeing the world, that is unlike anyone else’s—and he has the gift to make you see it too, and to delight, inspire, and challenge you with what he makes you see.

    Kevin was part of a small group of young designers and artists who had recently completed college and were beginning to establish careers. Others from that group included Rob Weychert, Matt Sutter, and Jason Santa Maria. They would all go on to do fine things in our industry.

    It was Jason who brought Kevin on as house illustrator during the A List Apart 4.0 brand overhaul in 2005, and Kevin has worked his strange magic for us ever since. If you’re an ALA reader, you know how he translates the abstract web design concepts of our articles into concrete, witty, and frequently absurd situations. Above all, he is a storyteller—if pretentious designers and marketers haven’t sucked all the meaning out of that word.

    For nearly 10 years, Kevin has taken our well-vetted, practical, frequently technical web design and development pieces, and elevated them to the status of classic New Yorker articles. Tomorrow he publishes his last new illustrations with us. There will never be another like him. And for whatever good it does him, Kevin Cornell has my undying thanks, love, and gratitude.

  • My Favorite Kevin Cornell 

    After 200 issues—yes, two hundred—Kevin Cornell is retiring from his post as A List Apart’s staff illustrator. Tomorrow’s issue will be the last one featuring new illustrations from him.

    Sob.

    For years now, we’ve eagerly awaited Kevin’s illustrations each issue, opening his files with all the patience of a kid tearing into a new LEGO set.

    But after nine years and more than a few lols, it’s time to give Kevin’s beautifully deranged brain a rest.

    We’re still figuring out what comes next for ALA, but while we do, we’re sending Kevin off the best way we know how: by sharing a few of our favorite illustrations. Read on for stories from ALA staff, past and present—and join us in thanking Kevin for his talent, his commitment, and his uncanny ability to depict seemingly any concept using animals, madmen, and circus figures.

    Of all the things I enjoyed about working on A List Apart, I loved anticipating the reveal: seeing Kevin’s illos for each piece, just before the issue went live. Every illustration was always a surprise—even to the staff. My favorite, hands-down, was his artwork for “The Discipline of Content Strategy,” by Kristina Halvorson. In 2008, content was web design’s “elephant in the room” and Kevin’s visual metaphor nailed it. In a drawing, he encapsulated thoughts and feelings many had within the industry but were unable to articulate. That’s the mark of a master.

    —Krista Stevens, Editor-in-chief, 2006–2012

    In the fall of 2011, I submitted my first article to A List Apart. I was terrified: I didn’t know anyone on staff. The authors’ list read like a who’s who of web design. The archives were intimidating. But I had ideas, dammit. I hit send.

    I told just one friend what I’d done. His eyes lit up. “Whoa. You’d get a Kevin Cornell!” he said.

    Whoa indeed. I might get a Kevin Cornell?! I hadn’t even thought about that yet.

    Like Krista, I fell in love with Kevin’s illustration for “The Discipline of Content Strategy”—an illustration that meant the world to me as I helped my clients see their own content elephants. The idea of having a Cornell of my own was exciting, but terrifying. Could I possibly write something worthy of his illustration?

    Months later, there it was on the screen: little modular sandcastles illustrating my article on modular content. I was floored.

    Now, after two years as ALA’s editor in chief, I’ve worked with Kevin through dozens of issues. But you know what? I’m just as floored as ever.

    Thank you, Kevin, you brilliant, bizarre, wonderful friend.

    —Sara Wachter-Boettcher, Editor-in-chief

    It’s impossible for me to choose a favorite of Kevin’s body of work for ALA, because my favorite Cornell illustration is the witty, adaptable, humane language of characters and symbols underlying his years of work. If I had to pick a single illustration to represent the evolution of his visual language, I think it would be the hat-wearing nested egg with the winning smile that opened Andy Hagen’s “High Accessibility is Effective Search Engine Optimization.” An important article but not, perhaps, the juiciest title A List Apart has ever run…and yet there’s that little egg, grinning in his slightly dopey way.

    If my memory doesn’t fail me, this is the second appearance of the nested Cornell egg—we saw the first a few issues before in Issue 201, where it represented the nested components of an HTML page. When it shows up here, in Issue 207, we realize that the egg wasn’t a cute one-off, but the first syllable of a visual language that we’ll see again and again through the years. And what a language! Who else could make semantic markup seem not just clever, but shyly adorable?

    A wander through the ALA archives provides a view of Kevin’s changing style, but something visible only backstage was his startlingly quick progression from reading an article to sketching initial ideas in conversation with then-creative director Jason Santa Maria to turning out a lovely miniature—and each illustration never failed to make me appreciate the article it introduced in a slightly different way. When I was at ALA, Kevin’s unerring eye for the important detail as a reader astonished me almost as much as his ability to give that (often highly technical, sometimes very dry) idea a playful and memorable visual incarnation. From the very first time his illustrations hit the A List Apart servers he’s shared an extraordinary gift with its readers, and as a reader, writer, and editor, I will always count myself in his debt.

    —Erin Kissane, Editor-in-chief, contributing editor, 1999–2009

    So much of what makes Kevin’s illustrations work lies in the gestures. The way the figure sits a bit slouched, but still perched on gentle tippy toes, determinedly pecking away on his phone. With just a few lines, Kevin captures a mood and moment anyone can feel.

    —Jason Santa Maria, Former creative director

    I’ve had the pleasure of working with Kevin on the illustrations for each issue of A List Apart since we launched the latest site redesign in early 2013. By working, I mean replying to his email with something along the lines of “Amazing!” when he sent over the illustrations every couple of weeks.

    Prior to launching the new design, I had to go through the backlog of Kevin’s work for ALA and do the production work needed for the new layout. This bird’s eye view gave me an appreciation of the ongoing metaphorical world he had created for the magazine—the birds, elephants, weebles, mad scientists, ACME products, and other bits of amusing weirdness that breathed life into the (admittedly, sometimes) dry topics covered.

    If I had to pick a favorite, it would probably be the illustration that accompanied the unveiling of the redesign, A List Apart 5.0. The shoe-shine man carefully working on his own shoes was the perfect metaphor for both the idea of design as craft and the back-stage nature of the profession—working to make others shine, so to speak. It was a simple and humble concept, and I thought it created the perfect tone for the launch.

    —Mike Pick, Creative director

    So I can’t pick one favorite illustration that Kevin’s done. I just can’t. I could prattle on about this, that, or that other one, and tell you everything I love about each of ’em. I mean, hell: I still have a print of the illustration he did for my very first ALA article. (The illustration is, of course, far stronger than the essay that follows it.)

    But his illustration for James Christie’s excellent “Sustainable Web Design” is a perfect example of everything I love about Kevin’s ALA work: how he conveys emotion with a few deceptively simple lines; the humor he finds in contrast; the occasional chicken. Like most of Kevin’s illustrations, I’ve seen it whenever I reread the article it accompanies, and I find something new to enjoy each time.

    It’s been an honor working alongside your art, Kevin—and, on a few lucky occasions, having my words appear below it.

    Thanks, Kevin.

    —Ethan Marcotte, Technical editor

    Kevin’s illustration for Cameron Koczon’s “Orbital Content” is one of the best examples I can think of to show off his considerable talent. Those balloons are just perfect: vaguely reminiscent of cloud computing, but tethered and within arm’s reach, and evoking the fun and chaos of carnivals and county fairs. No other illustrator I’ve ever worked with is as good at translating abstract concepts into compact, visual stories. A List Apart won’t be the same without him.

    —Mandy Brown, Former contributing editor

    Kevin has always had what seems like a preternatural ability to take an abstract technical concept and turn it into a clear and accessible illustration.

    For me, my favorite pieces are the ones he did for the 3rd anniversary of the original “Responsive Web Design” article…the web’s first “responsive” illustration? Try squishing your browser here to see it in action—Ed

    —Tim Murtaugh, Technical director

    I think it may be impossible for me to pick just one illustration of Kevin’s that I really like. Much like trying to pick your one favorite album or that absolutely perfect movie, picking a true favorite is simply folly. You can whittle down the choices, but it’s guaranteed that the list will be sadly incomplete and longer (much longer) than one.

    If held at gunpoint, however ridiculous that sounds, and asked which of Kevin’s illustrations is my favorite, close to the top of the list would definitely be “12 Lessons for Those Afraid of CSS Standards.” It’s just so subtle, and yet so pointed.

    What I personally love the most about Kevin’s work is the overall impact it can have on people seeing it for the first time. It has become commonplace within our ranks to hear the phrase, “This is my new favorite Kevin Cornell illustration” with the publishing of each issue. And rightly so. His wonderfully simple style (which is also deceptively clever and just so smart) paired with the fluidity that comes through in his brush work is magical. Case in point for me would be his piece for “The Problem with Passwords,” which just speaks volumes about the difficulty and utter ridiculousness of selecting a password and security question.

    We, as a team, have truly been spoiled by having him in our ranks for as long as we have. Thank you Kevin.

    —Erin Lynch, Production manager

    The elephant was my first glimpse at Kevin’s elegantly whimsical visual language. I first spotted it, a patient behemoth being studied by nonplussed little figures, atop Kristina Halvorson’s “The Discipline of Content Strategy,” which made no mention of elephants at all. Yet the elephant added to my understanding: content owners from different departments focus on what’s nearest to them. The content strategist steps back to see the entire thing.

    When Rachel Lovinger wrote about “Content Modelling,” the elephant made a reappearance as a yet-to-be-assembled, stylized elephant doll. The unflappable elephant has also been the mascot of product development at the hands of a team trying to construct it from user research, strutted its stuff as curated content, enjoyed the diplomatic guidance of a ringmaster, and been impersonated by a snake to tell us that busting silos is helped by a better understanding of others’ discourse conventions.

    The delight in discovering Kevin’s visual rhetoric doesn’t end there. With doghouses, birdhouses, and fishbowls, Kevin speaks of environments for users and workers. With owls he represents the mobile experience and smartphones. With a team arranging themselves to fit into a group photo, he makes the concept of responsive design easier to grasp.

    Not only has Kevin trained his hand and eye to produce the gestures, textures, and compositions that are uniquely his, but he has trained his mind to speak in a distinctive visual language—and he can do it on deadline. That is some serious mastery of the art.

    —Rose Weisburd, Columns editor

  • Measure Twice, Cut Once 

    Not too long ago, I had a few rough days in support of a client project. The client had a big content release, complete with a media embargo and the like. I woke up on the day of the launch, and things were bad. I was staring straight into a wall of red.

    A response and downtime report

    Thanks to the intrinsic complexity of software engineering, these situations happen—I’ve been through them before, and I’ll certainly be through them again. While the particulars change, there are two guiding principles I rely on when I find myself looking up that hopelessly tall cliff of red.

    You can’t be at the top of your game while stressed and nervous about the emergency, so unless there’s an obvious, quick-to-deploy resolution, you need to give yourself some cover to work.

    What that means will be unique to every situation, but as strange as it may sound, don’t dive into work on the be-all and end-all solution right off the bat. Take a few minutes to find a way to provide a bit of breathing room for you to build and implement the long-term solution in a stable, future-friendly way.

    Ideally, the cover you’re providing shouldn’t affect the users too much. Consider beefing up your caching policies to lighten the load on your servers as much as possible. If there’s any functionality that is particularly taxing on your hardware and isn’t mission critical, disable it temporarily. Even if keeping the servers alive means pressing a button every 108 minutes like you’re Desmond from Lost, do it.
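
    As a sketch of that first suggestion, the cover can be as simple as serving a more generous Cache-Control header on your heaviest responses while you work; the value here is illustrative, not a recommendation:

    Cache-Control: public, max-age=600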

    After you’ve got some cover, work the problem slowly and deliberately. Think solutions through two or three times to be sure they’re the right course of action.

    With the pressure eased, you don’t have to rush through a cycle of building, deploying, and testing potential fixes. Rushing leads to oversight of important details, and typically, that cycle ends the first time a change fixes (or seemingly fixes) the issue, which can lead to sloppy code and weak foundations for the future.

    If the environment doesn’t allow you to ease the pressure enough to work slowly, go ahead and cycle your way to a hacky solution. But don’t forget to come back and work the root issue, or else temporary fixes will pile up and eat away at your system’s architecture like a swarm of termites.

    Emergencies often require more thought and planning than everyday development, so be sure to give yourself the necessary time. Reactions alone may patch an issue, but thoughtfulness can solve it.

     

  • How We Read 

    I want you to think about what you’re doing right now. I mean really think about it. As your eyes move across these lines and funnel information to your brain, you’re taking part in a conversation I started with you. The conveyance of that conversation is the type you’re reading on this page, but you’re also filtering it through your experiences and past conversations. You’re putting these words into context. And whether you’re reading this book on paper, on a device, or at your desk, your environment shapes your experience too. Someone else reading these words may go through the same motions, but their interpretation is inevitably different from yours.

    This is the most interesting thing about typography: it’s a chain reaction of time and place with you as the catalyst. The intention of a text depends on its presentation, but it needs you to give it meaning through reading.

    Type and typography wouldn’t exist without our need to express and record information. Sure, we have other ways to do those things, like speech or imagery, but type is efficient, flexible, portable, and translatable. This is what makes typography not only an art of communication, but one of nuance and craft, because like all communication, its value falls somewhere on a spectrum between success and failure.

    The act of reading is beautifully complex, and yet, once we know how, it’s a kind of muscle memory. We rarely think about it. But because reading is so intrinsic to every other thing about typography, it’s the best place for us to begin. We’ve all made something we wanted someone else to read, but have you ever thought about that person’s reading experience?

    Just as you’re my audience for this book, I want you to look at your audience too: your readers. One of design’s functions is to entice and delight. We need to welcome readers and convince them to sit with us. But what circumstances affect reading?

    Readability

    Just because something is legible doesn’t mean it’s readable. Legibility means that text can be interpreted, but that’s like saying tree bark is edible. We’re aiming higher. Readability combines the emotional impact of a design (or lack thereof) with the amount of effort it presumably takes to read. You’ve heard of TL;DR (too long; didn’t read)? Length isn’t the only detractor to reading; poor typography is one too. To paraphrase Stephen Coles, the term readability doesn’t ask simply, “Can you read it?” but “Do you want to read it?”

    Each decision you make could potentially hamper a reader’s understanding, causing them to bail and update their Facebook status instead. Don’t let your design deter your readers or stand in the way of what they want to do: read.

    Once we bring readers in, what else can we do to keep their attention and help them understand our writing? Let’s take a brief look at what the reading experience is like and how design influences it.

    The act of reading

    When I first started designing websites, I assumed everyone read my work the same way I did. I spent countless hours crafting the right layout and type arrangements. I saw the work as a collection of the typographic considerations I made: the lovingly set headlines, the ample whitespace, the typographic rhythm (fig 1.1). I assumed everyone would see that too.

    A normal paragraph of text
    Fig 1.1: A humble bit of text. But what actually happens when someone reads it?

    It’s appealing to think that’s the case, but reading is a much more nuanced experience. It’s shaped by our surroundings (am I in a loud coffee shop or otherwise distracted?), our availability (am I busy with something else?), our needs (am I skimming for something specific?), and more. Reading is not only informed by what’s going on with us at that moment, but also governed by how our eyes and brains work to process information. What you see and what you’re experiencing as you read these words is quite different.

    As our eyes move across the text, our minds gobble up the type’s texture—the sum of the positive and negative spaces inside and around letters and words. We don’t linger on those spaces and details; instead, our brains do the heavy lifting of parsing the text and assembling a mental picture of what we’re reading. Our eyes see the type and our brains see Don Quixote chasing a windmill.

    Or, at least, that’s what we hope. This is the ideal scenario, but it depends on our design choices. Have you ever been completely absorbed in a book and lost in the passing pages? Me too. Good writing can do that, and good typography can grease the wheels. Without getting too scientific, let’s look at the physical process of reading.

    Saccades and fixations

    Reading isn’t linear. Instead, our eyes perform a series of back and forth movements called saccades, or lightning-fast hops across a line of text (fig 1.2). Sometimes it’s a big hop; sometimes it’s a small hop. Saccades help our eyes register a lot of information in a short span, and they happen many times over the course of a second. A saccade’s length depends on our proficiency as readers and our familiarity with the text’s topic. If I’m a scientist and reading, uh, science stuff, I may read it more quickly than a non-scientist, because I’m familiar with all those science-y words. Full disclosure: I’m not really a scientist. I hope you couldn’t tell.

    Paragraph showing saccades or the movement our eyes make as we read a line of text
    Fig 1.2: Saccades are the leaps that happen in a split second as our eyes move across a line of text.

    Between saccades, our eyes stop for a fraction of a second in what’s called a fixation (fig 1.3). During this brief pause we see a couple of characters clearly, and the rest of the text blurs out like ripples in a pond. Our brains assemble these fixations and decode the information at lightning speed. This all happens on reflex. Pretty neat, huh?

    Fig 1.3: Fixations are the brief moments of pause between saccades.

    The shapes of letters and the shapes they make when combined into words and sentences can significantly affect our ability to decipher text. If we look at an average line of text and cover the top halves of the letters, it becomes very difficult to read. If we do the opposite and cover the bottom halves, we can still read the text without much effort (fig 1.4).

    Fig 1.4: Though the letters’ lower halves are covered, the text is still mostly legible, because much of the critical visual information is in the tops of letters.

    This is because letters generally carry more of their identifying features in their top halves. The sum of each word’s letterforms creates the word shapes we recognize when reading.

    Once we start to subconsciously recognize letters and common words, we read faster. We become more proficient at reading under similar conditions, an idea best encapsulated by type designer Zuzana Licko: “Readers read best what they read most.”

    It’s not a hard and fast rule, but close. The more foreign the letterforms and information are to us, the more slowly we discern them. If we traveled back in time to the Middle Ages with a book typeset in a super-awesome sci-fi font, the folks from the past might have difficulty with it. But here in the future, we’re adept at reading that stuff, all whilst flying around on hoverboards.

    For the same reason, we sometimes have trouble deciphering someone else’s handwriting: their letterforms and idiosyncrasies seem unusual to us. Yet we’re pretty fast at reading our own handwriting (fig 1.5).

    Fig 1.5: While you’re very familiar with your own handwriting, reading someone else’s (like mine!) can take some time to get used to.

    There have been many studies on the reading process, with only a bit of consensus. Reading acuity depends on several factors, starting with the task the reader intends to accomplish. Some studies show that we read in word shapes—picture a chalk outline around an entire word—while others suggest we decode things letter by letter. Most findings agree that ease of reading relies on the visual feel and precision of the text’s setting (how much effort it takes to discern one letterform from another), combined with the reader’s own proficiency.

    Consider a passage set in all capital letters (fig 1.6). You can become adept at reading almost anything, but most of us aren’t accustomed to reading lots of text in all caps. Compared to the normal sentence-case text, the all-caps text feels pretty impenetrable. That’s because the capital letters are blocky and don’t create much contrast between themselves and the whitespace around them. The resulting word shapes are basically plain rectangles (fig 1.7).

    Fig 1.6: Running text in all caps can be hard to read quickly when we’re used to sentence case.
    Fig 1.7: Our ability to recognize words is affected by the shapes they form. All-caps text forms blocky shapes with little distinction, while mixed-case text forms irregular shapes that help us better identify each word.

    Realizing that the choices we make in typefaces and typesetting have such an impact on the reader was eye-opening for me. Small things like the size and spacing of type can add up to great advantages for readers. When they don’t notice those choices, we’ve done our job. We’ve gotten out of their way and helped them get closer to the information.

    Stacking the deck

    Typography on screen differs from print in a few key ways. Readers deal with two reading environments: the physical space (and its lighting) and the device. A reader may spend a sunny day at the park reading on their phone. Or perhaps they’re in a dim room reading subtitles off their TV ten feet away. As designers, we have no control over any of this, and that can be frustrating. As much as I would love to go over to every reader’s computer and fix their contrast and brightness settings, this is the hand we’ve been dealt.

    The best solution to unknown unknowns is to make our typography perform as well as it can in all situations, regardless of screen size, connection, or potential lunar eclipse. We’ll look at some methods for making typography as sturdy as possible later in this book.

    It’s up to us to keep the reading experience unencumbered. At the core of typography is our audience, our readers. As we look at the building blocks of typography, I want you to keep those readers in mind. Reading is something we do every day, but we can easily take it for granted. Slapping words on a page won’t ensure good communication, just as mashing your hands across a piano won’t make for a pleasant composition. The experience of reading and the effectiveness of our message are determined by both what we say and how we say it. Typography is the primary tool we use as designers and visual communicators to speak.

     

  • The Most Dangerous Word In Software Development 

    “Just put it up on a server somewhere.”

    “Just add a favorite button to the right side of the item.”

    “Just add [insert complex option here] to the settings screen.”

    Usage of the word “just” points to a lot of assumptions being made. A few months ago, Brad Frost shared some thoughts on how the word applies to knowledge.

    “Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources.

    He points out that learning is never as easy as it is made to seem, and he’s right. But there is a direct correlation between the amount of knowledge you’ve acquired and the danger of the word “just.” The more you know, the bigger the problems you solve, and the bigger the assumptions are that are hiding behind the word.

    Take the comment, “Just put it up on a server somewhere.” How many times have we heard that? But taking a side project running locally and deploying it on real servers requires time, money, and hard work. Some tiny piece of software somewhere will probably be the wrong version, and will need to be addressed. The system built locally probably isn’t built to scale perfectly.

    “Just” implies that all of the thinking behind a feature or system has been done. Even worse, it implies that all of the decisions that will have to be made in the course of development have already been discovered—and that’s never the case.

    Things change when something moves from concept to reality. As Dave Wiskus said on a recent episode of Debug, “everything changes when fingers hit glass.”

    The favorite button may look fine on the right side, visually, but it might be in a really tough spot to touch. What about when favoriting isn’t the only action to be taken? What happens to the favorite button then?

    Even once favoriting is built and in testing, it should be put through its paces again. In use, does favoriting provide enough value to warrant its existence? After all, “once that feature’s out there, you’re stuck with it.”

    When you hear the word “just” being thrown around, dig deep into that statement and find all of the assumptions made within it. Zoom out and think slow.

    Your product lives and dies by the decisions discovered between ideation and creation, so don’t just put it up on a server somewhere.

  • Gardens, Not Graves 

    The stream—that great glut of ideas, opinions, updates, and ephemera that pours through us every day—is the dominant way we organize content. It makes sense; the stream’s popularity springs from the days of the early social web, when a huge number of users posted all types of content on unpredictable schedules. The simplest way to show readers what was new relied on reverse chronology and small, discrete chunks, since sorting by newness called for content quick to both produce and digest. This approach saw wide adoption in blogs, social networks, notification systems, etc., and ever since we’ve flitted from one stream to another like sugar-starved hummingbirds.

    Problem is, the stream’s emphasis on the new above all else imposes a short lifespan on content. Like papers piled on your desk, the stream makes it easy to find the last thing you’ve added, while anything older than a day effectively disappears. Solely relying on reverse-chronology turns our websites into graveyards, where things pile up atop each other until they fossilize. We need to start treating our websites as gardens, as places worthy of cultivation and renewal, where new things can bloom from the old.

    The stream, in print

    The stream’s focus on the now isn’t novel, anyway. Old-school modes of publishing like newspapers and magazines shared a similar disposability: periodic updates went out to subscribers and were then thrown away. No one was expected to hang onto them for long.

    Over the centuries with print, however, we came up with a number of ways to preserve and showcase older material. Newspapers put out annual indexes cataloguing everything they print, ordered by subject and frequency. Magazines get rebound into larger, more substantial anthologies. Publishers frequently reach into their back catalogue and reprint books with new forewords or even chapters. These acts serve two purposes: to maintain widespread and cheap access to material that has gone out of print, and to ensure that material is still relevant and useful today.

    But we haven’t yet developed patterns for slowing down on the web. In some ways, access is simpler. As long as the servers stay up, content remains a link away from interested readers. But that same ease of access makes the problem of outdated or redundant content more pronounced. Someone looking at an old magazine article also holds the entire issue it was printed with. With an online article, someone can land directly on the piece with little indication of who it’s by, what it’s for, and whether it’s gone out of date. Providing sufficient context for content already out there is a vital factor to consider and design for.

    You don’t need to be a writer to help fix this. Solutions can come from many fields, from targeted writing and design tweaks to more overarching changes in content strategy and information architecture.

    Your own websites are good places to start. Here are some high-level guidelines, ordered by the amount of effort they’ll take. Your site will demand its own unique set of approaches, though, so recombine and reinvent as needed.

    Reframe

    Emma is a travel photographer. She keeps a blog, and many years ago she wrote a series about visiting Tibet. Back then, she was required to travel with a guided tour. That’s no longer the case, as visitors only need to obtain a permit.

    The most straightforward thing to do is to look through past content and identify what’s outdated: pieces you’ve written, projects you worked on, things you like. The goal is triage: sorting things into what needs attention and what’s still fine.

    Once you’ve done that, find a way to signal their outdated status. Perhaps you have a design template for “archived” content that has a different background color, more strongly emphasizes when it was written, or adds a sentence or two at the top of your content that explains why it’s outdated. If entire groups of content need mothballing, see whether it makes sense to pull them into separate areas. (Over time, you may have to overhaul the way your entire site is organized—a complicated task we’ll address below.)

    Emma adds an <outdated> tag to her posts about her guided tour and configures the site’s template to show a small yellow notification at the top telling visitors that her information is from 2008 and may be irrelevant. She also adds a link on each post pointing to a site that explains the new visa process and ways to obtain Tibetan permits.

    On the flip side, separate the pieces that you’re particularly proud of. Your “best-of” material is probably getting scattered by the reverse-chronology organization of your website, so list all of them in a prominent place for people visiting for the first time.

    Recontextualize

    I hope that was easy! The next step is to look for old content you feel differently about today.

    When Emma first started traveling, she hated to fly. She hated waiting in line, hated sitting in cramped seats, and especially hated the food. There are many early blog posts venting about this.

    Maybe what you wrote needs additional nuance or more details. Or maybe you’ve changed since then. Explain why—lead readers down the learning path you took. It’s a chance for you to reflect on the delta.

    Now that she’s gotten more busy and has to frequently make back-to-back trips for clients, she finds that planes are the best time for her to edit photos from the last trip, catch up on email, and have some space for reflection. So she writes about how she fills up her flying time now, leaving more time when she’s at her destination to shoot and relax.

    Or expand on earlier ideas. What started as a rambling post you began at midnight can turn into a series or an entire side project. Or, if something you wrote provokes a big response online, you could gather those links at the bottom of your piece. It’s a service to your new readers to collect connected pieces together, so that they don’t have to hunt around to find them all.

    Revise and reorganize

    Hopefully that takes care of most of your problematic content. But for content so dire you’re embarrassed to even look at it, much less have other people read it, consider more extreme measures: culling, revising, and rewriting.

    Looking back: maybe you were completely wrong about something, and you would now argue the opposite. Rewrite it! Or you’re shocked to find code you wrote one rushed Friday afternoon—well, set aside some time to start from the ground up and do it right.

    Emma started her website years ago as a typical reverse-chron blog, but has started to work on a redesign based around the concepts of LOCATIONS and TRIPS. Appearing as separate items in the navigation, they act as different ways for readers to approach and make sense of her work. The locations present an at-a-glance view of where she’s been and how well-traveled she is. The trips (labeled Antarctica: November 2012, Bangkok: Fall 2013, Ghana: early 2014, etc.) retain the advantages of reverse-chronology by giving people updates on what she’s done recently, but these names are more flexible and easier to explain than dates and timestamps on their own. Someone landing directly on a post from a trip two years ago can easily get to the other posts from that trip, but they would be lost if the entries were only timestamped.

    If the original structure no longer matches the reality of what’s there, it’s also the best case for redesigning and reorganizing your website. Now is the time to consider your content as a whole. Think about how you’d explain your website to someone you’re having lunch with. Are you a writer, photographer, artist, musician, cook? What kind? What sorts of topics does your site talk about? What do you want people to see first? How do they go deeper on the things they find interesting? This gets rather existential, but it’s important to ask yourself.

    Remove

    If it’s really, truly foul, you can throw it out. (It’s okay. You officially have permission.) Not everything needs to live online forever, but throwing things out doesn’t have to be your first option when you get embarrassed by the past.

    Deploying the internet equivalent of space lasers does, I must stress, come with some responsibility. Other sites can be affected by changes in your links:

    • If you’re consolidating or moving content, it’s important to set up redirects for affected URLs to the new pages (see the sketch after this list).
    • If someone links to a tutorial you wrote, it may be better to archive it and link to more updated information, rather than outright deleting it.
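
    For example, on an Apache server a permanent redirect can be a one-liner in an .htaccess file (the paths here are purely illustrative):

    # Send an outdated post’s URL to its replacement
    Redirect 301 /blog/2008/tibet-guided-tours /travel/tibet-permits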

    Conclusion

    Everything we’ve done so far applies to more than personal websites, of course. Where else?

    Businesses have to maintain scores of announcements, documentation, and customer support. Much of it changes greatly over time, and many businesses need help looking at things from a user’s perspective. Content strategy has been leading the charge on this, from developing content models and relationships, to communicating with empathy in touchy situations, to working out content standards.

    Newspapers and magazines relentlessly publish new pieces and sweep the old away from public view. Are there opportunities to highlight material from their archives? What about content that can always stay interesting? How can selections be best brought together to generate new connections and meaning?

    Museums and libraries, as they step into their digital shoes, will have to think about building places online for histories and archives for the long term. Are there new roles and practices that bridge the old world with the networked, digital one? How do they preserve entirely new categories of things for the public?

    No one has all the answers. But these are questions that come from leaving the stream and approaching content from the long view. These are problems that the shapers and caretakers of the web are uniquely positioned to think about and solve.

    As a community, we take pride in being makers and craftsmen. But for years, we’ve neglected the disciplines of stewardship—the invisible and unglamorous work of collecting, restoring, safekeeping, and preservation. Maybe the answer isn’t to post more, to add more and more streams. Let’s return to our existing content and make it more durable and useful.

    You don’t even have to pick up a shovel.

  • Radio-Controlled Web Design 

    Interactive user interfaces are a necessity in our responsive world. Smaller screens constrain the amount of content that can be displayed at any given time, so we need techniques to keep navigation and secondary information out of the way until they’re needed. From tabs and modal overlays to hidden navigation, we’ve created many powerful design patterns that show and hide content using JavaScript.

    JavaScript comes with its own mobile challenges, though. Network speeds and data plans vary wildly, and every byte we deliver has an impact on the render speed of our pages or applications. When we add JavaScript to a page, we’re typically adding an external JavaScript file and an optional (usually large) library like jQuery. These interfaces won’t become usable until all the content, JavaScript files included, is downloaded—creating a slow and sluggish first impression for our users.

    If we could create these content-on-demand patterns with no reliance on JavaScript, our interfaces would render earlier, and users could interact with them as soon as they were visible. By shifting some of the functionality to CSS, we could also reduce the amount of JavaScript needed to render the rest of our page. The result would be smaller file sizes, faster page-load times, interfaces that are available earlier, and the same functionality we’ve come to rely on from these design patterns.

    In this article, I’ll explore a technique I’ve been working on that does just that. It’s still a bit experimental, so use your best judgment before using it in your own production systems.

    Understanding JavaScript’s role in maintaining state

    To understand how to accomplish these design patterns without JavaScript at all, let’s first take a look at the role JavaScript plays in maintaining state for a simple tabbed interface.

    See the demo: Show/hide example

    Let’s take a closer look at the underlying code.

    <div class="js-tabs">
    
        <div class="tabs">
            <a href="#starks-panel" id="starks-tab"
                class="tab active">Starks</a>
            <a href="#lannisters-panel" id="lannisters-tab"
                class="tab">Lannisters</a>
            <a href="#targaryens-panel" id="targaryens-tab"
                class="tab">Targaryens</a>
        </div>
    
        <div class="panels">
            <ul id="starks-panel" class="panel active">
                <li>Eddard</li>
                <li>Catelyn</li>
                <li>Robb</li>
                <li>Sansa</li>
                <li>Brandon</li>
                <li>Arya</li>
                <li>Rickon</li>
            </ul>
            <ul id="lannisters-panel" class="panel">
                <li>Tywin</li>
                <li>Cersei</li>
                <li>Jaime</li>
                <li>Tyrion</li>
            </ul>
            <ul id="targaryens-panel" class="panel">
                <li>Viserys</li>
                <li>Daenerys</li>
            </ul>
        </div>
    
    </div>
    

    Nothing unusual in the layout, just a set of tabs and corresponding panels that will be displayed when a tab is selected. Now let’s look at how tab state is managed by altering a tab’s class:

    ...
    
    .js-tabs .tab {
        /* inactive styles go here */
    }
    .js-tabs .tab.active {
        /* active styles go here */
    }
    
    .js-tabs .panel {
        /* inactive styles go here */
    }
    .js-tabs .panel.active {
        /* active styles go here */
    }
    
    ...
    

    Tabs and panels that have an active class will have additional CSS applied to make them stand out. In our case, active tabs will visually connect to their content while inactive tabs remain separate, and active panels will be visible while inactive panels remain hidden.

    At this point, you’d use your preferred method of working with JavaScript to listen for click events on the tabs, then manipulate the active class, removing it from all tabs and panels and adding it to the newly clicked tab and corresponding panel. This pattern is pretty flexible and has worked well for a long time. We can simplify what’s going on into two distinct parts:

    1. JavaScript binds events that manipulate classes.
    2. CSS restyles elements based on those classes.
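
    As a minimal sketch in plain JavaScript (assuming the markup above; the demo itself may be wired differently), those two parts look like this:

    // Part 1: bind click events that move the active class between
    // tabs and panels. Part 2 is the CSS above, which restyles
    // elements based on that class.
    var tabs = document.querySelectorAll('.js-tabs .tab');

    function activate(tab) {
        var active = document.querySelectorAll('.js-tabs .active'),
            panel = document.querySelector(tab.getAttribute('href')),
            i;
        // Clear the previously active tab and panel...
        for (i = 0; i < active.length; i += 1) {
            active[i].className =
                active[i].className.replace(/\bactive\b/, '').trim();
        }
        // ...then mark the clicked tab and its panel as active
        tab.className += ' active';
        panel.className += ' active';
    }

    for (var i = 0; i < tabs.length; i += 1) {
        tabs[i].addEventListener('click', function (e) {
            e.preventDefault();
            activate(e.currentTarget);
        });
    }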

    State management without JavaScript

    Trying to replicate event binding and class manipulation in CSS and HTML alone would be impossible, but if we define the process in broader terms, it becomes:

    1. User input changes the system’s active state.
    2. The system is re-rendered when the state is changed.

    In our HTML- and CSS-only solution, we’ll use radio buttons to allow the user to manipulate state, and the :checked pseudo-class as the hook to re-render.

    The solution has its roots in Chris Coyier’s checkbox hack, which I was introduced to via my colleague Scott O’Hara in his morphing menu button demo. In both cases, checkbox inputs are used to maintain two states without JavaScript by styling elements using the :checked pseudo-class. In this case, we’ll be using radio buttons to increase the number of states we can maintain beyond two.
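
    For reference, the two-state version of that hack looks roughly like this (element names are mine, not taken from either demo):

    <input type="checkbox" id="menu-toggle" />
    <label for="menu-toggle">Menu</label>
    <nav class="menu">...</nav>

    /* The menu stays hidden until the checkbox is checked */
    .menu { display: none; }
    #menu-toggle:checked ~ .menu { display: block; }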

    Wait, radio buttons?

    Using radio buttons to do something other than collect form submission data may make some of you feel a little uncomfortable, but let’s look at what the W3C says about input use and see if we can ease some concerns:

    The <input> element represents a typed data field, usually with a form control to allow the user to edit the data. (emphasis mine)

    “Data” is a pretty broad term—it has to be to cover the multitude of types of data that forms collect. We’re allowing the user to edit the state of a part of the page. State is just data about that part of the page at any given time. This may not have been the intended use of <input>, but we’re holding true to the specification.

    The W3C also states that inputs may be rendered wherever “phrasing content” can be used, which is basically anywhere you could put standalone text. This allows us to use radio buttons outside of a form.

    Radio-controlled tabs

    So now that we know a little more about whether we can use radio buttons for this purpose, let’s dig into an example and see how they can actually remove or reduce our dependency on JavaScript by modifying the original tabs example.

    Add radio buttons representing state

    Each radio button will represent one state of the interactive component. In our case, we have three tabs and each tab can be active, so we need three radio buttons, each of which will represent a particular tab being active. By giving the radio buttons the same name, we’ll ensure that only one may be checked at any time. Our JavaScript example had the first tab active initially, so we can add the checked attribute to the radio button representing the first tab, indicating that it is currently active.

    Because CSS can only style an element’s siblings (or their descendants) based on that element’s state, these radio buttons must come before any content that needs to be visually manipulated. In our case, we’ll put our radio buttons just before the tabs div:

        <input class="state" type="radio" name="houses-state"
            id="starks" checked />
        <input class="state" type="radio" name="houses-state"
            id="lannisters" />
        <input class="state" type="radio" name="houses-state"
            id="targaryens" />
    
        <div class="tabs">
        ...
    

    Replace click and touch areas with labels

    Labels naturally respond to click and touch events. We can’t tell them how to react to those events, but the behavior is predictable and we can leverage it. When a label associated with a radio button is clicked or touched, the radio button is checked while all other radio buttons in the same group are unchecked.

    By setting the for attribute of our labels to the id of a particular radio button, we can place labels wherever we need them while still inheriting the touch and click behavior.

    Our tabs were represented with anchors in the earlier example. Let’s replace them with labels and add for attributes to wire them up to the correct radio buttons. We can also remove the active class from the tab and panel as the radio buttons will be maintaining state:

    ...
        <input class="state" type="radio" title="Targaryens"
            name="houses-state" id="targaryens" />
    
        <div class="tabs">
            <label for="starks" id="starks-tab"
                class="tab">Starks</label>
            <label for="lannisters" id="lannisters-tab"
                class="tab">Lannisters</label>
            <label for="targaryens" id="targaryens-tab"
                class="tab">Targaryens</label>
        </div>
    
        <div class="panels">
    ...
    

    Hide radio buttons with CSS

    Now that our labels are in place, we can safely hide the radio buttons. We still want to keep the tabs keyboard accessible, so we’ll just move the radio buttons offscreen:

    ...
    
    .radio-tabs .state {
        position: absolute;
        left: -10000px;
    }
    
    ...
    

    Style states based on :checked instead of .active

    The :checked pseudo-class allows us to apply CSS to a radio button when it is checked. The general sibling combinator ~ allows us to style elements that follow an element at the same level. Combined, we can style anything after the radio buttons based on the buttons’ state.

    The pattern is #radio:checked ~ .something-after-radio or optionally #radio:checked ~ .something-after-radio .something-nested-deeper:

    ...
    
    .tab {
        ...
    }
    #starks:checked ~ .tabs #starks-tab,
    #lannisters:checked ~ .tabs #lannisters-tab,
    #targaryens:checked ~ .tabs #targaryens-tab {
        ...
    }
    
    .panel {
        ...
    }
    #starks:checked ~ .panels #starks-panel,
    #lannisters:checked ~ .panels #lannisters-panel,
    #targaryens:checked ~ .panels #targaryens-panel {
        ...
    }
    
    ...
    

    Now when the tab labels are clicked, the appropriate radio button will be checked, which will style the correct tab and panel as active. The result:

    See the demo: Show/hide example

    Browser support

    The requirements for this technique are pretty low. As long as a browser supports the :checked pseudo-class and ~ sibling combinator, we’re good to go. Firefox, Chrome, and mobile WebKit have always supported these selectors. Safari has had support since version 3, and Opera since version 9. Internet Explorer started supporting the sibling selector in version 7, but didn’t add support for :checked until IE9. Android supports :checked, but has a bug that prevents it from registering changes to a checked element after page load.

    That’s decent support, but with a little extra work we can get Android and older IE working as well.

    Fixing the Android 2.3 :checked bug

    In some versions of Android, :checked won’t update as the state of a radio group changes. Luckily, there’s a fix for that involving a webkit-only infinite animation on the body, which Tim Pietrusky points out in his advanced checkbox hack:

    ...
    
    /* Android 2.3 :checked fix (2.3-era WebKit only understands
       the -webkit- prefixed syntax) */
    @-webkit-keyframes fake {
        from {
            opacity: 1;
        }
        to {
            opacity: 1;
        }
    }
    body {
        -webkit-animation: fake 1s infinite;
    }
    
    ...
    

    JavaScript shim for old Internet Explorer

    If you need to support IE7 and IE8, you can add this shim to the bottom of your page in a script tag:

    // IE7 and IE8 lack :checked support and addEventListener, and
    // the change event doesn't bubble in old IE, so instead of
    // delegating from <body> we bind a click handler directly to
    // each radio button and mirror its checked state as a class.
    (function () {
        var inputs = document.getElementsByTagName('input'), i;

        function sync() {
            var j, radio;
            for (j = 0; j < inputs.length; j += 1) {
                radio = inputs[j];
                if (radio.type === 'radio') {
                    // Strip any existing checked class...
                    radio.className = radio.className.replace(
                        /(^|\s)checked(\s|$)/, ' ');
                    // ...and re-add it if this radio is checked
                    if (radio.checked) {
                        radio.className += ' checked';
                    }
                }
            }
        }

        for (i = 0; i < inputs.length; i += 1) {
            if (inputs[i].type === 'radio') {
                inputs[i].onclick = sync;
            }
        }

        // Mirror the initial state on load
        sync();
    }());
    

    This adds a checked class to the currently checked radio button, allowing you to double up your selectors and keep support. Your selectors would have to be updated to include :checked and .checked versions like this:

    ...
    
    .tab {
        ...
    }
    #starks:checked ~ .tabs #starks-tab,
    #starks.checked ~ .tabs #starks-tab,
    #lannisters:checked ~ .tabs #lannisters-tab,
    #lannisters.checked ~ .tabs #lannisters-tab,
    #targaryens:checked ~ .tabs #targaryens-tab,
    #targaryens.checked ~ .tabs #targaryens-tab {
        ...
    }
    
    .panel {
        ...
    }
    #starks:checked ~ .panels #starks-panel,
    #starks.checked ~ .panels #starks-panel,
    #lannisters:checked ~ .panels #lannisters-panel,
    #lannisters.checked ~ .panels #lannisters-panel,
    #targaryens:checked ~ .panels #targaryens-panel,
    #targaryens.checked ~ .panels #targaryens-panel {
        ...
    }
    
    ...
    

    Using an inline script still saves a potential HTTP request and speeds up interactions on newer browsers. When you choose to drop IE7 and IE8 support, you can drop the shim without changing any of your code.

    Maintaining accessibility

    While our initial JavaScript tabs exhibited the state management between changing tabs, a more robust example would use progressive enhancement to change three titled lists into tabs. It should also handle adding all the ARIA roles and attributes that screen readers and other assistive technologies use to navigate the contents of a page. A better JavaScript example might look like this:

    See the demo: Show/hide example

    Parts of the HTML are removed and will now be added by additional JavaScript; new HTML has been added and will be hidden by additional JavaScript; and new CSS has been added to manage the pre-enhanced and post-enhanced states. In general, our code has grown by a good amount.

    In order to support ARIA, particularly managing the aria-selected state, we’re going to have to bring some JavaScript back into our radio-controlled tabs. However, the amount of progressive enhancement we need to do is greatly reduced.

    If you aren’t familiar with ARIA or are a little rusty, you may wish to refer to the ARIA Authoring Practices for tabpanel.

    Adding ARIA roles and attributes

    First, we’ll add the role of tablist to the containing div.

    <div class="radio-tabs" role="tablist">
      
        <input class="state" type="radio" name="houses-state"
            id="starks" checked />
        ...
    

    Next, we’ll add the role of tab and attribute aria-controls to each radio button. The aria-controls value will be the id of the corresponding panel to show. Additionally, we’ll add titles to each radio button so that screen readers can associate a label with each tab. The checked radio button will also get aria-selected="true":

    <div class="radio-tabs" role="tablist">
      
        <input class="state" type="radio" title="Starks"
            name="houses-state" id="starks" role="tab"
            aria-controls="starks-panel" aria-selected="true"checked />
        <input class="state" type="radio" title="Lanisters" 
            name="houses-state" id="lannisters" role="tab" 
            aria-controls="lannisters-panel" />
        <input class="state" type="radio" title="Targaryens" 
            name="houses-state" id="targaryens" role="tab" 
            aria-controls="targaryens-panel" />
    
        <div class="tabs">
    

    We’re going to hide the visual tabs from assistive technology because they are shallow interfaces to the real tabs (the radio buttons). We’ll do this by adding aria-hidden="true" to our .tabs div:

        ...
        <input class="state" type="radio" title="Targaryens"
            name="houses-state" id="targaryens" role="tab"
            aria-controls="targaryens-panel" />
    
        <div class="tabs" aria-hidden="true">
            <label for="starks" id="starks-tab"
                class="tab">Starks</label>
        ...
    

    The last bit of ARIA support we need to add is on the panels. Each panel will get the role of tabpanel and an attribute of aria-labelledby with a value of the corresponding tab’s id:

       ...
       <div class="panels">
            <ul id="starks-panel" class="panel active"
                role="tabpanel" aria-labelledby="starks-tab">
                <li>Eddard</li>
                <li>Catelyn</li>
                <li>Robb</li>
                <li>Sansa</li>
                <li>Brandon</li>
                <li>Arya</li>
                <li>Rickon</li>
            </ul>
            <ul id="lannisters-panel" class="panel"
                role="tabpanel" aria-labelledby="lannisters-tab">
                <li>Tywin</li>
                <li>Cersei</li>
                <li>Jaime</li>
                <li>Tyrion</li>
            </ul>
            <ul id="targaryens-panel" class="panel"
                role="tabpanel" aria-labelledby="targaryens-tab">
                <li>Viserys</li>
                <li>Daenerys</li>
            </ul>
        </div>
        ...
    

    All we need to do with JavaScript is to set the aria-selected value as the radio buttons change:

    $('.state').change(function () {
        $(this).parent().find('.state').each(function () {
            if (this.checked) {
                $(this).attr('aria-selected', 'true');
            } else {
                $(this).removeAttr('aria-selected');
            }       
        });
    });
    

    This also gives an alternate hook for IE7 and IE8 support. Both browsers support attribute selectors, so you could update the CSS to use [aria-selected] instead of .checked and remove the support shim.

    ...
    
    #starks[aria-selected] ~ .tabs #starks-tab,
    #lannisters[aria-selected] ~ .tabs #lannisters-tab,
    #targaryens[aria-selected] ~ .tabs #targaryens-tab,
    #starks:checked ~ .tabs #starks-tab,
    #lannisters:checked ~ .tabs #lannisters-tab,
    #targaryens:checked ~ .tabs #targaryens-tab {
        /* active tab, now with IE7 and IE8 support! */
    }
    
    ...
    

    The result is full ARIA support with minimal JavaScript—and you still get the benefit of tabs that can be used as soon as the browser paints them.

    See the demo: Show/hide example

    That’s it. Note that because the underlying HTML is available from the start, unlike the initial JavaScript example, we didn’t have to manipulate or create any additional HTML. In fact, aside from adding ARIA roles and attributes, we didn’t have to do much at all.

    Limitations to keep in mind

    Like most techniques, this one has a few limitations. The first and most important is that the state of these interfaces is transient. When you refresh the page, these interfaces will revert to their initial state. This works well for some patterns, like modals and offscreen menus, and less well for others. If you need persistence in your interface’s state, it is still better to use links, form submission, or AJAX requests to make sure the server can keep track of the state between visits or page loads.

    The second limitation is that there is a scope gap in what can be styled using this technique. Since you cannot place radio buttons before the <body> or <html> elements, and you can only style elements following radio buttons, you cannot affect either element with this technique.

    The third limitation is that you can only apply this technique to interfaces that are triggered via click, tap, or keyboard input. You can use progressive enhancement to listen to more complex interactions like scrolling, swipes, double-tap, or multitouch, but if your interfaces rely on these events alone, standard progressive enhancement techniques may be better.

    The final limitation involves how radio groups interact with the tab flow of the document. If you noticed in the tab example, hitting tab brings you to the tab group, but hitting tab again leaves the group. This is fine for tabs, and is the expected behavior for ARIA tablists, but if you want to use this technique for something like an open and close button, you’ll want both buttons in the tab flow of the page independently, based on each button’s location. This can be fixed through a bit of JavaScript in four steps (sketched after this list):

    1. Set the radio buttons and labels to display: none to take them out of the tab flow and visibility of the page.
    2. Use JavaScript to add buttons after each label.
    3. Style the buttons just like the labels.
    4. Listen for clicks on the button and trigger clicks on their neighboring label.
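
    Here’s a rough jQuery sketch of steps 2 and 4 (steps 1 and 3 happen in your stylesheet; the selectors and class names here are illustrative, not from the demos):

    $('label.state-label').each(function () {
        var $label = $(this),
            $radio = $('#' + $label.attr('for')),
            // Step 2: add a button after each label
            $button = $('<button class="state-button"></button>')
                .text($label.text());
        $label.after($button);

        // Step 4: clicks on the button check the matching radio and
        // trigger change so hooks like the ARIA example still run
        $button.on('click', function () {
            $radio.prop('checked', true).trigger('change');
        });
    });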

    Even using this process, it is highly recommended that you use a standard progressive enhancement technique to make sure users without JavaScript who interact with your interfaces via keyboard don’t get confused by the radio buttons. I recommend the following JavaScript in the head of your document:

    <script>document.documentElement.className+=" js";</script>
    

    Before any content renders, this will add the js class to your <html> element, allowing you to style content depending on whether or not JavaScript is turned on. Your CSS would then look something like this:

    .thing {
        /* base styles - when no JavaScript is present
           hide radio button labels, show hidden content, etc. */
    }
    
    .js .thing {
        /* style when JavaScript is present
           hide content, show labels, etc. */
    }
    

    Here’s an example of an offscreen menu implemented using the above process. If JavaScript is disabled, the menu renders open at all times with no overlay:

    See the demo: Show/hide example

    Implementing other content-on-demand patterns

    Let’s take a quick look at how you might create some common user interfaces using this technique. Keep in mind that a robust implementation would address accessibility through ARIA roles and attributes.

    Modal windows with overlays

    • Two radio buttons representing modal visibility
    • One or more labels for modal-open which can look like anything
    • A label for modal-close styled to look like a semi-transparent overlay
    • A label for modal-close styled to look like a close button
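
    To make this concrete, here’s a minimal sketch of the modal pattern (the IDs and class names are mine, not taken from the demo):

    <input class="state" type="radio" name="modal-state"
        id="modal-close" checked />
    <input class="state" type="radio" name="modal-state"
        id="modal-open" />

    <label for="modal-open" class="open-button">Open modal</label>

    <label for="modal-close" class="overlay"></label>
    <div class="modal">
        <label for="modal-close" class="close-button">Close</label>
        <p>Modal content goes here.</p>
    </div>

    /* Overlay and modal stay hidden until the open state is checked */
    .overlay,
    .modal { display: none; }
    #modal-open:checked ~ .overlay,
    #modal-open:checked ~ .modal { display: block; }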

    See the demo: Show/hide example

    Off-screen menu

    • Two radio buttons representing menu visibility
    • A label for menu-open styled to look like a menu button
    • A label for menu-close styled to look like a semi-transparent overlay
    • A label for menu-close styled to look like a close button

    See the demo: Show/hide example

    Switching layout on demand

    • Radio buttons representing each layout
    • Labels for each radio button styled like buttons

    See the demo: Show/hide example

    Switching style on demand

    • Radio buttons representing each style
    • Labels for each radio button styled like buttons

    See the demo: Show/hide example

    Content carousels

    • X radio buttons, one for each panel, representing the active panel
    • Labels for each panel styled to look like next/previous/page controls

    See the demo: Show/hide example

    Other touch- or click-based interfaces

    As long as the interaction does not depend on adding new content to the page or styling the <body> element, you should be able to use this technique to accomplish some very JavaScript-like interactions.

    Occasionally you may want to manage multiple overlapping states in the same system—say the color and size of a font. In these situations, it may be easier to maintain multiple sets of radio buttons to manage each state.

    It is also highly recommended that you use autocomplete="off" on your radio buttons to keep browser form autofill from switching state on your users.
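
    On the tab example, that might look like this (an assumption on my part, not taken from the demo):

    <input class="state" type="radio" name="houses-state"
        id="starks" autocomplete="off" checked />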

    Radio-control the web?

    Is your project right for this technique? Ask yourself the following questions:

    1. Am I using complex JavaScript on my page/site that can’t be handled by this technique?
    2. Do I need to support Internet Explorer 6 or other legacy browsers?

    If the answer to either of those questions is “yes,” you probably shouldn’t try to integrate radio control into your project. Otherwise, you may wish to consider it as part of a robust progressive enhancement technique.

    Most of the time, you’ll be able to shave some bytes off of your JavaScript files and CSS. Occasionally, you’ll even be able to remove JavaScript completely. Either way, you’ll gain the appearance of speed—and build a more enjoyable experience for your users.

  • Matt Griffin on How We Work: Being Profitable 

    When I recently read Geoff Dimasi’s excellent article I thought: this is great—values-based business decisions in an efficient fashion. But I had another thought, too: where, in that equation, is the money?

    If I’m honest with myself, I’ve always felt that on some level it’s wrong to be profitable. That making money on top of your costs somehow equates to bilking your clients. I know, awesome trait for a business owner, right?

    Because here’s the thing: a business can’t last forever skating on the edge of viability. And that’s what not being profitable means. This is a lesson I had to learn with Bearded the hard way. Several times. Shall we have a little bit of story time? “Yes, Matt Griffin,” you say, “let’s!” Well OK, then.

    At Bearded, our philosophy from the beginning was to focus on doing great web work for clients we believed in. The hope was that all the sweat and care we put into those projects and relationships would show, and that profit would naturally follow quality. For four years we worked our tails off on project after project, and as we did so, we lived pretty much hand-to-mouth. On several occasions we were within weeks and a couple of thousand bucks from going out of business. I would wake up in the night in a panic, and start calculating when bills went out and checks would come in, down to the day. I loved the work and clients, but the other parts of the business were frankly pretty miserable.

    Then one day, I went to the other partners at Bearded and told them I’d had it. In the immortal words of Lethal Weapon’s Sergeant Murtaugh, I was getting too old for this shit. I told them I could put in one more year, and if we weren’t profitable by the end of it I was out, and we should all go get well-paid jobs somewhere else. They agreed.

    That decision lit a fire under us to pay attention to the money side of things, change our process, and effectively do whatever it took to save the best jobs we’ve ever had. By the end of the next quarter, we had three months of overhead in the bank and were on our way to the first profitable year of our business, with a 50 percent growth in revenue over the previous year and raises for everyone. All without compromising our values or changing the kinds of projects we were doing.

    This did not happen on its own. It happened because we started designing the money side of our business the way we design everything else we care about. We stopped neglecting our business, and started taking care.

    “So specifically,” you ask, “what did you do to turn things around? I am interested in these things!” you say. Very good, then, let’s take a look.

    Now it’s time for a breakdown

    Besides my arguably weird natural aversion to profit, there are plenty of other motivations not to examine the books. Perhaps math and numbers are scary to you. Maybe finances just seem really boring (they’re no CSS pseudo-selectors, amiright?). Or maybe it’s that when we don’t pay attention to a thing, it’s easier to pretend that it’s not there. But in most cases, the unknown is far scarier than fact.

    When it comes down to it, your business’s finances are made up of two things: money in and money out. Money in is revenue. Money out is overhead. And the difference? That’s profit (or lack thereof). Let’s take a look at the two major components of that equation.

    Overhead Overheels

    First let’s roll up our sleeves and calculate your overhead. Overhead includes loads of stuff like:

    • Staff salaries
    • Health insurance
    • Rent
    • Utilities
    • Equipment costs
    • Office supplies
    • Snacks, meals, and beverages
    • Service fees (hosting, web services, etc.)

    In other words: it’s all the money you pay out to do your work. You can assess these items over whatever period makes sense to you: daily, weekly, annually, or even by project.

    For Bearded, we asked our bookkeeper to generate a monthly budget in Quicken based on an average of our actual costs over the last six months, broken down by type. This was super helpful in seeing where our money goes. Not surprisingly, most of it was paying staff and covering their benefits.

    Once we had that number, it was easy to derive whatever variations were useful to us. The most commonly used number in our arsenal is weekly overhead. That number tells us how much the business costs to run each week, and how much average revenue needs to come in each week before we break even.

    Everything old is revenue again

    So how do we bring in that money? You may be using pricing structures that are fixed-fee, hourly, weekly, monthly, or value-based. But at the end of the day you can always divide the revenue gained by the time you spent, and arrive at a period-based rate for the project (whether monthly, weekly, hourly, or project length). This number is crucial in determining profitability, because it lines up so well with the overhead number we already determined.

    Remember: money in minus money out is profit. And that’s the number we need to get to a point where it safely sustains the business.

    If we wanted to express this idea mathematically, it might look something like this:

    (Rate × Time spent × Number of People) - (Salaries + Expenses) = Profit

    Here’s an example:

    Let’s say that our ten-person business costs $25,000 a week to run. That means each person, on average, needs to do work that earns $2,500 per week for us to break even. If our hourly rate is $100 per hour, that means each person needs to bill 25 hours per week just to maintain the business. If everyone works 30 billable hours per week, the business brings in $30,000—a profit of 20 percent of that week’s overhead. In other words, it takes five good weeks to get one extra week of overhead in the bank.
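
    Plugging those numbers into the formula above:

    ($100 × 30 hours × 10 people) - $25,000 overhead = $5,000 profit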

    That’s not a super great system, is it? How many quality billable hours can a person really do in a week—30? Maybe 36? And is it likely that all ten people will be able to do that many billable hours each week? After all, there are plenty of non-billable tasks involved in running a business. Not only that, but there will be dry periods in the work cycle—gaps between projects, not to mention vacations! We won’t all be able to work full time every week of the year. Seems like this particular scenario has us pretty well breaking even, if we’re lucky.

    So what can we do to get the balance a little more sustainable? Well, everyone could just work more hours. Doing 60-hour weeks every week would certainly take care of things. But how long can real human beings keep that up?

    We can lower our overhead by cutting costs. But seeing as most of our costs are paying salaries, that seems like an unlikely place to make a big impact. To truly be more profitable, the business needs to bring in more revenue per hour of effort expended by staff. That means higher rates. Let’s look at a new example:

    Our ten-person business still costs $25,000 a week. Our break-even is still at $2,500 per week per person. Now let’s set our hourly rate at $150 per hour. This means that each person has to work just under 17 billable hours per week for the business to break even. If everyone bills 30 hours in a week, the business will now bring in $45,000—or $20,000 in profit. That’s 80 percent of a week’s overhead.
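
    Again, via the formula:

    ($150 × 30 hours × 10 people) - $25,000 overhead = $20,000 profit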

    That scenario seems a whole lot more sustainable—a good week now pays for itself, and brings in 80 percent of the next week’s overhead. With that kind of ratio we could, like a hungry bear before hibernation, start saving up to protect ourselves from less prosperous times in the future.

    Nature metaphors aside, once we know how these parts work, we can figure out any one component by setting the others and running the numbers. In other words, we don’t just have to see how a specific hourly rate changes profit. We can go the other way, too.

    Working for a living or living to work

    One way to determine your system is to start with desired salaries and reasonable work hours for your culture, and work backwards to your hourly rate. Then you can start thinking about pricing systems (yes, even fixed price or value-based systems) that let you achieve that effective rate.

    Maybe time is the most important factor for you. How much can everyone work? How much does everyone want to work? How much must you then charge for that time to end up with salaries you can be content with?

    This is, in part, a lifestyle question. At Bearded, we sat down not too long ago and did an exercise adapted from an IA exercise we learned from Kevin M. Hoffman. We all contributed potential qualities that were important to our business—things like “high quality of life,” “high quality of work,” “profitable,” “flexible,” “clients who do good in the world,” “efficient,” and “collaborative.” As a group we ordered those qualities by importance, and decided we’d let those priorities guide us for the next year, at which point we’d reassess.

    That exercise really helped us make decisions about things like what rate we needed to charge, how many hours a week we wanted to work, as well as more squishy topics like what kinds of clients we wanted to work for and what kind of work we wanted to do. Though finances can seem like purely quantitative math, that sort of qualitative exercise ended up significantly informing how we plugged numbers into the profit equation.

    Pricing: Where the rubber meets the road

    Figuring out the basics of overhead, revenue, and profit, is instrumental in giving you an understanding of the mechanics of your business. It lets you plan knowledgeably for your future. It allows you to make plans and set goals for the growth and maintenance of your business.

    But once you know what you want to charge there’s another question—how do you charge it?

    There are plenty of different pricing methods out there (time unit-based, deliverable-based, time period-based, value-based, and combinations of these). They all have their own potential pros and cons for profitability. They also create different motivations for clients and vendors, which in turn greatly affect your working process, day-to-day interactions, and project outcomes.

    But that, my friends, is a topic for our next column. Stay tuned for part two of my little series on the money side of running a web business: pricing!

  • Ten CSS One-Liners to Replace Native Apps 

    Håkon Wium Lie is the father of CSS, the CTO of Opera, and a pioneer advocate for web standards. Earlier this year, we published his blog post, “CSS Regions Considered Harmful.” When Håkon speaks, whether we always agree or not, we listen. Today, Håkon introduces CSS Figures and argues their case.

    Tablets and mobile devices require us to rethink web design. Mouse-driven scrollbars will be replaced by paged gestures, and figures will float in multi-column layouts. Can this be expressed in CSS?

    Paged designs, floating figures, and multi-column layout are widely used on mobile devices today. For some examples, see Flipboard, the Our Choice ebook, or Facebook Paper. These are all native apps. If we want the web to win on these devices (we do), it’s vital that designers can build these kinds of presentations using web standards. If web standards cannot express this, authors will be justified in making native apps.

    Over the past years, I’ve been editing two specifications that, when combined, provide this kind of functionality: CSS Multi-column Layout and CSS Figures. I believe they are important to make sure the web remains a compelling environment for content providers.

    In this article, I will demonstrate how simple it is to write CSS code with these specs. I will do so through 10 one-liners. Real stylesheets will be slightly longer, but still compact, readable, and reusable. Here are some screenshots to give you a visual indication of what we are aiming for:

    Three views of a web page demonstrating different numbers of columns for different window sizes

    Building a page

    The starting point for my code examples is an article with a title, text, and some images. In a traditional browser, the article will be shown in one column, with a scrollbar on the right. Using CSS Multi-column Layout, you can give the article two columns instead of one:

      article { columns: 2 }
    

    That’s a powerful one-liner, but we can do better; we can make the number of columns depend on the available space, so that a narrow screen will have one column, a wider screen will have two columns, etc. This is all it takes to specify that the optimal line length is 15em and for the number of columns to be calculated accordingly:

      article { columns: 15em }
    

    To me, this is where CSS code morphs into poetry: one succinct line of code scales from the narrowest phone to the widest TV, from the small print to text for the visually impaired. There is no JavaScript, media queries, or expensive authoring tool involved. There is simply one highly responsive line of code. That line is used to great effect to produce the screenshots above. And it works in current browsers (which is not yet the case for the following examples).

    The screenshots above show paged presentations, as opposed to scrolled presentations. This is easily expressed with:

      article { overflow: paged-x }
    

    The above code says that the article should be laid out as pages, stacked along the x-axis (which, in English, is toward the right). Browsers that support this feature must provide an interface for navigating in these pages. For example, the user may reach the next page by making a swiping gesture or tilting the device. A visual indication of which page you are reading may also be provided, just like scrollbars provide a visual indication in scrolled environments. On a tablet or mobile phone, swiping to the next page or document will be easier than scrolling.

    Images

    Adding images to the article creates some challenges. Lines of text can easily be poured into several columns, but if figures are interspersed with text, the result will be uneven; because images are unbreakable, they will cause unused whitespace if they appear at a column break. To avoid this, traditional paper-based layouts place images at the top or bottom of columns, thereby allowing other text to fill the whitespace. This can naturally be expressed in CSS by adding top and bottom to the float property. For example:

      img { float: bottom }
    

    The bluish harbor images in the screenshots above have been floated to the bottom of the page with this one-liner. CSS is used to express something that HTML cannot: it is impossible to know, in advance of formatting, how much textual content will fit on a screen. Therefore, an author cannot know where to insert the image in the source code for it to appear at the bottom of the column. Being able to float figures to the top and bottom (in addition to the already existing left and right) is a natural extension of the float property.
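
    For comparison, a short sketch showing the two new float directions alongside the two existing ones (the class names are hypothetical):

      img.harbor   { float: bottom }  /* new in CSS Figures: bottom of the column */
      img.masthead { float: top }     /* new in CSS Figures: top of the column */
      img.portrait { float: left }    /* existing CSS float */
      img.chart    { float: right }   /* existing CSS float */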

    Spanning columns

    Another trick from traditional layout is for figures to span several columns. Consider this newspaper clipping:

    Figure 2: A newspaper clipping showing text in four columns and images in the lower-left, lower-right, and upper-right corners (used with permission from the Bristol Observer)

    In the newspaper article, the image on the left spans two columns and is floated to the bottom of the columns. The code to achieve this in CSS is simple:

      figure { float: bottom; column-span: 2 }
    

    HTML5’s figure element is perfect for holding both an image and the caption underneath it:

      <figure>
        <img src=cats.jpg>
        <figcaption>Isabel loves the fluffy felines</figcaption>
      </figure>
    

    The newspaper article also has a figure that spans three columns, and is floated to the top right corner. In a previous version of the CSS Figures specification, this was achieved by setting float: top-corner. However, after discussions with implementers, it became clear that they were able to float content to more places than just corners. Therefore, CSS Figures introduces new properties to express that content should be deferred to a later column, page, or line.

    Deferring figures

    To represent that the cat picture in the newspaper clipping should be placed at the top of the last column, spanning three columns, this code can be used:

      figure { float: top; float-defer-column: last; column-span: 3 }
    

    This code is slightly less intuitive (compared to the abandoned top-corner keyword), but it opens up a range of options. For example, you can float an element to the second column:

      .byline { float: top; float-defer-column: 1 }
    

    The above code defers the byline, “By Collette Jackson”, by one column. That is, if the byline would naturally appear in the first column, it will instead appear in the second column (as is the case in the newspaper clipping). For this to work with HTML code, the byline must appear early in the article. For example, like this:

    <article>
      <h1>New rescue center pampers Persians</h1>
      <p class=byline>By Collette Jackson</p>
      ...
    </article>
    

    Deferring ads

    Advertisements are another type of content which is best declared early in the source code and deferred for later presentation. Here’s some sample HTML code:

    <article>
      <aside id=ad1><img src=ad1.png></aside>
      <aside id=ad2><img src=ad2.png></aside>
      <h1>New rescue center pampers Persians</h1>
    </article>
    

    And here is the corresponding CSS code, with a one-liner for each advertisement:

    #ad1 { float-defer-page: 1 }
    #ad2 { float-defer-page: 3 }
    

    As a result of this code, the ads would appear on pages two and four. Again, this is impossible to achieve by placing ads inside the text flow, because page breaks will appear in different places on different devices.

    I think both readers and advertisers will like a more page-oriented web. In paper magazines, ads rarely bother anyone. Likewise, I think ads will be less intrusive in paged, rather than scrolled, media.

    Deferring pull quotes

    The final example of content that can be deferred is pull quotes. A pull quote is a quote lifted from the article, and presented in larger type at some predetermined place on the page. In this example, the pull quote is shown midway down the second column:

    Figure 3: A pull quote in a print newspaper

    Here’s the CSS code to express this:

      .pullquote#first { float-defer-line: 50% }
    

    Other types of content can also be positioned by deferring lines. For example, a photograph may be put above the fold of a newspaper by deferring a number of lines. This will also work on the foldable screens of the future.
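
    The article shows no code for this case, but given that float-defer-column accepts an integer, a sketch for deferring a photograph by a fixed number of lines might look like this; the integer value and the class name are my assumptions:

      /* Assumed syntax: push the photograph twelve lines down the column. */
      img.fold { float-defer-line: 12 }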

    Pull quotes, however, are an interesting use case that deserves some discussion. A pull quote has two functions. First, it presents the reader with an interesting line of text to gain attention. Second, the presentation of the article becomes visually more varied when the body text is broken up by the larger type. Typically, you want one pull quote on every page. On paper, where you know how many pages an article will take up, it is easy to supply the right number of pull quotes. On the web, however, content will be reformatted for all sorts of screens; some readers will see many small pages, others will see fewer, larger pages. To ensure that each page has a pull quote, authors must provide a generous supply of pull quotes. Rather than showing the extraneous quotes at the end of the article (which would be a web browser’s instinct), they should be discarded; the content appears in the main article anyway. This can be achieved with a one-liner:

      .pullquote { float-policy: drop-tail }
    

    In prose, the code reads: if the pull quote ends up at the tail end of the article, it should not be displayed. The same one-liner would be used to discard extraneous images at the end of the article; authors will often want one image per page, but no more than one.
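
    A sketch of that same policy applied to images (the selector is an assumption):

      /* Assumed selector: discard article images that would land at the tail end. */
      article img { float-policy: drop-tail }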

    Exercises

    The studious reader may want to consult the CSS Multi-column Layout and CSS Figures specifications. They have more use cases and more knobs to allow designers to describe the ideal presentation of figures on the web.

    The easiest way to play with CSS Figures is to download Opera 12.16 and point it to this document, which generated the screenshots in Figure 1. Based on implementation experience, the specifications have changed and not all one-liners presented in this article will work. Also, Prince and AntennaHouse have partial support for CSS Figures—these are batch formatters that output PDF documents.

    I’d love to hear from those who like the approach taken in this article, and those who don’t. Do you want this added to browsers? Let me know below, or request it from your favorite browser (Firefox, Chrome, Opera, IE). If you don’t like the features, how would you express the use cases that have been discussed?

    Pages and columns have been basic building blocks in typography since the Romans started cutting scrolls into pages. This is not why browsers should support them. We should do so because they help us make better, more beautiful, user experiences on mobile devices.

  • Laura Kalbag on Freelance Design: I Don’t Like It 

    “I don’t like it”—the most dreaded of all design feedback from your client/boss/co-worker. This isn’t so much a matter of your ego being damaged; it’s that the feedback is neither useful nor constructive.

    In order to do better, we need feedback grounded in an understanding of user needs. And we need to be sure it’s not coming solely from the client’s aesthetic preferences, which may be impeccable but may not be effective for the product.

    Aesthetics are a matter of taste. Design is not just aesthetics. I’m always saying it, but it’s worth repeating: there are aesthetic decisions in design, but they are meant to contribute to the design as a whole. The design as a whole is created for an audience, and with goals in mind, so objectivity is required and should be encouraged.

    Is the client offering an opinion based on her own taste, trying to reflect the taste of the intended audience, or trying to solve a perceived problem for the user? Don’t take “I don’t like it” at face value, and don’t try to respond to it without more communication.

    How do we elicit better feedback?

    To elicit the type of feedback we want from clients, we should encourage open-ended critiques that explain the reasons behind the negative feedback, critiques that make good use of conjunctions like “because.” “I don’t like it because…” is already becoming more valuable feedback.

    Designer: Why don’t you like the new contact form design?

    Client: I don’t like it because the text is too big.

    Whether that audience can achieve their goals with our product is the primary factor in its success. We need clients to understand that they may not be the target audience. Sometimes this can be hard for anyone close to a product to understand. We may be one of the users of the products we’re designing, but the product is probably not being designed solely for users like us. The product has a specific audience, with specific goals. Once we’ve re-established the importance of the end user, we can then reframe the feedback by asking the question, “how might the users respond?”

    Designer: Do you think the users will find the text too big?

    Client: Yes. They’d rather see everything without having to scroll.

    Designer: The text will have to be very small if we try to fit it all into the top of the page. It might be hard to read.

    Client: That’s fine. All of our users are young people, so their eyesight is good.

    Throughout the design process, we need to check our hidden assumptions about our users. We should also ensure any feedback we get isn’t based upon an unfounded assumption. If the client says the users won’t like it, ask why. Uncover the assumption—maybe it’s worth testing with real users?

    Designer: Can we be certain that all your users are young people? And that all young people have good eyesight? We might risk losing potential customers unless the site is easy for everyone to read.

    How do we best separate out assumptions from actual knowledge? Any sweeping generalizations about users, particularly those that assume users all share common traits, are likely to need testing. A thorough base of user research, with evidence to fall back on, will give you a much better chance at spotting these assumptions.

    The design conversation

    As designers, we can’t expect other people to know the right language to describe exactly why they think something doesn’t work. We need to know the right questions that prompt a client to give constructive criticism and valuable feedback. I’ve written before on how we can pre-empt problems by explaining our design decisions when we share our work, but it’s impossible to cover every minute detail and the relationships between them. If a client can’t articulate why they don’t like the design as a whole, break the design into components and try to narrow down which part isn’t working for them.

    Designer: Which bit of text looks particularly big to you?

    Client: The form labels.

    When you’ve zeroed in on a component, elicit some possible reasons that it might not be effective.

    Designer: Is it because the size of the form labels leaves less space for the other elements, forcing the users to scroll more?

    Client: Yes. We need to make the text smaller.

    Reining it in

    Aesthetics are very much subject to taste. You know what colors you like to wear, and the people you find attractive, and you don’t expect everyone else to share those same tastes. Nishant wrote a fantastic column about how Good Taste Doesn’t Matter and summarized it best when he said:

    good and virtuous taste, by its very nature, is exclusionary; it only exists relative to shallow, dull…tastes. And if good design is about finding the most appropriate solution to the problem at hand, you don’t want to start out with a solution set that has already excluded a majority of the possibilities compliments of the unicorn that is good taste.

    Taste is great, but it can’t be the measure of a design’s success. Back to our designer and client:

    Designer: But if we make the text smaller, we’ll make it harder to read. Most web pages require scrolling, so that shouldn’t be a problem for the user. Do you think the form is too long, and that it might put users off from filling it in?

    Client: Yes, I want people to find it easy to contact us.

    Designer: How about we take out all the form fields, except the email address and the message fields, as that’s all the information we really need?

    Client: Yes, that’ll make the form much shorter.

    If you’re making suggestions, don’t let a client say yes to your first one. These suggestions aren’t meant as an easy out, allowing the client to quickly get something changed to fit their taste. This is an opportunity to brainstorm potential alternatives on the spot. Working collaboratively is the important part here, so don’t just go away to work out the first alternative by yourself.

    If you can work out between you which solution is most likely to be successful, the client will be more committed to the iteration. You’ll both have ownership, and you’ll both understand why you’ve decided to make it that way.