EW Resource

Newsfeeds

There's a huge number of newsfeeds on the Net bringing up-to-date information and content on a wide range of subjects.

Here are just a few relating to web development.



A List Apart: The Full Feed
  • Laura Kalbag on Freelance Design: How Big is Big Enough to Pick On? 

    I’m a firm believer in constructive criticism. As I said in a previous column, being professional in the way we give and receive criticism is a large part of being a designer.

    However, criticism of the work has to be separated from criticism of the person. It can be all too easy to look at your own work and think “This is rubbish, so I’m rubbish,” or have somebody else say “This isn’t good enough” and hear “You’re not good enough.” Unfortunately, it’s also easy to go from critical to judgmental when we’re evaluating other people’s work.

    Being able to criticize someone’s work without heaping scorn on them constitutes professionalism. I’ve occasionally been guilty of forgetting that: pumped up by my own sense of self-worth and a compulsion to give good drama to my followers on social networks, I’ve blurted unconstructive criticism into a text field and hit “send.”

    Deriding businesses and products is a day-to-day occurrence on Twitter and Facebook, one that’s generally considered acceptable since real live individuals aren’t under attack. But we should consider that businesses come in all sizes, from the one-person shop to the truly faceless multinational corporation.

    As Ashley Baxter wrote, we tend to jump on social networks as a first means of contact, rather than attempting to communicate our issues privately first. This naming and shaming perhaps stems from years of being let down by unanswered emails and being put on hold by juggernaut corporations. Fair enough: in our collective memory is an era when big business seemingly could ignore customer service without suffering many repercussions. Now that we as consumers have been handed the weapon of social media, we’ve become intent on no longer being ignored.

    When we’re out for some online humiliation, we often don’t realize how small our targets can be. Some businesses of one operate under a company name rather than a personal name. And yet people who may approach a customer service issue differently if faced with an individual will be incredibly abusive to “Acme Ltd.” Some choice reviews from an app I regularly use:

    Should be free

    Crap. Total rip off I want my money back

    Whoever designed this app should put a gun to there [sic] head. How complicated does if [sic] have to be…

    In the public eye

    We even have special rules that allow us to rationalize our behavior toward a certain class of individual. Somehow being a celebrity, or someone with many followers, means that cruel and unconstructive criticism doesn’t hurt—either because we mix up the special status of public figures in matters of libel with emotional invincibility, or because any hurt is supposed to be balanced out by positivity and praise from fans and supporters. Jimmy Kimmel’s Celebrities Read Mean Tweets shows hurt reactions veiled with humor. Harvard’s Q Guide allows students to comment anonymously on their professors and classes, so even Harvard profs get to read mean comments.

    Why do we do it?

    We love controversial declarations that get attention and give us all something to talk about, rally for, or rally against. Commentators who deliver incisive criticism in an entertaining way become leaders and celebrities.

    Snarky jokes and sarcastic remarks often act as indirect criticisms of others’ opinions of the business. It might not be the critic’s intention from the beginning, but that tends to be the effect. No wonder companies try so hard to win back public favor.

    Perhaps we’re quick to take to Twitter and Facebook to complain because we know that most companies will fall all over themselves to placate us. Businesses want to win back our affections and do damage control, and we’ve learned that we can get something out of it.

    We’re only human

    When an individual from a large company occasionally responds to unfair criticism, we usually become apologetic and reassure them that we have nothing personal against them. We need to remember that on the other side of our comments there are human beings, and that they have feelings that can be hurt too.

    If we can’t be fair or nuanced in our arguments on social media, maybe we should consider writing longform critical pieces where we have more space and time for thoughtful arguments. That way, we could give our outbursts greater context (as well as their own URLs for greater longevity and findability).

    If that doesn’t sound worthwhile, perhaps our outbursts just aren’t worth the bandwidth. Imagine that.

  • This week's sponsor: MyFonts 

    Thanks to MyFonts for sponsoring A List Apart this week. MyFonts webfonts are flexible, easy to use, and require no monthly fees. Take a look at their list of the 50 most popular fonts on the web right now.

  • Variable Fonts for Responsive Design 

    Choosing typefaces for use on the web today is a practice of specifying static fonts with fixed designs. But what if the design of a typeface could be as flexible and responsive as the layout it exists within?

    The glass floor of responsive typography

    Except for low-level font hinting and proof-of-concept demos like the one Andrew Johnson published earlier this week, the glyph shapes in modern fonts are restricted to a single, static configuration. Any variation in weight, width, stroke contrast, etc.—no matter how subtle—requires separate font files. This concept may not seem so bad in the realm of print design, where layouts are also static. On the web, though, this limitation is what I refer to as the “glass floor” of responsive typography: while higher-level typographic variables like margins, line spacing, and font size can adjust dynamically to each reader’s viewing environment, that flexibility disappears for lower-level variables that are defined within the font. Each glyph is like an ice cube floating in a sea of otherwise fluid design.

    The “glass floor” of responsive typography
    The continuum of responsive design is severed for variables below the “glass floor” in the typographic hierarchy.

    Flattening of dynamic typeface systems

    The irony of this situation is that so many type families today are designed and produced as flexible systems, with dynamic relationships between multiple styles. As Erik van Blokland explained during the 2013 ATypI conference:

    If you design a single font, it’s an island. If you design more than one, you’re designing the relationships, the recipe.

    Erik is the author of Superpolator, a tool for blending type styles across multiple dimensions. Such interpolation saves type designers thousands of hours by allowing them to mathematically mix design variables like weight, width, x-height, stroke contrast, etc.

    Superpolator allows type designers to generate variations of a typeface mathematically by interpolating between a small number of master styles.

    The newest version of Superpolator even allows designers to define complex conditional rules for activating alternate glyph forms based on interpolation numbers. For example, a complex ‘$’ glyph with two vertical strokes can be automatically replaced with a simplified single-stroke form when the weight gets too bold or the width gets too narrow.

    Unfortunately, because of current font format limitations, all this intelligence and flexibility must be flattened before the fonts end up in the user’s hands. It’s only in the final stages of font production that static instances are generated for each interpolated style, frozen and detached from their siblings and parent except in name.

    The potential for 100–900 (and beyond)

    The lobotomization of dynamic type systems is especially disappointing in the context of CSS—a system that has variable stylization in its DNA. The numeric weight system that has existed in the CSS spec since it was first published in 1996 was intended to support a dynamic stylistic range from the get-go. This kind of system makes perfect sense for variable fonts, especially if you introduce more than just weight and the standard nine incremental options from 100 to 900. Håkon Wium Lie (the inventor of CSS!) agrees, saying:

    One of the reasons we chose to use three-digit numbers [in the spec for CSS font-weight values] was to support intermediate values in the future. And the future is now :)

    Beyond increased granularity for font-weight values, imagine the other stylistic values that could be harnessed with variable fonts by tying them to numeric values. Digital typographers could fine-tune typeface specifics such as x-height, descender length, or optical size, and even tie those values to media queries as desired to improve readability or layout.
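    One way this kind of control might look in CSS is sketched below. This is speculative relative to the proposals discussed here: the `font-variation-settings` property and the registered "wght"/"wdth" axes come from the later CSS Fonts Level 4 work, and the font family name and axis values are purely illustrative.

```css
/* Speculative sketch: fine-tuning a variable font per media query.
   "Example Variable" is a made-up family; axis values are illustrative. */
body {
  font-family: "Example Variable", sans-serif;
  font-variation-settings: "wght" 400, "wdth" 100;
}

@media (max-width: 30em) {
  body {
    /* condense the glyphs slightly for narrow columns */
    font-variation-settings: "wght" 400, "wdth" 85;
  }
}

@media (prefers-color-scheme: dark) {
  body {
    /* subtly bump the weight for light type on a dark background */
    font-variation-settings: "wght" 430, "wdth" 100;
  }
}
```

The point is that the same continuous logic already used for margins and font sizes could reach below the “glass floor” into the letterforms themselves.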

    Toward responsive fonts

    It’d be hard to write about variable fonts without mentioning Adobe’s Multiple Master font format from the 1990s. It allowed smooth interpolation between various extremes, but the format was abandoned and is now mostly obsolete for typesetting by end-users. We’ll get back to Multiple Master later, but for now it suffices to say that—despite a meager adoption rate—it was perhaps the most widely used variable font format in history.

    More recently, there have been a number of projects that touch on ideas of variable fonts and dynamic typeface adjustment. For example, Matthew Carter’s Sitka typeface for Microsoft comes in six size-specific designs that are selected automatically based on the size used. While the implementation doesn’t involve fluid interpolation between styles (as was originally planned), it does approximate the effect with live size-aware selections.

    The Sitka type system by Matthew Carter for Microsoft
    The Sitka type family, designed by Matthew Carter, automatically switches between optical sizes in Microsoft applications. From left to right: Banner, Display, Heading, Subheading, Text, Small. All shown at the same point size for comparison. Image courtesy of John Hudson / Tiro Typeworks.

    There are also some options for responsive type adjustments on the web using groups of static fonts. In 2014 at An Event Apart Seattle, my colleague Chris Lewis and I introduced a project, called Font-To-Width, that takes advantage of large multi-width and multi-weight type families to fit pieces of text snugly within their containers. Our demo shows what I call “detect and serve” responsive type solutions: swapping static fonts based on the specifics of the layout or reading environment.
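    The core of a “detect and serve” approach can be sketched in a few lines. This is a toy illustration of the idea, not the actual Font-To-Width API: given the rendered width of a line in each available static style (as you might get from canvas `measureText`), pick the widest style that still fits the container. The style names and pixel values are made up.

```javascript
// Toy "detect and serve" sketch: choose the widest static font style
// whose rendered text still fits inside the container.
function pickFont(containerPx, measured) {
  // `measured` maps style name -> rendered text width in px
  var fitting = Object.keys(measured)
    .filter(function (name) { return measured[name] <= containerPx; })
    .sort(function (a, b) { return measured[b] - measured[a]; });
  return fitting[0] || null; // widest style that fits, or null if none do
}

// Illustrative measurements for one line of text in three styles:
var widths = { Extended: 540, Regular: 430, Condensed: 320 };
pickFont(500, widths); // picks "Regular"
```

A real implementation would re-measure on resize and fall back to letterspacing or scaling when even the narrowest style overflows.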

    One of the more interesting recent developments in the world of variable font development was the publication of Erik van Blokland’s MutatorMath under an open source license. MutatorMath is the interpolation engine inside Superpolator. It allows for special kinds of font extrapolation that aren’t possible with Multiple Master technology. Drawing on masters for Regular, Condensed, and Bold styles, MutatorMath can calculate a Bold Condensed style. For an example of MutatorMath’s power, I recommend checking out some type tools that are utilizing it, like the Interpolation Matrix by Loïc Sander.

    Loïc Sander’s Interpolation Matrix tool harnesses the power of Erik van Blokland’s MutatorMath
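    The arithmetic behind deriving a style that was never drawn is simple to sketch. This is the general delta idea, not MutatorMath’s actual API, and the parameter names and numbers are illustrative: each non-default master contributes its offset from the default, so a Bold Condensed emerges from Regular, Bold, and Condensed masters.

```javascript
// Toy sketch of delta-based interpolation: combine a default master
// with the offsets that other masters contribute. Values are invented.
var regular   = { stem: 80,  width: 500 };
var bold      = { stem: 140, width: 520 };
var condensed = { stem: 78,  width: 400 };

function mix(defaultMaster, others) {
  var out = Object.assign({}, defaultMaster);
  others.forEach(function (master) {
    Object.keys(out).forEach(function (key) {
      out[key] += master[key] - defaultMaster[key];
    });
  });
  return out;
}

// A Bold Condensed that was never drawn, derived from three masters:
mix(regular, [bold, condensed]); // { stem: 138, width: 420 }
```

Real interpolation engines also scale these deltas continuously, which is what makes “virtually infinite” weights and widths possible from a handful of masters.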

    A new variable font format

    All of these ideas seem to be leading to the creation of a new variable font format. Though none of the aforementioned projects offers a complete solution on its own, there are definitely ideas from all of them that could be adopted. Proposals for variable font formats are starting to show up around the web, too. Recently on the W3C Public Webfonts Working Group list, FontLab employee Adam Twardoch made an interesting proposal for a “Multiple Master webfonts resurrection.”

    And while such a thing would help improve typographic control, it could also improve a lot of technicalities related to serving fonts on the web. Currently, accessing variations of a typeface requires loading multiple files. With a variable font format, a set of masters could be packaged in a single file, allowing not only for more efficient files, but also for a vast increase in design flexibility.

    Consider, for example, how multiple styles from within a type family are currently served, compared to how that process might work with a variable font format.


    Static fonts vs. variable fonts

                                  With static fonts    With a variable font
    Number of weights             3                    Virtually infinite
    Number of widths              2                    Virtually infinite
    Number of masters             6                    4*
    Number of files               6                    1
    Data @ 120 kB/master**        720 kB               480 kB
    Download time @ 500 kB/s      1.44 sec             0.96 sec
    Latency @ 100 ms/file         0.6 sec              0.1 sec
    Total load time               2.04 sec             1.06 sec

    *It is actually possible to use three masters to achieve the same range of styles, but it is harder to achieve the desired glyph shapes. I opted to be conservative for this test.

    **This table presumes 120 kB per master for both static and variable fonts. In actual implementation, the savings for variable fonts compared with static fonts would likely be even greater due to reduction in repeated/redundant data and increased efficiency in compression.

    A variable font would mean less bandwidth, fewer round-trips to the server, faster load times, and decidedly more typographic flexibility. It’s a win across the board. (The still-untested variable here is how much time might be taken for additional computational processing.)
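    The totals in the comparison follow from simple arithmetic: transfer time for all the bytes, plus one round-trip of latency per file requested, using the round numbers assumed in the table.

```javascript
// Total load time = transfer time + per-file round-trip latency.
// Uses the table's assumptions: 500 kB/s bandwidth, 100 ms per file.
function loadTime(nFiles, kbPerFile, kbPerSec, latencySecPerFile) {
  var transfer = (nFiles * kbPerFile) / kbPerSec;
  var latency = nFiles * latencySecPerFile;
  return transfer + latency;
}

loadTime(6, 120, 500, 0.1); // static: six 120 kB fonts -> 2.04 sec
loadTime(1, 480, 500, 0.1); // variable: one 480 kB file -> 1.06 sec
```

Most of the saving comes from the single request: the variable font pays latency once instead of six times.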

    But! But! But!

    You may feel some skepticism about a new variable font format. In anticipation of that, I’ll address the most obvious questions.

    This all seems like overkill. What real-world problems would be solved by introducing a new variable font format?

    This could address any problem where a change in the reading environment would inform the weight, width, descender length, x-height, etc. Usually these changes are implemented by changing fonts, but there’s no reason you shouldn’t be able to build those changes around some fluid and dynamic logic instead. Some examples:

    • Condensing the width of a typeface for narrow columns
    • Subtly tweaking the weight for light type on a dark background
    • Showing finer details at large sizes
    • Increasing the x-height at small sizes
    • Adjusting the stroke contrast for low resolutions
    • Adjusting the weight to maintain the same stem thickness across different sizes
    • Adjusting glyphs set on a circle according to the curvature of the baseline. (Okay, maybe that’s pushing it, but why should manhole covers and beer coasters have all the fun?)

    Multiple Master was a failure. What makes you think variable fonts will take off now?

    For starters, the web now offers the capability for responsive design that print never could. Variable fonts are right at home in the context of responsive layouts. Secondly, we are already seeing real-world attempts to achieve similar results via “detect and serve” solutions. The world is already moving in this direction with or without a variable font format. Also, the reasons the Multiple Master format was abandoned include a lot of political and/or technical issues that are less problematic today. Furthermore, the tools to design variable typefaces are much more advanced and accessible now than in the heyday of Multiple Master, so type designers are better equipped to produce such fonts.

    How are we supposed to get fonts that are as compressed as possible if we’re introducing all of this extra flexibility into their design?

    One of the amazing things about variable fonts is that they can potentially reduce file sizes while simultaneously increasing design flexibility (see the “Static fonts vs. variable fonts” comparison).

    Most interpolated font families have additional masters between the extremes. Aren’t your examples a bit optimistic about the efficiency of interpolation?

    The most efficient variable fonts will be those that were designed from scratch with streamlined interpolation in mind. As David Jonathan Ross explained, some styles are better suited for interpolation than others.

    Will the additional processing power required for interpolation outweigh the benefits of variable fonts?

    Like many things today, especially on the web, it depends on the complexity of the computation, processing speed, rendering engine, etc. If interpolated styles are cached to memory as static instances, the related processing may be negligible. It’s also worth noting that calculations of comparable or higher complexity happen constantly in web browsers without any issues related to processing (think SVG scaling and animation, responsive layouts, etc.). Another relevant comparison would be the relatively minimal processing power and time required for Adobe Acrobat to interpolate styles of Adobe Sans MM and Adobe Serif MM when filling in for missing fonts.

    But what about hinting? How would that work with interpolation for variable fonts?

    Any data that is stored as numbers can be interpolated. With that said, some hinting instructions are better suited for interpolation than others, and some fonts are less dependent on hinting than others. For example, the hinting instructions are decidedly less crucial for “PostScript-flavored” CFF-based fonts that are meant to be set at large sizes. Some new hinting tables may be helpful for a variable font format, but more experimentation would be in order to determine the issues.

    If Donald Knuth’s MetaFont was used as a variable font model, it could be even more efficient because it wouldn’t require data for multiple masters. Why not focus more on a parametric type system like that?

    Parametric type systems like MetaFont are brilliant, and indeed can be more efficient, but in my observation the design results they bear are decidedly less impressive or useful for quality typography.

    What about licensing? How would you pay for a variable font that can provide a range of stylistic variation?

    This is an interesting question, and one that I imagine would be approached differently depending on the foundry or distributor. One potential solution might be to license ranges of stylistic variation. So it would cost less to license a limited weight range from Light to Medium (300–500) than a wide gamut from Thin to Black (100–900).

    What if I don’t need or want these fancy-pants variable fonts? I’m fine with my old-school static fonts just the way they are!

    There are plenty of cases where variable fonts would be unnecessary and even undesirable. In those cases, nothing would stop you from using static fonts.

    Web designers are already horrible at formatting text. Do we really want to introduce more opportunities for bad design choices?

    People said similar things about digital typesetting on the Mac, mechanical typesetting on the Linotype, and indeed the whole practice of typography back in Gutenberg’s day. I’d rather advance the state of the art with some growing pains than avoid progress on the grounds of snobbery.

    Okay, I’m sold. What should I do now?

    Experiment with things like Andrew Johnson’s proof-of-concept demo. Read up on MutatorMath. Learn more about the inner workings of digital fonts. Get in touch with your favorite type foundries and tell them you’re interested in this kind of stuff. Then get ready for a future of responsive typography.

  • This week's sponsor: HipChat 

    Thanks to HipChat for sponsoring A List Apart this week. Learn how you can make work more productive with group chat, IM, file sharing, screen sharing, and more from HipChat.

  • The People are the Work 

    Not long ago at the Refresh Pittsburgh meetup, I saw my good friend Ben Callahan give his short talk called Creating Something Timeless. In his talk, he used examples ranging from the Miles Davis sextet to the giant sequoias to try to get at how we—as makers of things that seem innately ephemeral—might make things that stand the test of time.

    And that talk got me thinking.

    Very few of the web things I’ve made over the years are still in existence—at least not in their original state. The evolution and flux of these things is something I love about the web. It’s never finished; there’s always a chance to improve or pivot.

    And yet we all want to make something that lasts. So what could that be?

    For me, it’s not the things I make, but the experience of making them. Every project we’ve worked on at Bearded has informed the next one, building on the successes and failures of its predecessors. The people on the team are the vessels for that accumulated experience, and together we’re the engine that makes better and better work each time.

    From that perspective it’s not the project that’s the timeless work, it’s us. But it doesn’t stop there, either. It’s also our clients. When we do our jobs well, we leave our clients and their teams more knowledgeable and capable, more empowered to use the web to further the goals of their organization and meet the needs of their users. So how do we give our clients more power to—ultimately—help themselves?

    Not content (kənˈtent) with content (ˈkäntent)

    Back in 2008 (when we started Bearded), one of our differentiators was that we built every site on a CMS. At the time, many agencies had not-insignificant revenue streams derived from updating their clients’ site content on their behalf.

    But we didn’t want to do that work, and our clients didn’t want to pay for it. Building their site on a CMS and training them to use it was a natural solution. It solved both of our problems, recurring revenue be damned! It gave our clients power that they wanted and needed.

    And there are other things like this that gnaw at me. Like site support.

    Ask any web business owner what they do for post-launch site support, and you’re likely to get a number of different answers. Most of those answers, if we’re honest with ourselves, will have a thread of doubt in their tone. That’s because none of the available options feel super good.

    We’ll do it ourselves!

    For years at Bearded we did our own site support. When there were upgrades, feature changes, or (gasp!) bugs, we’d take care of it. Even for sites that had launched years ago.

    But this created a big problem for us. We were only six people, and only three of us could handle those sorts of development tasks. Those three people also had all the important duties of building the backend features for all our new client projects. Does the word bottleneck mean anything to you? Because, brother, it does to me!

    Not only that but, just like updating content, this was not work we enjoyed (nor was it work our clients liked paying for, but we’ll get to that later).

    We’ll let someone else do it!

    The next thing we did was find a development partner that specialized in site support. If you’re lucky enough to find a good shop like this (especially in your area) hang on to them, my friend! They can be invaluable.

    This situation is great, because it instantly relieved our bottleneck problem. But it also put us in a potentially awkward position, because it relied on someone else’s business to support our clients.

    If they started making decisions that I didn’t agree with, or they went out of business, I’d be in trouble and it could start affecting my client relationships. And without healthy client relationships, you’ve got nothing.

    But what else is there to do?

    We’ll empower our clients!

    For the last year or two, we’ve been doing something totally different. For most of our projects now, we’re not doing support—because we’re not building the whole site. Instead we’ve started working closely with our client’s internal teams, who build the site with us.

    We listen to them, pair with them, and train them. We bring them into our team, transfer every bit of knowledge we can throughout the whole project, and build the site together. At the end there’s no hand-off, because we’ve been handing off since day one. They don’t need site support because they know the site as well as we do, and can handle things themselves.

    It’s just like giving our clients control of their own content. We give them access to the tools they need, lend them our expertise, and give them the guidance they’ll need to make good decisions after we’re gone.

    At the end of it, we’ve probably built a website, but we’ve also done something more valuable: we’ve helped their team grow along with us. Just like us, they’re now better at what they do. They can take that knowledge and experience with them to their next projects, share that knowledge with other team members, and on, and on, and on.

    What we develop is not websites, it’s people. And if that’s not timeless work, what is?


  • Thoughtful Modularity 

    I spent most of the first week of December down at NASA’s Kennedy Space Center for the launch of Orion, NASA’s next-generation spacecraft. As part of NASA Social, I was lucky enough to get some behind-the-scenes tours, and to talk with scientists, engineers, astronauts, and even the Administrator himself.

    The day before launch, there was a two-hour event featuring the leaders of various NASA departments, with the discussion centered on Orion’s future missions—including the first (of hopefully many) crewed journeys to Mars. William Gerstenmaier, NASA’s Associate Administrator for Human Exploration and Operations, had some interesting comments about the technology that will get us there (55 minutes into the event):

    “Things will change over time. To think we know all the technology that will be in place and exactly how things will work, to try to project that 20 years in the future, that’s not a very effective approach. You need to be ready and if some new technology comes online or a new way of doing business is there, we’re ready to adapt, and we haven’t built an infrastructure that’s so rigid it can’t adapt and change to those pieces.”

    This is quite a shift in strategy for NASA. The Apollo and the Shuttle programs were riddled with rigidity. One engineer I talked with said that contractors received more than 12,000 requirements for the Shuttle’s development. Orion has just over 300—a clear move toward flexibility.

    It’s not only the unpredictable political minefield that NASA plays in pushing them to modularity, it’s the fact that the final pieces making a crewed mission to Mars possible are still in the works. If something revolutionary is learned about the habitability of Mars from rovers there over the next few years, NASA needs to be able to incorporate those findings into the plans.

    Most of our projects operate on a smaller scale than a mission to Mars, but there’s a lot we can learn from this approach. NASA isn’t just planning for modularity in terms of how things interface with each other, they’re planning for it at the core level of each component. Rather than stopping at a common docking mechanism (a common API, of sorts), they’re building rockets, landers, and habitation modules that can be modified as breakthroughs are made.

    In a lot of ways, we’re already doing this in our work, whether we realize it or not. The rising focus on design systems and pattern libraries shows that we’ve got a knack for breaking something down to its smallest components. By building up from those small components, we’re able to swap out anything, big or small, at any time. If a button style isn’t working, it can be changed painlessly. If the entire header needs to be reconsidered after user testing, it’s self-contained enough to not disturb the rest of the site.

    With the help of new-age content management systems like Craft, we’re able to be more modular by decoupling the front-end interface from the way data is stored and managed on the backend. That means that either side can be upgraded, rewritten, or changed entirely, without the other being dependent on it.

    Tim Berners-Lee has been talking about modularity as a central principle of the web for quite a while:

    We must design for new applications to be built on top of [the web]. There will be more modules to come, which we cannot imagine now.

    NASA’s strategies echo his thoughts on the web: build modularly for the future while realizing that the future is unknown. New discoveries will be made, new things will be built, and technology will improve.

    If you think authentication is about to undergo a major revolution with the rise of cheap biometrics, you may plan and build your system differently than assuming it will always be password based. The goal is to build in an open-ended way, so as things change and progress, those innovations can be implemented with ease.

    That’s not an impossible task, either. Just take a look at Amazon—they haven’t had a major, sitewide redesign in more than a decade, and the industry has changed in massive ways. I’m sure the behind-the-scenes infrastructure has changed, but users haven’t been aware of it. Amazon has been tweaking and iterating for years, improving their product to their customers’ satisfaction.

    The implementation details of our work will always be in flux, but the goals of our product will remain the same. Be modular in implementation so that what you build can reap the benefits of the future.


  • A Vision for Our Sass 

    At a recent CSS meetup, I asked, “Who uses Sass in their daily workflow?” The response was overwhelmingly positive; no longer reserved for pet projects and experiments, Sass is fast becoming the standard way for writing CSS.

    This is great news! Sass gives us a lot more power over complex, ever-growing stylesheets, including new features like variables, control directives, and mixins that the original CSS spec (intentionally) lacked. Sass is a stylesheet language that’s robust yet flexible enough to keep pace with us.

    Yet alongside the wide-scale adoption of Sass (which I applaud), I’ve observed a steady decline in the quality of outputted CSS (which I bemoan). It makes sense: Sass introduces a layer of abstraction between the author and the stylesheets. But we need a way to translate the web standards—that we fought so hard for—into this new environment. The problem is, the Sass specification is expanding so much that any set of standards would require constant revision. Instead, what we need is a charter—one that sits outside Sass, yet informs the way we code.

    To see a way forward, let’s first examine some trouble spots.

    The symptoms

    One well-documented abuse of Sass’s feature-set is the tendency to heavily nest our CSS selectors. Now don’t get me wrong, nesting is beneficial; it groups code together to make style management easier. However, deep nesting can be problematic.

    For one, it creates long selector strings, which are a performance hit:

    body #main .content .left-col .box .heading { font-size: 2em; }

    It can muck with specificity, forcing you to create subsequent selectors with greater specificity to override styles further up in the cascade—or, God forbid, resort to using !important:

    body #main .content .left-col .box .heading  [0,1,4,1]
    .box .heading  [0,0,2,0]

    Comparative specificity between two selectors.
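    Those bracketed values can be computed mechanically. As a rough sketch (handling only simple selectors separated by whitespace, ignoring pseudo-elements and combinators like `>`), the tuple is [inline styles, IDs, classes/attributes/pseudo-classes, elements]:

```javascript
// Rough specificity calculator for simple, whitespace-separated selectors:
// returns [inline, ids, classes, elements].
function specificity(selector) {
  var ids = 0, classes = 0, elements = 0;
  selector.trim().split(/\s+/).forEach(function (part) {
    ids += (part.match(/#[\w-]+/g) || []).length;
    // .class, [attr], and :pseudo-class all count at the same level
    classes += (part.match(/\.[\w-]+|\[[^\]]*\]|:[\w-]+/g) || []).length;
    // a leading element name counts once
    if (/^[a-zA-Z]/.test(part)) elements += 1;
  });
  return [0, ids, classes, elements];
}

specificity("body #main .content .left-col .box .heading"); // [0, 1, 4, 1]
specificity(".box .heading");                               // [0, 0, 2, 0]
```

Compared position by position, [0,1,4,1] beats [0,0,2,0] at the first differing slot (the ID), which is why the flatter selector can never override the nested one without help.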

    Last, nesting can reduce the portability and maintainability of styles, since selectors are tied to the HTML structure. If we wanted to repeat the heading style for a box that wasn’t in the left-col, we would need to write a separate rule to accomplish that.

    Complicated nesting is probably the biggest culprit in churning out CSS soup. Others include code duplication and tight coupling—and again, these are the results of poorly formed Sass. So, how can we learn to use Sass more judiciously?
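    To make the nesting trap concrete, here is a sketch in Sass (the selectors are the same ones from the long-selector example; the flat class name is an invented alternative):

```scss
// Deeply nested Sass: mirrors the HTML and compiles straight to
// the long, brittle selector shown above.
body {
  #main {
    .content {
      .left-col {
        .box {
          .heading { font-size: 2em; }
        }
      }
    }
  }
}

// Flatter alternative: one class, low specificity, portable to any
// box anywhere in the markup.
.box-heading { font-size: 2em; }
```

The nested version looks tidy in the source file, but everything it outputs is welded to one spot in the document tree.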

    Working toward a cure

    One option is to create rules that act as limits and rein in some of that power. For example, Mario Ricalde uses an Inception-inspired guideline for nesting: “Don’t go more than four levels deep.”

    Rules like this are especially helpful for newcomers, because they provide clear boundaries to work within. But few universal rules exist; the Sass spec is sprawling and growing (as I write this, Sass is at version 3.4.5). With each new release, more features are introduced, and with them more rope with which to hang ourselves. A rule set alone would be ineffective.

    We need a proactive, higher-level stance toward developing best practices rather than an emphasis on amassing individual rules. This could take the form of a:

    • Code standard, or guidelines for a specific programming language that recommend programming style, practices, and methods.
    • Framework, or a system of files and folders of standardized code, which can be used as the foundation of a website.
    • Style guide, or a living document of code, which details all the various elements and coded modules of your site or application.

    Each approach has distinct advantages:

    • Code standards provide a great way of unifying a team and improving maintainability across a large codebase (see Chris Coyier’s Sass guidelines).
    • Frameworks are both practical and flexible, offering the lowest barrier to entry and removing the burden of decision. As every seasoned front-end developer knows, even deciding on a CSS class name can become debilitating.
    • Style guides make the relationship between the code and the output explicit by illustrating each of the components within the system.

    Each also has its difficulties:

    • Code standards are unwieldy. They must be kept up-to-date and can become a barrier to entry for new or inexperienced users.
    • Frameworks tend to become bloated. Their flexibility comes at a cost.
    • Style guides suffer from being context-specific; they are unique to the brand they represent.

    Unfortunately, while these methods address the technical side of Sass, they don’t get to our real problem. Our difficulties with Sass don’t stem from the specification itself but from the way we choose to use it. Sass is, after all, a CSS preprocessor; our Sass problem, therefore, is one of process.

    So, what are we left with?

    Re-examining the patient

    Every job has its artifacts, but problems arise if we elevate these by-products above the final work. We must remember that Sass helps us construct our CSS, but it isn’t the end game. In fact, if the introduction of CSS variables is anything to go by, the CSS and Sass specs are beginning to converge, which means one day we may do away with Sass entirely.

    What we need, then, is a solution directed not at the code itself but at us as practitioners—something that provides technical guidelines as we write our Sass, but simultaneously lifts our gaze toward the future. We need a public declaration of intentions and objectives, or, in other words, a manifesto.

    Sass manifesto

    When I first discovered Sass, I developed some personal guidelines. Over time, they formalized into a manifesto that I could then use to evaluate new features and techniques—and whether they’d make sense for my workflow. This became particularly important as Sass grew and became more widely used within my team.

    My Sass manifesto is composed of six tenets, or articles, outlined below:

    1. Output over input
    2. Proximity over abstraction
    3. Understanding over brevity
    4. Consolidation over repetition
    5. Function over presentation
    6. Consistency over novelty

    It’s worth noting that while the particular application of each article may evolve as the specification advances, the articles themselves should remain unchanged. Let’s cover each in a little more depth.

    1. Output over input

    The quality and integrity of the generated CSS is of greater importance than the precompiled code.

    This is the tenet from which all the others hang. Remember that Sass is one step in the process toward our goal, delivering CSS files to the browser. This doesn’t mean the CSS has to be beautifully formatted or readable (this will never be the case if you’re following best practices and minifying your CSS), but you must keep performance at the forefront of your mind.

    When you adopt new features in the Sass spec, you should ask yourself, “What is the CSS output?” If in doubt, take a look under the hood—open the processed CSS. Developing a deeper understanding of the relationship between Sass and CSS will help you identify potential performance issues and structure your Sass accordingly.

    For example, using @extend targets every instance of the selector. The following Sass

    .box {
    	background: #eee;
    	border: 1px solid #ccc;
    
    	.heading {
    	  font-size: 2em;
    	}
    }
    
    .box2 {
    	@extend .box;
    	padding: 10px;
    }


    compiles to

    .box, .box2 {
      background: #eee;
      border: 1px solid #ccc;
    }
    .box .heading, .box2 .heading {
      font-size: 2em;
    }
    
    .box2 {
      padding: 10px;
    }

    As you can see, not only has .box2 inherited from .box, but .box2 has also inherited from the instances where .box is used in an ancestor selector. It’s a small example, but it shows how you can arrive at some unexpected results if you don’t understand the output of your Sass.
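
    If that grouping is unwanted, one common alternative (sketched here with an illustrative box-base name) is a mixin: the declarations are duplicated in the output, but each selector stays independent, and no .box2 .heading rule is ever generated:

    ```scss
    @mixin box-base {
      background: #eee;
      border: 1px solid #ccc;
    }

    .box {
      @include box-base;

      .heading { font-size: 2em; }
    }

    .box2 {
      @include box-base;
      padding: 10px;
    }
    ```

    The tradeoff is repetition in the compiled CSS, which is why checking the output matters either way.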

    2. Proximity over abstraction

    Projects should be portable without over-reliance on external dependencies.

    Anytime you use Sass, you’re introducing a dependency—the simplest installation of Sass depends on Ruby and the Sass gem to compile. But keep in mind that the more dependencies you introduce, the more you risk compromising one of Sass’s greatest benefits: the way it enables a large team to work on the same project without stepping on one another’s toes.

    For instance, along with the Sass gem you can install a host of extra packages to accomplish almost any task you can imagine. The most common library is Compass (maintained by Chris Eppstein, one of Sass’s core contributors), but you can also install gems for grid systems and frameworks such as Bootstrap, right down to gems that help with much smaller tasks like creating a color palette and adding shadows.

    These gems provide a set of prebuilt mixins that you can draw upon in your Sass files. Unlike the mixins you write inside your project files, a gem is written to your computer’s installation directory. Gems work out of the box, like Sass’s core functions, and the only reference to them in your project is an @include directive.

    Here’s where gems get tricky. Let’s return to the scenario where a team is contributing to the same project: one team member, whom we’ll call John, decides to install a gem to facilitate managing grids. He installs the gem, includes it in the project, and uses it in his files; meanwhile another team member—say, Mary—pulls down the latest version of the repository to change the fonts on the website. She downloads the files, runs the compiler, but suddenly gets an error. Since Mary last worked on the project, John has introduced an external dependency; before Mary can do her work, she must debug the error and download the correct gem.

    You see how this problem can be multiplied across a larger team. Add in the complexity of versioning and inter-gem dependencies, and things can get very hairy. Best practices exist to maintain consistent environments for Ruby projects by tracking and installing the exact gems and versions needed, but the simplest approach is to avoid using additional gems altogether.
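
    One conventional safeguard, assuming the team manages its Ruby gems with Bundler, is to pin exact versions in a Gemfile committed to the repository, so everyone compiles with identical tools (the versions below are illustrative):

    ```ruby
    # Gemfile — committed alongside the project so `bundle install`
    # gives every team member the same compiler and libraries
    source "https://rubygems.org"

    gem "sass", "3.4.5"
    gem "compass", "1.0.1"
    ```

    Running the compiler through `bundle exec` then guarantees the pinned versions are the ones actually used.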

    Disclaimer: I currently use the Compass library as I find its benefits outweigh the disadvantages. However, as the core Sass specification advances, I’m considering when to say goodbye to Compass.

    3. Understanding over brevity

    Write Sass code that is clearly structured. Always consider the developer who comes after you.

    Sass is capable of outputting super-compressed CSS, so you don’t need to be heavy-handed in optimizing your precompiled code. Further, unlike regular CSS comments, Sass’s inline comments (those starting with //) never appear in the final CSS.

    This is particularly helpful when documenting mixins, where the output isn’t always transparent:

    // Force overly long spans of text to truncate, e.g.:
    // @include truncate(100%);
    // Where $truncation-boundary is a measurement with units.
    
    @mixin truncate($truncation-boundary){
        max-width:$truncation-boundary;
        white-space:nowrap;
        overflow:hidden;
        text-overflow:ellipsis;
    }
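
    For instance, including the mixin on a hypothetical .page-title selector:

    ```scss
    .page-title { @include truncate(100%); }
    ```

    compiles to:

    ```css
    .page-title {
      max-width: 100%;
      white-space: nowrap;
      overflow: hidden;
      text-overflow: ellipsis;
    }
    ```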

    However, do consider which parts of your Sass will make it to the final CSS file.

    4. Consolidation over repetition

    Don’t Repeat Yourself. Recognize and codify repeating patterns.

    Before you start any project, it’s sensible to sit down and try to identify all the different modules in a design. This is the first step in writing object-oriented CSS. Inevitably some patterns won’t become apparent until you’ve written the same (or similar) line of CSS three or four times.

    As soon as you recognize these patterns, codify them in your Sass.

    Add variables for recurring values:

    $base-font-size: 16px;
    $gutter: 1.5em;

    Use placeholders for repeating visual styles:

    %dotted-border { border: 1px dotted #eee; }

    Write mixins where the pattern takes variables:

    //transparency for image features
    @mixin transparent($color, $alpha) {
      $rgba: rgba($color, $alpha);
      $ie-hex-str: ie-hex-str($rgba);
      background-color: transparent;
      background-color: $rgba;
      filter:progid:DXImageTransform.Microsoft.gradient(startColorstr=#{$ie-hex-str},endColorstr=#{$ie-hex-str});
      zoom: 1;
    }

    If you adopt this approach, you’ll notice that both your Sass files and resulting CSS will become smaller and more manageable.

    5. Function over presentation

    Choose naming conventions that focus on your HTML’s function and not its visual presentation.

    Sass variables make it incredibly easy to theme a website. However, too often I see code that looks like this:

    $red-color: #cc3939; //red
    $green-color: #2f6b49; //green

    Connecting your variables to their appearance might make sense in the moment. But if the design changes, and the red is replaced with another color, you end up with a mismatch between the variable name and its value.

    $red-color: #b32293; //magenta
    $green-color: #2f6b49; //green

    A better approach is to name these color variables based on their function on the site:

    $primary-color: #b32293; //magenta
    $secondary-color: #2f6b49; //green
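
    Taking this a step further (assuming Sass 3.3 or later, which introduced maps), related functional colors can be grouped in a single map and looked up with map-get; the $palette and .button names here are illustrative:

    ```scss
    $palette: (
      primary: #b32293,   // magenta
      secondary: #2f6b49  // green
    );

    .button {
      background: map-get($palette, primary);
    }
    ```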

    Presentational classes with placeholder selectors

    What happens when we can’t map a visual style to a functional class name? Say we have a website with two call-out boxes, “Contact” and “References.” The designer has styled both with a blue border and background. We want to maximize the flexibility of these boxes but minimize any redundant code.

    We could choose to chain the classes in our HTML, but this can become quite restrictive:

    <div class="contact-box blue-box">
    <div class="references-box blue-box">

    Remember, we want to focus on function over presentation. Fortunately, using Sass’s @extend directive together with a placeholder selector makes this a cinch:

    %blue-box {
    	background: #bac3d6;
    	border: 1px solid #3f2adf;
    }
    
    .contact-box {
    	@extend %blue-box;
    	...
    }
    .references-box {
    	@extend %blue-box;
    	...
    }

    This generates the following CSS. The %blue-box placeholder itself never appears in the output; only the selectors that extend it carry its styles forward.

    .contact-box,
    .references-box {
    	background: #bac3d6;
    	border: 1px solid #3f2adf;
    }

    This approach keeps presentational class names out of our HTML, but still lets us use them in our Sass files in a descriptive way. Trying to devise functional names for common styles can have us reaching for terms like base-box, which is far less meaningful here.

    6. Consistency over novelty

    Avoid introducing unnecessary changes to the processed CSS.

    If you’re keen to introduce Sass into your workflow but don’t have any new projects, you might wonder how best to use Sass inside a legacy codebase. Sass fully supports CSS, so initially it’s as simple as changing the extension from .css to .scss.

    Once you’ve made this move, it may be tempting to dive straight in and refactor all your files, separating them into partials, nesting your selectors, and introducing variables and mixins. But this can cause trouble down the line for anyone who is picking up your processed CSS. The refactoring may not have affected the display of anything on your website, but it has generated a completely different CSS file. And any changes can be extremely hard to isolate.

    Instead, the best way to switch to a Sass workflow is to update files as you go. If you need to change the navigation, separate that portion into its own partial before working on it. This will preserve the cascade and make it much easier to pinpoint any changes later.

    The prognosis

    I like to think of our current difficulties with Sass as growing pains. They’re symptoms of the adjustments we must make as we move to a new way of working. And an eventual cure does exist, as we mature in our understanding of Sass.

    It’s my vision that this manifesto will help us get our bearings as we travel along this path: use it, change it, or write your own—but start by focusing on how you write your code, not just what you write.

  • Live Font Interpolation on the Web 

    We all want to design great typographic experiences. We also want to serve users on an increasing range of devices and contexts. But today’s webfonts tie our responsive sites and applications to inflexible type that doesn’t scale. As a result, our users get poor reading experiences and longer loading times from additional font weights.

    As typographers, designers, and developers, we can solve this problem. But we’ll need to work together to make webfonts more systemized and context-aware. Live webfont interpolation—the modification of a font’s design in the browser—exists today and can serve as an inroad for using truly responsive typography.

    An introduction to font interpolation

    Traditional font interpolation is a process used by type designers to generate new intermediary fonts from a series of master fonts. Master fonts represent key archetypal designs across different points in a font family. By using math to automatically find the in-betweens of these points, type designers can derive additional font variants/weights from interpolation instead of designing each one manually. We can apply the same concept to our webfonts to serve different font variants for our users. For example, the H letter (H glyph) in this proof of concept (currently for desktop browsers) has light and heavy masters in order to interpolate a new font weight.

    An interpolated H glyph using 50 percent of the light weight and 50 percent of the black weight. There can be virtually any number of poles and axes linked to combinations of properties, but in this example everything is being interpolated at once between two poles.

    Normally these interpolated type designs end up being exported as separate fonts. For example, the TheSans type family contains individual font files for Extra Light, Light, Semi Light, Plain, SemiBold, Bold, Extra Bold, and Black weights generated using interpolation.

    Individual font weights generated from interpolation from the TheSans type family.

    Interpolation can alter more than just font weight. It also allows us to change the fundamental structure of a font’s glyphs. Things like serifs (or lack thereof), stroke contrast/direction, and character proportions can all be changed with the right master fonts.

    A Noordzij cube showing an interpolation space with multiple poles and axes.

    Although generating fonts with standard interpolation gives us a great deal of flexibility, webfont files remain static in their browser environment. Because of this, we’ll need something more to work with the web’s responsiveness.

    Web typography’s medium

    Type is tied to its medium. Both movable type and phototypesetting methods influenced the way that type was designed and set in their time. Today, the inherent responsiveness of the web necessitates flexible elements and relative units—both of which are used when setting type. Media queries are used to make more significant adjustments at different breakpoints.

    An approximation of typical responsive design breakpoints.

    However, fonts are treated as another resource that needs to be loaded, instead of a living, integral part of a responsive design. Changing font styles and swapping out font weights with media queries represent the same design compromises inherent in breakpoints.

    Breakpoints set by media queries reflect best-case design tradeoffs made at a few key moments, like collapsing the navigation under a menu icon. Likewise, siloed font files reflect best-case design tradeoffs of their own: there’s no font in between TheMix Light and TheSans SemiLight.

    Enter live webfont interpolation

    Live webfont interpolation just means interpolating a font on the fly inside the browser instead of being exported as a separate file resource. By doing this, our fonts themselves can respond to their context. Because type reflows and is partially independent of a responsive layout, there’s less of a need to set abrupt points of change. Fonts can adhere to bending points—not just breaking points—to adapt type to the design.

    Live interpolation doesn’t have to adhere to any specific font weight or design.

    Precise typographic control

    With live font interpolation, we can bring the same level of finesse to our sites and applications that type designers do. Just as we take different devices into account when designing, type designers consider how type communicates and performs at small sizes, low screen resolutions, large displays, economical body copy, and everything in between. These considerations are largely dependent on the typeface’s anatomy, which requires live font interpolation to be changed in the browser. Properties like stroke weight and contrast, counter size, x-height, and character proportions all affect how users read. These properties are typically balanced across a type family. For example, the JAF Lapture family includes separate designs for text, display, subheads, and captions. Live font interpolation allows a single font to fit any specific role. The same font can be optimized for captions set at .8em, body text set at 1.2em, or H1s set at 4.8em in a light color.

    JAF Lapture Display (top) and JAF Lapture Text (bottom). Set as display type at 40 pixels, rendered on Chrome 38. Note how the display version uses thinner stroke weights and more delicate features that support its sharp, authoritative character without becoming too heavy at larger sizes. (For the best comparison, view live type on your own device and browser.)

    JAF Lapture Text. Set as body copy at 16 pixels, rendered on Chrome. Note how features like the increased character width, thicker stroke weights, and shorter ascenders and descenders make the text version more appropriate for smaller body copy set in paragraph blocks.

    JAF Lapture Display. Set as body copy at 16 pixels, rendered on Chrome.

    Live font interpolation also allows precise size-specific adjustments for the different distances at which a reader perceives type. Finer typographic details can generally be removed at sizes where the reader won’t perceive them—like on far-away billboards, or in captions and disclaimers set at small sizes.

    Adaptive relationships

    Live font interpolation’s context-awareness builds inherent flexibility into the font’s design. A font’s legibility and readability adjustments can be linked to accessibility options. People with low vision who increase the default text size or zoom the browser can get type optimized for them. Fonts can start to respond to combinations of factors like viewport size, screen resolution, ambient light, screen brightness, and viewing distance. Live font interpolation offers us the ability to extend great reading experiences to everyone, regardless of how their context changes.

    Live font interpolation on the web today

    While font interpolation can be done with images or canvas, these approaches don’t allow text to be selectable, accessible via screen readers, or crawlable by search engines. SVG fonts offer accessible type manipulation, but they currently miss out on the properties that make a font robust: hinting and OpenType tables with language support, ligatures, stylistic alternates, and small caps. An SVG OpenType spec exists, but still suffers from limited browser support.

    Unlike SVG files, which are made of easily modifiable XML, font file formats (ttf, otf, woff2, etc.) are compiled as binary files, complicating the process of making live changes. Sets of information describing a font are stored in tables. These tables can range from things like a head table containing global settings for the font to a name table holding author’s notes. Different font file formats contain different sets of information. For example, the OpenType font format, a superset of TrueType, contains additional tables supporting more features and controls (per Microsoft’s OpenType spec):

    • cmap: Character to glyph mapping
    • head: Font header
    • hhea: Horizontal header
    • hmtx: Horizontal metrics
    • maxp: Maximum profile
    • name: Naming table
    • OS/2: OS/2 and Windows-specific metrics
    • post: PostScript information

    For live webfont interpolation, we need a web version of something like ttx, a tool for converting font files into a format we can read and parse.

    Accessing font tables

    Projects like jsfont and opentype.js allow us to easily access and modify font tables in the browser. Much like a game of connect-the-dots, each glyph (the glyf table in OpenType) is made up of a series of points positioned on an x-y grid.

    A series of numbered points on an H glyph. The first x-y coordinate set determines where the first point is placed on the glyph’s grid and is relative to the grid itself. After the first point, all points are relative to the point right before it. Measurements are set in font design units.

    Interpolation involves the modification of a glyph to fall somewhere between master font poles—similar to the crossfading of audio tracks. In order to make changes to glyphs on the web with live webfonts, we need to compare and move individual points.

    The first points for the light and heavy H glyph have different x coordinates, so they can be interpolated.

    Interpolating a glyph via coordinates is essentially a matter of averaging points. More robust methods exist, but aren’t available for the web yet.
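
    As a sketch of that averaging (plain JavaScript, independent of any particular font library), interpolating two masters’ point lists is a weighted average of each pair of coordinates:

    ```javascript
    // Interpolate between two master glyphs' outlines.
    // Each outline is an array of [x, y] points in font design units;
    // t = 0 yields the first (light) master, t = 1 the second (heavy) one.
    function interpolateGlyph(lightPoints, heavyPoints, t) {
      if (lightPoints.length !== heavyPoints.length) {
        throw new Error("Masters need matching point counts to interpolate");
      }
      return lightPoints.map(function (point, i) {
        var other = heavyPoints[i];
        return [
          point[0] + (other[0] - point[0]) * t, // x
          point[1] + (other[1] - point[1]) * t  // y
        ];
      });
    }

    // A 50 percent blend of two toy masters:
    var medium = interpolateGlyph([[10, 0], [20, 700]], [[0, 0], [60, 700]], 0.5);
    // medium is [[5, 0], [40, 700]]
    ```

    Note that real masters must be point-compatible (the same number of points in the same order), which is why the masters of an interpolable family are drawn to match.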

    Other glyph-related properties (like xMin and xMax) also must be interpolated to ensure the glyph’s bounding box is large enough to show the whole glyph. Additionally, padding—or bearings, in font terminology—can be added to position a glyph in its bounding box (the leftSideBearing and advanceWidth properties). This becomes important when considering the typeface’s larger system. Any combination of glyphs can end up adjacent to each other, so changes must be made with their relationship to the typeface’s system as a whole in mind.

    Glyph properties. Both xMin/xMax and advanceWidth must be scaled in addition to the glyph’s coordinate points.

    Doing it responsibly

    Our job is to give users the best experience possible—whether they’re viewing the design on a low-end mobile device, a high-resolution laptop, or distant digital signage. Both poorly selected and slow-loading fonts hinder the reading experience. With CSS @font-face as a baseline, fonts can be progressively enhanced with interpolation where appropriate. Users on less capable devices and browsers are best served with standard @font-face fonts.

    After the first interpolation and render, we can set a series of thresholds where re-renders are triggered, to avoid constant recalculations for insignificant changes (like every single change in width as the browser is resized). Element queries are a natural fit here (pun intended) because they’re based at the module level, which is where type often lives within layouts. Because information for interpolation is stored with JavaScript, there’s no need to load an entirely different font—just the data required for interpolation. Task runners can also save this interpolation data in JavaScript during the website or application build process, and caching can be used to avoid font recalculations when a user returns to a view a second time.
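
    A minimal sketch of such thresholding (the function names and the 100px step are illustrative assumptions) quantizes the measured width so a re-render only fires when a boundary is crossed:

    ```javascript
    // Snap a measured width to a coarse step so the font is only
    // re-interpolated when the element crosses a threshold,
    // not on every pixel of a resize.
    function interpolationStep(widthPx, stepPx) {
      return Math.round(widthPx / stepPx) * stepPx;
    }

    var lastStep = null;

    // Returns true when a re-interpolation/re-render should be triggered.
    function maybeReinterpolate(widthPx) {
      var step = interpolationStep(widthPx, 100); // 100px thresholds (assumption)
      if (step === lastStep) {
        return false; // insignificant change; keep the current rendering
      }
      lastStep = step;
      return true;
    }
    ```

    Resizing from 420px to 430px would not trigger a second render (both snap to the 400px step), while resizing on to 460px would.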

    Another challenge is rendering interpolated type quickly and smoothly. Transitioning in an interpolated font lined up with the original can minimize the visual change. Other techniques, like loading JavaScript asynchronously, or just caching the font for next time if the browser cannot load the font fast enough, could also improve perceived performance.

    As noted by Nick Sherman, all these techniques illustrate the need for a standardized font format that wraps everything up into a single sustainable solution. Modifying live files with JavaScript serves only as an inroad for future font formats that can adapt to the widely varied conditions they’re subjected to.

    Fonts that interpolate well

    Like responsive design, font interpolation requires considerations for the design at both extremes, as well as everything in the middle. Finch—the typeface in these examples—lends itself well to interpolation. David Jonathan Ross, Finch’s designer, explains:

    Interpolation is easiest when letter structure, contrast, and shapes stay relatively consistent across a family. Some typeface designs (like Finch) lend themselves well to that approach, and can get by interpolating between two extremes. However, other designs need more care and attention when traversing axes like weight or width. For example, very high-contrast or low-contrast designs often require separately-drawn poles between the extremes to help maintain the relationship between thick and thin, especially as certain elements are forced to get thin, such as the crossbar of the lowercase ’e’. Additionally, some designs get so extreme that letter shape is forced to change, such as replacing a decorative cursive form of lowercase ’k’ with a less-confusing one at text sizes, or omitting the dollar sign’s bar in the heaviest weights.

    Finch’s consistency across weights allows it to avoid a complex interpolation space—there’s no need for additional master fonts or rules to make intermediate changes between two extremes.

    Glyphs also don’t have to scale linearly across an axis’s delta. Guidelines like Lucas De Groot’s interpolation theory help us increase the contrast between near-middle designs, which may appear too similar to the user.

    A call to responsive typography

    We already have the tools to make this happen. For example, jsfont loads a font file and uses the DataView API to create a modifiable font object that it then embeds through CSS @font-face. The newer project opentype.js has active contributors and a robust API for modifying glyphs.

    As font designers, typographers, designers, and developers, the only way to take advantage of responsive typography on the web is to work together and make sure it’s done beautifully and responsibly. In addition to implementing live webfont interpolation in your projects, you can get involved in the discussion, contribute to projects like opentype.js, and let type designers know there’s a demand for interpolatable fonts.

  • Nishant Kothary on the Human Web: Logically Speaking 

    Whether you’re arguing for a design decision, or making the case for hiring another developer for your team, the advice I’ve heard over and over is that if you use logic (backed by user research or other data), you will prevail. I’ve rarely found that to be true in the real world, and that’s what I want to talk about today.

    But first, some math. You probably recognize this equation:

    result = target ÷ context

    It was introduced by Ethan in his 2009 article, Fluid Grids, and laid the foundation for the movement we now know as Responsive Web Design.

    If you remember your high-school algebra (or you’re into math), you’ll recognize Ethan’s equation as linear:

    y = m × x + b

    where y = result, m = target, x = 1/context, and b = 0
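
    As a quick worked example (the pixel values are illustrative, not from the column), the formula turns a fixed design measurement into a percentage:

    ```javascript
    // Ethan's fluid-grid formula: result = target / context,
    // scaled to a percentage for use in CSS widths.
    function fluidWidth(targetPx, contextPx) {
      return (targetPx / contextPx) * 100;
    }

    var result = fluidWidth(300, 960); // a 300px column in a 960px container
    // result is 31.25, i.e. width: 31.25%;
    ```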

    It’s hard to overstate the applications of linear equations in computing. They form the basis of linear algebra, without which we wouldn’t be reading this column on a computer screen. Actually, the computer wouldn’t even exist.

    Yet the elegant little nuggets of logic that are linear equations only work when we take certain mathematical concepts for granted.

    For instance, that addition is commutative:

    a + b = b + a

    Or, that multiplication is associative:

    ( a × b ) × c = a × ( b × c )

    Or that most profound algebraic axiom, reflexivity:

    a = a

    Without these foundational laws of mathematics and logic, linear equations are about as useful as a dog taking a selfie. Not quite as entertaining, though.

    Reflexive dog is reflexive.

    What’s truly important to realize is that we all share precisely the same concepts related to mathematics. In the universe of mathematics, if you have an a and I have an a, they are both exactly the same and they are equal to each other. Thanks to this guarantee, planes can fly, iPhones can ring, and of course, websites can respond (browser inconsistencies notwithstanding).

    What we forget is that the certainty that “one thing always means the same thing no matter what” disappears almost entirely in the real world.

    And that’s why, like a linear equation in a universe without reflexivity, arguments backed only by logic tend to land on deaf ears in the real world, where each one of us is governed by our own unique and personal laws of logic; where the speaker’s a quite literally can be the listener’s wtf.

    This is not to say that you should forgo logic in making your case. On the contrary, base your case on it. But if you’re not incorporating the most essential element of effective persuasion—an understanding of the other person’s universe, no matter how illogical it may seem to you—don’t be surprised when your case falls flat.

    Ironically, that’s the only logical outcome.

  • Pinpointing Expectations 

    In my work as a front-end developer, I’ve come to realize that expectations, and how you handle them, are one of the most integral parts of a project. Expectations are tricky things, especially because we don’t talk about them very much.

    Somehow, we always expect other people to just know what we’re thinking. Expectations have a tendency to shift and change, too. Sometimes they change during the course of a project because of knowledge you gain as you learn, research, and work. Other times an outside influence changes (say, a competitor comes out with a new feature or product), and the goals and expectations of your project shift with it.

    Not talking about expectations causes a lot of headaches throughout a project. We aren’t mind readers, but clients and colleagues often expect us to be. Even when expectations aren’t articulated, there’s often frustration when you don’t meet them. This is why showing your work as often as possible and talking about it as you go can be a helpful way to make sure things are living up to expectations.

    So how do we handle this? We have to try as hard as we can to draw out the expectations at the beginning, learning what’s expected so that we can be prepared to meet those goals. We also have to check in throughout the project to see if things have changed.

    Recently, I was on a project that ran over by several months, dragging on longer than I and, I think, the client expected. I was getting a bit antsy. When would we wrap up? What was going on?

    As a freelancer, my schedule is important and things that throw it out of whack are hard on me. Sticking up for myself isn’t always easy, but the client’s schedule changed over the course of the project and it was my job to figure out how to make the project end successfully. I did the email thing, I asked all the questions, and frankly, I pushed a bit. After several emails, and some explanations on both sides, things were sorted out in a way that worked for everyone.

    This wasn’t a huge issue, but it could have grown larger if not acknowledged, talked about, and handled. Often it’s the small issues that can snowball into bigger ones down the road, so handling them early on saves everyone a lot of grief. Below, I go into more detail on how to get a handle on expectations early so issues either don’t come up, or they don’t blow up into something unmanageable.

    Managing expectations

    At the beginning of the project I ask for a detailed scope. The goal of this is to have everyone spell out what the end of the project looks like. When I’m done with my work, what will that work look like, what will the final project consist of? Ultimately, what is my deliverable?

    If the scope of a project is tricky to define, I ask a lot of questions to get us there:

    • What is the goal of the project?
    • What do you hope I’ll have done at the end?
    • How will we know it’s done?
    • How often will we meet to discuss the project when it’s in process?
    • Do you have a workflow you prefer for projects?
    • Are there milestones or midpoints along the way, and what are they?
    • What is the design process and how does the development team fit into that?
    • How finalized do designs need to be before starting to work in code?
    • Do you iterate and design in code, or not?

    These are questions I ask of my clients, but they can also be useful discussion starters when working on new projects within teams.

    As a front-end developer, my final deliverable is often a template or page of a website, finished and ready for launch or integration. It could also be a report on ways to improve CSS for performance and maintainability, or a style guide and a cleaned-up codebase to show how the guide helped trim down the file sizes. Getting not only the final deliverable established, but also the process for getting there, helps everyone know not only what will be done, but how it will get done.

    Since I write code, I also make sure I know about the coding standards for the shared repo. I want to make sure I write, test, and do anything else the client expects, so that my code conforms to their standards when the project is over.

    When the expectations are unrealistic given the limits of code and timeline, I’m honest about limitations. We can do a lot with code, but we can’t do everything. Also, sometimes requests may be bad for accessibility or usability, so I’m not afraid to speak up and voice this to the team.

    If the project is longer than a week or two, I try very hard to send updates, making sure I’m communicating where we are with regard to the expectations outlined in the scope and contract. Often, a regular call or video chat will do. Should I start to get the feeling that things have changed (you know, that awkward email exchange or tense video call), then it’s my job to ask about it. To have successful projects we have to be willing to have the hard conversations. Sometimes a quick email asking if everything is OK is enough; other times it takes another phone call or two to sort through things.

    I’m the first to admit it: some of this is hard. I sit at home in my office and worry at times. But whenever I’ve taken the bull by the horns and just asked what was going on, it’s always been worth it. Many times it proved to be something small, but other times it meant a course change for the project which saved time and effort on everyone’s part.

    To avoid small issues snowballing or larger issues cropping up, have a good plan at the beginning of every project for how to handle expectations. You need to first establish what they are by asking a lot of questions, even the obvious ones, and then make sure you communicate frequently along the way. Hopefully things won’t change too much, but you’ll be ready to deal with them when they do.

  • The Core Model: Links and Resources 

    My recent article on the core model was an attempt to sum up two things that I could go on about forever. The Norwegian Cancer Society (NCS) redesign project started in January 2012, and we’re still working together. The core model was created by Are Halland in 2006, and we’re still working on that too! In other words, there is a lot more to say both about that project and the model.

    Putting the core model to use in your own project

    The only thing you really need to use the core model is a pen and paper. To get started, download the core model worksheets (PDF) or check out three other examples from case studies using the core model.

    There’ll also be two core model workshops held in the U.S. this year, one at the IA Summit and one at Confab Intensive.

    Using the core model in the day-to-day editorial process

    The article focused on using the core model in the design process, but you can also use the principles from the core model in your editorial process.

    The editorial team behind the Norwegian Cancer Society is hands down the most impressive editorial team I have ever met. A team that actually reviews all of their content every three months? Unheard of! But these brilliant (and endearing) people get it done.

    To learn more about how they work, check out this in-depth case study on content governance at the NCS and this recent interview about their editorial process. This presentation also has some more details about their content governance.

    Form design and mobile-first success

    Some of the results in the NCS case study are, to be honest, more about form design than the core model per se. Beate Sørum, previously digital fundraiser at the NCS, now independent, has written a three-part blog series about how we worked with digital fundraising at the NCS.

    Also, check out our presentation with best practice advice for digital fundraisers presented at the Institute of Fundraising’s National Convention and this presentation from Responsive Day Out 2, focusing on the mobile results.

    How the core model came about

    The core model has similarities with several other approaches and deliverables, for instance page description diagrams, and you could look at the core model as just one variant of page description diagrams. I believe the difference is that the core model is more than a deliverable; it’s also an approach.

    To learn more about the thinking behind the core model, you should definitely view Are Halland’s presentation from the IA Summit in 2007. Even 8 years later, it’s a true delight. It begins with the seven deadly sins of information architecture, which are, unfortunately, still relevant!

    Even more questions?

    The redesign of the Norwegian Cancer Society was a great team effort. If you have more questions about the NCS or the core model, do not hesitate to ask any of these wonderful people in the comments below, or on Twitter:

    • Beate Sørum, previously digital fundraiser at NCS, now independent: @BeateSorum
    • Marte Gråberg, web editor at the NCS: @MarteGraberg
    • Monica Solheim Slind, webmaster at the NCS: @SolheimSlind
    • Wilhelm Joys Andersen, front-end developer: @WilhelmJA
    • Thord Veseth Foss, graphic designer: @ThordFoss
    • Eirik Hafver Rønjum, content strategist: @EirikHafver
    • Are Halland, content strategist and creator of the core model: @AreGH


  • Rachel Andrew on the Business of Web Dev: The Challenge for the Tiny Global Business 

    We track various metrics for our product Perch. One of those is the percentage of sales to the UK (where we’re based) versus the rest of Europe and the rest of the world. At the time of writing, about 50 percent of license sales are to the UK and 20 percent are to non-UK European countries, with the remaining 30 percent to the rest of the world. Almost half of our business is outside of our own country. We are an export business.

    In the traditional business paradigm, companies begin to export after a period of growth that has allowed them to add resources for trading on the international market. They do well in their home market, then start to explore outside that region, increasing their turnover by selling overseas. For Perch, some of our very first customers were in America; we were an export business from Day 1. From cultural to legal and tax issues, exporting raises challenges that the majority of tiny businesses are not well placed to deal with. As governments scramble to create legislation for this new global marketplace, I am afraid that laws created to prevent tax avoidance by multinationals will force small traders directly into their arms.

    When tax is very taxing

    At one point HM Revenue and Customs in the UK ran adverts with the slogan, “Tax doesn’t have to be taxing.” When you run an export business, tax can be very taxing indeed. You may already be aware of the changes in VAT legislation in Europe, which have implications for digital businesses worldwide. Worse news is that other countries are looking at what the European Union (EU) is doing and considering implementing their own regimes of adding tax on digital goods and services sold to their citizens.

    As of January 1, our systems have to be able to do the following for every purchase of our software:

    • If the customer is in the UK, sell the product with VAT added at 20 percent.
    • If the customer is in any other EU country but gives us a VAT number (which identifies that they are registered for VAT), validate the VAT number and allow them to buy without adding VAT.
    • If the customer is in another EU country but does not have a VAT number, treat them as a consumer and charge them VAT at the rate in their country. We need to make sure we know which country they are resident in to charge the right amount of VAT, as we then have to pay that VAT via a central system to the correct EU Member State.
    • If the customer is outside the EU then VAT doesn’t apply, so we don’t add VAT to the price of the product.
    • We also have to store two pieces of non-contradictory evidence as to where our customer is located. In the event of an audit we have to be able to prove we charged the right amount of VAT and paid it to the right country!
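    The branching rules above can be sketched as a small function. This is a minimal, hypothetical illustration: the country codes and rates are placeholders, and a real system would also validate the VAT number (e.g., against the EU’s VIES service) and record the location evidence.

```python
from typing import Optional

# Hypothetical sketch of the VAT decision rules for a UK seller.
# Rates and country list are abbreviated placeholders, not tax advice.
EU_VAT_RATES = {"GB": 0.20, "DE": 0.19, "FR": 0.20, "IE": 0.23}

def vat_rate(country: str, vat_number: Optional[str] = None) -> float:
    """Return the VAT rate to apply to one sale."""
    if country == "GB":
        return EU_VAT_RATES["GB"]     # home market: always add UK VAT at 20%
    if country in EU_VAT_RATES:
        if vat_number:                # other EU country, validated VAT number: no VAT
            return 0.0
        return EU_VAT_RATES[country]  # EU consumer: charge their country's rate
    return 0.0                        # outside the EU: VAT doesn't apply

print(vat_rate("GB"))                 # 0.2
print(vat_rate("DE"))                 # 0.19
print(vat_rate("DE", "DE123456789"))  # 0.0
print(vat_rate("US"))                 # 0.0
```

    Even in this toy form, the logic shows why checkout systems suddenly needed to know the customer’s country before they could display a final price.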

    Many digital micro-businesses have found that their small business accountants are ill-equipped to deal with the complexities of these VAT rules. We end up being more expert in international taxation issues than the people who are supposed to be advising us!

    Multiple currencies

    Until recently, if you wanted to take payments in multiple currencies, your only option other than PayPal was to have a multi-currency merchant account, plus a payment processor that could cope with taking payment in your chosen currencies. With the advent of Stripe this is getting easier and less expensive to do. That said, you still tend to swallow a fairly expensive exchange rate. Larger companies with bank accounts in each currency are able to maintain balances in the currencies they accept and wait to transfer money at favorable rates. This usually isn’t feasible for the tiny business.

    Even with accepting currency becoming easier, dealing with these different currencies through your accounting system adds extra complexity. Large companies have accounting departments to deal with these issues. In our case we lean heavily on our accounting software Xero to help us do the various calculations and keep track of exchange rates.

    Our laws, your expectations

    As a UK company, we have to comply with the laws of the United Kingdom. Our software license is drawn up in respect of those laws. Our UK lawyer has checked that we are doing all of the right things in terms of the UK and European legislation we have to comply with. We try very hard to use plain English in any legal documents. However, we sometimes get questions from customers outside of the UK who are puzzled by some of the wording.

    In addition, we sometimes have to comply with things that baffle and annoy customers from elsewhere in the world. If you have been asked whether you agree to have cookies placed on your computer recently, it’s likely that the website was in Europe and trying to comply with the “cookie law.” This privacy legislation requires websites to get consent from visitors to store or retrieve information from cookies.

    Information that you are required legally, or for taxation purposes, to collect in your own country might seem like an invasion of privacy in another. As a small company you are left to navigate these waters yourself. It’s time-consuming and not straightforward unless you happen to have a background in international business.

    When it comes to business insurance for digital products and services, insurers tend to assume small means “this country only.” Policies designed for global sales are offered to multinational companies, and are not suited to tiny single-founder operations.

    Our part to play in the future

    Something that has become very clear as I have tried to help people navigate the EU VAT issue is that the people writing the legislation are out of touch with the reality of modern, digital business. For the VAT situation, there was an assumption that small businesses were all selling via a larger company—that all independent publishers were selling ebooks via Amazon, for example. It was taken for granted that these companies, not the small players, would be handling the complexity.

    Due to the difficulties in dealing with these international issues, I believe that many founders avoid them until they come knocking on the door. However, I would encourage people to engage with the tricky issues of being a truly global, small business. Only by pushing for legislation that works with our businesses, insurance that makes sense, and banking and payment systems that take into account modern business practice will we be able to ensure our future as independent businesses.

    If legislation is passed that makes it ever-harder for us to run these start-ups and small businesses, you can be sure that the big players will be happy to step in and offer us a “solution.” That solution is likely to leave us placing much of our profit—and our ability to make business decisions—in the hands of huge enterprises. As the founder of an independent business, as a person who wants to use and support independent businesses—that is a path I fear heading down.

  • The Core Model: Designing Inside Out for Better Results 

    If you’ve worked on a website design with a large team or client, chances are good you’ve spent some time debating (arguing?) with each other about what the homepage should look like, or which department gets to be in the top-level navigation—perhaps forgetting that many of the site’s visitors might never even see the homepage if they land there via search.

    Nobody comes to your website just to look at your homepage or navigate your information architecture. People come because they want to get something done.

    All too often, we blame the client for falling short on user experience. They don’t get that the important thing is that the information architecture is easy to understand—something you will never achieve if every single department gets to have their own button on the homepage.

    It’s about time we take more of that blame ourselves. Usually, no matter how much user research we do, or how meticulously we’ve treated the digital strategy, we start out in our interactions with clients by mapping the information architecture or sketching homepages. It’s no wonder clients believe these are the pillars of the entire website, if these are the first things we show them. In fact, very few users actually meet their goals right on the homepage of any given website, so it follows that very few organizations will reach their own objectives—much less their users’—by focusing only on the homepage.

    Long before “mobile first” or “content-driven design” were even buzzwords, information architect Are Halland tried to solve this conundrum by introducing the core model, which he presented at IA Summit 2007. The presentation is still highly enjoyable and relevant, even seven years later. In short, websites must be designed from the inside out, with primary focus on the core tasks its users need to accomplish.

    When we used the core model technique at Netlife Research to begin a mobile-first, content-driven website redesign with the Norwegian Cancer Society (NCS), we spent less time quibbling over the homepage contents, and more time trying to figure out how we could actually help the users and the NCS get what they needed out of the website—with great results.

    The core model ensures that we’re thinking about user needs all the way through the website design process, thinking holistically about goals instead of hierarchically; instead of demanding “Where do you belong on the NCS website?” of our visitors, the core model prompts us to ask, more generously, “How can the NCS help you?”

    A different starting point

    Using the core model, we start the design process by mapping out all the content we have in order to find the pages with a clear overlap between objectives and user tasks.

    To use the core model, you need:

    • Business objectives: Prioritized, measurable objectives and sub-objectives. What does the organization want to achieve?
    • User tasks: Actual, researched, prioritized user tasks. What is it that people want to get done? (We usually conduct top task surveys to identify the user tasks, which is a great tool if you want to align the organization.)

    A good review of a website’s existing content can turn up some dusty corners that need clearing out. Typically, a website might have a lot of content that doesn’t help users meet their goals—such as press release archives and lengthy vision statements. A great deal of this content can usually be removed, simplified, or merged in some way.

    When you have set aside the nonessential content, you are left with cores: pages or workflows whose content sits squarely in the overlap between business objectives and user tasks.

    An example from the NCS is their page dedicated to information about lung cancer. Our user research identified a huge need for qualified and authoritative information on the many forms of cancer—and seeing that one of the objectives for the NCS is to educate Norwegians about cancer, this is a clear match of the users’ needs with the organization’s larger objective.

    A Venn diagram showing the intersection of user needs and business goals.
    Which pages meet both business goals and user needs?

    But what happens with pages like “Donate”? Our research showed that users did not typically search the site for information related to fundraising, but being able to receive donations online is essential if the NCS is going to raise more money for cancer research. This is where the core model truly shines: if you create good cores, you’ll also be able to create good pathways to other, less-requested pages on your website, regardless of where they are placed in the information architecture. A core page should never be a blind alley.

    Who is the core model for?

    The core model is first and foremost a thinking tool. It helps the content strategist identify the most important pages on the site. It helps the UX designer identify which modules she needs on a page. It helps the graphic designer know which are the most important elements to emphasize in the design. It helps include clients or stakeholders who are less web-savvy in your project strategy. It helps the copywriters and editors leave silo thinking behind and create better content.

    And to get all these different disciplines to start thinking collaboratively, we’ve found success with organizing team workshops that introduce core model thinking to the whole group.

    By the end of the workshop, the group will have a common understanding of user needs, business goals, and how different pages should be connected. Additionally, you have worksheets where stakeholders have given you a prioritized list of what kind of content and modules they believe are the most important on the page they have worked on, when considering both user needs and business objectives.

    With a prioritized list of what kind of content and modules needs to be on the most important pages, it’s a lot easier for the team to get to work, regardless of whether they are UX designers, graphic designers, or content strategists. We start by creating the core pages; the homepage is usually the last page we design. (How can you design the wrapping before you know what’s inside?)

    How to do a core model workshop

    The core model workshop outlined here is the first stage of a bigger design process, which might look a little different from one team to the next. But when you work with clients through these initial worksheets, the end result will be a team that’s excited to see the new website take shape, and that agrees on which content is truly important.

    Doing a core workshop is easy and low-tech. All you need is:

    • Handouts summarizing researched user tasks and identified business objectives (see above)
    • Handouts with the core model (e.g., A3 paper size) (to fill out)
    • Markers and Post-Its
    • Room with a projector
    • 3-4 hours per workshop
    • 1-3 participants from your team (e.g., designers, UX, content, developers, and so forth)
    • 6-14 stakeholders from relevant fields or departments in the organization
    • Snacks and lots of coffee!
    Core model handout with fields to fill out: core page name, business goals, user tasks, inward paths, core content, and forward path.
    The core model handout.
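    For teams that later track these worksheets digitally, the handout’s fields map naturally onto a simple record. This sketch is hypothetical, not part of the method itself; the example values echo the lung cancer core discussed below.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record mirroring the fields of the core model handout.
@dataclass
class CorePage:
    name: str
    business_goals: List[str]
    user_tasks: List[str]
    inward_paths: List[str] = field(default_factory=list)
    core_content: List[str] = field(default_factory=list)
    forward_paths: List[str] = field(default_factory=list)

core = CorePage(
    name="Lung cancer",
    business_goals=["Increasing knowledge about cancer and prevention"],
    user_tasks=["Learn about different forms of cancer",
                "Identify symptoms of cancer"],
)
# Later workshop steps fill in the remaining fields:
core.inward_paths = ["Googling lung cancer", "Link in a printed brochure"]
core.forward_paths = ["Contacting the cancer help line"]
```

    The point is simply that each core page carries the same six fields, filled in the same order the workshop steps below follow.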

    When inviting stakeholders, try to involve people…

    • who will work with the content
    • with strong opinions about the website
    • who should be collaborating, but aren’t

    To take part in a core workshop, there is no need for drawing skills, design skills, or tech-savviness. The most important thing is that people understand their own respective fields.

    All workshop participants should work in pairs to fill out their worksheets. Between each step in the workshop, they’ll present their ideas to the other pairs, which will usually generate questions or new ideas that the other pairs can incorporate into their worksheets.

    Core model workshop with participants discussing and working in pairs. Participants are labeled as socionom, lawyer, cancer nurse, research, design, cancer care, prevention, web editor, and fundraiser.
    Participants from the NCS in one of the core workshops included people from the departments of cancer care, cancer prevention, cancer research, rights, and fundraising, as well as their in-house designer and web editor.

    1. Identify your cores

    The first thing you need to do is to identify your core pages by matching the business objectives and the user tasks. You can do this in the workshop or beforehand. Let’s use the example of our cancer type template, e.g. “lung cancer,” where we matched the following tasks and objectives.

    Business objectives:

    • Helping patients and their friends and family
    • Increasing knowledge about cancer and prevention

    User tasks:

    • Learn about different forms of cancer
    • Identify symptoms of cancer
    • Get tips for preventing cancer
    • Find information about treating cancer (therapies, adverse effects, risks, prognosis)
    The core model handout, partially completed with core page name, business goals, and user tasks.
    The worksheet has been filled out with the relevant business goals and user tasks.

    2. Plan for inward paths

    Instead of jumping into content creation and detailing that page, the next step is to map out inward paths. This is where we’ll look carefully at any user research findings to help inform decisions. How might people find this page? How did they get here?

    This approach is a simple way to prompt your client to think about the page from a user’s perspective. In our example of the page about lung cancer, plausible inward paths are things like:

    • Googling lung cancer
    • Googling symptoms
    • Clicking a link on the homepage
    • Finding a link in a printed brochure

    3. Determine core content

    After identifying inward paths, we begin talking about the core content. What content do we need on this page for it to achieve the goals of both the organization and the users? What kind of modules or elements do we need?

    In this task, the participants are using all the information they have on their worksheets: the user tasks, the business objectives, and the inward paths. In light of this information, what are the most important things that need to go on to that page—and in what order? Having a solid user research foundation at hand will make this process much simpler. In the case of the NCS workshop, the user research had identified cancer prevention as a top user concern, which made it clear that we needed to say something about prevention—sometimes even for cancer types that cannot be prevented.

    4. Set forward paths

    This last field is key to the core model’s success. After visitors have gotten the answers to their questions, where do we want to send them next? At this point you can allow yourself to think more about business goals in a general sense.

    In the case of the lung cancer page, it could be forward paths like:

    • Contacting the cancer help line (so they don’t diagnose themselves)
    • Learning how to prevent all forms of cancer, not just this specific type
    • Reading about patient rights, if they are reading about treatment
    • Learning about the political and lobbying work the NCS does (e.g., trying to reduce treatment waiting times)

    This has to be done in the context of user tasks. If someone is visiting the website in a fearful state, hoping to find solid information about melanoma, do we really want to conclude their journey with a flashy “Donate!” message? Not really—that would just be rude and insensitive, and is unlikely to encourage donations anyway. However, many users do look for general information on cancer research, and in this context, we can frame it more specifically: “If you think cancer research is important, you can help us by donating.” (And in fact, this more considerate approach might end up increasing donations, as it did for us at the NCS.)

    The core model handout with additional details filled in, including inward paths, core content, and forward paths.
    A filled-out core worksheet.

    5. Think mobile to prioritize

    After all these steps, participants are usually excited. Their worksheets are full of ideas for content, modules, and all sorts of functionality.

    The enthusiasm is great—that’s something we want!—but a worksheet full of discursive ideas is difficult to work with. Are all these things equally important?

    That is why the final step in the workshop is to use mobile-first thinking to prioritize all the elements. We give the participants a new sheet and ask them: if you had just a small screen available, in which order would you place the elements you’ve identified throughout the workshop? They’ll also need to place those forward paths they’ve written down in the context of the main content.

    A modified version of the core model handout for mobile screens, with narrow columns replacing the large core content field and the forward path fields.
    At the final step, participants get new worksheets with narrow columns representing a mobile screen. How would they prioritize their content on such a small screen?
    A completed example of the core model handout, modified for thinking about content on mobile screens.
    A finished mobile core worksheet from a workshop with the Norwegian Association of the Blind and Partially Sighted.

    From core sketches to a finished website

    We rarely use wireframes, and you won’t see a Photoshop sketch or prototype with lorem ipsum. Why not?

    A wireframe says a lot about where something is placed on a page, but it rarely says anything about why it was put there. Because of this, wireframes imply a lot more about what the design could look like than you really want it to in the early stages of design.

    The core sketches from our workshop, on the other hand, can be put to good use by any web discipline, because they tell you which elements need to be on which pages, and, just as importantly, why they are there. There really isn’t any web-related discipline I know of whose members shouldn’t care about what needs to be on the page and why.

    With interdisciplinary teams, you’re more likely to come up with more innovative ways of solving the user tasks: should it just be text? A video? A quiz? Something completely different?

    At Netlife Research, we usually work in close teams of two to four people with a broad combination of skills, such as user research, UX design, graphic design, front end development, and content strategy. At this stage we’re also typically in close collaboration with a work group from the client.

    Together, we are able to identify what kind of modules and information we need on the core pages—but the visual design is still flexible at this stage.

    Three sketches of a page in mobile view: an early paper sketch, an early Photoshop sketch, and a simple black-and-white HTML and CSS prototype. All sketches used real content, not lorem ipsum.

    The next step is to begin content workshops with the organization, using core model thinking and writing in pairs.

    Ultimately, we delivered an HTML and CSS prototype with actual content, which (in this case) a subcontractor then developed for the NCS, along with a custom CMS to manage the website and make changes. We also designed several modules tailor-made for common forward paths, which their website editors can place as needed on various pages.

    One such forward path module is a box advertising the cancer helpline. This helps the NCS achieve its goal of increasing the use of its services for learning about cancer, and it helps users get answers to their questions.

    Screenshot of a page about breast cancer, showing text about causes and prevention on the left, and the cancer helpline box on the right.
    The cancer helpline box as used on the page about breast cancer. See our presentation from Confab Central 2014 for more details about this and other forward path modules.

    Results

    The message gets out to more people

    Since launch in September 2012, the number of unique visitors to the NCS website has steadily increased each year, despite the fact that the project had no specific activities aimed at search engine optimization. User-focused content goes a long way.

    Since the launch of the redesign in September 2012, the number of unique visitors has doubled. Before launch, the number of unique visitors had been steady since 2010.
    Unique visitors to the Norwegian Cancer Society’s website. Wondering about those dips in the graph each year? That’s the effect of the (nearly) endless Norwegian summer holidays.

    As a welcome side effect of restructuring the website and the content around user tasks, the Norwegian Cancer Society is also now being used as a source by news media more frequently than before.

    Forward paths have a huge impact

    One example of a forward path is the aforementioned cancer helpline. Comparing 2013 with the previous seven years, the number of cancer helpline conversations is up 40 percent. Usually, organizations look to decrease the number of calls, but when you’re in the business of informing, it’s a good thing when users reach out. More people at risk of cancer picked up the phone, or entered a chat, or sent an email, and talked to an oncology nurse.

    And despite this increase in conversations on the support lines, the oncology nurses tell us they are actually receiving more informed and sophisticated questions than they used to, because more people have already found the answers to their most basic questions on the website.

    Since 2006, the number of cancer helpline conversations had been about the same, but in 2013, the year after launch, there was a 40% increase.
    The total number of cancer helpline conversations, including email, chat, and phone calls.

    Fewer banners, but more donations

    The previous NCS homepage had several banners and menu items pointing to different ways of supporting the NCS. Today, there’s just the “Support us” item in the menu, and the banners are gone.

    Despite this, the effect on the digital fundraising has been astounding. Comparing numbers from 2011 (a whole year with the old website) with 2013 (a whole year with the new website):

    • The number of one-time donations has tripled (up 198%)
    • The number of regular donors registering each year has quadrupled (up 288%)
    • The total sum from regular donors each year has quintupled (up 382%!)

    This is due not only to core model thinking, but also to continuous improvement of the forms.

    This graph shows that the number of regular donors registered online is six times higher in 2014 than in 2011. There has been a steady increase since launch in 2012.
    Thanks to continuous improvements, by August 2014 the annual income from regular donors registering online in 2014 had already surpassed the annual income from 2013.

    People will do anything on mobile

    For several years, we have advocated the idea that people will do anything on mobile, if you just let them. Our work with the NCS is a great testament to this way of thinking.

    Users spend roughly the same amount of time on the page about lung cancer regardless of which device they are using; while tablet and desktop users spend on average 3 minutes and 48 seconds, smartphone users spend 3 minutes and 57 seconds.

    In some forms, conversion rates are actually higher on mobile; in a recent membership campaign, the conversion rate was 7.3 percent on mobile whilst only 2.5 percent on desktop.

    Membership conversion rates are 2.5% on desktop, 6.5% on tablet, and 7.3% on mobile.
    To become a member, we need the person’s name, address, and birthday. That didn’t stop people from signing up on a smartphone.

    From homepage first to homepage last

    By devoting your time to the core pages first, you’ll avoid a number of turf battles about menu bar real estate, and enjoy the benefits of getting the whole team united behind the same essential parts of the website. You’ll prove to the team that you care about their content and their users (and, sure, maybe convince them to care a little more about their users, too)—and the end result is a website that knows exactly what it’s about. Next time you’re faced with a compromise-riddled situation where you’re designing a website by committee, give the core model a shot. In my work, it’s been a great way to get down to business and stay on positive terms with the whole team.

  • From Empathy to Advocacy 

    For the past several years, I’ve been privileged to work with a number of local advocacy organizations in my community. Doing so has made me keenly aware of the crucial role that advocates play. They operate on scales both large and small—from working with lawmakers to shape public policy, to helping a single parent fill out the paperwork to find child care that enables them to keep a job. But advocates have a few things in common:

    • They have a cause: in whatever context they work, there’s an existing pattern they’re not satisfied with.
    • They intervene when they perceive an imbalance of power.
    • They act as translators between “outsiders” and “insiders.”
    • They persuade others to care about their cause, using stories and hard data.

    As people who make websites, we may find that thinking of ourselves as advocates for our users, rather than creators of a product or providers of a service, transforms the way we work.

    The UX industry devotes considerable attention to the concept of empathy, and rightly so, as understanding our users and their needs is foundational to delivering quality experiences. Still, empathy and insights alone do not automatically create those experiences. What matters is how cultivating empathy alters our decisions and behaviors. My ability to understand the needs of another person does nothing to meet those needs until I take conscious action—becoming not just a listener, but an advocate.

    Most of us probably feel that we practice user-centered design, but the work of web development doesn’t happen in a vacuum. It frequently means inheriting a legacy of past decisions and considering a multitude of business pressures. It’s messy, and users often suffer as a result. Advocacy is what we do to make it better. It’s how we navigate the complex world of business relationships and persuade others to care about the same principles we do. And by making the language of advocacy a part of our daily conversation, we’re constantly seeking to build a culture of respect for users, rather than waiting for a project that provides a convenient framework.

    Imbalances of power

    Advocacy can take many forms, but one way to think of it is that when any significant power imbalance exists between two parties, the risk of an injustice occurring is high. The greater the difference, the greater the risk.

    A person can feel powerless for many reasons. Being a child is an obvious example, as are differences in social or economic status. I can be privileged in one context and powerless in another. We all feel powerless whenever someone else makes decisions for us. When I go to the doctor, I’m signing up for whatever procedures are ordered, often without knowing what the cost will be. When I file my taxes, the government decides whether I’ve gotten them right.

    In the face of a large, complex system, someone who feels powerless needs an advocate, usually one who works within the system, to gain a hearing. Otherwise, it’s too easy for the privileged party to make every decision based on its own interests and preferences, even if unintentionally.

    That’s why governments and large organizations sometimes employ ombudsmen, who operate outside the normal chain of command and elevate complaints that might otherwise go ignored. Other advocates wear the hat informally, simply as people who care.

    In developing web applications, I might be tempted to think that ultimate power always lies in the user’s hands. After all, I assume, she controls the browser and can always take her business elsewhere. But in many cases, this is more of an abstract idea than a true lever of influence.

    Will she really leave the social network that all her friends use? And if she did, would I, as a developer, ever know the reason? There are many applications that people are forced to use without a good alternative, like those provided by an employer. In many cases, the business offering a website holds much of the power in its interactions with users.

    Even when choices do exist, users are still not the ones making design decisions—they can only react to decisions already made by developers and others, and hope that someone is listening. Voting for change by leaving a website is unlikely to mean much because it’s a passive statement.

    Experiences can also be hard to quantify. For example, when language barriers are creating a UX problem, it’s hard to measure the resulting frustration or its cost in goodwill.

    In a culture of advocacy, the conversation starts from an underlying value of respect for the user, rather than the balance sheet or even an abstract idea like “best practices.” We can’t always say that adding translation, or making our writing more accessible, will improve our customer satisfaction metric by a given percentage. But we can say, “We ought to be an organization that recognizes the diversity of our customers and respects their time. Let’s demonstrate that by…”

    The most effective advocate is likely one who has direct experience in the user’s shoes. The bilingual team member is most likely to be sensitive to language barriers. This isn’t always the case—anyone can champion a cause—but it does mean that as developers, we need to pay extra attention to the people within our organizations who have such experience. We can encourage them to act as advocates in those areas, and to remind us of priorities that we might otherwise overlook.

    We must put structures in place to give such advocates the power to be heard, though. Ombudsmen hold positional authority that, while not absolute, isn’t easily overruled. In the design or development process, that structure may be formalized as a specific role within the development team (“Eve is our user advocate on this project”), or an expression of shared values that says, “If anyone feels that a decision doesn’t demonstrate respect for our users, we stop what we’re doing and tackle the issue.”

    Translating between insiders and outsiders

    An advocate also serves as a translator, helping outsiders to navigate a complex system and helping insiders to understand an outsider’s position. Doing this effectively can be challenging and involves a mix of both technical and soft skills.

    In web development, interviews and testing will, of course, yield insights into users’ needs. But users can’t be expected to articulate those needs in a way that makes sense to developers. A good advocate will listen, draw inferences, and re-interpret needs in ways that lead to practical application at a technical level, so they can negotiate effectively with developers or other business stakeholders.

    An advocate’s analytical skills, and a level of impartiality, can be just as crucial. Users may ask for the moon. They may describe symptoms instead of the fundamental problem, or jump to conclusions about what the problem is. For example, when a user complains that “the system is slow,” it might mean that response time is poor, or that the system is confusing and accomplishing a task takes too long. The user might feel strongly that the solution is adding a search system, when in reality, a few IA improvements would be effective.

    An advocate’s role is to distill core problems from the raw input of feelings, reactions, and data, and, just as a lawyer might, recognize the moments when a user asks for what is not in their best interest. What people like is not always what’s most effective (although this is not an excuse to simply replace their subjective preferences with our own).

    Collective interests and individual stories

    “Users” are a faceless and voiceless mass. Alice and Bob, on the other hand, are people. An advocate’s third task is to make the impersonal personal by articulating the interests of a group and helping decision-makers to see them as people rather than numbers.

    One of the most persuasive presentations I’ve ever attended came from a Boys & Girls Club spokesperson describing their work. It was persuasive because she did a masterful job of combining big-picture data about the effectiveness of certain programs with specific stories of children she’d worked with. It’s one thing to advocate for early childhood programs based on economic data; it’s quite another to show how a particular child’s life was changed through a development program. Collective data represented through individual stories become compelling.

    It’s easy enough to write off the experience of a group of people when we describe them with a label—“IE7 users,” for example. It’s even easier when the label describes a minority of our audience. But if I think about my friend Alice, who works in a healthcare setting where computers are difficult to upgrade because of regulatory concerns, it’s much harder to explain why she deserves to be marginalized.

    Stories are persuasive because they humanize the subject matter and help us connect emotionally. That’s what makes personas a valuable tool. But when we encounter resistance, they’re also easy to dismiss as anecdotal unless they’re supported by harder data. If I want to make the point that we shouldn’t ignore the experience of IE7 users, I need to know how many of those users I have. The developer who really doesn’t want to deal with IE7 might say that they’re “less than 5% of the audience”—a purely quantitative argument that sounds reasonable. But perhaps that 5% represents thousands of individual people. When I tell Alice’s story, explaining that she’s powerless to change the browser her employer provides, and point out that we have thousands more customers just like her, the argument carries more weight.

    Advocacy in practice

    Design decisions always require that we balance competing interests, but as a user advocate, I believe that it’s the user’s interest that should generally carry the most weight. Likely, not everyone in a business will agree, at least not all the time. To build credibility in speaking on behalf of users, consider the following practical guidelines.

    Affirm the legitimacy of other interests

    Rarely is a business openly hostile to users, of course. But we all bring to the discussion a set of preferences and biases that stem from our experience and expertise. Executives have an interest in the financial performance of the organization. Security professionals have an interest in protecting systems from intrusion. Developers want maintainable code and database integrity. Marketers want a strong brand image.

    These are perfectly valid goals, but each of them can easily find itself in tension with what the users of a web application care about. Users probably value things like a good price, a clear interface, and software that lets them complete a task quickly.

    An advocate exercises empathy not only on behalf of the user, but also on behalf of the business. To be heard, I need to first understand and respect what decision-makers value. Over the years, I’ve had many discussions with security professionals about the inherent tradeoffs in balancing usability with security—the most secure system is always the least usable. The UX person’s “clear feedback” in a failed login interaction is the security person’s “information leakage,” for example. To strike that balance well, I have to respect the need for security and the very real threats that face modern web applications. If ease of use were my only priority, I could easily put an application at risk.

    Frame the discussion in meaningful terms

    Respecting the goals of business stakeholders enables us to make a case for user-first decisions in ways that command attention. UX considerations can often be framed as risk management, brand management, or other business values.

    For example, imagine that a website is offering a survey to users to get background research for a potential new product. The data will drive critical decisions, so everyone believes it’s a user-first project. But the survey takes over the user’s screen about 10 seconds after page load, so it’s frustrating. The conversation could easily go like this:

    UX person: We’ve got to do something about this survey. It’s driving people crazy.
    Research person: We really need the data. And people want to have a voice in our design process, right?
    UX person: We could make it less intrusive, though. It doesn’t have to take over the whole screen.
    Research person: Then they won’t notice it, and we won’t get enough data. We have to make it obvious.
    UX person: Our users are annoyed.
    Research person: It’s worth it for a little while. They’ll benefit in the long run.

    A more effective approach might be:

    UX person: Can I talk to you about the new survey? I’m really glad that kind of research is going into the new product.
    Research person: Yeah, it’s exciting.
    UX person: We’re hearing some complaints about the timing, and the way it takes over people’s screens. Could we adjust that? I’m afraid that frustration could really bias your data.
    Research person: Oh, I hadn’t thought about that. We really need it to be obvious, though. If we don’t get enough people clicking through, the whole thing will be useless.
    UX person: Definitely. Can I show you a couple of design ideas?

    To someone who lives and breathes UX, user frustration might be a sufficient reason to make a design change on its own. But to the analyst who needs that survey data, it might seem like an acceptable tradeoff. Articulating the risk that frustration poses to the business, like biasing the results of a survey, can make the argument far more persuasive. And proposing a workable alternative is always more effective than simply highlighting a problem.

    Be pragmatic

    If we recognize that other business interests have merit, we have to be prepared to lose UX arguments at times. A business is not a person, and the cold logic of operational calculus may determine that the cost of making a change to improve UX outweighs the benefit. Advocates learn to choose their battles and press for change where it matters the most.

    In the case of language barriers, as much as I may like to see full translation of a website or application, it’s likely an expensive proposition many businesses won’t be prepared to accept. But perhaps there are a few key interactions where it could be especially helpful, and we can advocate for a limited expense there. And if that doesn’t fly, we can fall back to simplifying the writing as much as possible to make it accessible to non-native speakers. At each step, even though the solution isn’t ideal, it keeps the issue visible, and keeps people thinking about the needs of that group of users.

    While a business may be an impersonal entity, it is also composed of people who, for better or worse, share a common culture. As web professionals who continually speak in the language of advocacy, we can cultivate an environment in which users are respected, even when we lose out on individual decisions.

    Find a cause and start somewhere

    The advocates that I’ve worked with recognize their limits. They are passionate about a cause, but they know they can’t change the world all at once. They tackle manageable problems while always watching for new opportunities. Start by finding the single UX issue that you care most about, and look for small ways to improve it and persuade others to care. It could be one of the big issues of our day, like front-end performance or the mobile experience, or something very specific, like the experience of a handful of internal users with a particular administrative interface (which are easy to neglect—and improving them is a terrific way to get buy-in for future efforts).

    At the same time, just as solving major social problems depends on public policy, our industry can only improve when we advocate publicly—so it’s important to write, speak, and share our experiences, particularly those that may be unique or underrepresented.

    But whether the scale is large or small, the key is to encourage, in ourselves and in others, a healthy level of dissatisfaction with the status quo and take daily actions that directly improve the experience of our users.

  • Rian van der Merwe on A View from a Different Valley: How to Interview 

    It’s not like my life goal was to become an expert on interviewing. I’d much rather be an expert on work than on finding work. Like a corporate version of Frodo, in the midst of a grueling interview cycle I’d often lament, “I wish it need not have happened in my time.” And then Business Gandalf would show up in my head to tell me, “So do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.” Ugh.

    But I did what I had to do. I got good at interviewing. Now, I don’t plan to go anywhere anytime soon, so I have a chance to take a breath and reflect on what worked and what didn’t work when I was trying to change jobs. Here I’ll share some of the experience I picked up while interviewing for a variety of jobs as I moved across the world—twice—in a relatively short time.

    Of course, this whole thing comes with an obvious disclaimer: This is what worked for me. It might not work for you, so proceed with caution. With that out of the way, let’s split the discussion up into two sections: how to get an interview, and how to get through it.

    How to get an interview

    One of the most common pieces of advice people give when they know you’re looking for a job is that you should never apply through a company’s website or respond to a general job ad. I’ve found this to be true—clicking the “Apply” button and pasting a text-only version of your résumé is a very effective way to get ignored. But what works, then? This is the process I used very effectively to get that all-important first email back:

    1. Find a job you’re interested in. That’s not really what this article is about, so I won’t go into too much detail except to list some of the usual suspects: use LinkedIn, go to the websites of companies you like and click on “Careers,” sign up for industry-specific job boards like BayCHI, etc.
    2. Find the two or three most likely hiring managers. This step is crucial. For example, if you’re applying for a design role, use LinkedIn or the company’s “About” page to find the VP of Product, or the Design Manager, or the Chief Product Officer, or any number of fairly senior roles that the job likely reports into.
    3. Use Rapportive to guess their email addresses. It’s usually not hard to figure out people’s corporate email addresses. The first thing to try is “firstname.lastname@”; there are only a finite number of combinations it could be. But the way to be sure is to install the Rapportive plugin, compose a new email in Gmail, and try a bunch of addresses (without sending the email) until Rapportive finds the person’s LinkedIn profile.
    4. Send an extremely short introduction email. Send separate, personal emails to each of the likely hiring managers you found. Make it really, really short. Don’t go on about how awesome you are—you’ll get a chance to do that later. Tell them you like their company, you like the role, you’re interested in talking. Link to stuff you’ve done: your LinkedIn profile, your portfolio, articles/books you’ve written, etc. Then ask them if they’d be willing to have a call, or forward your information on to the right person. The point is to not burden people. If they see a long email, the chances are high that they will delete it. But if they see a short email that’s respectful of their time and gives them the information they need to make a quick decision—that’s a different story.
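    The handful of likely address patterns from step 3 can be generated mechanically before checking each one in Rapportive. This is a hypothetical helper sketch, not part of the process described above; the name and domain are example values.

    ```javascript
    // Generate likely corporate email addresses for a given person,
    // to try one at a time in Rapportive. Purely illustrative.
    function emailCandidates(first, last, domain) {
      first = first.toLowerCase();
      last = last.toLowerCase();
      var patterns = [
        first + '.' + last,           // jane.doe@
        first + last,                 // janedoe@
        first,                        // jane@
        first.charAt(0) + last,       // jdoe@
        first + '.' + last.charAt(0)  // jane.d@
      ];
      return patterns.map(function (p) { return p + '@' + domain; });
    }

    console.log(emailCandidates('Jane', 'Doe', 'example.com'));
    ```

    Start with the most common pattern and work down the list until Rapportive shows a matching profile.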

    You won’t get an email back every time, but of all the different ways I’ve tried, this method has had the most success. Your goal at this point isn’t to get the job, it’s to get that first email back. Once you get the email and the first call is set up, you move on to your next objective…

    How to have a successful interview

    Note that I didn’t title this section “How to get the job.” Remember that you might not want the job. Or, you might want the job but you shouldn’t take it because it’s all wrong for you. That’s what the interview process is all about. It’s not about looking good enough so someone will hire you. It’s about finding out if there’s a good fit between you and the company you’re interviewing with.

    Your first call will usually be with a recruiter. The recruiter call is mostly a formality. As long as you’re able to condense your (obviously) illustrious career into a five-minute history lesson of past experiences, you should be fine. Recruiters get in trouble when they waste hiring managers’ time, so they’re just trying to avoid that. Your objective at this point is still not to get the job—it’s to get to talk to the hiring manager. And you do that by not sounding like an idiot when you talk to the recruiter.

    The call with the hiring manager is a different story. I’ve approached this a bunch of different ways, but here’s the general approach that works best for me.

    First, it’s important to look at the interview through the right lens. Don’t go into it with the primary goal of impressing the hiring manager. That is a waste of their time, and it makes you sound desperate. Instead, seek to have a mutually beneficial conversation with a fellow industry leader. You want to learn something from the conversation, and you want them to learn something as well. Your best outcome is if, at some point, the hiring manager says, “Huh, I’m going to read up on that a bit more when we’re done here.”

    So how do you do this? You usually start with that five-minute history of your career. But then take the next step, and ask the first question… How do you do product development at your company? How do you prioritize roadmaps? What’s your design process like? By guiding the discussion and asking questions about how things work, you not only demonstrate what’s important to you, you also open a door to talk about the areas you’re most knowledgeable and passionate about.

    Sure, you’re still going to get the odd, “Tell me about a time you’ve failed and how you dealt with that” question, but that will be few and far between. Most of the time what you’ll do instead is go over your allotted time and have a spirited conversation about the best ways to design and develop software. And that’s exactly what you want. You want to be seen as a peer right away—someone who would fit in.

    That, to me, is a good interview. It’s not a venue for one person to test another person. Sometimes you can’t get away from that—you get bombarded with questions the minute the call starts. But that’s probably a good indication that it’s not a good place to work anyway. If the interviewer doesn’t bite, or insists on following a script that doesn’t really allow for conversation—it’s the first sign that you should probably walk away. If you can’t have a conversation as equals, you’ll never be treated as a valuable member of the team—you’ll always be a resource. And you don’t want that.

    Which is to say…

    If I could convince people of one thing that will make them more successful in interviews, it would be to change their framing of what an interview process actually is. Many of us grew up thinking an interview is a test that you need to pass. However, if you instead look at the interview process as a meeting of equals to understand if a good fit exists, you’ll not only be more confident and relaxed in the process, you’re also more likely to impress the company. And who knows, maybe they’ll even become better interviewers themselves.

  • Getting Started with Gulp 

    While building JavaScript-related projects (whether server side via Node.js or front-end libraries), a build tool that helps maintain and automate many of the processes—including testing, concatenating files, minification, and compiling templates, among many other options—can be incredibly useful. It takes the most error-prone step out of the process (me, in the case of my projects) and replaces it with a fast and consistent system that never forgets to update or copy your files over. There are many great build tools, including Grunt, Broccoli, and Brunch.

    One of these build tools that I find particularly helpful is gulp.js. It’s fast and I’ve found it really easy to work with since I learned how to set it up and incorporate it into a workflow. Today, I’m going to walk through that process.

    So, what’s gulp? Gulp is a build system built on the concept of streams. Bear with me here, I might need to go a little high level on this. Streams are a way to develop large pieces of software out of many small pieces. The philosophy behind them is for each component to do one thing, and to communicate with the next component in the same way. This lets you mix and match these small components, taking the output from any one that follows this philosophy and plugging it into the next. The principles of streaming are significant in Node.js, and as a platform, Node provides helpful syntax for working with streams. Gulp, as a Node.js tool, uses this syntax and follows these principles to let you piece together a large build system while escaping much of the complexity that comes with big tools that do a lot of things. A quick example of this is in the way gulp tasks are structured, using the pipe method.

    gulp.src('script/lib/*.js') // read all of the files that are in script/lib with a .js extension
      .pipe(jshint()) // run their contents through jshint
      .pipe(jshint.reporter('default')) // report any findings from jshint
      .pipe(concat('all.js')) // concatenate all of the file contents into a file titled 'all.js'
    

    One stream is piped into a destination, which is then piped into another destination, and so on and so forth. We’ll come back to this example later.
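    To make the streaming idea concrete outside of gulp, here’s a minimal sketch of a Node.js pipeline using only core modules (the file names are illustrative, not from the gulp example): a readable stream feeds its chunks through a gzip transform and into a writable stream, each component doing one small job.

    ```javascript
    var fs = require('fs');
    var zlib = require('zlib');

    // Create a small input file so the example is self-contained.
    fs.writeFileSync('app.js', 'console.log("hello");');

    fs.createReadStream('app.js')              // source: emit the file in chunks
      .pipe(zlib.createGzip())                 // transform: compress each chunk
      .pipe(fs.createWriteStream('app.js.gz')) // destination: write the result
      .on('finish', function () {
        console.log('wrote app.js.gz');
      });
    ```

    Each stage only knows how to consume and produce chunks, which is exactly why you can swap stages in and out—gulp plugins work the same way.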

    Now, I know what you’re thinking, “This sounds like a lot to learn in order to do some basic stuff that I can just do by hand,” and if you are thinking that:

    • I’m a mindreader and this is totally going to work out for me; and
    • We should talk about the benefits of an automated build system.

    If you’re already using something to perform the concatenation, minification, testing, etc. of your files—go ahead and skip to the next paragraph. I’ll take care of you there. If not, let’s get right to the point in business terms: the initial cost incurred in you learning to use this technology and spin it up on your projects will be made up for exponentially with a decrease in risk for both errors and required knowledge to be transferred if somebody else comes onto the project.

    Maybe you’re not thinking that, maybe you’re thinking, “I already use Grunt for all my processes. I’m the Admiral of Automation, the Captain of the Command line, the General of…um…you get where I am going with this. I am very good at setting up my own projects with my build tool of choice.” That’s awesome! Though, in my experience, it never hurts to learn something new, and who knows? You might even like this better!

    What’s better about gulp? First, it’s fast. Since gulp uses the streams I mentioned earlier, its very nature is to pass data from one program to another, instead of reading a file, performing a task, writing a file, and then doing all of that again for the next task. Another thing that’s nice about gulp is how easy it is to read. Since each task is just a short bit of code, it becomes very clear, very quickly, what your tasks do. This differs from systems that use a configuration file, where you tend to jump around the file a lot and keep track in your head of what’s going on and where it’s happening. That may not seem like a big deal if you’re the person who set up the configuration, or if you joined the project early, but if you’re new and debugging, the more you have to keep track of, the worse off you are.

    That’s what got me: faster and easier to understand? Sold.

    Now that you’ve heard a little bit of the why, let’s talk about the how. Installation is dead simple, thanks to our good friend, NPM.

    First, make sure gulp is installed globally.

    npm install -g gulp

    Now, in your project, install gulp as a development dependency:

    npm install --save-dev gulp

    Next, create a gulpfile.js in the root of your project:

    var gulp = require('gulp');
    
    gulp.task('default', function() {
      // place code here
    });
    

    Lastly, run gulp from your command line:

    gulp

    [20:27:35] Using gulpfile ~/code/example/gulpfile.js
    [20:27:35] Starting 'default'...
    [20:27:35] Finished 'default' after 45 μs
    

    Nice.

    Let’s get to writing a task. Now, on any normal project, I’m likely going to want to run JSHint on my files, concatenate all of them, and then minify, making sure that both a minified and a non-minified version are saved. First, let’s install the plugins via NPM as well. We can do that by adding these entries to our package.json.

    "gulp-jshint": "1.9.0",
    "gulp-concat": "2.4.2",
    "gulp-rename": "1.2.0",
    "gulp-uglify": "1.0.2"
    

    And running npm install.
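
    Note that package.json is strict JSON, so the entries need double quotes. For context, here’s roughly how that section of package.json would look—those plugin entries sit inside the devDependencies object, alongside the gulp entry created by --save-dev earlier (its version number is omitted here since it depends on when you install):

```json
{
  "devDependencies": {
    "gulp-jshint": "1.9.0",
    "gulp-concat": "2.4.2",
    "gulp-rename": "1.2.0",
    "gulp-uglify": "1.0.2"
  }
}
```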

    Now, let’s write that gulpfile:

    var gulp = require('gulp');
    var jshint = require('gulp-jshint');
    var concat = require('gulp-concat');
    var rename = require('gulp-rename');
    var uglify = require('gulp-uglify');
    
    gulp.task('js-linting-compiling', function(){
      return gulp.src('script/lib/*.js') // read all of the files that are in script/lib with a .js extension
        .pipe(jshint()) // run their contents through jshint
        .pipe(jshint.reporter('default')) // report any findings from jshint
        .pipe(concat('all.js')) // concatenate all of the file contents into a file titled 'all.js'
        .pipe(gulp.dest('dist/js')) // write that file to the dist/js directory
        .pipe(rename('all.min.js')) // now rename the file in memory to 'all.min.js'
        .pipe(uglify()) // run uglify (for minification) on 'all.min.js'
        .pipe(gulp.dest('dist/js')); // write all.min.js to the dist/js directory
    });
    

    I’ve added some comments next to each line (and since this task isn’t named default, you’d run it with gulp js-linting-compiling), but let’s take a closer look:

    var gulp = require('gulp');
    var jshint = require('gulp-jshint');
    var concat = require('gulp-concat');
    var rename = require('gulp-rename');
    var uglify = require('gulp-uglify');
    

    At the top, I’m requiring each plugin that I’ll be using as well as gulp itself.

    gulp.task('js-linting-compiling', function(){
    

    Now I’m creating a task and naming it “js-linting-compiling,” because I am not very creative and that is pretty descriptive.

    return gulp.src('script/lib/*.js')
    

    Now I’m reading the files that are in the script/lib folder that have the extension .js. This line begins with a return, because I’m returning a stream from the task itself.

    From this point forward, think in streams: each component takes input, transforms it, and produces output that the next component can read.

        .pipe(jshint()) // run their contents through jshint
        .pipe(jshint.reporter('default')) // report any findings from jshint
        .pipe(concat('all.js')) // concatenate all of the file contents into a file titled 'all.js'
        .pipe(gulp.dest('dist/js')) // write that file to the dist/js directory
        .pipe(rename('all.min.js')) // now rename the file in memory to 'all.min.js'
        .pipe(uglify()) // run uglify (for minification) on 'all.min.js'
        .pipe(gulp.dest('dist/js')); // write all.min.js to the dist/js directory
    

    That will hopefully get you started. If you’d like to learn more, check out the gulp documentation and plugin directory.

  • Lyza Danger Gardner on Building the Web Everywhere: The Implicit Contract 

    I work with lots of different teams and different developers. I usually know innately, as does the team around me, whether the teams we’re working with are good or not. We rarely disagree on the evaluation.

    But what does good mean?

    I find that the most valuable web developers interact with each other along a kind of implicit contract, the tenets of which are based upon web standards and proven ways of doing things that we’ve cobbled together collectively over the years. Most of the time, good isn’t generated by an individual in isolation—it’s the plurality of tandem efforts that hum along to a shared, web-driven rhythm.

    When things are ticking along smoothly among devs, I find we have a common underlying way of talking and thinking about the web. We fit together in human and technical ways, upholding a shared understanding about how best to make pieces of the web fit together.

    In contrast to the tired stereotype of genius coming in the form of a lone, intense hacker, much of the effective work done on the web is done within the bounds of a certain kind of communal conformance. In a good way.

    Working together

    A heap of obvious things goes into making an individual web developer seem good: An innate understanding of time and effort. An indestructible drive to self-educate.  A lick-smart, logical mind quick to spot and take advantage of patterns. I think we look for these talents naturally.

    And yet when devs work together, those skills fade back just a bit. In a (grossly oversimplified) way, as part of a larger team each developer is a miniature black box. What comes fiercely front-and-center are the interfacing edges of the people and teams. The way they talk to each other and the timbre of what they build, what they disclose and what they don’t think they need to mention.

    When something unexpected pops up between healthy teams—which happens, because this is a complicated world—a communication like, “Hey, when I poke this service in this way, it throws a 500 at me” as often as not is enough information for the recipient to go off and fix it, because we have similar scars to reference and a shared vocabulary built on common ground.

    A common vernacular and communication style is an echo of a common thinking style. Underneath the chatter are cognitive technical models of the metaphors at hand, based on each team member’s perception of how the web fits together—REST, modular patterns, progressive enhancement, etc.—and how those components apply to the current project. Happy days when those internal archetypes align.

    When you run into misaligned teams it is obvious. There’s a herky-jerky grating to communication. Seemingly dashed-off emails that don’t quite dig into the problem at hand. Teams where you can tell each member’s mental context differs. Code that feels weird and wrong.

    A common ground engenders brilliant ideas

    Unless it is the actual goal of the project, I don’t care too much if you can come up with a Nobel-worthy new implementation of a basic CRUD feature. In most cases, I’ll happily accept something predictable and expected.

    This is not an argument for ignorance or apathy. Ideally, everyone should be pretty good at what they do—those individual technical skills do of course matter. I mean, part of the contract here does involve boots-on-ground time—to understand the lay of the land, to break HTTP into bits and pieces, leak some memory, screw up DNS a few times. We break and heal frequently as we gain deeper web mastery.

    But having a shared set of conceptual building blocks—standards, patterns, conventions—upon which we can frame and build gives us the freedom to focus on where we really need to be creative: the particular task, product, or site at hand. Common components, shared notions.

    After all, the best chefs in the world don’t reinvent carrots. Instead, they identify what other remixed food components might plug into a carrot to make it divine.

    Likewise, good developers are mixing up agreed-upon technical ingredients into the soup of the day. And just as a talented cook knows how to explain to the waitstaff the nuances that thyme brings to the potato, good devs know how to talk to those around them, team members both in the kitchen and beyond, about why today’s menu includes OAuth or moment.js.

    It’s not just touchy-feely

    It used to be that I would think, “Hey, these people seem like they’re on the same wavelength as my team; that’s cool,” but now I realize it’s likely that what seems merely like good vibrations saves prodigious time and money on projects.

    In damaged teams, mental reference dissonance carries through to the outcome, manifesting itself in jarring technical mismatches, poorly-thought-through integration seams and, frankly, bugs. It’s as if people are thinking about the web in different internal language systems.

    So? Things take longer, often a lot longer. Teams become frustrated with each other. Meetings and discussions are drawn-out and less fruitful. The results suffer. Things break.

    It matters.

    I’m not suggesting we all link arms and plow out code from a single hive mind. In fact, I’d argue that the constraints imposed by a common perspective help to drive a certain kind of unique brilliance.

  • Tweaking the Moral UI 

    A couple of years ago, I was asked to help put together a code of conduct for the IA Summit. I laughed.

    We need a code of conduct here? The IA Summit is the nicest, most community-friendly conference ever! Those problems happen at other conferences! And they want me to help? There are sailors jealous of my cussing vocabulary—surely I was not PC enough to be part of such an effort. But the chairs insisted. So, being a good user-centered designer, I started asking around about the idea of a code of conduct.

    I found out design conferences are not the safe meetings of minds I thought they were.

    One woman told me that she had been molested by another attendee at a favorite conference, and was too scared to report it. “No one will ever see me as anything but a victim,” she said. “I’ve worked too hard for that.”

    At another conference, a woman was woken up in the middle of the night by a speaker demanding that she come over. When she told the organizer in the morning, he said, “We were all pretty drunk last night. He’s a good guy. He just gets a bit feisty when he’s drinking.”

    Then there was my own little story. Years ago at the IA Summit, I went to talk to a speaker about something he’d said. I’m a tall, tough lady. But he managed to pin me against a balcony railing and try to kiss me. I started wondering, what if there had been a code of conduct then? What if I had had someone to talk to about it? What if I hadn’t said, “Oh, he’s just drunk”?

    Maybe I wouldn’t have spent the past seven years ducking him at every event I go to. And maybe he wouldn’t have spent those same years harassing other women—women who now were missing out on amazing learning and networking opportunities because they thought they’d be harassed.

    The idea of a code of conduct didn’t seem so silly anymore.

    A wicked problem

    Unfortunately, it still seems silly to others. Recently I was talking to another conference organizer about setting up codes of conduct, and he said, “That doesn’t happen at our conferences. People know me, and they know they can talk to me. A code of conduct will make people nervous that we have a problem. And we don’t.”

    I wonder how he knew that, since most victims don’t come forward. They don’t want to be seen as a “buzzkill,” or be told that what they wore or what they drank meant that they asked for it. This is not unusual; every day we see examples of women whose reputations are trashed for reporting rape and harassment. On Twitter, women who talk about sexism in games or even think a woman should go on a stamp are given death threats. Reporting carries consequences. Reporting is scary.

    In order to feel safe enough to come forward, attendees and speakers need to know that the conference organizers are paying attention. We need a guarantee that they’ll listen, be discreet, and do something about it.

    In her recent piece, “Why You Want a Code of Conduct & How We Made One,” Erin Kissane frames precisely why codes of conduct are absolutely necessary:

    To define a code of conduct is to formally state that your community—your event or organization or project—does not permit intimidation or harassment or any of the other terrible things that we can’t seem to prevent in the rest of the world. It’s to express and nurture healthy community norms. In a small, limited way, it’s to offer sanctuary to the vulnerable: to stake out a space you can touch, put it under your protection, and make it a welcoming home for all who act with respect.

    A code of conduct is a message—not a message that there is a problem, but a message that there is a solution. As much as a label on a button or a triangle with an exclamation point in it, a code of conduct tells you how a conference works.

    Tweaking the UI

    We are designers.

    That means we make choices about the interface that sits between the people and the thing they want. We mock interfaces that aren’t clear. We write books with titles like Don’t Make Me Think. Yet when we hold conferences, we seem to assume that everyone has the same idea of how they work.

    Why do we expect that people will “just know” how to use this complex build of architecture and wetware? There is a lecture; that means professional behavior! There is a bar; that means drinking and flirting! There is a reception; that means…alcohol plus speakers…network-flirting? A conference can be a complex space to understand because it mixes two things that usually have clear boundaries: social and work. If one person is working and another is looking to get social, conflict will happen.

    These fluid boundaries can be particularly hard on speakers. Attendees often approach speakers with questions inspired by their talk, which could start a conversation that leads to work…or a date. It’s hard to tell; cautious flirting and cautious networking often look the same. People can feel uncomfortable saying no to someone who might hire them—or keep them from being hired.

    Sometimes after giving a talk, I’ve mistaken admiration for flirtation, and the other way around. A wise speaker stays neutral, but it can be hard to be wise after a few glasses of wine. A code of conduct is useful because it spells out parameters for interaction. Some codes have even gone so far as to say if you are a speaker, you cannot engage in romantic activities like flirting. Clarity around what is expected of you leads to fewer accidental missteps.

    Set expectations

    A good code, like a good interface, sets clear expectations and has a swift feedback loop. It must:

    • Define clearly what is and isn’t acceptable behavior at your con. “Don’t be a dick” or “Be excellent to each other” is too open to interpretation. The Railsconf policy offers clear definitions: “Harassment includes, but is not limited to: offensive verbal comments related to gender, sexual orientation, disability, physical appearance, body size, race, or religion; sexual images in public spaces; deliberate intimidation; stalking; following; harassing photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and any unwelcome sexual attention.”
    • Set expectations for what will happen if the code is violated, as the O’Reilly code of conduct does: “Conference participants violating this Code of Conduct may be expelled from the conference without a refund, and/or banned from future O’Reilly events, at the discretion of O’Reilly Media.”
    • Tell people how and to whom to report the incident. The Lean Startup Conference’s code includes: “Please contact a staff member, volunteer, or our executive producer [name], [email] or [phone number].” Providing a phone number is a massive signal that you are willing to listen.
    • Set expectations about how it will be handled. The World IA Day code is very clear:

      First we will listen.

      Then, we will help you to determine the options that we have based on the situation. We will also document the details to assure trends of behavior are uncovered across locations.

      Lastly, we will follow the situation to a resolution where you feel safe and you can remain anonymous if you wish to be.

    A code of conduct is a little like a FAQ or a TOS. It’s clunky, and I hope someone comes up with something better. But it’s instructions on what to expect and how to behave and, most importantly, what to do when something breaks. Because, as we keep seeing, something will eventually break. It’s better if it’s not people.

    A lot of conferences are adopting codes of conduct now. The Lean Startup Conference one mentioned above is heartfelt and crafted based on their values. The art and technology festival XOXO has an excellent one, based on the template from Geek Feminism. Yes, there’s a template. It’s not even hard to write one anymore. It doesn’t even take a long time.

    Meet (or exceed) expectations

    Any good experience designer knows that setting expectations is worthless if they aren’t immediately met. Beyond writing a code of conduct, conference organizers must also train their team to handle this emotionally charged situation, including making sure the person reporting feels safe. And there needs to be a clear, established process that enables you to act swiftly and decisively to remove violators.

    So how should a conference handle it when the code is violated? There are a couple of telling case studies online: one from Elise Matthesen at the feminist science fiction conference WisCon, and another from Kelly Kend at XOXO.

    In both cases, these women were immediately supported by the people they spoke with—a critical first step. In Kelly’s case, she brought her situation directly to the organizers, who listened to her and made it clear they weren’t going to blame her for the incident. Once the organizers had made her feel safe, they removed the harasser. It was improvised action, but effective.

    In Elise’s case, it’s clear that WisCon was well-prepared to handle the incident. The first part of the story is exemplary:

    • The conference staff member (called a “safety staffer”) asked if Elise wanted someone there while she reported.
    • The safety staffer asked if she wanted to report it formally, or just talk it through first.
    • The safety staffer asked if she wanted to use her name, or remain anonymous.
    • And the safety staffer and the conference organizers kept checking in with her to make sure she was doing okay.

    Unfortunately, WisCon fell down when it came to acting on the report. Eventually the harasser was banned, but only after a slow and onerous process. And the ban isn’t permanent, which has infuriated the community.

    It is hard work to get the poison out of the apple. Elise writes, “Serial harassers can get any number of little talking-to’s and still have a clear record,” which has been my experience as well. Since I started writing about conference harassment, a number of women have spoken to me about incidents at various design conferences. Two names keep coming up as the abusers, yet they continue to get invitations to speak. Until more people step forward to share their stories, this won’t change. And people cannot step forward until they are sure they won’t be victimized by the reporting process.

    If you are a conference organizer, it is your job to make sure your attendees know you will listen to them, take them seriously, and act decisively to keep them safe.

    If you are an attendee who sees harassment, stand up for those who may be afraid to step forward, and act as a witness to bad behavior.

    And if you are harassed, please consider coming forward. But I can’t blame you if you choose not to. Keep yourself safe first.

    A promise

    John Scalzi, author of several best-selling sci-fi novels, made a pledge to his community that he would neither speak at nor attend any conference without an enforced code of conduct.

    I will make the same pledge now. I will honor any commitments I’ve made previously; all new ones are subject to the pledge.

    I will neither speak at nor attend conferences that do not have and enforce a code of conduct. This may prove hard, as many conferences I’d love to speak at do not have a code yet. But change takes sacrifice. Integrity takes sacrifice.

    If you believe, as I do, that it is critical to make a safe place where everyone can learn and grow and network, then leave a comment with just one word: “cosigned.”

  • Conference Proposals that Don’t Suck 

    When it comes to turning your big idea into a proposal that you want to submit to a conference, there are no real rules or patterns to follow beyond “just do your best” and perhaps “keep it to 500 words,” which makes the whole process pretty daunting.

    I’ve worked with a number of people submitting proposals to events over the past few years. I’ve been racking my brain trying to identify a strong pattern that helps people pull together proposals that provide what conference chairs and program planners are looking for, while at the same time making the process a bit more clear to people who really want to find their way to the stage.

    I’ve found that it’s best to treat the proposal-writing process as just that—a process, not just something you do in a half-hour during a slow afternoon. One of the worst things you can do is to write your proposal in the submission form. I’ve done it. You probably know someone else who has done it. Most of our proposals probably sucked because of it. Hemingway advises us that “the first draft of anything is shit,” and this is as true for conference proposals as it is for just about anything.

    When you write a proposal in the submission form, you don’t give yourself the time that a proposal needs to mature. I’ve found six solid steps that can help you turn that idea into a lucid and concise conference proposal that paints a clear picture of what your presentation will be about.

    As I walk through these steps, I’m going to share my most recently created conference proposal. I’ve recently submitted this to some conferences, and I don’t yet know if it will be accepted or if I’ll have any opportunities to give this presentation—but following these steps made writing the proposal itself much easier.

    Let’s get to it.

    Step 1: Write down the general, high-level ideas that you want to talk about

    This is a very informal step, and it should be written just for you. I use this step to take any notes I’ve stored away on post-its or in Evernote, and turn them into something resembling a couple of paragraphs. It’s an exercise in getting everything out of your head.

    You don’t need to worry about being factually accurate in what you’re writing. This is the opportunity to go with what you know or remember, or assume you know or remember, and get it all into some other medium. You can fix anything that is inaccurate later; no one is going to read this but you.

    For example, I’m writing a proposal for a presentation about creating “skunk works” projects (essentially, where small teams work on secret/necessary endeavors) to get things done when you’re busily leading a team and don’t really have time to get all the things accomplished that should be in place.

    Here’s what I started with:

    Something About Skunk Works Projects

    The overall premise is that teams are really busy and if you’ve recently grown one (in-house or otherwise), you know that all the bodies go to the work, and little goes to the stuff that helps make a team purr along nice and smoothly, such as efficient on-boarding processes, sharing of thinking, processes, definitions, etc. Skunk Works projects can help you continue to increase the value to your team (and others) and also provide the team with an outlet for growth.

    Is there a formula? There sure is, and I can trace a lot back to Boeing, and other places like Atari & Chuck E. Cheese, and my own current “stuff.” It dovetails nicely into the guerrilla stuff that I’ve done in the past, and the leadership I’ve been doing recently.

    That’s the idea—how to get stuff done for your team when you’ve got so much stuff to do that you don’t have time.

    This is an extremely rough draft, and should be for your eyes only—despite the fact that I’m sharing mine with you here, in its poorly written and somewhat inaccurate state.

    At this point, you’ve earned a break. You’ll want to be fresh for the next step, where we start to build a supporting backbone for your free-flowing words.

    Step 2: Break your content into topic points

    Review what you’ve written and begin to break that content into topics. I create bullet points for “Pain,” “Solution,” and two rounds of “Support.” I also add a bullet point I call “Personable,” so that I have a place to add how the idea is relatable to my own experience (though this sometimes ends up being covered by one of the Support points).

    This isn’t final content; go ahead and lift sentences from your previous paragraphs if you feel like they’re relevant. Grammar still takes a backseat here, but do make sure that you’re addressing the topic point with some clarity. Also, spend a little time doing some fact-checking; tighten your points up a bit with real and concrete information.

    As I was working through this step, I did a little more homework. I cracked open a few books and hunted down articles in order to refresh myself and feel like I was on more solid ground as I pulled the points together.

    Pain

    When you think about your presentation’s topic, what is the common point of pain that you believe you share with other people? What prompted you to feel that this is a strong idea for a presentation and worthy of sharing? Pain is something we all share, and when you can help someone else feel like their pain might be alleviated, they start to nod their heads, mentally say “yes!” to themselves, and begin to relate to you and your message.

    Pain point: Work has to get done; organizational good “stuff” often comes last, which means it never gets done because the bills have to get paid and people get booked on project work first.

    Solution

    After you’ve identified that common point of pain, what’s the general, high-level solution? If you are the person who found the solution, you should say so; if not, you should identify who did, and explain what you learned from it. Give enough information to assure people there is a solution. Don’t get hung up on feeling like you’ll give away the ending; people will show up to your presentation to hear more about the journey you’ve taken from that common point of pain, not just to hear you recite the solution itself.

    Solution: Don’t worry, others have used skunk works to have some great successes. Companies such as Google, Microsoft, Ford, and Atari have done amazing work with skunk works. So have I, and I’ll show you how I’ve done it so you can put it into practice for yourself based upon my loose framework.

    Supporting points

    Once you’ve worked through the pain and solution, it’s time to provide a little more information to the reviewers and readers of your proposal. Merely telling people that there is pain and a solution is great to lead with; however, it’s not enough. You’ll still need to convince people that this idea applies to a broad range of other contexts, and that this is a presentation that they need to see so that they can benefit from your wisdom. What are a couple of key points that you can use to support the validity of your proposal and the claims that you may have made?

    Support 1: Origin in the 40s with Lockheed. They used it to create a jet fighter to help fend off the German jet fighter threat in 1943. Kelly Johnson and his team designed and built the XP-80 in only 143 days, seven less than was required.

    Support 2: Kelly had 14 Rules & Practices for skunk works projects—we don’t need them all; however, we can learn a lot from them.

    Something personal and/or humorous (optional)

    If you’re able to pull something personal into your proposal, you can help reviewers and audiences members further relate to you and what you’ve been through. It can shift a proposal from appearing to be “book report-ish” to one that speaks from your experience and perspective. I like to leave this as optional content because you may already be adding something similar in the Pain, Solution, or Supporting points sections.

    It’s important not to overlook the value—and the risk—of humor. Humor is tough to do in a conference proposal. You may have a line that you find hilarious; however, great comedy relies heavily on nuances of delivery that are difficult to transmit in a written proposal (and sometimes even harder for the readers to pick up on). Take caution, and when in doubt, skip anything that could be misperceived when creating your proposal.

    Personal: I’ve pulled together skunk works teams and busted out some skunk works projects myself!

    Humor: The results smell pretty damn good. (Wah wah wah.)

    Together, these provide the foundation for the next step, which is where we start to get more serious.

    Step 3: Turn your topics into a draft proposal

    This is where we take the organization and grouping of your thoughts and turn them into a few short paragraphs. It’s time to turn on the spell checker and call the grammar police; this is a serious activity and the midway point to having a proposal that’s ready for submission.

    You’ll be writing the best, most coherent sentences that you know how to craft based on your topic points. You should use your topic points as the outline for your proposal, hitting the ideas in the same order. As a refresher, here are my topic points, in the order they were created.

    Pain: Work has to get done; organizational good “stuff” often comes last, which means it never gets done because the bills have to get paid and people get booked on project work first.

    Solution: Don’t worry, others have used skunk works to have some great successes. Companies such as Google, Microsoft, Ford, and Atari have done amazing work with skunk works. So have I, and I’ll show you how I’ve done it so you can put it into practice for yourself based upon my loose framework.

    Support 1: Origin in the 40s with Lockheed. They used it to create a jet fighter to help fend off the German jet fighter threat in 1943. Kelly Johnson and his team designed and built the XP-80 in only 143 days, seven less than was required.

    Support 2: Kelly had 14 Rules & Practices for skunk works projects—we don’t need them all; however, we can learn a lot from them.

    Personal: I’ve pulled together Skunk Works teams and busted out some Skunk Works projects myself!

    Humor: The results smell pretty damn good. (Wah wah wah.)

    Once you’ve reviewed your topic points, put your writing skills to work. I did more gut-checking and fact-checking to make sure I wasn’t completely full of crap and to generally tighten up my thinking.

    The Science of Skunk Works — Making Sure the Cobbler’s Kids Get Shoes

    We’ve all worked at places where there’s never enough time to make sure that things are operationally done the “right way”—bills need to get paid, client or product work needs to get done and takes priority, and hey, everyone deserves to have a little bit of a life, right? There is a bit of a light at the end of this tunnel! Several companies, including Atari, Chuck E. Cheese, Ford, Microsoft, and Google, have pulled off some pretty great things by taking advantage of skunk works teams and projects. I’ve been fortunate enough to see a little bit of success with those teams and projects, as well, and will share how you can apply them to your own practice.

    Way back in the 1940s, Kelly Johnson and his team of mighty skunks used their skunk works process to design—and build—the XP-80 prototype jet fighter to compete with the Germans. In 143 days—seven days less than was needed. Kelly created 14 Rules & Practices for skunk works projects in order to help articulate the most effective way for his team to successful in the projects that they worked on. We can learn from Kelly’s rules, adapt them to our current times and perhaps more digitally dependent needs, and find some ways to put some shoes on the cobbler’s kids. And the results might just smell pretty good, if you’re patient enough.

    Notice that I didn’t just take the topic points and copy and paste them into paragraphs. Instead, I put on my editing hat and tried to establish the flow of what I was writing, keeping the paragraphs limited to 2–3 sentences for the sake of concision.

    Step 4: Phone a friend

    You know that friend you can always count on to tell you when you’ve got a booger on your nose or spinach in your teeth, or who will tell you when you were just a completely out-of-line jerk and you need to get your head on straight?

    That’s the friend you want to send your proposal to. If you’re fortunate enough to have more than one of these friends, send it to all of them. Explain to them—clearly—what they’re about to read and what the purpose is. Give them enough background so that they can provide you with actionable feedback. Tell them about the conference, the expected audience, your topic, why you’ll be good presenting on this topic, and what your proposal is about. Finally, give them a deadline of a day or two so they can review it with the focus that it deserves.

    I sent my proposal off to my friend Gabby Hon, because she’s that friend who will tell me all those things I listed above, and because she’s a words-and-grammar nerd who kicks my work as hard as it needs to be kicked.

    She sent me feedback, and, for once, my confidence was a bit higher than it should have been. I really like my topic and really felt strongly that I’d pulled together a solid proposal. Gabby’s feedback was essentially:

    • You’re using “a bit” and “a little bit” too much. I’ve counted 3 so far within a paragraph
    • Okay, so, there’s too much “this is what skunk works is”—which I can find on Wikipedia—and not enough “why this matters to design/tech/UX”
    • You say you can adapt the rules, but can you give a little hint?
    • I mean obviously it was all about design and working around restrictions and limitations—thus skunk works
    • If design is best when faced with limitations, then skunk works programs are our best historical example of how to do great work under something something

    Not only did Gabby provide some great things for me to think about and improve on, she was also gracious enough to let me know that I didn’t entirely stink up the page when I’d written my proposal:

    • It’s very good
    • Just the second paragraph needs some polishing

    Step 5: Revise your proposal

    Once you’ve had time to process the feedback, sit back down with your proposal and make adjustments. Don’t be shy about killing your darlings; the feedback you’ve received is meant to help you focus on the important parts and make them better. If something doesn’t fit, move it to a parking lot or remove it entirely.

    Here is my final revision that I’ll be submitting to conferences:

    DesignOps Skunk Works: Shoes for the Cobbler’s Children

    We’ve all worked at places where there’s never enough time to make sure that things are operationally done the “right way”—bills need to get paid, client or product/project work needs to get done and takes priority, and hey, everyone deserves to have a life, too. There is light at the end of this tunnel! Several companies, including Atari, Ford, Microsoft, and Google, have pulled off some great things by taking advantage of skunk works teams and projects. I’ve been fortunate enough to see some successes with those teams and projects, as well, and will share them so you can see how to apply the approach(es) to your own practice.

    Way back in the 1940s, Kelly Johnson and his team of mighty skunks used their skunk works process to design—and build—a prototype jet fighter in 143 days. Kelly established 14 Rules & Practices for skunk works projects in order to help articulate the most effective way for his team to be successful in the projects that they worked on. Not only can we learn from Kelly’s rules and adapt them to our current methods of working, we can also create our own skunk works teams and projects to ensure that the cobbler’s kids—the operational areas of our design practices—get some shoes put on their feet. And the results might just smell pretty good, if you’re patient enough.

    There’s a bit of a method to my madness, believe it or not. Here’s a micro-version of the change log of my proposal:

    • I made a key change in the title; I’m pretty uncomfortable with using the word “science” (originally “The Science of Skunk Works”). I’m pretty sure “science” is making a promise that I’m not certain I can keep in the presentation, and I’d prefer not to be called to the mat for that.
    • I tested my title with a few friends and this title fared the best. I was leaning toward “Shoes for the Cobbler’s Kids” personally, and the feedback encouraged me to not be so precious.
    • I also tightened up the copy based on Gabby’s feedback, placing extra focus on the second paragraph.

    Step 6: Submit the proposal to a conference

    You likely had a conference in mind when you started pulling together your proposal. Each year, I start contemplating my primary presentation for the next year as soon as I can. Generally, from March through May I really start to think about what I’ve learned and what is worth sharing with others, and then I start collecting information—notes, articles, books, and so on—so that I can support my thinking as best I can.

    When I’ve gone through this process, I know that I’m ready with a pretty solid proposal. I copy and paste the final, vetted version into the form and hit submit, confident that I’m not just winging it.

    And sure enough, that’s when I find that last typo.

  • Rachel Andrew on the Business of Web Dev: The Ways We’ve Changed—and Stayed the Same 

    In 2005, my husband and business partner Drew McLellan had an idea for a website. He emailed friends and colleagues, we filled in the gaps, and 24 ways was launched: 24 articles in the run-up to Christmas, advent-calendar style. As I write this article, we are on day six of season 10 of that project. By 24 December, there will be 240 articles by 140 authors—many of them well-known names in web design and development. As a fun holiday season retrospective, I thought I would take a look at what 10 seasons of 24 ways can tell us about how our industry has changed—and what hasn’t changed.

    Hacking our way to CSS complexity

    The first season of 24 ways, prior to Christmas 2005, brought us techniques such as using JavaScript to stripe table rows, image techniques for rounded corners, and an article on Avoiding CSS Hacks for IE due to the imminent arrival of Internet Explorer 7. In that first season, we were still very much working around the limitations of browsers that didn’t have full support for CSS2.1.

    By 2006, Andy Budd was teasing us with the rounded corner possibilities brought to us in CSS3 and in 2007, Drew McLellan helped us to get transparent PNG images to work in Internet Explorer 6. The article titles from those early years show how much of our time as designers and developers was spent dealing with browser bugs and lack of CSS to deal with the visual designs we wanted to create. The things we wanted to do were relatively simple—we wanted rounded corners, nice quote marks, and transparency. The hoops we had to jump through were plentiful.

    The introduction to the 2013 archive of 24 ways notes that 2013 was the year that the Web Standards Project “buzzed its last.” By 2013, browsers had converged on web standards. They were doing standard things in standard ways. We were even seeing innovation by browser vendors via the established standards process. My article for the 2013 season described the new CSS Grid Layout specification, initially developed by Microsoft.

    Since 2005, the CSS that we can consider usable in production has grown. We have far more CSS available to us through browser support for the new modules that make up CSS3. The things that CSS can do are also far more complex, expressive, and far reaching. We’ve moved on from spending our time trying to come up with tricks to achieve visual effects, and are spending a lot of time working out what to do with all of this CSS. How do we manage websites and web applications that are becoming more like complex pieces of software than the simple styled HTML documents of days gone by? Topics in recent years include new approaches to using CSS selectors, front-end style guides, Git, and Grunt. The web has changed, and the ways in which we spend our time have changed too.

    We all got mobile

    In the 2006 edition of 24 ways, Cameron Moll explained that,

    The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it. With WML on the decline, the learning curve is much smaller today than it was several years ago. I’m generalizing things gratuitously, but the point remains: Get off yo’ lazy butt and begin to take mobile seriously.

    The Mobile Web Simplified

    The iPhone wasn’t launched until 2007, a move by Apple that forced us all to get off our lazy butts and think about mobile! In December 2007, Brian Fling explained the state of the mobile landscape half a year after the launch of the iPhone. It wasn’t until responsive design was brought to life by Ethan Marcotte on A List Apart in May 2010, however, that articles about mobile really became numerous on 24 ways. The 2011 season had four articles with the words “responsive design” in the title!

    By 2012, we were thinking through the implications of designing for mobile, and for mobile data. Paul Lloyd took a look back at the two approaches for responsive images discussed in the 2011 season and the emerging proposals for picture and srcset in Responsive Images: What We Thought We Needed. Tim Kadlec reminded us that,

    … there’s one part of the web’s inherent flexibility that seems to be increasingly overlooked: the ability for the web to be interacted with on any number of networks, with a gradient of bandwidth constraints and latency costs, on devices with varying degrees of hardware power.

    Responsive Responsive Design

    As we rushed to implement responsive sites and take advantage of new platforms, we had to take care that we weren’t excluding people by way of bandwidth limitations. Whether it is IE6 or mobile data, some things never change. We get excited about new technologies, then come back to earth with a bump as the reality of using them without excluding a chunk of our audience kicks in!

    The work of change


    In these 10 seasons, we can see how much the web has changed and we have changed, too. Every year, 24 ways picks up a new audience and new authors, many of whom would have still been in school in 2005.

    Always, 24 ways has tried to highlight the new, the experimental, and the technically interesting. However, it has also addressed more challenging aspects. Whether an old hand or a newcomer to the industry, we can all feel overwhelmed at times, as if we are constantly running to keep up with the latest new thing. In 2013, Christopher Murphy wrote Managing a Mind, a piece that starkly illustrated the challenges that constantly keeping up can bring. This year, we are given a reminder that we need to take care of our bodies while performing the repetitive tasks that design and programming require.

    The business of web development

    Often, 24 ways has featured articles that deal with the business of being a web designer, developer, or agency owner. In 2007, Paul Boag gave us 10 tips for getting designs signed off by the client. As the recession hit in 2008, Jeffrey Zeldman wrote up his Recession Tips for Web Designers. We’ve seen articles on subjects ranging from side projects to contracts and everything in-between.

    The business archive contains some of the most evergreen content on the site, demonstrating that good business knowledge can serve you well throughout a career.

    The industry that shares

    Another thing that hasn’t changed over these 10 seasons is the enthusiasm of each 24 ways contributor for their subject, and their generosity in writing and sharing their thoughts. This reflects our industry, an industry where people share their thoughts, research, and hard-earned experience for the benefit of their peers.

    On that note, I’ll close my final A List Apart column of 2014. Best wishes to all of you who are celebrating at this time of year. I look forward to sharing my thoughts on the business side of our industry throughout 2015.

  • Learning to be Accessible 

    I’m trying to learn more about accessibility these days. Thinking about it more, reading about it some, and generally being aware, as I write code, of what I should and shouldn’t do in that arena.

    I am grateful for the folks in our community who work tirelessly to make sure that I can easily find information about it online. The A11Y Project is constantly getting updates from the community to help me understand what the best practices are. I just read Heydon Pickering’s book, Apps For All: Coding Accessible Web Applications, a short but really good reminder of how I should be writing HTML.

    I’m also speaking up more in meetings and on my project teams. When I see something that just doesn’t quite jibe with what I’ve been learning about accessibility, I bring it up. I also ask trusted friends about it, to make sure I’m not imagining things and that it really is a best practice, because it matters. Sometimes the little things, such as removing an outline on focus or using empty links instead of buttons, are the things that add up to a bad experience for people using the site with a keyboard or screen reader.

    Unfortunately, sometimes I get pushback—especially when working on a minimum viable product or a quick project. There is always the answer of, “we’ll fix it later.” But will you? I’ve been working on applications and projects long enough to see that going back to refactor can be even tougher to make time for.

    I try, when getting that pushback, to remind people of the fact that it’s hard to make the time for code refactors. Taking a little bit of time to do things right the first time around saves time in the long run. Eventually, I usually share the consequences some companies have faced when they don’t take accessibility seriously. I don’t always win the battles, but often reminding colleagues that there will be a wide range of people using the site—some of whom may not use it exactly as we do—is worth it. Hopefully, next time they’ll be more willing to take the time necessary up front.

    I know this has been said before, but as I’ve started reading more and more on accessibility and trying to learn more about it, I’ve found it to be rewarding. Working on a project where all users can access and use it well, that’s satisfying. One of my most satisfying moments professionally was hearing from a blind user who was thankful they were able to use our app. And if you don’t think about accessibility, well, that can just lead to a world of hurt.


  • Antoine Lefeuvre on The Web, Worldwide: Stars and Stripes and ISO Codes 

    This is the real story of a promising French start-up expanding into the U.S. market. The founders now have West Coast offices and the app has been fully translated into English. One minor detail though: to switch to the website’s English version, American customers have to click on… the Union Jack.

    A red flag about flags

    Don’t laugh—this story is no exception. Bla Bla Car, one of Europe’s hottest tech companies, is a truly international operation with a presence in 13 countries. Strangely enough, only 11 flags are listed in Bla Bla Car’s version selector. Who’s missing? Belgium and Luxembourg, whose only “fault” is being multilingual countries. Requiring Dutch-speaking Belgian users to click on the Netherlands flag, or American users on the British flag, is a cultural faux pas. It can even raise political hackles when you have, say, a Ukrainian user clicking on the Russian flag.

    Bla Bla Car language selector list with flag icons
    Bla Bla Car language selector.

    Version links are a key element of an international website’s navigation. But many web designers still confuse flags and languages. “What is the flag of English?” is a surreal yet often-heard question in web agencies throughout Europe (and arguably all around the world). The obvious answer is that languages have no flags. But does this mean flags are not to be used on websites?

    Do you speak es-MX?

    Let’s be honest, flags are also popular with designers because they are small, colorful, handy 16px-wide icons you can stick in the top-right corner. Sometimes you really have to deal with limited space. We had this problem a few months ago at my start-up, Clubble, when setting up our pre-launch website. We’re happy to write “English,” “Français,” and “Español” in big letters in the desktop version, but what about the mobile version?

    We need the language selector to be immediately visible and don’t want to hide any language in a drop-down list. The solution: ISO codes, “a useful international, and formal, shorthand for indicating languages.” English is “en,” French is “fr,” and Spanish is “es.” ISO codes include language variants such as American English: en-US, Brazilian Portuguese: pt-BR, or Mexican Spanish: es-MX. (Note that the first part is in lowercase, leading some purists to argue that language codes should always be in lowercase as on the European Union official portal.)

    Clubble mailing groups mobile home page
    Language ISO codes on Clubble’s mobile website.

    Avoid the temptation to create your own abbreviations just because ISO codes don’t fit your design. Visitors to Switzerland’s official portal might be surprised to find an English-only website. Well, the French, German, and Italian versions exist. They’re just hiding behind the tiny F, D and I links!

    Government of Switzerland home page, English version
    Hidden language selector at gov.ch.

    One powerful language selection feature is the automatic detection of a user’s language and/or country based on the browser’s language settings and IP address. However, it isn’t a substitute for a well-designed, easily accessible language selector, as it cannot always detect who the user is or which content they want. On my last trip to Spain, nike.com stubbornly refused to let me access the French website no matter what I tried—going to nike.fr, or even choosing France in the country selector.
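    Detection of this kind usually starts from the browser’s Accept-Language header (or navigator.languages on the client). Here is a minimal sketch, in JavaScript, of negotiating a supported locale with a fallback from region-specific codes like es-MX to base languages like es; the supported list and the function name are illustrative, not taken from any particular library:

    ```javascript
    // Hypothetical list of locales your site actually ships.
    const SUPPORTED = ['en', 'fr', 'es', 'es-MX', 'pt-BR'];

    function negotiateLocale(acceptLanguage, fallback = 'en') {
      // Parse "es-MX,es;q=0.9,en;q=0.8" into tags sorted by quality value.
      const candidates = acceptLanguage
        .split(',')
        .map((part) => {
          const [tag, q] = part.trim().split(';q=');
          return { tag, q: q ? parseFloat(q) : 1 };
        })
        .sort((a, b) => b.q - a.q);

      for (const { tag } of candidates) {
        // Prefer an exact match (es-MX), then the base language (es).
        const exact = SUPPORTED.find((s) => s.toLowerCase() === tag.toLowerCase());
        if (exact) return exact;
        const base = tag.split('-')[0].toLowerCase();
        const baseMatch = SUPPORTED.find((s) => s.toLowerCase() === base);
        if (baseMatch) return baseMatch;
      }
      return fallback;
    }
    ```

    In practice you would likely reach for an existing negotiation library rather than hand-rolling the q-value parsing, but the fallback order—exact tag, base language, site default—is the part worth getting right, and it still needs to be overridable by a visible selector.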

    A country is more than its language

    A German version is not the same as a version for Germany. A user in Austria or Switzerland would be reading local restaurant reviews in German; we can’t assume they are interested in restaurants in Germany. When you localize an app or website, you go beyond the translation to adapt your product to a specific country. Think in terms of places under the same laws and customs rather than populations that share a way of speaking. It’s the same principle you apply to domain names: .de is for the country Germany, not the language German. In this case, using a country’s flag to tell users you offer a version tailored to their culture is a great idea.

    Mercado Libre country selector with flag icons
    Mercado Libre’s country selector.

    E-commerce websites tend to be less confused about flags and languages, as most are localized, not just translated. Selling in a foreign market implies you comply with local laws, take payments in the local currency, understand your clients’ culture, and sometimes ship goods out of the country. Chances are your website will have not one selector but two: language and country. Or even three!

    Site with multiple selectors
    Keen Footwear and Skyscanner’s language and country (and currency) selectors.

    The Middle Language

    When labeling a link with the name of a language, localize the name too! That is, write “Tiếng Việt,” “Русский,” and “עברית” rather than “Vietnamese,” “Russian,” and “Hebrew.” Facebook’s and Wikipedia’s language selectors are two impressive examples.

    Facebook and Wikipedia language selectors
    The Facebook and Wikipedia language selectors.
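    You don’t have to hand-maintain these self-referential names. The standard Intl.DisplayNames API (available in modern browsers and recent Node.js) can render a language’s name in any locale, including its own. A small sketch, with an illustrative helper name of my choosing:

    ```javascript
    // Render a language's name in that language itself (its endonym),
    // using the standard Intl.DisplayNames API.
    // e.g. endonym('fr') → 'français'
    function endonym(tag) {
      return new Intl.DisplayNames([tag], { type: 'language' }).of(tag);
    }

    // Build selector labels for a hypothetical list of supported languages.
    const labels = ['fr', 'vi', 'ru'].map(endonym);
    ```

    A Wikipedia-style selector can then be generated straight from your supported-locale list, so adding a language never means hunting down the correct translation of its name.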


    Culture is often a sensitive issue. How to localize the name “Spanish,” for instance? The answer seems easy—“español,” of course. But this word happens to have connotations. In the national context of Spain, where strong regional identities and languages such as Catalan or Basque make headlines, the term “castellano” (from the historical region of Castile) is often used, as it puts all the Peninsula languages on the same level. During a six-month-long trip across South America, I also heard “castellano” a lot, probably used by those who think “español” has a colonial feeling. Many websites and applications choose to offer two Spanish versions: one for Spain and one for Latin America that is commonly named “español latinoamericano.”

    Government of Spain regional language selector
    La Moncloa (headquarters of the Spanish government) language selector.

    Naming the Chinese language isn’t any easier, I realized when I naively asked journalist and orientalist Silvia Romanelli how to say “Chinese” in Chinese. Chinese is what ISO calls a macrolanguage, i.e. a family of dozens of languages and dialects. What’s more, since Chinese languages use ideograms, there’s a big difference between written language (文 wen) and oral language (语 yu). So unless you’re designing a voice-based app, label your link 中文, zhongwen—literally, the Middle Language (China is 中国, zhongguo—the Middle Kingdom).

    Wait, that’s not all! In the People’s Republic under Mao Zedong, Chinese characters were simplified, so that today two written forms exist: 简体中文 (simplified) and 繁体中文 (traditional). Again, this can be a political issue, as the traditional form is mostly found in Taiwan. Silvia Romanelli therefore recommends following a country-agnostic approach like Global Voices and simply stating “Simplified” or “Traditional,” not “Chinese for China” or “Chinese for Taiwan” as Facebook does.

    Language selector with two versions of Chinese named simply “Simplified” and “Traditional”
    Global Voices language selector.

    Venture forth

    You had a nice app in English with no character-encoding bugs, no awfully long words that refuse to wrap nicely, and no emotional or political issues with flags and languages. Now you’re leaving the comfort of designing at home to venture into foreign lands of unknown cultures and charsets. But what can be seen as a threat is actually an opportunity. Because it is tedious and complicated, internationalization is often overlooked, so a well-localized user experience is still a nice surprise today for most non-English-speaking users. Isn’t this what we’re all looking for: a great way to stand out?

  • The Only Constant is Change: A Q&A with Ethan Marcotte 

    It’s here: a new edition of Responsive Web Design is now available from A Book Apart! Our editor-in-chief, Sara Wachter-Boettcher, sat down with Ethan Marcotte—who first introduced the world to RWD right here in A List Apart back in 2010—to talk about what’s new in the second edition, what he’s been working on lately, and where our industry is going next.

    The first edition of Responsive Web Design came out in the summer of 2011. What projects have you been working on for the past three years?

    I’ve been fortunate to have worked on some really great stuff. I’ve worked on client projects for publishers—like The Boston Globe and People Magazine—as well as for some ecommerce and financial companies. I cofounded Editorially, a responsive web application for collaborative writing. (And a product I dearly miss using.) More recently, I’ve been doing some in-house consulting to help companies planning to go responsive, including the responsive design workshops I’ve been doing with my friend and colleague Karen McGrane.

    Also, Karen and I have a podcast! (Which is an entirely new thing for me to say!) New as the experience might be, it’s been ridiculously fun: we’re interviewing the people who oversee large responsive redesigns at large organizations, and I’ve learned quite a bit.

    So I’d say the years since the first edition have been a blur. But it’s been a happy, wonderful blur, and I’ve been learning so much.

    Those are some pretty big projects. What have you learned by applying responsive principles to major media sites?

    A couple things, I guess.

    First, the importance of flat, non-interactive comps has been lessening—at least in my practice. They’re still incredibly valuable, mind—nothing’s better than Photoshop or Illustrator for talking about layout and aesthetics—but prototypes, even rough ones, are much more important to early discussions around content, design, and functionality. So yeah, I’m with Dan Mall: we need to decide in the browser as soon as we can.

    Related to that: since working on The Boston Globe back in 2011, I try to incorporate devices as early as possible in design reviews. Doing so does a great job of reinforcing that there’s no canonical, “true” version of the design. Getting a prototype in someone’s hands is incredibly effective—it’s worth dozens of mockups.

    All right, let’s talk about the book. What changes will readers see in the second edition?

    The second edition’s changed quite a bit from the first, but the table of contents hasn’t: as in the first edition, the chapters revolve around the three “ingredients” of a responsive design—fluid grids, flexible images, and media queries—and how they work in concert to produce a responsive design.

    But if you look past the chapter headings, you’ll see a slew of changes. As ALA’s readers probably know, tons of people have written about how to work responsively—whenever possible, tips and resources have been pulled in. (I mean, heck: we now have a responsive images specification, which gets a brief but important mention.) On top of all of that, errors were corrected; broken links fixed; figures updated; questions I’ve received from readers over the years have, whenever possible, been incorporated. I can’t tell you how good it feels to have those edits in—it feels like it’s the book it should’ve been.

    But even more than that, it was incredibly exciting to revisit the sheer volume of responsive sites that’ve been launched since I first wrote the article. Pulling in screenshots of so many beautiful responsive sites was, well, a real joy.

    And finally, I’d be remiss if I didn’t mention that Anna Debenham was the technical editor. Anna is a talented writer, speaker, and front-end developer; she’s also the co-founder of Styleguides.io, and responsible for invaluable research into the various web browsers on handheld game consoles. I don’t know how she found the time to review my second edition, but I’m impossibly grateful she did: the book is better for her criticisms, her insightful questions, and her great suggestions.

    You mentioned your podcast with Karen earlier. I’m personally a huge fan. It’s fascinating to hear how all kinds of different organizations, like Harvard, Fidelity, and Marriott, have gone responsive. What have you learned from having diverse teams tell you about their projects?

    I think part of responsive design’s appeal is that we realized our old ways of working weren’t, well, working. Siloing our designs into device-specific experiences might work for some projects, but that “mobile site vs. desktop site” approach isn’t sustainable. So as we’ve begun designing for more screens, more device classes, and more things than ever before, the device-agnostic flexibility at the heart of responsive design—or, heck, at the heart of the web—has become appealing to many.

    But as teams and companies design responsively, they often find their challenges go beyond the code—advertising or content workflows need to be optimized for multi-device work, both of which are infinitely more challenging than flexible layouts and squishy images.

    Frequently, one of the biggest challenges is the relationship between design and development: in many organizations and project teams, they’re discrete groups that only overlap at certain points in a project. That old idea of “handoff” between design and technology is where problems most commonly pop up.

    In other words, I think we’re at a point where treating “design” and “development” as discrete teams is a liability. The BBC wrote about this problem beautifully: when we’re designing for a web that’s not just flexible, but volatile—“in a constant state of flux,” even—we need to iterate more quickly, and collaborate more closely. And a closer relationship between design and development is a large part of that.

    What do you think is the biggest misperception about RWD?

    If you’ve read anything about responsive design, you’ve probably come across it: this suggestion that responsive design is somehow incompatible with performance. In other words, if you care about building a site that loads quickly for your users—and you do, right?—then you should steer clear of responsive design.

    So, what’s the reality, then?

    The idea that responsive design can’t be fast is, bluntly, false. As everyone from Filament Group to The Guardian to the British government has shown us, you can have responsive designs that are as fast as they are flexible. It just takes careful planning, as well as an acknowledgement that performance isn’t just a technical issue—it’s everyone’s problem. There’s even data to suggest that responsive sites can be faster than mobile-specific “m-dot” sites. But even so, the suggestion still floats around.

    That said, I confess I’m not too worried. Because when it comes to the whole “responsive design is bad for performance” myth, I’m with Tim Kadlec: anything that gets people discussing performance, even a misconception, is great. And on most of my projects, the result of that conversation is usually a site that’s both lightweight and responsive.

    (Thankfully, Scott Jehl’s new book, Responsible Responsive Design, dives into these questions with gusto.)

    It’s awesome to see people making such great strides on performance. What other challenges do you see RWD needing to overcome in the next year or two?

    It’s a bit difficult to focus on one in particular: process is a big concern, as I mentioned above; there are lots of discussions around the best way to do multi-device QA/testing; and I get lots of questions about how to tackle more challenging design patterns.

    More broadly, I often say the most common words you hear in a responsive redesign—“mobile,” “tablet,” and “desktop”—are also the most problematic. A quick example: “mobile” is frequently used as a proxy for “small touchscreen, limited bandwidth.” But what if the “mobile” user is connected to wifi? Or the “desktop” user is tethered to a spotty 3G connection? Shorthand terms can be helpful, it’s true, but it’s often more productive to discuss specific challenges—screen size, CPU/GPU quality, input mode, network quality, and so on—and design for each independently of specific device classes.

    I mention this because, now more than when I wrote the book, responsive design isn’t about designing for “mobile.” It’s about designing for the web, a medium that’s both flexible and device-agnostic by default. And while we’re looking ahead with excitement (and maybe some trepidation) to the next big thing, I think it’s worth remembering that thinking device-agnostically can be a real, real strength.

    It sounds like we’ll be busy figuring this stuff out for a while. What would you recommend to a reader who’s just getting started—besides cough buying your book, of course? How can they keep from losing their shit at all the new stuff to learn?

    First of all: if someone figures out how to not freak out at how quickly things change? Please do email me. I’d love to know your secret. (Please.)

    When the browsers are especially bad, when the layout doesn’t seem to be gelling, I reread John Allsopp’s “A Dao Of Web Design.” Really. Honestly, the idea that we can’t control the display of our work is actually pretty freeing. We can guide it, shape it, but we can’t know if the user’s network connection is reliable, or if their browser runs JavaScript, or whether our layout will be shown on a screen that is large or small (or very, very small).

    The only constant we have on the web is the rate of change. And progressive enhancement is the best way for us to manage that. That’s why I always turn back to “A Dao Of Web Design.” Not just because it was a huge influence on me, and a direct influence on responsive web design: but because now, more than ever, we have to accept “the ebb and flow of things” on the web.

    Let’s get started.

    Pick up your copy of the second edition of Responsive Web Design from A Book Apart.