EW Resource

Newsfeeds

There's a huge range of newsfeeds on the Net bringing up-to-date information and content on a wide variety of subjects.

Here are just a few relating to web development.



A List Apart: The Full Feed
  • Finessing `feColorMatrix` 

    Have you seen Spotify’s end-of-year campaign? They’ve created a compelling visual aesthetic through image-color manipulation.

    Screenshot of Spotify’s end-of-year campaign

    Image manipulation is a powerful mechanism for making a project stand out from the crowd, or just adding a little sparkle—and web filters offer a dynamic and cascadable way of doing it in the browser.

    CSS vs. SVG

    Earlier this year, I launched CSSgram, a pure CSS library that uses filters and blend modes to recreate Instagram filters.

    Image grid from Una Kravets’ CSSgram showing a variety of filters and blend modes that recreate Instagram filters

    Now, this could be done with tinkering and blend modes—but one key feature CSS filters lack is the ability to do per-channel manipulation. This is a huge downside. While CSS filters are convenient, they are merely shortcuts derived from SVG and therefore provide no control over RGBA channels. SVG (particularly the feColorMatrix map) gives us much more power and lets us take CSS filters to the next level, granting significantly more control over image manipulation and special effects.
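
    For example, here is the kind of CSS shorthand in question; it's convenient, but gives no access to individual channels (the class name is invented for illustration):

    
    /* a CSS shorthand filter: one line, but no per-channel control */
    .desaturate {
      filter: grayscale(100%);
    }
    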

    SVG filters

    In the SVG world, filter effects are prefixed with fe-. (Get it? For “filter effect.”) They can produce a wide variety of effects, ranging from blurs to generated 3-D textures. The fe- family is broad, though; see the end of this article for a summary of what each SVG filter effect does.

    SVG filters are currently supported in the following browsers:

    Screenshot from caniuse.com

    So yeah, you should be good to go for the most part, unless you need to support IE9 or older. SVG filter support is relatively stable, and is more widespread than support for CSS filters and blend modes. There are also fewer odd bugs, unlike with CSS blend modes (where only Chrome 46 has issues rendering the multiply, difference, and exclusion blend modes).

    Note: Some of the 3-D filters, such as feConvolveMatrix, do have known bugs in certain browsers, though feColorMatrix, which this article focuses on, does not. Also, keep in mind that performance will inevitably take a tiny hit when it comes to applying any action in-browser (as opposed to rendering a pre-edited image on the page).

    Using filters

    The basic layout of an SVG filter goes like this:

    
    <svg>
      <filter id="filterName">
        <!-- the filter definition goes here; it can
             include multiple filter primitives (fe-* elements) -->
      </filter>
    </svg>
    
    

    Within an SVG, you can declare a filter. Most of the time, you’ll want to declare filters within the defs element of an SVG; you can then apply them in CSS like so:

    
    .filter-me {
      filter: url('#filterName');
    }
    
    

    The filter URL can point to an inline or an external SVG, so both filter: url('../img/filter.svg#filterName') and filter: url('http://una.im/filters.svg#filterName') are valid.
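
    Put together, a minimal setup might look like this (the file and class names are illustrative):

    
    <!-- an invisible SVG that only carries filter definitions -->
    <svg width="0" height="0" style="position: absolute;">
      <defs>
        <filter id="filterName">
          <!-- filter primitives (fe-* elements) go here -->
        </filter>
      </defs>
    </svg>
    
    <!-- the filter is applied to this image by the CSS rule above -->
    <img class="filter-me" src="photo.jpg" alt="Photo with the filter applied">
    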

    feColorMatrix

    When it comes to color manipulation, feColorMatrix is your best option. feColorMatrix is a filter type that uses a matrix to affect color values on a per-channel (RGBA) basis. Think of it like editing the channels in Photoshop.

    This is what the feColorMatrix looks like (with each RGBA value as 1 by default in the original image):

    
    <filter id="linear">
      <feColorMatrix
        type="matrix"
        values="R 0 0 0 0
                0 G 0 0 0
                0 0 B 0 0
                0 0 0 A 0 "/>
    </filter>
    
    

    Each row of the matrix computes one channel of the final RGBA value as a weighted sum of the input R, G, B, and A channels; the fifth number in each row is a constant offset. Read from top to bottom, the rows give the output R, G, B, and A values in order:

    
    /* R G B A 1 */
    1 0 0 0 0 // R = 1*R + 0*G + 0*B + 0*A + 0
    0 1 0 0 0 // G = 0*R + 1*G + 0*B + 0*A + 0
    0 0 1 0 0 // B = 0*R + 0*G + 1*B + 0*A + 0
    0 0 0 1 0 // A = 0*R + 0*G + 0*B + 1*A + 0
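    
    As a quick worked example, pushing a made-up pixel of R=0.2, G=0.5, B=0.8, A=1 through this identity matrix returns it unchanged:
    
    
    R' = 1*0.2 + 0*0.5 + 0*0.8 + 0*1 + 0 = 0.2
    G' = 0*0.2 + 1*0.5 + 0*0.8 + 0*1 + 0 = 0.5
    B' = 0*0.2 + 0*0.5 + 1*0.8 + 0*1 + 0 = 0.8
    A' = 0*0.2 + 0*0.5 + 0*0.8 + 1*1 + 0 = 1.0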
    
    

    Here’s a better visualization:

    Hand-drawn sketch showing a schematic visualization of the feColorMatrix

    RGB values

    Colorizing

    You can colorize images by omitting and mixing color channels like so:

    
    <!-- lacking the B & G channels (only R at 1) -->
    <filter id="red">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the R & G channels (only B at 1) -->
    <filter id="blue">
     <feColorMatrix
        type="matrix"
        values="0   0   0   0   0
                0   0   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the R & B channels (only G at 1) -->
    <filter id="green">
      <feColorMatrix
        type="matrix"
        values="0   0   0   0   0
                0   1   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    Here’s what adding the “green” filter to an image looks like:

    Photo showing what the addition of the “green” filter would look like

    Channel mixing

    You can also mix RGB channels to get solid colorizing results:

    
    <!-- lacking the B channel (mix of R & G)
    Red + Green = Yellow
    With no blue channel, the mix reads as yellow
    -->
    <filter id="yellow">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the G channels (mix of R & B)
    Red + Blue = Magenta
    -->
    <filter id="magenta">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the R channel (mix of G & B)
    Green + Blue = Cyan
    -->
    <filter id="cyan">
      <feColorMatrix
        type="matrix"
        values="0   0   0   0   0
                0   1   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    In each of the previous examples, we mixed the remaining channels additively, as in the RGB model, so removing the red channel means that green and blue remain. When green and blue mix, they create cyan. Red and blue make magenta. We still retain some of the red and blue values where they are most prominent, but in areas that lack the two (light areas of white, where all colors are present in the RGB schema, or areas of green), the RGBA values of the other two channels replace them.

    Justin McDowell has written an excellent article that explains HSL (hue, saturation, lightness) color theory. With SVG, the lightness value is the luminosity, which we also need to keep in mind. Here, each luminosity level is retained in each channel, so for magenta, we get an image that looks like this:

    Photo showing how a magenta effect is produced when each luminosity level is retained in each channel

    Why is there so much magenta in the clouds and lightest values? Consider the RGB chart:

    RGB chart

    When one value is missing, the other two take its place. So now, without the green channel, there is no white, cyan, or yellow. These colors don’t actually disappear, however, because their luminosity (or alpha) values have not yet been touched. Let’s see what happens when we manipulate those alpha channels next.

    Alpha values

    We can play with the shadow and highlight tones via the alpha channels (fourth column). The fourth row affects overall alpha channels, while the fourth column affects luminosity on a per-channel basis.

    
    <!-- Acts like an opacity filter at .5 -->
    <filter id="alpha">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   0   0
                0   0   1   0   0
                0   0   0   .5  0 "/>
    </filter>
    
    <!-- increases green opacity to be
         on the same level as overall opacity -->
    <filter id="hard-green">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   1   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <filter id="hard-yellow">
      <feColorMatrix
        type="matrix"
        values="1   0   0   1   0
                0   1   0   1   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    In the following example, we’re reusing the matrix from the magenta example and adding a 100% alpha channel on the blue level. We retain the red values, yet override any red in the shadows so the shadow colors all become blue, while the lightest values that have red in them become a mix of blue and red (magenta).

    
    <filter id="blue-shadow-magenta-highlight">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1   1   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing what happens when we reuse the matrix from the magenta example and add a 100% alpha channel on the blue level

    If this last value were less than 0 (down to -1), the opposite would happen. The shadows would turn red instead of blue. At -1, these two filters create identical effects:

    
    <filter id="red-overlay">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1  -1   0
                0   0   0   1   0 "/>
    </filter>
    
    <filter id="identical-red-overlay">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a red overlay, making the shadows red instead of blue

    Making this value .5 instead of -1, however, allows us to see the mixture of color in the shadow:

    
    <filter id="blue-magenta-2">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1  .5   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a mixture of colors in the shadows

    Blowing out channels

    We can affect the overall alpha of individual channels via the fourth row. Since our example has a blue sky, we can get rid of the sky and the blue values by converting blue values to white, like this:

    
    <filter id="elim-blue">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   0   0
                0   0   1   0   0
                0   0   -2   1   0 "/>
    </filter>
    
    
    Image showing an example of blowing out a channel. We can get rid of the sky and the blue values by converting blue values to white

    Here are a few more examples of channel mixing:

    
    <!-- No G channel, Red is at 100% on the G Channel, so the G channel looks Red (luminosity of G channel lost) -->
    <filter id="no-g-red">
      <feColorMatrix
        type="matrix"
        values="1   1   0   0   0
                0   0   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- No G channel, Red and Green is at 100% on the G Channel, so the G Channel looks Magenta (luminosity of G channel lost) -->
    <filter id="no-g-magenta">
      <feColorMatrix
        type="matrix"
        values="1   1   0   0   0
                0   0   0   0   0
                0   1   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- G channel being shared by red and blue values. This is a colorized magenta effect (luminosity maintained) -->
    <filter id="yes-g-colorized-magenta">
      <feColorMatrix
        type="matrix"
        values="1   1   0   0   0
                0   1   0   0   0
                0   1   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    Lighten and darken

    You can create a darken effect by setting the RGB value of each channel to less than 1 (1 being full natural strength). To lighten, increase the values above 1. You can think of this as expanding or contracting the RGB color circle shown earlier: the wider the circle’s radius, the lighter the tones and the more white is “blown out”; the opposite happens when the radius is decreased.

    Diagram showing how you can create a darken effect by setting the RGB values at each channel to a value less than 1; to lighten, increase the values to greater than 1

    Here’s what the matrix looks like:

    
    <filter id="darken">
      <feColorMatrix
        type="matrix"
        values=".5   0   0   0   0
                 0  .5   0   0   0
                 0   0  .5   0   0
                 0   0   0   1   0 "/>
    </filter>
    
    
    Image with a darken filter applied
    
    <filter id="lighten">
      <feColorMatrix
        type="matrix"
        values="1.5   0   0   0   0
                0   1.5   0   0   0
                0   0   1.5   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image with a lighten filter applied

    Grayscale

    You can create a grayscale effect by accepting only one channel’s pixel values in a column, feeding that single input channel into all three color rows. Different grayscale effects result depending on which channel you keep active. Consider these examples:

    
    <filter id="gray-on-light">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                1   0   0   0   0
                1   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a 'gray on light' effect
    
    <filter id="gray-on-mid">
      <feColorMatrix
        type="matrix"
        values="0   1   0   0   0
                0   1   0   0   0
                0   1   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a 'gray on mid' effect
    
    <filter id="gray-on-dark">
      <feColorMatrix
        type="matrix"
        values="0   0   1   0   0
                0   0   1   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a 'gray on dark' effect
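
    As an aside, each grayscale above is driven by a single channel at full weight; a more conventional luminance-preserving grayscale (not one of this article’s examples) repeats the standard Rec. 709 luma weights in every color row:
    
    
    <filter id="grayscale-luminance">
      <feColorMatrix
        type="matrix"
        values="0.2126  0.7152  0.0722  0   0
                0.2126  0.7152  0.0722  0   0
                0.2126  0.7152  0.0722  0   0
                0       0       0       1   0 "/>
    </filter>
    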

    Pulling it all together

    The real power of feColorMatrix lies in its ability to mix channels and combine many of these concepts into new image effects. Can you read what’s going on in this filter?

    
    <filter id="peachy">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0  .5   0   0   0
                0   0   0  .5   0
                0   0   0   1   0 "/>
    </filter>
    
    

    We’re using the red channel at its normal strength, applying green at half strength, and deriving blue from the alpha channel at half strength rather than from its original color location. The effect gives us dark blue in the shadows, and a mix of red and half-green for the highlights and midtones. If we recall that red + green = yellow, red + (green/2) would be more of a coral color:

    Image showing what happens when we use the red channel at its normal alpha channel, apply green at half strength, and apply blue on the darker alpha channels but not at its original color location

    Here’s another example:

    
    <filter id="lime">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   2   0   0   0
                0   0   0  .5   0
                0   0   0   1   0 "/>
    </filter>
    
    

    In that segment, we’re using the normal pixel hue of red, a blown-out green, and blue devoid of its original hue pixels, but applied in the shadows. Again, we see that dark blue in the shadows, and since red + green = yellow, red + (green*2) would be more of a yellow-green in the highlights:

    Image showing what happens when we use the normal pixel hue of red, a blown-out green, and blue devoid of its original hue pixels, but applied in the shadows. Again, we see that dark blue in the shadows, and since red + green = yellow, red + (green*2) would be more of a yellow-green in the highlights

    So much can be explored by playing with these values. An excellent example of such exploration is Rachel Nabors’s Dev Tools Challenger, where she filters out the longer wavelengths (i.e., the red and orange channels) from the fish in the sea, explaining why “Orange Roughy” actually appears black in the water. (Note: requires Firefox.)

    How cool! Science! And color filters! Now that you have a basic grasp of the situation, you, too, have the tools you need to create your own effects.

    For some of those really rad Spotify duotone effects, I recommend you check out an article by Amelia Bellamy-Royds, who goes into even more detail about feColorMatrix. Sara Soueidan also wrote an excellent post on image effects where she recreates CSS blend modes with SVG.

    Filter effects reference

    Once you understand what’s going on with the feColorMatrix, you have the basic tools to create detailed filters within a single contained filter definition, but there are other options out there that will let you take it even further. Here’s a handy guide to all of the fe-* options currently out there for further exploration:

    • feBlend: similar to CSS blend modes, this function describes how images interact via a blend mode
    • feComponentTransfer: an umbrella term for a function that alters individual RGBA channels (via child elements such as feFuncG)
    • feComposite: a filter primitive that defines pixel-level image interactions
    • feConvolveMatrix: this filter dictates how pixels interact with their close neighbors (i.e., blurring or sharpening)
    • feDiffuseLighting: lights an image using its alpha channel as a bump map (the diffuse portion of a lighting model)
    • feDisplacementMap: displaces an image (in) using the pixel values of another input (in2)
    • feFlood: complete fill of the filter subregion with a specified color and alpha level
    • feGaussianBlur: blurs input pixels using an input standard deviation
    • feImage: for use within other filters (like feBlend or feComposite)
    • feMerge: allows filter effects to be applied simultaneously, layering their results on top of one another, instead of in sequence
    • feMorphology: erodes or dilates lines of source graphic (think strokes on text)
    • feOffset: used for creating drop shadows
    • feSpecularLighting: lights an image using its alpha channel as a bump map, a.k.a. the “specular” portion of the Phong reflection model
    • feTile: refers to how an image is repeated to fill a space
    • feTurbulence: allows the creation of synthetic textures using Perlin Noise

  • The Pain With No Name 

    Twenty-five years into designing and developing for the web and we still collectively suck at information architecture.

    We are taught to be innovative, creative, agile, and iterative, but where and when are we taught how to make complex things clear? In my opinion, the most important thing we can do to make the world a clearer place is teach people how to think critically about structure and language.

    We need to teach people that information architecture (IA) decisions are just as important as look-and-feel and technology-stack choices. We need to teach people the importance of semantics and meaning. We need to teach people to look past the way the rest of the web is structured and consider instead how their corner of the web can be structured to support their own unique intentions.

    The web was born to be a democratized building site, and it’s grown into a place that most people visit multiple times per day.

    The role of IA is democratizing as well. The tools and resources we use to structure, design, and develop the web are becoming easier to use, and so we need to know the impact that our structural and linguistic choices have on the integrity, efficacy, and accessibility of the places we’re making.

    Making choices about structure and language so that things make sense is the essence of IA. It’s a responsibility unevenly distributed across job titles, from user experience design and interaction design to content strategy, instructional design, environmental wayfinding, and database architecture. It’s also practiced widely outside the technology and design sector by teachers, business owners, policy makers, and others who make things make sense to other people.

    Fact: Most people practicing information architecture have never heard the term before. I believe that this is why we aren’t collectively getting better at this important practice.

    Without a label, a common nomenclature, IA can seem like an insurmountable mountain to climb. Let’s say you’re working on how to arrange and label the parts of your marketing website, as well as improve the categorization of your online product catalog. To help with these tasks, what do you use as keywords to find your way?

    “How to organize a website?”

    “What are e-commerce catalog best practices?”

    “How to choose categories for product catalogs?”

    This is like googling symptoms of a disease you’re suffering from. It is a long, hard, frustrating road to take. Without knowing the words “information architecture,” you are only likely to find the ways other people have already solved specific problems.

    Don’t get me wrong, this is a fine first step, but without understanding the conceptual underpinnings of IA, people are more likely to end up propagating patterns they see on the parts of the web they experience. This trend is making too much of the web look and act the same, as if everyone is working from a single floor plan and the entire world is slowly becoming one big suburban subdivision.

    In 2013, I was preparing to interview Lou Rosenfeld onstage at World Information Architecture Day in New York City. While doing my homework for the interview, I had the chance to speak with Peter Morville about the rise of IA as a field of practice. He told me that before the term “information architecture” was popularized, people referred to something called “the pain with no name.”

    Users couldn’t find things. Sites couldn’t accommodate new content. It wasn’t a technology problem. It wasn’t a graphic design problem. It was an information architecture problem.
    Peter Morville, A Brief History of Information Architecture

    The phraseology of “the pain with no name” is powerful because it properly captures the anxiety involved in making structural and linguistic decisions. It is messy, painful, brain-melting work that takes a commitment to clarity and coherence.

    These pains did not die with the birth of web 2.0. Every single person working on the web today has dealt with a situation where the pain with no name has reared its ugly head, leaving disinformation and misinformation in its wake. Consider:

    “Our marketing team has a different language than the technology team.”

    “Our users don’t understand the language of our business.”

    “The way this is labeled or classified is keeping users from finding or understanding it.”

    “We have several labels for the same thing and it gets in the way when discussing things.”

    These pains persist on every project; disagreements about language and structure often go unresolved due to a lack of clear ownership. Since they’re owned and influenced by everything from business strategy to technical development, it’s hard to fit these conversations onto a Gantt chart or project plan. IA discussions seem to pop up over the course of a project like a game of whack-a-mole.

    When I worked on an agency team, it was quite common for copywriters to want responsibility for coming up with the final labels for any navigation system I proposed. They rightly saw these labels as important brand assets. But it was also quite common for us to learn through testing and analytic reports that these branded labels were not performing as expected with users. In meeting after meeting, we struggled and argued over the fact that my proposed labels—while more to the point than theirs—were dry, boring or not “on brand.” Sometimes I won these arguments, but I was usually overpowered by the creative team’s preference for pithy, cute, metaphoric, or irreverent labels that “better matched the brand.”

    In the worst incident, the label I proposed made sense to 9 of 10 users in a lab usability test of wireframes. The same content was tested again following development, but was now hidden behind a cute, branded label that made sense to 0 of 10 users. Ultimately, the creative team convinced the client that the lab test itself had biased users toward this result. Once we had a few months of analytics captured from the live site, we saw the problem was, in fact, real. It was the first time I had ever seen 0% of users click on a main navigation item.

    Seven years later, that label is still on the site and no users have ever clicked on it. The client hasn’t been able to prioritize the budget to fix it since they need to pay for campaign-based work (much of which is ironically hidden behind that cute but confusing label). This was the first time I fully understood how much of my job is to teach others to consider IA and not just listen to my recommendations around it.

    I fear that we have become lost in a war of dividing responsibility. Clarity is the victim in these battles. It doesn’t matter who comes up with the label or who decides how something is arranged. What matters is that someone thinks about it and decides a way forward that upholds clarity and intention.

    The web is too new—heck, software design is too new—for us to say there is a clear and easy answer when we design. Every time we make something, we are leaping out of an airplane and all the research in the world is just us packing our parachute carefully. The landing will still be felt.
    Christina Wodtke, Fear of Design (2002)

    There is more information swirling around in the world than ever before. There are more channels through which we disseminate content. There has never been such a pressing need for critical thinking about structure to ensure things make sense. Yet, I believe that the pain with no name is experiencing a second coming.

    In too many cases, educational programs in design and technology have stopped teaching or even talking about IA. Professionals in the web industry have stopped teaching their clients about its importance. Reasons for this include “navigation is dead,” “the web is bottom up, not top down,” and “search overthrew structure”—but these all frame IA as a pattern or fad that went out with tree controls being used as navigation.

    These misconceptions need to be addressed if we are going to deal with the reality of the impending “tsunami of information” approaching our shores. The need for clarity will never go out of style, and neither will the importance of language and structure. We will always need to have semantic and structural arguments to get good work done.

    I have worked with too many businesses with inherited “lacksonomies” that emerged from the sense that there’s only one way to organize an e-commerce site, mobile app, or marketing site. We forget that most of the interfaces out there are more experiment than proven pattern. In other words, be careful when copying from others.

    Many people believe that a large or popular brand has “probably” tested their architectural decisions, when in reality, that’s often not the case. The truth is that we never know if we’re looking at something being A/B tested or redesigned behind the scenes because it’s not working.

    How can we be sure that the patterns we’re copying are well-founded?

    The truth is that we can’t. Something that works for Amazon might not work for your business. Something Google did might be a terrible decision when applied to your context. I once had a client who wanted their product to be structured like iTunes, because Apple is so great at design.

    Changing requirements means changing IA, and that means the entire downstream process will need to be adjusted.
    Keith LaFerriere, Educating the Client on IA

    Only you can help the world to give this pain a name.

    When a structural or linguistic decision is being discussed, call it out as information architecture. Give people the label they’re searching for to describe the pain and anxiety being faced. If there is a semantic argument to be had, have it and make sure those you’re arguing with know the impact of leaving such things unresolved.

    Teach others about the ramifications of IA decision-making. Warn your coworkers and clients that IA is not a phase or process that can be set once and forgotten. It’s an ongoing discussion that can be impacted during any stage of the work.

    Share your IA struggles with colleagues and peers so our community can grow from collective experiences. If you want a venue for sharing and learning more about the global conversation happening around information architecture, find a World IA Day location near you.


  • This week's sponsor: BUGHERD 

    BUGHERD. It’s like sticky notes for a website. Just point, click and send to create visual bug reports. Check out Bugherd.com

  • The Art of the Commit 

    A note from the editors: We’re pleased to share an excerpt from Chapter 5 of David Demaree’s new book, Git for Humans, available now from A Book Apart.

    Git and tools like GitHub offer many ways to view what has changed in a commit. But a well-crafted commit message can save you from having to use those tools by neatly (and succinctly) summarizing what has changed.

    The log message is arguably the most important part of a commit, because it’s the only place that captures not only what was changed, but why.

    What goes into a good message? First, it needs to be short, and not just because brevity is the soul of wit. Most of the time, you’ll be viewing commit messages in the context of Git’s commit log, where there’s often not a lot of space to display text.

    Think of the commit log as a newsfeed for your project, in which the log message is the headline for each commit. Have you ever skimmed the headlines in a newspaper (or, for a more current example, BuzzFeed) and come away thinking you’d gotten a summary of what was happening in the world? A good headline doesn’t have to tell the whole story, but it should tell you enough to know what the story is about before you read it.

    If you’re working by yourself, or closely with one or two collaborators, the log may seem interesting just for historical purposes, because you would have been there for most of the commits. But in Git repositories with a lot of collaborators, the commit log can be more valuable as a way of knowing what happened when you weren’t looking.

    Commit messages can, strictly speaking, span multiple lines, and can be as long or as detailed as you want. Git doesn’t place any hard limit on what goes into a commit message, and in fact, if a given commit does call for additional context, you can add additional paragraphs to a message, like so:

    
    Updated Ruby on Rails version because security
    
    Bumped Rails version to 3.2.11 to fix JSON security bug. 
    See also http://weblog.rubyonrails.org/2013/1/8/Rails-3-2-11-3-1-10-3-0-19-and-2-3-15-have-been-released/
    
    

    Note that although this message contains a lot more context than just one line, the first line is important because only the first line will be shown in the log:

    
    commit f0c8f185e677026f0832a9c13ab72322773ad9cf
    Author: David Demaree 
    Date:   Sat Jan 3 15:49:03 2013 -0500
    
    Updated Ruby on Rails version because security
    
    

    Like a good headline, the first line here summarizes the reason for the commit; the rest of the message goes into more detail.
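
    Git’s condensed views make the same point; for example, git log --oneline prints just an abbreviated hash and that first line (output abridged here):
    
    
    $: git log --oneline
    f0c8f18 Updated Ruby on Rails version because security
    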

    Writing commit messages in your favorite text editor

    Although the examples in this book all have you type your message inline, using the --message or -m argument to git commit, you may be more comfortable writing in your preferred text editor. Git integrates nicely with many popular editors, both on the command line (e.g., Vim, Emacs) and in more modern, graphical apps like Atom, Sublime Text, or TextMate. With an editor configured, you can omit the --message flag and Git will hand off a draft commit message to that program for authoring. When you’re done, you can usually just close the window and Git will automatically pick up the message you entered.

    To take advantage of this sweet integration, first you’ll need to configure Git to use your editor (specifically, your editor’s command-line program, if it has one). Here, I’m telling Git to hand off commit messages to Atom:

    
    $: git config --global core.editor "atom --wait"
    
    

    Every text editor has a slightly different set of arguments or options to pass in to integrate nicely with Git. (As you can see here, we had to pass the --wait option to Atom to get it to work.) GitHub’s help documentation has a good, brief guide to setting up several popular editors.
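
    For reference, here are a couple of other common configurations (the exact flags come from each editor’s documentation, so double-check them against your install):
    
    
    # Sublime Text: open a new window and wait until the file is closed
    $: git config --global core.editor "subl -n -w"
    
    # Vim is terminal-based, so no wait flag is needed
    $: git config --global core.editor vim
    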

    Elements of commit message style

    There are few hard rules for crafting effective commit messages—just lots of guidelines and good practices, which, if you were to try to follow all of them all of the time, would quickly tie your mind in knots.

    To ease the way, here are a few guidelines I’d recommend always following.

    Be useful

    The purpose of a commit message is to summarize a change. But the purpose of summarizing a change is to help you and your team understand what is going on in your project. The information you put into a message, therefore, should be valuable and useful to the people who will read it.

    As fun as it is to use the commit message space for cursing—at a bug, or Git, or your own clumsiness—avoid editorializing. Avoid the temptation to write a commit message like “Aaaaahhh stupid bugs.” Instead, take a deep breath, grab a coffee or some herbal tea or do whatever you need to do to clear your head. Then write a message that describes what changed in the commit, as clearly and succinctly as you can.

    In addition to a short, clear description, when a commit is relevant to some piece of information in another system—for instance, if it fixes a bug logged in your bug tracker—it’s also common to include the issue or bug number, like so:

    
    Replace jQuery onReady listener with plain JS; fixes #1357
    
    

    Some bug trackers (including the one built into every GitHub project) can even be hooked into Git so that commit messages like this one will automatically mark the bug numbered 1357 as done as soon as the commit with this message is merged into master.

    Be detailed (enough)

    As a recovering software engineer, I understand the temptation to fill the commit message—and emails, and status reports, and stand-up meetings—with nerdy details. I love nerdy details. However, while some details are important for understanding a change, there’s almost always a more general reason for a change that can be explained more succinctly. Besides, there’s often not enough room to list every single detail about a change and still yield a commit log that’s easy to scan in a Terminal window. Finding simpler ways to describe something doesn’t just make the changes you’ve made more comprehensible to your teammates; it’s also a great way to save space.

    A good rule of thumb is to keep the “subject” portion of your commit messages to one line, or about 70 characters. If there are important details worth including in the message, but that don’t need to be in the subject line, remember you can still include them as a separate paragraph.

    Be consistent

    However you and your colleagues decide to write commit messages, your commit log will be more valuable if you all try to follow a similar set of rules. Commit messages are too short to require an elaborate style guide, but having a conversation to establish some conventions, or making a short wiki page with some examples of particularly good (or bad) commit messages, will help things run more smoothly.

    Use the active voice

    The commit log isn’t a list of static things; it’s a list of changes. It’s a list of actions you (or someone) have taken that have resulted in versions of your work. Although it may be tempting to use a commit message to label a version of the work—“Version 1.0,” “Jan 24th deliverable”—there are other, better ways of doing that. Besides, it’s all too easy to end up in an embarrassing situation like this:

    
    # Making the last homepage update before releasing the new site
    $: git commit -m "Version 1.0"
    
    
    
    # Ten minutes later, after discovering a typo in your CSS
    $: git commit -m "Version 1.0 (really)"
    
    
    
    # Forty minutes later, after discovering another typo
    $: git commit -m "Version 1.0 (oh FFS)"
    
    

    Describing changes is not only the most correct format for a commit message, but it’s also one of the easiest rules to stick to. Rather than concern yourself with abstract questions like whether a given commit is the release version of a thing, you can focus on a much simpler story: I just did a thing, and this is the thing I just did.

    Those “Version 1.0” commits, therefore, could be described much more simply and accurately:

    
    $: git commit -m "Update homepage for launch"
    $: git commit -m "Fix typo in screen.scss"
    $: git commit -m "Fix misspelled name on about page"
    
    

    I also recommend picking a tense and sticking with it, for consistency’s sake. I tend to use the imperative present tense to describe commits: Fix misspelled name on About page rather than fixed or fixing. There’s nothing wrong with fixed or fixing, except that they’re slightly longer. If another style works better for you or your team, go for it—just try to go for it consistently.

    What happens if your commit message style isn’t consistent? Your Git repo will collapse into itself and all of your work will be ruined. Kidding! People are fallible, lapses will happen, and a little bit of nonsense in your logs is inevitable. Note, though, that following style rules like these gets easier the more practice you get. Aim to write the best commit messages you can, and your logs will be better and more valuable for it.

  • This week's sponsor: Hired 

    Get 5+ job offers with 1 application from companies like Uber, Square, and Facebook—plus a $1000 bonus when you get a job—with our sponsor Hired. Join today.

  • The High Price of Free 

    Doing business in the web industry has unbelievably low start-up and fixed running costs. You need little more than a computer and an internet connection. The overheads of freelancers and small agencies that build websites and applications for other people, or develop a digital product, are tiny in comparison to a traditional business. Your training can be free, as so many industry experts write and teach and share this information without charging for it. Even the tools you use to build websites can be downloaded free of charge, or purchased for very little.

    As an industry we have become accustomed to getting hundreds of hours of work, and the benefit of years of hard-won knowledge for free.

    My free time in the last couple of years has been put into looking at the Grid Layout spec. I start most days answering emailed questions about the examples I’ve posted, before I get down to the work that pays the bills.

    I’m not unusual in that. Most of my friends in the industry have tales of invites to events where no payment is offered, a queue of issues raised on their personal project on GitHub, or people requesting general web development technical support via email.

    What pays the bills for me, and enables me to spend my spare time doing unpaid work, is my product Perch. Yet we launched Perch to complaints that it wasn’t open source. There are very good reasons why someone might want, or be required, to use software that has an open source license. However, when we ask about it, people rarely cite these reasons. When they say open source, they mean free of charge.

    I’ll be 41 this year. I don’t feel 41, but the reality is that at some point I won’t be able to keep up a pace of work that encompasses running a business, putting together talks and workshops, writing books, and contributing as much as possible to the industry that I love being a part of. I need to make sure that I am building not only a body of work and contributions that I’m proud of, but also financial security for when I can’t do this anymore. Yes, that free work does sometimes result in someone trying my software or offering me paid consultancy, but not as often as you might think. Despite having very marketable skills, I don’t own a home, much less have a pension and savings in place.

    I wondered how other independent and freelance web workers dealt with this conflict between earning money and contributing back. I also wondered if I was alone in feeling that the clock is ticking. I put together a survey (the responses to which probably will be the background to several other pieces of research), and a few things stood out immediately.

    Of the 211 people who responded and said they worked for themselves, 33% said they had some provision but not enough to fully retire, while 39% said they had no pension or retirement savings at all. In fact, 30% of the 211 said that they live pretty much “month to month” without so much as a contingency fund. Even filtering out the under-40 age groups, those percentages remained roughly the same.

    I asked the question, “Are you involved in open source projects, writing tutorials, mentoring, or speaking at events, free of charge or for expenses only?” 59% said they were not involved, with 27% of those people citing time constraints. Some people did explain that they were involved in volunteer work outside of the web. Once I filtered out the under-40s, the non-involvement figure rose to 70%.

    We know that not paying speakers and not covering speaker expenses causes events to become less diverse. The ability to give time, energy and professional skills free of charge is a privilege. It is a privilege that not everyone has to begin with, but that we can also lose as our responsibilities increase or as we start to lose the youthful ability to pull all-nighters. Perhaps we begin to realize how much that free work is taking us away from our families, friends, and hobbies; away from work that might improve our situation and enable us to save for the future.

    If you are in your early twenties, willing to work all night for the love of this industry, and have few pressing expenses, then building up your professional reputation on open source projects and sharing your ideas is a great thing to do. It’s how we all got started, how I and the majority of my peers found our voices. As I get older, however, I have started to feel the pressure of the finite amount of time we all have. I’ve started to see people of my generation taking a step back. I’ve seen people leave the industry, temporarily or permanently, due to burnout. Others disappear into companies, often in managerial (rather than hands-on) roles that leave limited time for giving back to the community.

    Some take on job roles that enable them to continue to be a contributing part of the community. The fact that so many companies essentially pay people to travel around and talk about the web or to work on standards is a great thing. Yet, I believe independent voices are important too. I believe that independent software is important. For example, I would love to see more people who are not tied to a big company be able to contribute to the standards process. I endorse that, yet know that in doing so I am also advocating that people give themselves another unpaid job to do.

    The enthusiasm of newcomers to the industry is something I value. I sit in conference audiences and have my mind changed and my eyes opened by speakers who are often not much older than my daughter. However, there is also value in experience. When experience can work alongside fresh ideas, I believe that is where some of the best things happen.

    Do we want our future to be dictated by big companies, with independent input coming only from those young or privileged enough to be able to work some of the time without payment? Do we want our brightest minds to become burned out, leaving the industry or heading into jobs where the best scenario is contribution under their terms of employment? Do we want to see more fundraisers for living or medical expenses from people who have spent their lives making it possible for us to do the work that we do? I don’t believe these are things that anyone wants. When we gripe about paying for something or put pressure on a sole project maintainer to quickly fix an issue, we’re thinking only about our own need to get things done. But in doing so we are devaluing the work of all of us, of our industry as a whole. We risk turning one of the greatest assets of our community into the reason we lose the very people who have given the most.

  • Motion with Meaning: Semantic Animation in Interface Design 

    Animation is fast becoming an essential part of interface design, and it’s easy to see why. It gives us a whole new dimension to play with—time. This creates opportunities to make our interfaces better at every level: it can make them easier to understand, more pleasant to use, and nicer to look at.

    For example, animation can help offload some of the work (PDF) of processing interface changes from our brains to the screen. It can also be used to explain the meaning and relationships of interface elements, and even teach users how to do things—all without making an interface more complex.

    Sometimes all of these factors converge: when you minimize a window, would you be able to tell where it goes without the animation? How much longer would it take you to find the minimized window in the dock?

    Hard cuts make it difficult to understand state changes, because changing the entire page means you have to rescan the entire thing to see what changed.

    So far, so uncontroversial, right? Animation is a good thing, when done well, at least. But there’s one aspect of animation that nobody ever seems to talk about. Some animation, while technically well executed, makes interfaces more confusing instead of less.

    Consider the following process:

    When we tap the Overcast app icon on the homescreen, the icon zooms and morphs into the app. The other icons stay in place. They’re still in their initial positions, laid out in a grid around the open app.

    We start multitasking. The app zooms out. We should get back to the icons around the app on the homescreen, but instead we see a stack of other apps. Why is the icon above the app now? Where are all the other icons? And why does another homescreen appear next to the apps?

    The app is both inside its icon on the homescreen and next to the homescreen. The two animations give us conflicting information about where the homescreen and the apps are located in space.

    Diagram showing the multitasking workflow
    The two zooming animations have completely different effects in this case.

    These animations might make sense if you designed the individual screens in a vacuum. It’s only when they all try to play together as parts of a single experience that things get confusing. The problem isn’t with any of the individual transitions, but with the fact that the animations contradict each other.

    History

    How did we get here? Let’s take a step back and quickly review the history leading up to this point.

    Since their inception in the 1970s, graphical user interfaces were basically a series of static screens (PDF) linked together without any transitions. Every state change was a hard cut.

    Although there are some notable early examples of good interface animation that date all the way back to the original Macintosh (PDF), because of the limited graphical capabilities of computers back then, effective animation was the exception rather than the rule.

    Example of a remarkably fluid animation in an early version of Mac OS.

    As computers got increasingly powerful, animation started to be used more frequently for things like maximizing windows or opening new tabs. It was still mostly pressed into service for small things, though, and rarely influenced the overall structure of interfaces.

    Only now are we starting to get to a point where computing resources aren’t holding interfaces back anymore. With today’s devices, everything can be animated—and increasingly everything is. The problem is that the design process hasn’t caught up to this change in technology. For the most part, interfaces are still conceived as separate, static screens and then linked together with relatively crude animations.

    This is probably how our multitasking example came to be; different parts of the experience were designed separately and then linked together without considering either the relationships between them or the consequences of arranging them this way. The problem is that if animation (and therefore the spatial structure of an interface) is an afterthought, it’s all too easy to create contradictory behaviors.

    Now that we’ve figured out how we got here, let’s think about how we can avoid such pitfalls.

    A simple shift

    Adding animation to interfaces fundamentally changes them, and necessitates a new way of thinking. We call this new approach semantic animation. It all starts with a simple conceptual shift:

    You can’t treat individual screens as separate entities if the transitions between them are animated. To the user, the entire experience is one continuous space.

    Similarly, two representations of the same element on different screens can’t be seen as separate from each other. To the user, there is only one element—it simply changes form.

    This means that when you add animations, an interface isn’t a series of screens anymore, but a collection of semantic components inside a single, continuous space. These self-contained components enclose everything associated with them, like meta information or interactive controls.

    Example of how a post on Dribbble could work as a semantic component. The post always remains one cohesive element; it simply changes its representation over time.

    This may sound complicated, but in practice it’s actually quite simple: instead of designing screens and then adding transitions between them, start by designing individual components and thinking about how they change and move in different contexts. With that in place, layout and animations will come together naturally, following the semantic structure of the content.

    Explain relationships between elements

    Animations are most useful when they reflect and reinforce the semantic relationships between elements: for example, “this comment belongs to this article,” or “these menu items are part of this menu.”

    Think of every element of your interface as a single, self-sufficient component with a specific meaning, state, and position in space. Then make sure your animations reflect this. If a popover belongs to a button, it shouldn’t just fade in; it should emerge from that button. When opening an email, the full message should not just slide in from the side, but come from within the preview.
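
    A rough CSS sketch of the popover case might look like this (selectors and timing are invented for illustration):
    
    
    /* the popover grows out of its anchor point instead of fading in */
    .popover {
      transform: scale(0);
      transform-origin: top left; /* the corner nearest the button */
      transition: transform 0.2s ease-out;
    }
    
    .popover.is-open {
      transform: scale(1);
    }
    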

    You get the idea, right? Once you’re used to this way of thinking, it almost becomes second nature.

    The dialogs on Meteor Toys are a great example of semantic components.

    The following examples show two completely different approaches to the same problem: one is screen-based; the other takes semantic animation into account. When opening Launchpad on OS X, the app icons just fade in and the background is blurred. This doesn’t tell the user anything about where the icons come from and what their relationship is with other parts of the interface.

    OS X Launchpad.

    The app drawer in GNOME (a desktop environment for GNU/Linux), on the other hand, uses an animation that elegantly explains where the icons come from and where they are when they’re not visible.

    GNOME application launcher.

    Multiple representations

    A common problem to look out for is different representations of a single element that are visible at the same time. This is bad, because it doesn’t make sense from the user’s point of view to see the same thing in more than one place simultaneously.

    In the following example from Google’s Material Design Guidelines, when you tap on an image in a list view, a bigger version of the image is shown. The bigger version slides in from the right on a separate layer. This is a textbook case of multiple representations: there’s no connection between the two images.

    Why is the image both in the list view and on that other screen? Are all of the big versions of the images stacked to the right?

    Google recently changed this example in their guidelines. Here’s the updated version:

    The new example is better because there are no multiple representations, but the animation fails to account for the interface elements on top, which change with no animation at all.

    Now, here’s an instance of something that checks all the boxes: Facebook Paper’s timeline makes the relationship between thumbnail and detail view completely obvious. No multiple representations, no semantic loose ends. The transition is so smooth that you barely notice the state change.

    See how the interface elements on the bottom of the detail view come from within the image? The image is a self-sufficient component, containing all of the information and actions associated with it.

    Another example of how to do this well is Apple’s Passbook app. It may seem unremarkable at first, but imagine if it behaved like the first Material example, with the full cards sliding in from the right when you tap a list item. That would be ridiculous, wouldn’t it?

    The transition between list and detail view is so fluid that you don’t really think of it as a transition; the elements just naturally move in space. This is semantic animation at its best.

    Keep space consistent

    Animations create expectations about where the animated elements are in space. For example, if a sidebar slides out to the left, you intuitively know that it’s somewhere left of the visible elements. Thus, when it comes back into view, you expect it to come in from the left, where you last saw it. How would you feel if it came back in from the right?
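
    In code, preserving that expectation can be as simple as always animating the sidebar along the same axis; here is a minimal sketch (class names are hypothetical):
    
    
    /* the sidebar always lives just off the left edge of the content */
    .sidebar {
      transform: translateX(-100%); /* hidden: parked to the left */
      transition: transform 0.25s ease;
    }
    
    .sidebar.is-visible {
      transform: translateX(0); /* slides back in from the left */
    }
    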

    Animations should not communicate contradictory information, lest they break the space established by earlier animations. Our earlier iOS multitasking example shows exactly why this is problematic: two different transitions tell the user conflicting things, completely breaking their mental model of the interface.

    Interestingly, OS X does something similar, but handles it in a spatially consistent way. When you full-screen a window, it scales to fill the entire screen, similar to iOS. However, it also moves horizontally to a new workspace. The window isn’t inside the first workspace anymore; spatial consistency is thus preserved.

    Remember: violating spatial consistency is the user interface equivalent of an M.C. Escher painting. So unless your app is Monument Valley, please make sure to keep your space simple and non-contradictory.

    Practical considerations

    Right now you may be thinking, “But wait—there’s no way to apply this to everything!” And yeah, you’re probably right.

    If you did manage to do that, you’d end up with something like a single, zoomable surface containing every possible state of the system. And although that’s an interesting idea, “pure” zoomable user interfaces tend to be problematic in practice, because they’re hard to navigate if they go beyond a certain level of complexity (PDF).

    When designing real products, semantic animation needs to be balanced with other considerations like user experience and performance. Most of your interfaces are probably never going to be 100 percent correct from a semantic-animation perspective, but that’s okay. Semantic animation is a way of thinking. Once you’ve adopted it, you’ll be surprised how many things it applies to, and how much low-hanging fruit is out there. It forces you to think about hierarchy and semantics, which helps you find simpler solutions that make more sense to the people who use your products.

    Note that we, the authors, are far from having figured all of this out yet. Our article doesn’t touch on many important questions (which we’re still exploring) related to the consequences of animating interfaces. Nevertheless, we’re convinced that semantic animation is a promising idea with real potential for making digital interfaces more intuitive and comprehensible.

  • This week's sponsor: Code School 

    CODE SCHOOL’s experienced instructors and engaging content have helped over a million existing and aspiring developers learn by doing.

  • This week's sponsor: O’Reilly Design Conference 

    Join designers, innovators, and leaders as they explore new ways to shape the future. Attend the January 20-22 O’Reilly Design Conference, a sponsor of A List Apart.

  • Back to the Future in 2016 

    A funny thing happened on the way to 2016. We asked some of our smartest friends in the web design and development communities what new skills they planned to master, or what new focuses they intended to bring to their work, in the new year. It being a holiday week, we didn’t expect many folks to contribute. Never underestimate the passion of this community. We got what we asked for—and more. Heaps more.

    Our friends’ responses fell into four broad categories—design, insight, tools, and work—but one notion cropped up repeatedly: sometimes it’s necessary to take a step back in order to move forward. It gives us great pleasure to share this cornucopia of wisdom with all of you. Happy New Year!

    Design

    Cennydd Bowles, digital product designer

    I’m going to play around with sound design. The beige-box era of computing is long gone, but we’re still paralyzed by that time an autoplaying MIDI made our officemates glare at us. I have a hunch intelligent, relevant blips and swooshes can really make our products better. So expect to find me waist-deep in Max MSP and Andy Farnell’s physics-and-boxes opus Designing Sound.

    Josh Clark, principal at Big Medium

    I’m digging into a renewed focus on physical interfaces for digital systems. Instead of pulling us ever deeper into our screens, I’m keen on relocating digital interactions to the world where we actually live, breathe, move. In part, this continues the work I detailed in Designing for Touch, giving digital interactions the illusion of physicality via touchscreens. But I also aim to travel in the reverse direction, giving the physical world a digital presence. How might we push the web’s frontier beyond the screen by using new input/output methods? With my retail and medical clients, I’m exploring how apps and web services can use sensors to touch the world (and how the world can touch back). In side projects, I’m experimenting with the Physical Web to see what happens when you can “click” any object or place to spin up a web interface for it. In all of this, the amazing mobile devices we carry in our pockets and handbags are central. The new wrinkle is that instead of distracting us from the world, our mobile gadgets can also light it up with new intelligence.

    Nathan Curtis, founder and principal of EightShapes

    Living style guides are all the rage. Teams from prominent organizations publish new ones almost weekly, celebrated by an adoring community. Bootstrap, Foundation, Material Design (and its threaded connections to MDL and Polymer), PatternLab, and Lightning offer blueprints for masses to follow. The result? Guides are a predictable and formulaic commodity.

    What’s more interesting—and complicated—is how to thread and align this tangible design definition across Many People Making Things. This puts the “living” in style guide: modeling and operationalizing a well-defined system for many teams working concurrently. Design wanted a seat at the table, and this is it.

    Salesforce’s Tokens and 18F’s APIs offer hints of how to propagate properties to all these atomic designs. But mostly, we still see the surface: the artifact as the system, not yet how everyone else relies on each part and how to make the system work for them.

    The architect in me senses opportunity. How can our profession build more powerful models that thread digital products and platforms (like web to iOS and Android) and beyond (from digital to print and more) to make an entire enterprise systematic?

    Meg Dickey-Kurdziolek, freelance UX’er

    Way back when, in one of my graduate classes at Virginia Tech, we had a guest lecturer from the Services for Students with Disabilities office. He told us we should think of ourselves as TABs—Temporarily Able-Bodied. We all grow older and will one day develop a disability. Our goal should be to make products that we will still be able to use when that day comes.

    At the time, my twenty-something self found it hard to imagine developing a disability. Over a decade later I am still, thankfully, a TAB, but I have met more people and had more experiences that have impressed upon me how fleeting TAB status can be. One of those people is Chris Maury, who a few years ago was diagnosed with Stargardt disease, a form of macular degeneration. This means Chris is progressively going blind. When Chris started looking into the accessibility tools he would one day have to rely upon, he was deeply disappointed. He took matters into his own hands and started Conversant Labs.

    This year I will be working with Chris and the Conversant Labs team to explore what UX designers can do to make more accessible products. I will also try to keep my TAB status in mind, as I sketch, design, and build.

    Daniel Ferro, senior interaction designer at Forum One

    I feel there is a bit of a flaw in the UX industry. The focus is so much on simply getting a user from point A to point B, on designing just what a user needs, that we can forget to design what a user wants, or what will make their experience even better. It’s like we forgot that the X in UX stands for experience.

    In a typical UX document, joy and whimsy are usually nowhere to be found. What’s wrong with adding a little bit of delight? It takes nothing away from the functionality of the website or application—in fact it’s what differentiates a merely functional tool from one that’s fun to use.

    That’s what I really want to focus on in 2016: delight in unexpected places. I want the subtle animation when a user clicks on a button, or hovers over a photo, or does something as simple as highlighting text to copy it, to delight the user. I want the simplest of interactions to bring a smile. As an example, check out the subtle social-sharing floating action button (FAB) I implemented in the bottom right of the Farm to School 2015 Census website. It was inspired by Google’s Material Design, since I feel that Google is leading the way in web and interaction design at the moment and will continue to do so through 2016.

    Anne Gibson, information architect

    In 2016, I want to spend more time learning about Lean UX, especially in enterprise contexts. I want to explore how our designs affect the way our users feel. How can we make it easier for someone to feel less stressed, more relaxed, more in control of their lives? Finally, I want to continue to explore how to communicate design goals through tools like capability strategy sheets.

    Cyd Harrell, UX researcher and citizen experience advocate

    In 2016, I’m doubling down on exploring how to shift major institutions from the 20th century to the 21st. I work a lot on helping public servants design government to meet user needs, but I’m also fascinated by education. Our current systems of both (and to a certain extent medicine and finance as well) are built around institutional authority and direction. We still need institutions, of course, but as our society grows increasingly complex, diverse, and technological, we need them to use their power differently. We need them to become more flexible, nimble, and responsive to the needs of their clients, and supportive of many kinds of human potential. We have the tools, and visions are relatively easy to come by. But visions are cheap, frankly. Actually doing the work, unlocking the design minds of dedicated people who are experts in those fields—that kind of meta-design problem is my current obsession.

    Val Head, designer & consultant

    Two things I want to explore more in 2016 are sound design and data visualization. I picked up Sound Design: The Expressive Power of Music, Voice and Sound Effects in Cinema over the holidays to start getting into sound design. The way sound can be used to inform and tell a story fascinates me. I’m excited to learn more about it and maybe even use it in my design work.

    One of my favorite projects of 2015 involved animating small SVG data visualizations for a series of articles. It was a fun project, but it also made me realize how much I don’t know about working with data. I’ve got Nicholas Felton’s Skillshare courses on data visualization queued up as my starting point for improving my data design skills this year.

    Getting away from the computer, I’d also like to do more metalsmithing this year. I had a blast learning to make some basic jewelry pieces this past summer. I’ll be taking some weekend workshops at the Contemporary Craft in Pittsburgh in the coming months to make more.

    Andrew Johnson, designer

    I have no idea what’s in store this year, but the design challenges associated with typography’s ubiquity and key role in interfaces fascinate me. The type community is great and I’m really looking forward to continuing to collaborate on/build projects like Typography.supply and Cartography, which, I hope, contribute to the conversation.

    In parallel to designing for the web, I’m also planning on venturing into game development. Its affinity for creating immersive and emotional experiences through code deviates interestingly from product design.

    On both fronts, Wilson Miner’s talk “When We Build” continues to be an inspiration. Just make stuff.

    Jake from Adventure Time Making Bacon Pancakes

    Michael Johnson, creative director at Happy Cog

    I’ve been looking into how others have used chance to shape an experience, such as how John Cage authored performance parameters and then allowed chance events to determine the outcome. I see a loose correlation in the “performance” between designer/client and client/site, where over the course of a website’s life you have to consider the aesthetics of time. Previously I saw the best-case approach as forestalling inevitable decay, and I’ve looked, mostly unsuccessfully, at ways to encourage a sort of graceful aging. In nearly every case I’ve been thwarted by the unexpected (Well, Client, that’s certainly a new way to use a carousel…) but I’ve seen enough minor successes that make me think leaving a responsible level of ambiguity and opening a closed system to targeted chance operations may help a system grow and evolve rather than just endure a slow decline. Some utopian thinkers in the sixties approached urban planning similarly, concluding that like in nature, random mutations with apparent non-intention, such as the synaptic or fractal-like patterns seen in emerging cities, could give purchase to a purer form once relieved of initial authorial control. And that is the goal for many of us, isn’t it? To design for longevity?

    Gerry McGovern, founder of Customer Carewords

    The decline of trust in brands, organizations, and experts, and how that impacts digital design, is an area I’ll be exploring in 2016. Concomitantly, there is a rise in trust in peers and “people like me.” One of the implications of this shift in trust is that people distrust complexity and trust simplicity and things that empower them and allow them to connect more. There’s a lot of opportunity here for designs that are empowering and easy to use.

    I will be very interested in exploring whether traditional marketing and communication (emotional language, stock images, “beautiful” designs) continue to undermine trust. In the past, I’ve noticed that designs that are fast to load and quick to use increase trust, and I’ll be watching out for research on how speed impacts trust.

    What I’m reading:

    Cameron Moll, CEO, Authentic Jobs

    There are many things I’ll be exploring in 2016, but relevant to this collection are two in particular. First, I’ll be diving into unified design more heavily. I started speaking about this topic at conferences nearly two years ago, and it has become increasingly relevant since then. Two years feels like a decade in our industry, which tells me this isn’t a passing fad. I plan to speak and write even more about unified design in 2016. Second, I’ve become…intrigued, concerned, I don’t know what the right word is…by the toll our work takes on us over the course of a career. Maybe it’s because I’m turning 40 soon, or maybe it’s because I watched the movie Everest over the holiday break and found myself wondering why we go so unreasonably far to accomplish our dreams. At any rate, I’ve had Paul Goldberger’s biography of Frank Gehry, Building Art: The Life and Work of Frank Gehry, in my cart on Amazon for a few weeks, and maybe I’ll finally flip the switch and order it to understand how his career has impacted his personal life.

    Yesenia Perez-Cruz, senior product designer at Vox Media

    In 2016, I’m focusing on being more deliberate with my design decisions. Designers today have to juggle many tasks: making sites that are beautiful, engaging, and delivered quickly across often unreliable networks. It’s not surprising that the current web landscape is full of heavy websites serving dozens of web fonts, images, and complex interactions—or super-minimal sites that lack personality.

    Last year, I advocated for finding a balance between speed and aesthetics when designing a website. My process for finding this balance was a bit reactive. I’d remove details of visually rich designs until I met my performance budget. This year, I want to be more proactive.

    One way I’ll be more deliberate is with typography. I loved Medium design lead Marcin Wichary’s post on their decision to use system fonts for their user interface. The system fonts feel native to users’ devices and save valuable bytes. This gives us room to be expressive with display text and headings. I’ve already begun to apply some of this thinking to my work, and I’m excited to share what I’ve learned at An Event Apart this year.

    Susan Robertson, front-end developer

    I spent my “winter break” exploring drawing again, getting very low-tech and taking time away from the screen. It got me excited about things like composition, design, layout, and type as I worked on making sketchbook spreads that worked in the space and made interesting use of color, layout, and my own block lettering (such as it is). So in 2016 I hope to continue down that road. As a developer who is always implementing designs, I hope to dig deeper into visual design and the elements that make it work well at various screen sizes. I’m most interested in the “seams” as Ethan Marcotte called them in his latest book, Responsive Design: Patterns and Principles. Since I spend so much time thinking about the patterns, I want to think more about the whole. I’ll be doing that by going back to design books, taking a look at books such as Visual Grammar, The ABCs of Bauhaus, Comics and Sequential Art, and possibly some books on the history of animation.

    Jen Simmons, host and executive producer of The Web Ahead; designer advocate at Mozilla

    Once upon a time, we used hacky HTML full of table tags to lay out our web pages. Then we switched to using CSS. And our design patterns changed. Our collective idea of what a website should be changed. And for about five or six years, we made a bazillion fixed-width, header-main-sidebar-footer layout-shaped websites.

    Then along came tiny screens and media queries. And Responsive Web Design. We’ve spent the last four or five years getting comfortable with new tools and techniques. And with a new idea of what a webpage should be. We’ve been making our websites squishy, moving those columns around at different breakpoints. And settling into a new idea of how we should lay out the page.

    Well… That’s all about to change—again.

    Whether designing fixed, fluid, or responsive, we’ve been severely limited by what CSS could do. Turns out, we’ve been creating our page layouts with CSS properties that were actually invented to handle only small bits of a page. We spent years coming up with clever hacks to accomplish a few page designs, and stopped there. Without any real tools for layout in CSS, we didn’t dare think creatively.

    It has been a painful decade. We’ve mitigated this pain by inventing and using tools like 960.gs, Bootstrap, and Foundation. Such tools prevented bugs, made development faster, and abstracted away the need for a lot of nasty math. But very soon, we won’t need such tools anymore. We’ll be able to write real CSS, vanilla CSS to create custom page layouts with ease. How? By using new CSS. Better CSS. CSS properties that were invented for page layout.

    Flexbox is already here. 2016 is the year CSS Grid will arrive. We can combine these with CSS Shapes, Viewport Units, Multicolumn Layout, Rotation, and more to design some amazing pages. We can finally do real art direction on the world’s biggest digital platform—if we so choose.

    Of course, the new CSS will make it faster and easier to implement the same old layouts we’ve been designing for years. Yawn. We’re already completely bored with seeing the same layout over and over. I’m much more intrigued by what will come after that. The real revolution will come when we start designing pages that no one has seen before. When we collectively create new design patterns. When we inject new life into our sites, using layout to truly serve the content at hand, creating a fresh reading/viewing/using experience.

    It’s time to let ourselves dream up wild page layouts. It’s time to play around with CSS to see what is, or isn’t, possible. I’m incredibly excited about what’s coming. I’ll be spending all of 2016 experimenting and inventing. I’ll be presenting at a bunch of conferences, including every An Event Apart in 2016. I’ll be posting to CodePen, writing articles (including for A List Apart), creating screencasts, and making more podcast episodes. Follow me on Twitter to keep up.

    Rian van der Merwe, product design director at Jive Software

    2015 was a year of inward focus for me. I spent a lot of time learning new design skills and tools, and focusing on the intricate details of the products I work on. In 2016, I hope to free up a little more time to study and explore things on the periphery of design—areas that might not have much to do with digital product design on the surface, but that help me expand the way I think about and practice design.

    This includes some seemingly strange hobbies. I’ve recently become really interested in the craft of mechanical watches. I’m also (still!) very interested in the intersection of architecture and design. I’m particularly drawn to urban design, so I look forward to reading The Death and Life of Great American Cities. And then, in addition to my design activities at work, I hope to get back to blogging a bit more this year. 2015 was the year of Medium and newsletters, and I guess I have a bit of nostalgia for the humble personal blog—the forgotten front porch of the internet. Follow along if you’d like.

    Jeffrey Zeldman, founder of Happy Cog & publisher of A List Apart

    In 2016, I’ll roll up my sleeves, teach myself something new, and get back into the hustle and grind of client-facing design.

    With wonderful partners, I’ve spent the past few years building, shaping, and solidifying such things as A Book Apart, An Event Apart, and this magazine. There’s a lot to be said for detaching from the day-to-day work of a designer and focusing on design in a different sense: namely, the creation and direction of products. I’ve loved every minute of it. It has resulted in milestones like A Decade Apart, as An Event Apart enters its second decade...and in books whose insightfulness and relevance for our industry blow me away.

    But now, in 2016—while keeping those good things going—it’s time to step back into the ring. 2016 will see the reopening of Happy Cog’s NYC design studio, working in tandem with the great studio in Philly. I’m going to teach myself CSS Grid Layout, and get up to my elbows in the daunting, messy, maddening, exhilarating work of web design.

    Insight

    Liz Danzico, chair and cofounder of SVA MFA Interaction Design; creative director, NPR

    To me, progress always meant motion. If something moved (professionally, geographically, biologically, chronologically, alphabetically), I thought, it was intrinsically better. In this way, “different” and “advancement” were synonymous. For the next year, I’m practicing non-motion. No sudden shifts; no pivots; no renovations. Instead: continuity and flow. I’m up for a year of taking stock of what is, not what could be; where everything is only infinitesimally different from what was before. That will be progress.

    Brad Frost, web designer

    My goal for 2016 is to be positive and productive. I’ve got a lot on my plate this year and I’m excited for it all! Between client work, speaking, consulting, and finishing my Atomic Design book, I’m hoping to release a project that encapsulates the ideas put forth in Death To Bullshit. I’m trying to improve my skills as a developer, designer, teacher, and human being. A key factor in accomplishing this will be to stay away from all the negativity out there and surround myself with positive people and attitudes. Here’s to a positive, productive year for everybody!

    Andrew Grimes, user experience consultant

    My big ambition for the year is to develop better techniques for avoiding distraction online. Particularly while researching. I just don’t have a great success rate when relying on search or user-generated content streams to find my way to the good stuff. The algorithms aren’t reliable enough, or I’m not doing it right, or both. In any case, time and again, I seem to find the best content in the same curated spaces (like ALA).

    And so I plan to bypass the likes of Google, Facebook, LinkedIn, and Twitter a little more this year—and go directly to trusted sources. Not just the well-known publisher sites. I’m planning to seek out and frequent some lesser-known establishments, too. Like restaurants, I suspect these exciting sorts of places might be easier to find via word of mouth.

    I’m also keen to learn more about editorial design, having been inspired by Travis Gertz’s brilliant article, “Design Machines”. The idea of treating design systems as a beginning, rather than as a fixed end point, seems particularly important for the web right now. There’s something really exciting about challenging the rigidity of templates and rules, instead aiming to create sites that match design with content, and vary layout as often as magazines do.

    Lara Hogan, senior engineering manager at Etsy

    I’ve been thinking a lot about how strange an endeavor public speaking can be. Is there anything else that is so hard to practice, where there’s so much at risk? What other kinds of work only happen in a spotlight, where you get just one shot? Unlike with writing or running a race or other kinds of goals we may have in the new year, we can’t exactly practice public speaking in safer, low-risk environments that accurately mirror what it feels like to be onstage, in front of an audience.

    We all have fears about public speaking, and these fears run the gamut. I fear tripping and falling onstage. I’ve spoken with others who fear stumbling over their words or forgetting what they want to say, who worry about a wardrobe malfunction, or “being judged” by an audience. I want to spend more time this year thinking through ways that I can help people with these fears by articulating ways that we can more safely prepare ourselves for the spotlight and the stage. At the very least, I’d love to help more people overcome the mental blockers we have about submitting proposals for talks or even picking a topic to talk about. My hope is that, in 2016, we can help tackle these worries so that more diverse voices can share their knowledge in meetups, conferences, and other venues. I think the whole industry would benefit.

    Denise Jacobs, founder and CEO of The Creative Dose; speaker + author + creativity evangelist

    My mission as a creativity evangelist is to free people from the tyranny of their inner critic so they can allow their creativity to flow. What thrills me most about 2016 is that I’m dedicating time to transform my two most popular talks “Banish Your Inner Critic” and “Hacking the Creative Brain” into books! It’s time to give my content further reach and longevity.

    After living out of my suitcase the past few years (in 2015 alone, I did 28 talks around the world), I look forward to holing up in my home office and writing these books people have been asking for—and frankly, that I need too! I’ll be creating a lot of peripheral articles and blog posts around the book content as well. Other 2016 projects include taking on additional professional coaching clients and launching a live online seminar on speaking (which also supports my Rawk The Web initiative).

    Over the holiday break, I got clear that I can’t share my work if my own inner critic keeps me from taking care of myself. So my other focus is on fantastic self-care: reviving a regular exercise routine, cooking yummy meals with fresh produce from my organic garden, creating with my hands (handmade herbal soaps and funky earrings), continuing to learn improv, reading plenty of sci-fi and magical realism, reconnecting with friends, and napping with my cats in the sun.

    Erin Lynch, writer/designer/founder of shop, and production manager at A List Apart

    2016 = growth. I have four functional areas I want to continue extending my skillsets in over the next year. While I work in these areas (design, development, writing, and illustration) on a daily basis, I haven’t done a concentrated, focused study to extend and add to those skills in quite a while. 2016 is about pushing boundaries for me—focus, engagement, and exploration.

    I was greatly inspired by a recent article (which I have since lost track of) about a man who decided he was tired of his career (financial planner, CPA, or something non-arts related like that) and wanted a change. He read an article about the 10,000 hours methodology and decided to give it a try. Fast-forward three years (and a hectic learning curve) and he landed a job as an animator at Aardman. There’s a lot of talk about the validity of the 10,000 hours method, but the one thing it reinforces is that practice = growth, and growth is what we’re all about.

    As for reading, I always have a ton of stuff on deck. I’m currently reading The Vignelli Canon, Makers: The New Industrial Revolution, and Parting It Out. I’m trying to get to this season’s 24 Ways articles, as well as the two new ABA books Responsive Design: Patterns & Principles and Going Responsive.

    Alice Mottola, freelance web developer/writer

    The Enneagram personality typing system may not have the peer-reviewed clout of the MBTI (yet), but don’t let that scare you away. In my experience, the Enneagram’s insight and accuracy leave the MBTI in the dust. Every time a friend or colleague of mine discovers their type description, they’re overcome with waves of alternating elation and embarrassment as they recognize patterns of thought and behavior they always knew they had, but never quite put into words. Plus, Enneagram literature provides solid, practical advice that can lead to serious positive developments. For me, discovering my (terrifyingly accurate) Type 4 personality inspired me to start coding creative projects in my spare time, since Type 4s feel their best when they can let their creativity flourish. As the Enneagram predicted, I’m a good deal happier for it. If you’re intrigued, I recommend taking a look at the Enneagram Institute website or checking out the book Discovering Your Personality Type. Even if you don’t end up quite as impressed as I am, you’ll almost certainly find some good advice you can apply to both your career and personal evolution.

    Sophie Shepherd, designer at GitHub

    The last few years have felt like the web’s adolescence—transitional and rocky at times, yet exciting. Our jobs changed with the introduction of RWD, and we’ve been dealing with the aftershocks ever since. It has been a few years of asking ourselves really hard—sometimes existential—questions. What tools are best? Where does design end and development begin? Should designers code? Do I have to design for watches now? Has web design lost its soul?

    Whether we have all the answers or not, the dust is settling. Responsive web design is just web design, design systems and style guides are the norm, and it doesn’t matter whether you design in Photoshop or Sketch or the browser. I think web design is entering a new golden age, one where we can focus less on having the right answers and feeling stifled by constraints. I’m excited to get back to the reason I fell in love with the web in the first place: creativity. I’m looking forward to seeing what we all make, and how we push the boundaries of the medium.

    Tools

    Rachel Andrew, founder of edgeofmyseat.com, the company behind Perch

    In 2016, I’m going to be going back to basics and really learning JavaScript. I first learned JavaScript right back in the early days of the web, driven by a desire to add rollover images and popup windows to my websites. At one point I would create Dreamweaver extensions, and ultimately picked up enough knowledge of jQuery to do the things I needed to do. However, I’ve never considered myself a capable JavaScript developer, nor have I ever really liked the language.

    In 2016, though, I think JavaScript is vital for any web developer to learn—whether you are mainly a front- or a back-end developer. It’s no longer just a tool for adding trivial and annoying things to websites; our tooling makes use of it, and increasingly it is being used on the server side as well as running in the client.

    I like to get back to basics when relearning things. As a developer, it is very easy to not read the manual, to just jump in partway and pick up things as I go along. When I do that, I tend to miss some basic fundamental that comes back to bite me. What I have discovered is that it is still hard to find really great materials for learning JavaScript that don’t assume I want to jump right into some framework, but also don’t spend the entire time discussing basic programming constructs.

    I’m enjoying Eloquent JavaScript and have also found the information on the Mozilla Developer Network a good jumping-off point. Speaking JavaScript seems to be aimed at people like me who already have a programming background. I’d be very happy to take suggestions as to what to read next.

    Anthony Colangelo, iOS developer, Big Cartel

    As a software developer, tinkering with hardware projects is a really great way to push yourself to learn new things, think in new ways, and solve interesting (and fun!) problems. I’ve been experimenting for a while now with Arduino-based projects—my biggest projects yet have been building custom game controllers for Kerbal Space Program and flight simulators. This year I plan to take it more seriously and work with some more interesting pieces of technology, like building voice-controlled devices, or devices that communicate via Bluetooth.

    I’m also looking to get started with 3-D printing, thanks to Big Cartel’s Employee Art Grant, which will help me build proper enclosures for the devices I build, rather than hacking them into generic casings.

    Getting started with Arduino is incredibly easy, whether you know how to code or not. If you already have a favorite language, I’ll bet you can find something out there that will let you write Arduino programs with it. There are frameworks for JavaScript, Ruby, and just about every other language. There are also some great places like SparkFun and Adafruit that sell components, provide tutorials, and are filled with inspiration.

    Garin Evans, developer

    As a developer, I’ve had varying success writing cross-platform mobile applications. I’ve used frameworks like PhoneGap and Xamarin, and while these solve some problems, I still haven’t found the silver bullet for cross-platform app development. That’s why I was excited when Facebook announced React Native, their “learn once, write anywhere” framework for building native mobile apps using ReactJS. In the last year I’ve fully embraced ReactJS; it’s a fantastic framework that, for me at least, helps create clean, modular front-end components. What compels me most to dive into React Native in 2016 is the transference of familiar JavaScript and React techniques to mobile application development. React Native is still in its infancy (Android support was only introduced in v0.11.0, released in September 2015, and there still isn’t a major release), but what excites me about it is that Facebook has potentially removed the requirement that budding app developers who already know JavaScript learn Objective-C, Swift, Java, or C#—a requirement that is enough to put some off.

    Lyza Danger Gardner, CTO, Cloud Four

    Last year, I embarked on the great journey of becoming a mentor. In 2016, I’ll continue that growth. But there’s a new flavor, a zesty focus to my goals: I want to help software people learn how to work with objects in the real world. That is, I want to show web developers how to do things with hardware.

    2015 brought a surge of boundary-pushing APIs for the Web. I believe that in 2016 we’ll see an increased clamor specifically for standardized web-hardware interfaces. This desire is already starting to manifest. Even now, there are numerous options for controlling hardware with JavaScript, Google is championing the Physical Web project, and cloud services for manipulating the Internet of Things (IoT) continue to proliferate.

    The Web has the opportunity to serve as the connective tissue between real-life objects and the data and services that can make them magical. I want to help us get to that future.

    Matt Griffin, founder of Bearded

    This year I will finally get my head around ARIA! Forms are the messy, bewildering lifeblood of the web. Making them more accessible for everyone is wonderful. Making them more readable by machines is—at this point—common sense. Luckily, they both require the same approach. These are the things I plan to delve into in my quest:

    Krystal Higgins, senior interaction designer

    In 2016, I’m excited about dusting off my old Arduino kit and tinkering! Although I’m always exposed to the capabilities of physical computing (heck, I work with wearables every day), it was a first-time UX evaluation of Ozobot, a programmable robot toy for students, that inspired me to do more electronics prototyping. Toys like Ozobot illustrate how the medium can be a tool for education and personalization—and I’m keen to explore ways for it to augment new user onboarding. So, my reading list includes the tried-and-true Arduino guide, Charles Platt’s Make: Electronics: Learning Through Discovery, and r/arduino (for inspiration). I’ll also be exploring Open Hybrid, an MIT project that allows anyone with knowledge of HTML to design an interface for controlling objects.

    Ryan Irelan, builder of Mijingo

    I’m going to continue exploring decoupled content management systems (like we discussed on ALA’s “Love Your CMS” panel). I’ve spent the last decade working with monolithic CMSes and it’s fun to break what I know into pieces and learn and explore. What gets me most excited about exploring decoupled CMSes is that there’s a very similar discussion happening in software architecture and software development right now. I’ll keep sharing everything I learn over at my training site, Mijingo.

    Scott Jehl, designer/developer at Filament Group and author of Responsible Responsive Design

    360 kickflips. Definitely. Always wanted to do that. I’ve been skateboarding for, what, like 20 years now and—oh! Right, websites… Okay.

    In the past couple of years, I’ve focused a lot of my development attention on page loading performance. 2015 brought us a new standardized version of http (http/2) and support for it is already excellent across browsers and server environments alike. In 2016, I plan to do more experimentation with http/2 to improve my understanding of how best to manage the optimizations that are still helpful for older browsers (which won’t support the new protocol) while taking advantage of http/2 features that make many of those optimizations no longer necessary. I’m also excited to start using Service Workers in production. And skateboarding a lot.

    Peter-Paul Koch, mobile platform strategist

    In 2016, I’m going to put some major effort into installable web apps (which Google now calls progressive web apps for reasons I don’t entirely understand).

    The following sketches my ideal, though I’m fairly certain we won’t come this far in 2016: a user goes to a website on a mobile device and wants to bookmark it. Hitting “Bookmark” can have one of two effects:

    1. If the website does not have a manifest file, it simply places the site’s favicon on the user’s home screen. This is a simple link that starts up the browser and loads the site.
    2. If the website has a manifest file, it’s installed locally.

    What does installing locally mean? It simply means that all relevant files, with the possible exception of actual data, are installed on the device as one package. Tapping the icon starts up this local version of the site, possibly loads external data (if a connection is available) and displays the site in the browser.

    The real trick comes when one user shows a locally installed web app to another, and that other person wants it as well. The user opens a Bluetooth (or NFC or whatever) connection to the other person’s device and just sends over the installed web app. The icon appears on the other person’s home screen, and it can be launched.

    Main problem: security. I know. There are some hard nuts to crack here.

    Still, this is not some utopian pie-in-the-sky idea. I’ve DONE it. Six years ago, I worked on the W3C Widget installable web apps system, and created a lot of test apps for Symbian. One day I noticed that Windows Mobile supported W3C Widgets as well. I opened a Bluetooth connection, sent over an app from Symbian to Windows Mobile, tapped the icon, AND IT WORKED!

    That’s the future of the web on mobile. (And, thinking about it, maybe on any device, but let’s do mobile first.) Ever since, I’ve been patiently waiting for others to get the idea as well. Maybe 2016 will be the year that it finally starts working.

    Una Kravets, front-end developer at IBM Design, Austin

    2015 was a really exciting year for JavaScript. Increased framework debates led to a growing collection of ideas on how best to streamline production. This gave developers a lot of power, but also further fractured the field of front-end development. I’m hoping to make JavaScript more accessible to designers and UI developers in the coming year, focusing on how we can continue to #artTheWeb while leveraging the advantages of these new tools like componentization (is that a real word?), and performance improvements of the Virtual DOM. How can we best style these components and continue to innovate interfaces as well as architectures?

    Not only am I hoping to take a look at JavaScript more in 2016, I’m also going to expand my CSS image-effects work into SVG and image composition and also hope to experiment with offline web apps. Cheers! It’ll be a great year for the web!

    Jeff Lembeck, www engineer at npm, Inc.

    Since starting at npm, Inc. in June, my web development focal point has moved off of almost exclusively front-end development and into dealing with more backend work. This means server-side code. This means operations. This means a whole heck of a lot of code running in the Node.js runtime.

    Understanding the runtime for your JavaScript is important for development. As a client-side developer, understanding how your browser works and the subtle nuances to each engine can save you days of banging your head against the wall when you run into a bizarre bug. In 2016, I plan to extend my knowledge of the inner workings of my runtime to Node.js. Up until this point, I’ve been able to use Node as an abstraction, keeping the gritty details and gears underneath out of my view, but now I want to go deeper and really understand what is happening.

    To dig in, I am learning a lot from Thorsten Lorenz and Brendan Gregg’s talks. They both focus fantastically on the internals of Node. I’m also setting aside some time to read through the source code. This might be tough because I’m, at best, a novice at C++.

    I hope I’m not biting off more than I can chew over a year, but taking things one step at a time is always a smart bet for learning. Here’s to knowing more in 2016!

    Mark Llobrera, technology director, Bluecadet

    The new year always feels like choosing just a few pieces from a giant bag of candy. Here’s my short list for the start of 2016:

    • React. I’m diving in with Wes Bos’ React for Beginners. My team at Bluecadet has been building JavaScript-based touchscreen applications for a while now, and I’m excited to use React with something like Electron to build native OS X/Windows applications.
    • Swift. I’m taking Nishant Kothary’s advice to heart on this one. Compared to Objective-C, I’m finding Swift a bit more accessible to web folks like me.
    • Drupal 8. I feel like I’ve been prepping for Drupal 8 for three years, and it’s finally here. Time to get acquainted.
    • Adaptive Web Design, 2nd Ed. by Aaron Gustafson. The first edition is a very important book for me, and one of the first books I give Bluecadet’s web apprentices.
    • Performance and resilience. Scott Jehl’s “Delivering Responsibly” really stuck with me, and I hope to incorporate more from it into my work.

    Paul Robert Lloyd, independent graphic designer and web developer

    When examining my skillset for areas of weakness, JavaScript usually tops the list. This language has always been a tough one for me to understand, but each year I make small steps. This year, however, I hope to make a giant leap.

    As always, the best way to learn is by doing and, with Kyle Simpson’s “You Don’t Know JS” in hand, I hope to rebuild (and complete) a neglected side project: a digitized version of George Bradshaw’s Victorian railway guide. I’ll no doubt want to play with Service Workers to enable offline access, and get familiar with browser APIs like geolocation. I’d also like to work out the best way to modularize my scripts; I suspect this will involve navigating a landscape of competing tools, differing approaches, and evolving best practices. Wish me luck!

    Sarah Parmenter, designer and founder, You Know Who

    Over the Christmas break I started learning Ruby on Rails again. It was strange to go back to a programming language I was fairly good at in 2005 and realize that many years of user-interface-design thinking and HTML/CSS/JavaScript coding meant that object-oriented programming had been all but wiped from my brain. I’m really interested in being able to get myself to 70 percent of the way there, in any project I want to tackle. Being able to shake off the expensive chains of using another programmer to code my designs is liberating to me.

    I’m really excited by Perfect.org and the idea of Swift becoming a much larger programming language than it is today. Programming excites me again; it has become so important in my work as a visual designer. I’m happy to be riding that wave again.

    On the flip side, I’ve always been very excited by social media and the new opportunities it affords each of us. I think this year is going to be the year we see companies try to understand how to better position themselves for natural engagement with their audience. Fewer buzzwords, more honesty. I love working on social media campaigns with my clients and seeing the day-to-day shift in public perception of a company or service based on the ever-changing creative ebb and flow of our social feeds.

    Simon St. Laurent, strategic content director at O’Reilly Media, Inc.

    I’m going to spend a lot more time in the borderlands between best practices for app development and site development. While both kinds of projects use the same tools, their approaches are diverging, and I wonder how far this can go. One key conversation I’m watching is the revival of inline styles, a practice that React encourages but CSS has long discouraged. The code smells terrible to me, but others are enjoying the aroma. If the cascade proves unnecessary, what might that mean for website developers? If inline styles tangle, what does that mean for the future of app development?

    Ian Vanhoof, technical editor at A List Apart

    This is the time of year that I focus on personal growth and I’ve got a little tradition: I craft two lists. The first itemizes all the things I didn’t finish learning last year. The second is packed with things that have recently piqued my interest. Then I combine the two and get started.

    While setting up my lists this year, something popped up on my radar that’s simply left me enthralled—and I haven’t felt this fired up by a web technology since CSS came on the scene in 1996. We finally have an update to Hypertext Transfer Protocol.

    I am fired up about http/2. This overhaul fixes web performance issues we’ve had to hack around since 1999. (Sure, the hacks were used successfully, but they caused a host of their own problems.) The protocol’s improvement list is too long to go into here, but I’ve got to mention a few things like header compression, server push, and client-side content prioritization via stream dependencies.

    I also love that it’s one binary TCP stream instead of blocked plain text—multiplexing and concurrent resource loading are now possible out of the box!

    Aarron Walter, vice president of R&D at MailChimp

    In 2016 I’m sharpening my tools. I’m learning new programming languages like Python (oh so elegant and fun), and tightening my development process with Grunt.

    I’ll also be looking to history to jump-start my creative thinking. I’m combing through folios of work from industrial designers, painters, photographers, and architects to see how they solved problems and to rekindle my passion for making things. There’s much to be learned from creative thinkers in other media!

    These books are currently on my coffee table:

    Work

    Ida Aalen, UX designer, speaker, and author

    This fall, I was going to meet with a client. We had an idea for a very different way of presenting their main sign-up form. I really believed this would be the better option, but because it was so different from what they had, we decided to very quickly put together a usability test before we showed the design to the client.

    This worked so well that it ended up being a kind of dogma for this project: never show anything you haven’t tested yet. I’ve always believed testing to be important, but I have to be the first to admit that I haven’t always been able to make the time. But this dogma forced us to be very creative with our testing and also to think about what the success criteria for the different parts of the project really were. (For example: no point in doing a usability test of our long-read article pages—we need to figure out if people will actually want to read them in a more realistic setting.)

    It was also very motivating because we seemed to have better discussions with the client: discussing whether stuff worked rather than if the button should be red or blue.

    So what I want to do for 2016 is to try to stick to this dogma: to never show a design or an idea that hasn’t been tested in one way or another.

  • This week's sponsor: ALA via email 

    A LIST APART’s mailing list. Stay in the game—and ahead of the curve. Never miss an article for people who make websites.

  • Blending Modes Demystified 

    Web imagery increasingly tends toward nondestructive editing. When we make changes to a design or graphic, we want to be able to apply them without damaging the source material. That way, the original is preserved if we ever need to revert or make other adjustments.

    One of the latest capabilities to fall into the hands of web designers is image processing with blending modes. Blending modes allow us to easily create colorization and texturization and apply other special effects without having to crack open an image editor. This saves time by not having to manually reprocess graphics whenever a change is needed, and prevents the headache of having to recall the exact settings of a visual standard that may have been created months earlier. Instead, graphics can be neatly specified, maintained, and manipulated with a few CSS declarations.

    Blending modes explained

    Technically, color blending applies mathematical operations to the color components of image pixels. That’s right, underlying all this creative stuff is math! Don’t worry, you don’t need to memorize formulas to use blending modes, but it’s worth at least having a cursory understanding of how blending works under the hood.

    There are 15 new blending modes recommended by the W3C. Information abounds about how the different blending modes work, and there’s no one right way to use each one. Let’s look at just a few of the more useful modes in depth. Here are three of the most common ways I use blending modes in my workflow:

    • Transparency
    • Texturing
    • Colorization

    Transparency effect with multiply

    Let’s start with multiply. This mode’s mathematical formula can be broken down like so:

    x = a × b

    That’s it. It literally multiplies the color on the top layer (a) with the layer below it (b) to get the resulting color (x), hence the name multiply.

    But how do you multiply colors? It works like this: on computer screens, colors are built using red, green, and blue channels. Each of those channels is given a luminance value—a number that dictates how brightly it’s supposed to shine. Now that we have numbers, we can do mathy things!

    When we use the multiply blending mode, the computer takes the luminance value of the red channel for both layers, converts them to a scale between zero and one, and multiplies them together. Then it does the same for the green and blue channels. Once it has all of the totals, it recombines those channels into the resulting color.
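    For a quick worked example (the channel values here are hypothetical), take a 50 percent gray pixel on the top layer over an orange pixel below, each channel scaled to the zero-to-one range:

    red: 0.5 × 1.0 = 0.5
    green: 0.5 × 0.6 = 0.3
    blue: 0.5 × 0.0 = 0.0

    Recombined, those channels make a darker orange. Since both inputs are at most one, the product can never be brighter than either of them; multiply only ever darkens.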

    That’s all well and good, but what’s the practical effect?

    One of my favorite things to use multiply for is to shortcut my way through bad assets. Have you ever begged for a nice sharp vector version of a client’s logo, and all you could get your hands on was a JPG, complete with the white background of the letterhead it was hastily scanned from? Instead of retracing the logo by hand or working your marching-ants magic, you can use multiply. The following example shows how it works.

    The two layers without blending.
    The two layers with the multiply blend mode applied.

    Once multiplied, the black pixels on the top layer display at their full value: black. The white pixels, on the other hand, don’t show up at all. They’re completely transparent. The gradating shades of gray along the edges of the letters will darken the layer below. This provides a nice smooth edge with minimal processing effort. It’s as if the graphic had a transparent background all along.

    This particular trick only works if you’re using black assets. If the source has a color, that will colorize the result to some degree. However, if your asset is white, you can use the screen blending mode.
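    To see how little code that trick needs, here is a minimal sketch using the mix-blend-mode property covered later in this article, with a hypothetical .logo class for the scanned JPG:

    
    .logo {
      /* black stays black; white drops out against whatever sits behind the image */
      mix-blend-mode: multiply;
    }
    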

    Dust and scratches with screen

    The functional opposite of multiply is called screen. Wait a minute, if it’s the opposite of multiply, why isn’t it called divide? The answer lies, once again, in the math:

    x = 1 − (1 − a) × (1 − b)

    It’s not called “divide” because we’re actually doing more multiplying! This time we’re multiplying the inverse of a times the inverse of b and then inverting it once more. The result is that now white pixels on the top layer are completely opaque, while black pixels are transparent. Every tint in between now lightens the layer below.
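    For a quick worked example (hypothetical values again), screening a light value of 0.8 over a mid value of 0.5 in a single channel gives:

    x = 1 − (1 − 0.8) × (1 − 0.5) = 1 − (0.2 × 0.5) = 0.9

    The output is brighter than either input; screen only ever lightens, the mirror image of multiply.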

    In the following example, I wanted to give my photo an aged look, so I took a scan of dust and scratches and blended it with screen. I also decided to use this same blending mode to wash out my photo using a lavender color for effect.

    Blending with a layer of dust and scratches (available from Spoon Graphics) and a layer of lavender.
    The image with the two layers on top of it applied with the screen blending mode.

    Incidentally, some software applications do have a divide mode, but it doesn’t exist in the W3C spec. (I don’t mourn the loss. I’ve never had a need for it.)

    Colorizing with hue and color

    All blending modes have the potential to shift the color of a graphic, but two are particularly useful for colorization: hue, and the aptly named color mode.

    Hue

    This blending mode takes the hue component of the overlapping layer and applies it to the colors below, while leaving the saturation and luminosity unmodified. I can overlap distinctly different colors but still get the exact same result, as long as their hue values match. In the case of the following image, my three brown hues are all set at 26 degrees, but the photo looks the same no matter which shade is blended.

    Image showing three distinct brown hues (all set at 26 degrees).
    Image showing that the result looks the same no matter which shade is blended.
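    Here is a minimal sketch of the idea in CSS, using background-blend-mode and the gradient-as-solid-color trick described later in this article, and reusing the mountain.jpg photo from the examples below. The hsl() values are hypothetical; only the 26-degree hue matters:

    
    .tinted {
      background-image:
        linear-gradient(hsl(26, 60%, 50%), hsl(26, 60%, 50%)), /* any brown with a 26-degree hue */
        url("mountain.jpg");
      background-blend-mode: hue;
    }
    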

    Color

    This blending mode affects both the hue and saturation of the source, while ignoring luminosity. A reddish-brown overlay will turn the pixels of the source reddish-brown, just as it will with the hue mode, but will also make them the same saturation, which usually creates more of a striking colorization effect than hue alone.

    A reddish-brown overlay turns the pixels of the source reddish-brown, just as it will with the hue mode, but will also make them the same saturation.

    You can achieve the same effect if you reverse the order of your layers, putting the color below the photo, and blending the photo with the luminosity blending mode.
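    Here is a minimal sketch of that reversal, again borrowing the gradient trick from later in this article and the mountain.jpg photo from the examples below (the reddish-brown hsl() value is hypothetical):

    
    .background {
      background-image:
        url("mountain.jpg"),                                    /* photo on top */
        linear-gradient(hsl(10, 55%, 35%), hsl(10, 55%, 35%));  /* reddish-brown below */
      background-blend-mode: luminosity;
    }
    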

    Cross-browser blending

    Using these blending modes, we can now apply Photoshop-level blending solely with CSS. But even though each browser is using the same math, you may find that sometimes the results differ noticeably.

    Chart showing the blending modes as rendered across different browsers.

    Color management is a complex world, and while the W3C recommends defaulting to the sRGB color profile, support from vendors is inconsistent. Each browser renders color according to its own whims. For example, Chrome renders images in its default “unmanaged” color space unless the image is tagged with a color profile. Firefox works the same, but also has a switch buried in the configuration settings to turn on sRGB for untagged images. Meanwhile, Safari is most likely to be a close match to Photoshop because Apple’s graphics API is closely based on Adobe’s PostScript language. Even then there are differences.

    Furthermore, it’s not just browsers that are inconsistent. People are inconsistent! Consider, for example, the millions who live with color blindness. They likely already see your designs differently than you intended. As ever, test your creations in relevant browsers, check your accessibility, and don’t expect your designs to look the same everywhere!

    Additionally, test on real devices so you can understand how hardware constraints (like low RAM, for example) will affect your site. Some blending modes can cause scrolling to lag. If you’re looking for that 60-frames-per-second buttery smoothness, this may affect your available choices.

    Applying blending modes

    Blending modes can be applied with a couple different CSS properties: background-blend-mode and mix-blend-mode. A third property, isolation, can come in handy, too.

    Blending background images

    background-blend-mode blends between the layers of a background-image declaration. This means that as background images stack on top of each other, you can apply a blending mode to mix them together.

    Let’s try this to put dust and scratches on our photo. (Note that only the relevant code is shown in these examples.)

    
    <div class="background"></div>
    
    
    
    .background {
      background-image: url("dust-and-scratches.jpg"), url("mountain.jpg");
      background-blend-mode: screen;
    }
    
    
    Image showing how background-blend-mode has been used to add dust and scratches to our photo.

    You can apply a different blending mode for each background-image declaration. List them in the same order as your backgrounds and separate them with commas. The final declaration—the bottom layer—is automatically given a normal blending mode, and this can’t be changed. If you are using a background-color, that will be your bottom layer.

    Occasionally, you may want to make use of color overlays. Unfortunately, CSS’s background-color property limits us to a single color, and it will always be the bottom layer, whether it’s declared at the beginning of the list or at the end. A W3C recommendation proposes an image() notation that allows an author to “use a solid color as an image,” but the necessary user-agent support isn’t there yet. Luckily, because gradients are a type of image in CSS, we can trick the browser into generating a solid color by declaring two matching color-stops!

    Now, let’s lighten up the image like we did before, and change it to a sepia color.

    
    .background {
      background-image: 
      linear-gradient(hsl(26, 24%, 42%), hsl(26, 24%, 42%)),  /* sepia */
      linear-gradient(hsl(316, 22%, 37%), hsl(316, 22%, 37%)), /* lavender */
      url("dust-and-scratches.jpg"), url("mountain.jpg");
    
      background-blend-mode: color,   /* sepia */
      screen,  /* lavender */
      screen;  /* dust-and-scratches */
    }
    
    
    Our image shown in sepia.

    Blending HTML elements

    mix-blend-mode blends between stacked HTML elements, so elements on overlapping layers will blend with those beneath them. Let’s add our title back into the image and blend away the undesirable white background with multiply. I’ve also made it slightly transparent to give it a nice overprint effect.

    
    <div class="background">
      <div class="text-box">
        <h1>
          <img class="graphic" alt="Chamonix Harmony" src="chamonix-harmony.jpg" />
        </h1>
      </div>
    </div>
    
    
    
    .background {
      background-image: 
      linear-gradient(hsl(26, 24%, 42%), hsl(26, 24%, 42%)),  /* sepia */
      linear-gradient(hsl(316, 22%, 37%), hsl(316, 22%, 37%)), /* lavender */
      url("dust-and-scratches.jpg"), url("mountain.jpg");                     
      
      background-blend-mode: color,  /* sepia */
      screen, /* lavender */
      screen;  /* dust and scratches */
    }
    
    .graphic {
      mix-blend-mode: multiply;
      opacity: 0.7;  /* overprint effect */
    }
    
    
    Demonstration of how we’ve used multiply to add our title back into our image and blend away the undesirable white background.

    Here’s a new example using mix-blend-mode to blend multiple elements.

    
    <div class="background">
      <div class="red-disc">
        <img alt="" src="red-disc.svg" />
      </div>
      <div class="green-disc">
        <img alt="" src="green-disc.svg" />
      </div>
      <div class="blue-disc">
        <img alt="" src="blue-disc.svg" />
      </div>
    </div>
    
    
    
    .red-disc, .green-disc, .blue-disc {
      mix-blend-mode: screen;
    }
    
    
    Using mix-blend-mode to blend multiple elements.

    If you don’t want an element on a lower layer to be blended with a particular layer above it, you can separate it using a third property: isolation. This is useful for blending a few elements together without affecting the base layer. Each of these discs has its mix-blend-mode property set to screen, which causes them to create new colors where they overlap. However, we want to isolate the mountain image so that it isn’t blended along with the colors.

    
    .background {
      isolation: isolate;
    }
    
    
    Using the isolation property to prevent an element on a lower layer from being blended with the layers above it.

    Keep in mind that mix-blend-mode is applied to an entire element along with all of its children. In the same way that opacity had the side effect of making the contents of containers transparent, we also see this happening with mix-blend-mode. The contents and the container are blended together.

    In the following example, I’ve gone into Photoshop and mocked up a promotion for a fictitious ski equipment manufacturer I’m calling Masstif. In it, I’ve created a box to feature some copy and a logo. I’m blending the box using the color dodge mode. This gives a strong contrast to the background and allows the text and graphic to stand out better.

    Image showing how the color-dodge mode can be used to make the mark stand out better from the background.


    When I build this with HTML and CSS, I might expect it to work like this:

    
    <div class="background">
      <div class="ad-contents">
        <p>When you’re on top of the world,<br/>
        the only way to go is down.</p>
        <p>Gladly.</p>
        <img alt="Masstif" src="logo.svg" />
      </div>
    </div>
    
    
    
    .background {
      background-image: url("mountain.jpg");
    }
    .ad-contents {
      background-color: white;
      mix-blend-mode: color-dodge;
    }
    
    

    But the actual result is that all of the contents are blended along with the container, as the following image shows.

    Image showing how the copy and logo have blended with the container.

    Just as the opacity issue can be addressed to some degree by taking advantage of background alpha channels, we can tackle this mix-blend-mode problem by moving what we can into the background. Instead of creating a box and blending it with mix-blend-mode, we can convert the box to a background image and blend it with background-blend-mode. This won’t solve every problem, but it’s worth trying. Other than that, there’s no way to isolate child nodes from a blended element.
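
    Here is a minimal sketch of that workaround using the same markup; the background-size and background-position values are placeholders for wherever the panel should sit.

    .background {
      background-image:
        linear-gradient(white, white),  /* the former box, drawn as an image layer */
        url("mountain.jpg");
      background-blend-mode: color-dodge, normal;
      background-repeat: no-repeat;
      /* size and position the gradient so it covers only the panel area */
      background-size: 60% 40%, cover;
      background-position: center, center;
    }

    .ad-contents {
      /* no background-color or mix-blend-mode here, so the text
         and logo render on top, unblended and legible */
    }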

    Browser support

    Blending modes are supported in most major browsers, except Internet Explorer and Edge. The silver lining is that Microsoft lists the properties as “under consideration” for Edge, and that browser does already support all of these blend modes in SVG, so one can hope for a speedy implementation. Votes for these properties on the Microsoft Edge Developer Uservoice forum would help, too.

    Also, note that Safari 9 doesn’t support the hue, saturation, luminosity, and color blending modes.

    Keep in mind that browsers that don’t support blending modes won’t render your designs exactly as you intended. Unless you know for sure that your audience is running sufficiently advanced browser technology, this can make things tricky. Ask yourself if fallbacks are acceptable for a portion of your audience. If not, you’ll need to find a workaround.
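
    One possible workaround, not from the article, is to treat the blended design as an enhancement: serve a pre-edited image by default, then opt in with a feature query where blending is supported. (The sepia file name here is hypothetical.)

    .background {
      background-image: url("mountain-sepia.jpg");  /* pre-edited fallback */
    }

    @supports (background-blend-mode: color) {
      .background {
        background-image:
          linear-gradient(hsl(26, 24%, 42%), hsl(26, 24%, 42%)),
          url("mountain.jpg");
        background-blend-mode: color;
      }
    }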

    Despite these caveats, blending modes are a welcome addition to any designer’s tool belt. We can now add transparency, rich color, and texture processing to our designs with CSS—and we can do so losslessly and flexibly. That’s more power available directly in the browser and at our fingertips.

  • Dealing with Difficult Workshop Attendees 

    No workshop will be free of disagreement, especially when there’s a group of designers or developers in the room. We are highly technical people, after all, and the internet is full of possibilities. It’s OK to disagree, especially when it means we reach cool new conclusions. The problem comes when arguments or difficult conversations prevent the workshop from continuing, or prevent attendees from participating. Let’s look at some indicators of coming conflict and how to defuse it.

    Things attendees do when stressed

    When people get stressed out in group situations, there are a number of behaviors they display. Often these are unconscious reactions to a stressful situation, which makes it easier for the facilitator to identify these behaviors and redirect the energy to a more positive conversation or task.

    • Dominate the conversation. Attendees can sometimes feel the need to “prove” their worth or knowledge on a subject. This can mean they talk for long periods, ignoring other attendees and interrupting them to reiterate points.
    • Introduce unrelated topics or issues. When an attendee feels the workshop goals or tasks have not addressed their specific concerns, they will often attempt to reframe the conversation around what they think is important. For example, during a workshop focused on defining research methods, an attendee may start to discuss issues with marketing or brand strategy.
    • Withdraw and stop participating. Some attendees are not naturally vocal. Once they see the workshop environment is not conducive to them, they will withdraw and refuse to add anything unless specifically called on.

    Tactics for dealing with conflict

    As with all workshops you facilitate, the key is to listen for cues. It’s not about you, it’s about the success of your goals. Take a deep breath, and say that again. It’s not personal. The health and success of the workshop comes first. So, as facilitators, how do we deal with these types of disruptive attendees?

    Let’s quickly restate our workshop schedule from the first post in this series, as this can help us with a few tactics:

    Intro (5 minutes)
    Task A: Defining the Research Problem (15 minutes)
    Task B: Selecting a Style of UX Research to Conduct (20 minutes)
    Break (10 minutes)
    Task C: Conducting the Research (20 minutes)
    Task D: Collecting and Sharing Data (10 minutes)
    Wrap-up (10 minutes)

    Use designers’ innate curiosity in your favor

    Design workshops will almost always be about graphic or interaction work, and our practice naturally focuses heavily on exploration and examination. Restate the task goals to everyone, and ask some pointed questions that require creativity or curiosity to answer.

    Imagine we’re conducting Task A, defining the research problem, and an argument is brewing. Let’s look at some of that language:

    Let me jump in here for a minute. Abdullah, it’s clear that particular UI issue is dear to you. Remember your goal for the next 10 minutes is to define the research problem. In order to do that, you will need to generate quite a few potential UX problems with our current site, not just one. Let’s go around in a circle, and each person state a completely new UX issue they have found. After a few rounds, we’ll have a good list. Understood?

    Ask open questions that get to the root cause of the disruption

    When the task you have set starts veering off into other operational discussions, like branding or development needs, there may be a legitimate reason this is happening. As the workshop leader, it’s up to you to find this out and reframe the conversation.

    Imagine we are on Task D, trying to define ways to collect and share data, and the task has been dropped in favor of a discussion about poor data security:

    Tola, mind if I ask a few questions? I know you have all mentioned data security a few times, and that seems to be the way this task is headed. We have about 10 more minutes to get some concrete plans down for sharing and collecting UX research data—can you tell me a little about how you feel those two are connected? What are some suggestions you have for focusing on the research data first, and once that is clear in your group, turning to data security?

    Make the process of participation explicit

    We have been focusing on design and UX teams, but in every workshop, regardless of the industry, there are those who have incredible insights. They just might not be comfortable voicing them freely. That’s ok. Every human, and designer, is different. But there are times when a withdrawn or reticent attendee at a workshop disrupts the tasks, as other attendees don’t know how to involve them.

    In these situations, there are a few different ways to demand participation, but a common one is to give the attendee the task of documentation and synthesis. Let’s go back to that first task and see how this looks:

    Carmen, you seem to be listening carefully to everyone’s ideas and feedback—can I ask you to be the team spokesperson and take notes of how your group defines the research problem? When you need some extra clarification on what someone says, I’d like you to ask one or two questions so you can get it accurately documented. Once we are done with the task, each spokesperson will present the results of their task, so keep that in mind.

    In each of these situations, we’ve done something very specific. We’ve defined the task clearly for the workshop, so there is no ambiguity on what the “deliverables” will be. After that, we have identified potential problems before the end of the task, and called specifically on one of the attendees, asking them to modify their output or responsibilities in the task. That responsibility is framed as beneficial to the group, and can feed back into the final outputs of the workshop. Basically, always have a plan for when things go off the rails, and adjust accordingly!

    There will be rare cases when interpersonal relationships or company politics are simply too great to overcome. Again, that is just part of being a human, and as designers and developers, we need to know when a challenge is just too great to fix. When you are confronted with one of these situations, it’s ok to call a timeout and do one of two things:

    • revisit the goals of the workshop and work with the attendees to create new ones that better reflect their concerns; or
    • acknowledge the conflict, call a stop to the workshop, and let everyone leave.

    I have seen too many workshops falter and fail because the facilitator could not address and redirect the conflicts brewing in the room. It’s an amazing time for digital design, as tools and code for the web keep getting more powerful and diffuse. We are bound to have different opinions on how to achieve our team’s goals and build great new sites and apps. These arguments or difficult conversations can prevent our workshops from succeeding, or prevent specific attendees from participating, but when this begins to happen, successful facilitators help attendees refocus on goals and specific assignments to move the event forward.

  • The Perfect Storm in Digital Law 

    It has been a very strange year to be a digital law specialist. In a matter of months we have seen “experts,” courts, and politicians suggest that website administrators should get rid of comments, social sharing, analytics, links, and shopping carts, all as a means of coming into compliance with European legislation. Even pro-European developers are at their wits’ end dealing with the deluge of uninformed advice about how their websites are apparently supposed to work.

    The response of web professionals outside the European Union to 2015’s legal developments has been, quite understandably, “Well, I’m not in the EU, so these laws don’t apply to me, right?” To their surprise, legislators have been clear in their expectation that any website in the world accessed by any European citizen must comply with European digital laws. In response, developers worldwide have had to invest countless unpaid hours and resources into understanding their compliance obligations, despite much of the guidance being incomplete, outdated, or even outright incorrect.

    These digital laws, of course, are only as good as the work that has gone into them. That work has clearly left much to be desired. To create better digital laws, legislators need to be able to work with experienced industry professionals who possess both technical expertise and political savvy. If only they could.

    Nearly twenty years into our craft’s existence there is, in the literal sense, no such thing as the web profession. We have never professionalized. We lack any form of centralized, cross-platform, and cross-industry organization. There is no Royal College of Web Designers, American Web Developers Association, or Chartered Institute of Programmers. There is no one we pay dues to, no association we belong to, and no union to fight our corner. No common organization protects Ruby coders, content writers, and Drupal developers. We do not have lobbyists, government liaison officers, or rapporteurs. No one is employed to make our voice heard in the policy sphere—and this is a problem.

    To date, conversations within the profession about industry organization have revolved around the issues of accreditation and certification. The political aspects of professional organization have never entered the picture. For what groups do exist, a search for “web design and development organizations” reveals pay-per-inclusion directories and outdated sites offering outdated courses. These “organizations” are little more than moneyspinners preying on vulnerable entrants to the profession, and proof that in the absence of a genuine association, anyone can buy a domain calling themselves one.

    A true organization aspires to what governments call consultative status: recognition of a professional body as the authentic voice of its industry. Consultative status conveys authority to industry professionals in the eyes of governments. Organized industries working within this model have political offices in their national capitals, are funded by their members, provide expertise born of practical experience, and review draft legislation pertaining to their field. It’s a model that has worked for centuries for older, wiser, and more organized professions.

    There is, however, a catch: the consultative model of government legislation does not work when there is no industry body to consult with. That’s why the web profession, through its own decisions, has neither consultative status nor the resources to provide it.

    The gathering storm

    On a policy level, the fierce individuality characterizing the web profession has meant that we have chosen not to have a voice. That lack of organization has left us unable to address a range of existing problems that have recently combined to create a “perfect storm” in digital law. That storm now threatens to engulf the web profession.

    Ivory towers

    Some digital laws do indeed make the web a better place, and there are even politicians who can code to a competent level. Unfortunately, they are few and far between. Too many of the laws impacting our craft are what internet law professor James Grimmelmann dubbed “unhelpful interventions”:

    Unhelpful interventions fail because they fail to engage with key aspects of how and why people use [the internet]… The key principle is to understand the social dynamics of technology use, and tailor policy interventions to fit. When an intervention keeps users from doing what they want to, they fight back. Helpful interventions, on the other hand, succeed because they do engage with these social dynamics.


    “Unhelpful interventions,” as we are all painfully aware, are regulations drawn up on paper by politicians who not only never touch a computer, but are openly proud of that fact. These politicians then rely on the opinions of academics who have credentials, but no technical experience. (Indeed, a professor responding to Grimmelmann’s work expressed bewilderment that he actually used the technology he was writing about. Academics, after all, are supposed to live in an ivory tower.) When policies about code are drawn up inside a theoretical vacuum, what results are counterproductive digital laws—like VATMOSS, the EU Cookie Law, and, after Paris, the renewed legislative assault on encryption—grounded in a defiant rejection of how people actually use the web.

    Of course, no one is saying that the makers of the web should be exempt from compliance with digital laws, helpful or otherwise. Indeed, greater legal understanding is desperately needed within our industry. The problem is that these obligations are not only global, but carry frightening levels of legal liability. In a single day, a developer may have to code in compliance with European privacy law, Japanese VAT, and Spain’s “Google Tax.” Good luck explaining that to your professional indemnity insurance provider. Add those obligations to the fact that some digital laws directly conflict with each other—at times, within a single web page—and what results is having to make a personal judgment on which laws to break and which laws to comply with. When digital professionals are put in that position, as you inevitably will be, there is no industry body to call on for support.

    Endless committees

    An added complication in digital law is the glacial process of internet regulation. Laws governing code can take six, eight, or even ten years to travel from proposal to implementation. Web development works a bit faster than that.

    Accessibility laws are a classic example. In the United States, the Section 508 refresh—the update of the law regulating accessibility standards in Federal government websites—has been in progress since 2006. The refreshed standard may be published in 2016 for implementation in 2018. Until then, developers working for the federal government are required to retrofit their work to a pre-WCAG desktop standard from 1998. In the EU, a comparable law mandating accessibility in public sector websites has been languishing in committees since 2012. The only progress made since 2014 has been a four-page committee report (PDF) and yet another draft (PDF).

    Government legislation must be thorough, deliberate, and measured. Indeed, legislation regarding the internet that is pushed through quickly—like attempts at mass digital surveillance—is rarely good news. However, digital laws are ultimately documents. It should not take longer to create and ship a document than it takes to create and ship the Olympics. Yet it does.

    When an industry working at light speed meets a political process working at committee speed, what results is a surreal state of play where professionals are required to comply with digital regulations that are already outdated by the time they become law. Web professionals are also obliged to wait years for promised reforms, and ensuing changes to their workflows, that never materialize. Without an organization to act through, liaising with governments becomes a matter of individual effort, which inevitably results in frustration, burnout, and contempt. In the absence of support, the absurd pace of the political process repels the very people who need to be the most involved.

    Now it’s personal

    Perhaps the most disturbing damage caused by our lack of professional organization has been the slandering of web professionals for offering informed challenges to uninformed laws. Unaffiliated individuals, after all, are easy targets.

    For example, ecommerce developers who were not informed about the poorly communicated VATMOSS reforms have been labelled “unprofessional”; a web-savvy EU politician trying to modernize copyright law was accused of being an agent provocateur in an American conspiracy bankrolled by Facebook and Wikipedia; and I have been called a “useful idiot” and “anti-privacy” for speaking up about the problems with the EU Cookie Law, and was monitored on a Twitter list called “underhand people” by a compliance software vendor (curious behavior for a privacy advocate).

    Personal attacks on web professionals are even being deployed as a political tactic. Campaigners on VATMOSS have written extensive briefings on the ensuing web development issues involving shopping carts, geolocation data, and IP spoofing. Rather than address those concerns, bureaucrats adhere to talking points insisting that the outcry amounts to a few British moaners complaining about the lack of a VAT threshold. Informed professionals who actually understand the technologies at hand are being openly disparaged by uninformed policymakers, and no one has their back.

    Unsafe harbors

    The final element in this perfect storm is differing cultural expectations about the role of digital laws. The United States, says the stereotype, sees Europe’s digital laws as anti-business, anti-free speech, and pro-regulation. The EU, in turn, sees the United States’ digital laws as anti-privacy, reckless, and dictated by corporate interests.

    While neither stereotype was ever really accurate, the Safe Harbor verdict in October formalized that difference of opinion into law. The ruling (PDF) by the European Court of Justice, which examined the Safe Harbor agreement by the United States to respect European data protection standards, found that the 15-year-old document—in light of the Snowden revelations—was no longer worth the paper it was printed on.

    The two systems have reached a stalemate. Europe is demanding that the US tech industry respect European traditions of data protection and privacy, while exasperated US companies reply that they are powerless to change the source of the problem—mass surveillance—and that they, too, are victims of it. While the two sides bicker, the makers of the web have been left to fathom the implications for our work in a world where the most fundamental agreement in digital law has been torn up.

    The web we have always known is now at risk of becoming the “splinternet,” a web divided along political and ideological lines. International walls are being built across a web that was meant to be borderless. Some of the walls are actually physical. Sadly, there are those who would gladly build them. That was the case with a French politician who claimed that American services like Facebook and Soundcloud were harming European creators and wanted to restrict their activities in Europe by law. Digital professionals in Europe were briefly at risk of losing their US-based tools out of politicians’ bitterness and jealousy.

    Meanwhile, in America, the US Department of Commerce, which administers the Safe Harbor agreement, has responded to the verdict declaring it invalid by…completely ignoring it. On their website, they proudly announce that they are continuing to administer the program as usual. I can’t decide whether their determination to continue running an invalidated program is tragic or comic. Whatever it is, their stance is the digital law equivalent of sticking your fingers in your ears and singing “LA LA LA CAN’T HEAR YOU!”

    Spite is never a healthy basis for policy, but at the moment, digital laws are in the hands of some rather dysfunctional individuals who are determined to take their balls and go home. No one is standing in their way.

    We are the problem

    This, then, is the perfect storm over our heads. The foundations on which we have built the web are being torn apart; the international matrix of compliance obligations grows more complex by the month; those who speak up are being attacked; and our everyday tools have become political footballs.

    What can we do about it? In order to weather the storm, we must move past our tribal mentality and literally professionalize. We must redefine our craft as a profession, and that means acting like one: having organizations, industry bodies, and political representation.

    Our craft is lateral: we educate each other through informal channels, communities, and social media. Governments, however, are vertical: they distribute information downward through authority organizations. This mismatch in communication means unnecessary hassle on both sides: for web professionals, it means learning about digital laws and compliance obligations by chance on social media. For implementing bureaucracies, it means being bombarded with complaints from individuals who, as far as they are concerned, had their chance to contribute to the process in a formal consultation held years ago. That passive-aggressive cycle is every bit as dysfunctional as the laws themselves.

    We cannot expect governments to deviate from the consultative model that works for every other industry in the world to accommodate our personality quirks. Whether we like it or not, we have to start playing their game by their rules. The fact is that governments do ask and they do consult. We have simply chosen not to show up for the talks.

    Unifying as a professional industry gets results. Earlier this year, a frightening UK draft internet surveillance bill was temporarily defeated thanks to the intervention of digital rights groups and tech-savvy politicians. One of the grounds that led to its defeat was the fact that the relevant industry body—in this case, the Internet Service Providers’ Association—had not been consulted on the draft law. In the scariest of political circumstances, the answer really was as simple as that. Where an industry body with professional standing exists, and that body has not been consulted on a law, the law is not legitimate. But where no industry body with professional standing exists to offer informed challenges to a law, it’s not the law that lacks legitimacy.

    Divided we fall

    In the 1990s, researcher Alan Ryan wrote that “the internet is good at reassuring people that they are not alone, and not much good at creating a political community out of the fragmented people that we have become.” He might have been talking about us today. We have invented everything but a political community. We have no excuse for that.

    Our lack of a voice in digital law is no one’s fault but our own. We refuse to look past our personal differences, we do not show up for the political process regulating our own work, we squander our energies firefighting unhelpful interventions, and we disparage the legislators who made them—and they disparage us right back. If it seems as though politicians don’t take the web profession seriously, it’s because we have given them absolutely no reason to believe otherwise.

    This year the most crucial elements of the web were placed under legislative threat. Those threats are already returning, and to fight them, we need to change tactics—and fast. That perfect storm is over us, right here, right now. Until we unify, organize, and act, we are standing in it by choice.

  • This week's sponsor: World IA Day 

    Join the fine folks at World IA Day on Feb 20, 2016 to celebrate advancing the practice of information architecture.

  • Mark Llobrera · Professional Amateurs: Write What You Know (Now) 

    Sometimes the writing comes easy enough, and then there’s the last two months. I really wondered if I had run out of things to say. I knew I wanted to write about how more web designers and developers need to write about their work. So I wrote a bunch of paragraphs down, and then on a lark (I’m not even a regular listener), I downloaded episode 110 of The Web Ahead.

    I nodded along with host Jen Simmons and guest Jeremy Keith saying some very smart things about the web and its roots as the El train cut across Philadelphia. But at the 48-minute mark things got weird, because Jen and Jeremy basically started writing my column for me while I listened. Jeremy said:

    I wish people would write more… In the future, we would have a better understanding of what people are thinking now. I’m very glad that I’ve been doing my blog for 15 years. I can go back to 2002 and get a feel for what it was like to build websites. Back then we thought X was true or hadn’t even considered Y. You forget these things. Having these written records—not of anything important or groundbreaking—but just the day-to-day. The boring stuff. That’s actually what’s most interesting over time.

    The fear of stating the obvious is one of my primary personal roadblocks to writing. Jeremy’s words evoke Samuel Pepys’ diary, which is a famously important resource for historians precisely because it includes so much detail about life in the 1600s—including many items that he could have dismissed as being mundane and obvious.

    Non-experts please apply

    I appreciate a well-written, logically-structured, authoritative blog post about code as much as the next person. But I also have a love for the blog post that is written with a posture of humility: I know this much, so far. Or even: Why is this happening? Seeing your own stop-start journey through design and code reflected in someone else’s writing can remind you that it’s ok to not have everything figured out. Turns out nobody does.

    Often when a teammate shows me something cool, at the end of our conversation I’ll half-jokingly say, “write it up.” I think that’s pretty good advice regardless of where you currently sit on the continuum of despair to triumph. Don’t even wait until you have everything figured out, at the end of the project. It’s ok to write what you know now, while everything is fresh in your mind and the sheer agony (or thrill of discovery) borders on the physical. If you’ve just solved a particularly flabbergasting problem, imagine yourself on the other side of that experience, just hoping that DuckDuckGo or Google will turn up a post with even the barest hint of a solution. If that were you, you wouldn’t care that the blog post wasn’t polished. You’d just thank your lucky stars that someone took the time to bang something out and hit publish.

    Lies your brain will tell you

    “But nobody will read it,” you say. That may be true. But the opposite could also happen. A few years ago I was interviewing for a job, and my future boss said, “I like how you articulate things. I’ve read some of the stuff on your blog.” I was so surprised, I didn’t even think to ask which posts she liked. For all I know it was the ones where I recount the silly things my kids say. That memory made me think of this fantastic interview with Ursula K. Le Guin, where she says, “There’s always room for another story… So if you have stories to tell and can tell them competently, then somebody will want to hear it….”

    On the other hand, that might be exactly what you fear—that someone actually will read it. But that needn’t be an intimidating thought. Through your writing, people can get to know you in a way that often gets lost in casual interactions with coworkers (or formal ones with a potential boss or client). Because if you read enough of a person’s writing, their voice comes through. It doesn’t matter if it’s just explaining a technical issue and offering a solution. Their quirks, their interests—these things bleed through if you read enough of their words.

    Getting more adept at writing can help you communicate confidently in other ways too. In a recent ALA On Air live event, Jeffrey Zeldman emphasized the benefit writing can have on your problem-solving process:

    I think if you can articulate your thoughts in writing, even though that’s really hard, you’re going to be better in meetings, you’re going to have a point of view; you’re not going to roll over and say, “The client said I should make the button blue and make it bigger, so that’s what I will do,” but instead will go, “What is behind the client’s request? What are we really trying to achieve?”

    chmod 777

    So now that you’re convinced, where do you start? You might start by giving yourself permission: “I allow myself to publish this thing.” This is one of those things that I’m still working on, myself. I’ve always found writing enjoyable, even easy. Publishing, however, is not. I’ve been blogging for several years, but my ratio of drafts to final posts is pretty dismal. This column has been good for me, because now I’m accountable to someone other than myself. I’ve got an editor that I don’t want to let down. I’ve got folks who actually read this column and respond. But maybe you don’t need more accountability. Maybe you actually need to lower the stakes, and give yourself permission to just get it out there. And again, Jeremy Keith said something along the lines of what I wanted to write:

    The whole point of the web is there isn’t a gatekeeper. There isn’t someone with a red pen saying, “That isn’t good enough to be published. That’s not up to scratch. You’re not allowed to publish it.”… It could be the worst thing ever and you still have the right to publish it on your website. You should do it. Don’t let anyone tell you otherwise.

    Holy smokes. I heard that, and I almost bulk-published my entire WordPress drafts bin.

    Recently, CSS-Tricks ran a survey that asked its community to weigh in on topics that they face daily. I answered the survey, and one of the interesting results was for the question, “You’re stuck. You search the web. You prefer to find answers in these formats:”. The top answer was blog post. Blog post! One of the other leading answers was “Q&A format page” (something like Stack Overflow). That made me think. Why wasn’t Q&A the top answer? Maybe it’s because while web designers want something that works if we simply copy-and-paste, we are also driven by why as much as how.

    Code has a story. One of my favorite posts to write (and read) goes something like: “This wasn’t documented anywhere I could find, and it’s such a weird situation that if I don’t write about it nobody would believe me.” I made a category on my blog just for those posts: Technology’s Betrayal. I feel like a web designer’s life is full of those little stories, every day. And usually you tell your teammates over lunch, or over a beer, and you laugh and say, “Isn’t that nuts?” Well, I’m here to say, “write it up.” Let someone else hear that story, too.

  • This week's sponsor: Padded Spaces 

    Relax smarter with your devices—wherever you are—with lap desks and bedside caddies from our sponsor, Padded Spaces.

  • Interaction Is an Enhancement 

    A note from the editors: We’re pleased to offer this excerpt from Chapter 5 of Aaron Gustafson’s book, Adaptive Web Design, Second Edition. Buy the book from New Riders and get a 35% discount using the code AARON35.

    In February 2011, shortly after Gawker Media launched a unified redesign of its various properties (Lifehacker, Gizmodo, Jezebel, etc.), users visiting those sites were greeted by a blank stare. Not a single one displayed any content. What happened? JavaScript happened. Or, more accurately, JavaScript didn’t happen.1

    Screenshot of a completely blank website with only the Lifehacker logo displayed.
    Lifehacker during the JavaScript incident of 2011.

    In architecting its new platform, Gawker Media had embraced JavaScript as the delivery mechanism for its content. It would send a hollow HTML shell to the browser and then load the actual page content via JavaScript. The common wisdom was that this approach would make these sites appear more “app like” and “modern.” But on launch day, a single error in the JavaScript code running the platform brought the system to its knees. That one solitary error caused a lengthy “site outage”—I use that term liberally because the servers were actually still working—for every Gawker property and lost the company countless page views and ad impressions.

    It’s worth noting that, in the intervening years, Gawker Media has updated its sites to deliver content in the absence of JavaScript.

    ■ ■ ■

    Late one night in January 2014 the “parental filter” used by Sky Broadband—one of the UK’s largest ISPs (Internet service providers)— began classifying code.jquery.com as a “malware and phishing” website.2 The jQuery CDN (content delivery network) is at that URL. No big deal—jQuery is only the JavaScript library that nearly three-quarters of the world’s top 10,000 websites rely on to make their web pages work.

    With the domain so mischaracterized, Sky’s firewall leapt into action and began “protecting” the vast majority of their customers from this “malicious” code. All of a sudden, huge swaths of the Web abruptly stopped working for every Sky Broadband customer who had not specifically opted out of this protection. Any site that relied on that CDN’s copy of jQuery to load content, display advertising, or enable interactions was dead in the water—through no fault of their own.
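
    A common defensive pattern, not mentioned in this excerpt, is to test for the library after the CDN script tag and fall back to a locally hosted copy. (The local path here is hypothetical.)

    <script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
    <script>
      // if the CDN copy was blocked or never arrived, load a local copy
      window.jQuery || document.write('<script src="/js/vendor/jquery-1.11.3.min.js"><\/script>');
    </script>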

    ■ ■ ■

    In September 2014, Ars Technica revealed that Comcast was injecting self-promotional advertising into websites served via its Wi-Fi hotspots.3 Such injections are effectively a man-in-the-middle attack,4 creating a situation that had the potential to break a website. Security expert Dan Kaminsky put it this way:

    [Y]ou no longer know, as a website developer, precisely what code is running in browsers out there. You didn’t send it, but your customers received it.

    Comcast isn’t the only organization that does this. Hotels, airports, and other “free” Wi-Fi providers routinely inject advertising and other code into websites that pass through their networks.

    ■ ■ ■

    Many web designers and developers mistakenly believe that JavaScript support is a given or that issues with JavaScript drifted off with the decline of IE 8, but these three stories are all recent, and none of them concerned a browser support issue. If these stories tell you anything, it’s that you need to develop the 1964 Chrysler Imperial5 of websites—sites that soldier on even when they are getting pummeled from all sides. After all, devices, browsers, plugins, servers, networks, and even the routers that ultimately deliver your sites all have a say in how (and what) content actually gets to your users.

    Get Familiar with Potential Issues so You Can Avoid Them

    It seems that nearly every other week a new JavaScript framework comes out, touting a new approach that is going to “revolutionize” the way we build websites. Frameworks such as Angular, Ember, Knockout, and React do away with the traditional model of browsers navigating from page to page of server-generated content. Instead, these frameworks completely take over the browser and handle all the requests to the server, usually fetching bits and pieces of content a few at a time to control the whole experience end to end. No more page refreshes. No more waiting.

    There’s just one problem: Without JavaScript, nothing happens.
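
    Schematically—and this is an illustration, not any particular framework’s output—such a page ships as little more than an empty shell:

    <body>
      <!-- the server sends no real content; everything arrives via script -->
      <div id="app"></div>
      <!-- if this file is blocked, broken, or never finishes downloading,
           the user is left staring at a blank page -->
      <script src="app.js"></script>
    </body>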

    No, I’m not here to tell you that you shouldn’t use JavaScript.6 I think JavaScript is an incredibly useful tool, and I absolutely believe it can make your users’ experiences better…when it’s used wisely.

    Understand Your Medium

    In the early days of the Web, “proper” software developers shied away from JavaScript. Many viewed it as a “toy” language (and felt similarly about HTML and CSS). It wasn’t as powerful as Java or Perl or C in their minds, so it wasn’t really worth learning. In the intervening years, however, JavaScript has changed a lot.

    Many of these developers began paying attention to JavaScript in the mid-2000s when Ajax became popular. But it wasn’t until a few years later that they began bringing their talents to the Web in droves, lured by JavaScript frameworks and their promise of a more traditional development experience for the Web. This, overall, is a good thing—we need more people working on the Web to make it better. The one problem I’ve seen, however, is the fundamental disconnect traditional software developers seem to have with the way deploying code on the Web works.

    In traditional software development, you have some say in the execution environment. On the Web, you don’t. I’ll explain. If I’m writing server-side software in Python or Rails or even PHP, one of two things is true:

    • I control the server environment, including the operating system, language versions, and packages.
    • I don’t control the server environment, but I have knowledge of it and can author my program accordingly so it will execute as anticipated.

    In the more traditional installed software world, you can similarly control the environment by placing certain restrictions on what operating systems your code supports and what dependencies you might have (such as available hard drive space or RAM). You provide that information up front, and your potential users can choose your software—or a competing product—based on what will work for them.

    On the Web, however, all bets are off. The Web is ubiquitous. The Web is messy. And, as much as I might like to control a user’s experience down to the pixel, I understand that it’s never going to happen because that isn’t the way the Web works. The frustration I sometimes feel with my lack of control is also incredibly liberating and pushes me to come up with more creative approaches. Unfortunately, traditional software developers who are relatively new to the Web have not come to terms with this yet. It’s understandable; it took me a few years as well.

    You do not control the environment executing your JavaScript code, interpreting your HTML, or applying your CSS. Your users control the device (and, thereby, its processor speed, RAM, etc.). Depending on the device, your users might choose the operating system, browser, and browser version they use. Your users can decide which add-ons they use in the browser. Your users can shrink or enlarge the fonts used to display your site. And the Internet providers sit between you and your users, dictating the network speed, regulating the latency, and ultimately controlling how (and what part of) your content makes it into their browser. All you can do is author a compelling, adaptive experience and then cross your fingers and hope for the best.

    The fundamental problem with viewing JavaScript as a given—which these frameworks do—is that it creates the illusion of control. It’s easy to rationalize this perspective when you have access to the latest and greatest hardware and a speedy and stable connection to the Internet. If you never look outside of the bubble of our industry, you might think every one of your users is so well-equipped. Sure, if you are building an internal web app, you might be able to dictate the OS/browser combination for all your users and lock down their machines to prevent them from modifying any settings, but that’s not the reality on the open Web. The fact is that you can’t absolutely rely on the availability of any specific technology when it comes to delivering your website to the world.

    It’s critical to craft your website’s experiences to work in any situation by being intentional in how you use specific technologies, such as JavaScript. Take advantage of their benefits while simultaneously understanding that their availability is not guaranteed. That’s progressive enhancement.

    The history of the Web is littered with JavaScript disaster stories. That doesn’t mean you shouldn’t use JavaScript or that it’s inherently bad. It simply means you need to be smart about your approach to using it. You need to build robust experiences that allow users to do what they need to do quickly and easily, even if your carefully crafted, incredibly well-designed JavaScript-driven interface can’t run.

    Why No JavaScript?

    Often the term progressive enhancement is treated as synonymous with “no JavaScript.” If you’ve read this far, I hope you understand that this is only one small part of the puzzle. Millions of the Web’s users have JavaScript. Most browsers support it, and few users ever turn it off. You can—and indeed should—use JavaScript to build amazing, engaging experiences on the Web.

    If it’s so ubiquitous, you may well wonder why you should worry about the “no JavaScript” scenario at all. I hope the stories I shared earlier shed some light on that, but if they weren’t enough to convince you that you need a “no JavaScript” strategy, consider this: The U.K.’s GDS (Government Digital Service) ran an experiment to determine how many of its users did not receive JavaScript-based enhancements, and it discovered that number to be 1.1 percent, or 1 in every 93 users.7, 8 For an ecommerce site like Amazon, that’s 1.75 million people a month, which is a huge number.9 But that’s not the interesting bit.

    First, a little about GDS’s methodology. It ran the experiment on a high-traffic page that drew from a broad audience, so the sample was live and representative of the true picture; the numbers weren’t skewed by collecting information from only a subsection of its user base. The experiment itself boiled down to three images:

    • A baseline image included via an img element
    • An img contained within a noscript element
    • An image that would be loaded via JavaScript

    The noscript element, if you are unfamiliar, is meant to encapsulate content you want displayed when JavaScript is unavailable. It provides a clean way to offer an alternative experience in “no JavaScript” scenarios. When JavaScript is available, the browser ignores the contents of the noscript element entirely.

    With this setup in place, the expectation was that all users would get two images. Users who fell into the “no JavaScript” camp would receive images 1 and 2 (the contents of noscript are exposed only when JavaScript is not available or turned off). Users who could use JavaScript would get images 1 and 3.
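
    Here is a minimal sketch of how such a test might be marked up; the file names and injection code are assumptions, not GDS’s actual implementation:

    <!-- 1: the baseline image, requested by every browser -->
    <img src="/test/baseline.gif" alt="">

    <!-- 2: requested only when JavaScript is unavailable or turned off -->
    <noscript><img src="/test/no-js.gif" alt=""></noscript>

    <!-- 3: requested only when JavaScript actually executes -->
    <script>
      var img = document.createElement('img');
      img.src = '/test/with-js.gif';
      document.body.appendChild(img);
    </script>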

    What GDS hadn’t anticipated, however, was a third group: users who got image 1 but didn’t get either of the other images. In other words, they should have received the JavaScript enhancement (because noscript was not evaluated), but they didn’t (because the JavaScript injection didn’t happen). Perhaps most surprisingly, this was the group that accounted for the vast majority of the “no JavaScript” users—0.9 percent of the users (as compared to 0.2 percent who received image 2).

    What could cause something like this to happen? Many things:

    • JavaScript errors introduced by the developers
    • JavaScript errors introduced by in-page third-party code (e.g., ads, sharing widgets, and the like)
    • JavaScript errors introduced by user-controlled browser add-ons
    • JavaScript being blocked by a browser add-on
    • JavaScript being blocked by a firewall or ISP (or modified, as in the earlier Comcast example)
    • A missing or incomplete JavaScript program because of network connectivity issues (the “train goes into a tunnel” scenario)
    • Delayed JavaScript download because of slow network download speed
    • A missing or incomplete JavaScript program because of a CDN outage
    • Not enough RAM to load and execute the JavaScript10
    Screenshot of an error message reading, “HTTP Error 413: Request Entity Too Large. The page you requested could not be loaded. Please try loading a different page.”
    A BlackBerry device attempting to browse to the Obama for America campaign site in 2012. It ran out of RAM trying to load 4.2MB of HTML, CSS, and JavaScript. Photo credit: Brad Frost

    That’s a ton of potential issues that can affect whether a user gets your JavaScript-based experience. I’m not bringing them up to scare you off using JavaScript; I just want to make sure you realize how many factors can affect whether users get it. In truth, most users will get your enhancements. Just don’t put all your eggs in the JavaScript basket. Diversify the ways you deliver your content and experiences. It reduces risk and ensures your site will support the broadest number of users. It pays to hope for the best and plan for the worst.


  • Selecting Effective Workshop Tasks 

    Defining goals and setting an agenda are crucial first steps for a workshop. But what happens during the workshop is even more critical. Last time, we looked at how to define goals and attendees. This time we’ll look at the tasks and activities you choose, and how they’re determined by your stated goals. In the last post, these were our workshop goals:

    At the end of this workshop:
    • attendees will be able to define, plan, and conduct UX research on a new product feature.
    • attendees will have decided on a set of research questions, so they can gather user feedback.
    • attendees will have demonstrated the power of UX research planning to the organization.

    Let’s take a look at a few common types of tasks we can use to get there.

    Brainstorming

    The task of brainstorming simply asks participants to generate information. It’s not important exactly what they generate, as long as it’s on topic and there’s a lot of it. Using post-its, paper, and other tools for quick documentation is essential. One participant should be assigned as a scribe, so every idea is taken down. Participants should not be evaluating the worth of the ideas, just making a lot of them.

    Sketching and ideation

    Sketching and drawing activities ask participants to generate versions or concepts of a general idea: sketching wireframes, tools, or content structures, for example. This process is a bit more focused, as there is a concrete topic or interface structure everyone is riffing on.

    Ranking and rating

    Ranking is putting existing options or ideas in order from best to worst, or from one to five—just like the Olympics, where only one person can win gold, silver, or bronze. Rating is assigning a value to existing options. More than one option can have the same rating, like on Yelp, where lots of restaurants have three or four stars. These tasks force attendees to use discussion and debate to evaluate existing options and how they relate to each other.

    Mapping

    Mapping asks attendees to provide suggested actions based on certain constraints or criteria. It can mean mapping pathways through research questions, an interface, or even product delivery strategies. The key is that the obstacles to success are previously defined, and the attendees choose procedures that minimize those risks.

    Connecting tasks and goals

    These activities form a core set that you can rely on in any workshop. Regardless of the specific steps in the task, it needs to relate to your goal. Let’s look again at our example.

    We want to define UX research on a new product feature. This means it’s at the beginning and all wide open. A brainstorming activity would work well here, as you want to uncover new ideas and interface concepts.

    We also want to decide on a set of research questions. This could call first for ideation on a general question format, and then ranking or rating to choose research questions the team feels will be most effective.

    Finally, in order to show the value of the workshop and research in general, we want to demonstrate the power of UX research internally. There are a few ways to connect this goal with the activities. First, we can brainstorm a short list of internal company objections to UI and product changes. Then, we map our research questions to those objections, in essence forcing our research to prove them right or wrong. We’re making a direct link between information we want to get, and how it will affect design changes.

    Setting up and running tasks

    Now that you have some ideas of tasks to run and have connected them to the goals, we can go over how to actually set it all up in the workshop.

    • Introduce the task. Say what you will be doing, and why. Restate the workshop goals, even if you don’t think you need to.
    • Set teams. Keep the energy focused by assigning teams and groups. If you have a set of particularly outspoken attendees, make sure they are in a group that is able to work with more vocal attendees. I often ask workshop groups to choose a topical name (ice-cream flavors, colors, etc.) as it fosters a group identity.
    • Assign a scribe. By asking one team member to document their task, you do two things: force the team to record their decisions, and create a shareable record for those who did not attend. (Remember how our third goal was to demonstrate the power of UX research?)
    • Set a time limit. Everyone loves a finish line. By telling people exactly how long they have to finish a task, you tell them that their time is important and that this will be a focused session.
    • Let them work. Stop talking and let the attendees complete the task. As the facilitator, you only need to intercede if people are confused or completely off topic.
    • Review. Once the time is up, call everyone back to the larger group and either solicit conclusions, or state them yourself. Even for small groups, review is critical to making sure everyone is on the same page.

    Workshops are hard work. A lot of that comes in the preparation, but carefully choosing tasks that match your stated goals is also key to success. Think carefully about what you want attendees to do, and then define the activities to achieve that. Explain what will be happening, as many times as necessary, and then step back and act as a facilitator, the person in charge, so they don’t need to worry about it. Each of these steps sets you up for success. But people are complex, and not every workshop goes according to the plans you set out. In the third installment of this series, we’ll look at some techniques for dealing with difficult attendees.

  • This week's sponsor: Liquid Web 

    Speed up your WordPress sites with managed hosting and round-the-clock support from our sponsor, LiquidWeb.