EW Resource

Newsfeeds

There is a huge range of newsfeeds on the Net bringing up-to-date information and content on a wide range of subjects.

Here are just a few relating to web development.



A List Apart: The Full Feed
  • On Our Radar: Communication Builds Community 

    This week, we at ALA have been thinking about processes of inclusion—that is, how we communicate with our communities. Who (and what) gets to be included? How do we use vocabularies, fonts, even emojis, to make those choices? And how do those choices create our culture?

    Here’s what’s on our radar:

    Anna Debenham, technical editor:
    The UX team at Salesforce have written about the difficulties they’ve had coming up with color schemes that look good and meet the WCAG 2.0 guidelines on color contrast—so they’ve built a wonderful site called Color Safe that generates color palettes that meet these guidelines. It’s great to see companies release tools like this that help make everyone’s sites more accessible.

    Marie Connelly, blog editor:
    I really loved this piece over on Hopes & Fears on how the Deaf community is incorporating new terminology (think: selfie, photobomb) into American Sign Language. It touches on so many things I love: words, the subtle complexities of language, and the beautiful messiness of community collaboration. I think the examples of how the Deaf community works through this process offer great food for thought for any of us working on content and communication.

    Caren Litherland, editor:
    “I’m pretty content,” writes Indra Kupferschmid in a pragmatic survey of the current state of web typography. Almost anything we could ever do in print, we can now do on the web; the web “forces us to think about typography in terms of parameters and to get clear about content versus form.”

    Ethan Marcotte, technical editor:
    Kathy Sierra’s essay on skater culture is a fascinating, moving look at a once-inclusive industry that, over time, marginalized its female members. It’s also an urgent warning for the digital industry, which faces a similar crisis.

    A gif about a music video we are into:

    A gif of Tina from Bob's Burgers jumping up and down.
    No outline will ever hold us.

    What about you?
    What stories are drawing your attention? Got any posts, pictures, tools, or code you want to share? (We love sharing!) Tweet us what’s on your radar—and what should be on ours.

  • Antoine Lefeuvre on The Web, Worldwide: Designing for Post-Connected Users — Part 1, the Diagnostic 

    I toured the world twice—first in 2009–10, then in 2013–14. Only four years between the two trips, but it felt like a century internet-wise. Where I had to go wifi-hunting in 2009, in 2014 the web was absolutely everywhere—even in places with no mobile coverage, such as remote El Chaltén in Argentine Patagonia. Yet, I had the feeling this advent of a truly connected world wasn’t much cause for celebration. Indeed, I met many who struggled with an increasing need to disconnect.


    I’m so glad I’m taking a year off. Off from work, off from stress, off from modern life.

    …Do you have WhatsApp?

    Twenty-something European trekker in Northern Laos

    I heard this line from fellow travelers numerous times, be it in Laos, Costa Rica, or New Zealand. I actually said it myself! As absurd as it sounds, it’s a perfect illustration of our ambiguous relationship with the internet.

    Hyper-connected, hypo-social

    Has the internet become repulsive? It certainly has in the eyes of Italian artist Francesco Sambo. His HyperConnection series depicts a dark and creepy humanity transformed—or tortured—by technology. Strikingly, Sambo is a savvy internet user, showcasing his work through Behance and SoundCloud.

    HyperConnection, CC BY-NC-ND, Francesco Sambo.

    Artists are often the first to capture the collective unconscious. Antisocial network I and II by Congolese artist Maurice Mbikayi are skulls made out of keyboards. “The […] sculptures ask questions such as to whom such technological resources are made available and at what or whose expense? What are the consequences impacting on our people and environment?” states Mbikayi. Less morbid but equally shocking is the alienation depicted in the Strangers in the Light series by French photographer Catherine Balet. In a very visual way, she questions us: are our babies born in a mad world?

    Digital malaise

    Not only does hyper-connection alter our social relationships, it also makes us dumber, as pointed out as early as 2005. It threatens our health too. Twenty-first-century afflictions include digital fatigue, social media burnout, and compulsive internet use.

    Cures for these rising internet-related disorders include such radical solutions as rehab centers, or disconnection.

    “I was wrong”

    Most of the experiments in living offline have begun with the same cause and led to the same conclusion: the internet drives us crazy, but it brings us much more than we realize.

    “The internet isn’t an individual pursuit, it’s something we do with each other. The internet is where people are,” says journalist Paul Miller in his famous “I was wrong” piece on The Verge. When you disconnect, you’re not just cutting the link with a network of computers, you’re actually isolating yourself from the rest of society. Miller also emphasizes that there is no such thing as a divide between virtuality and reality. To me, the best example of this is the sharing economy of “virtual” communities such as Airbnb or Kickstarter, which is all about changing the “real” world.

    The cure is worse than the disease

    A lot of people today feel torn between two extremes. They aren’t against modern ways of interaction per se, but they won’t close their eyes to the excesses. The concern becomes even greater when the developing minds of children and teenagers are at stake. Many parents believe their digital-native offspring aren’t capable of using the internet moderately. You can’t blame them when you come across stats suggesting that 20 percent of young French people are addicted to their mobiles.

    Is disconnection the only alternative to unhealthy internet use? That cure is worse than the disease. There must be another way.
    Internet users are ripe for a new era, for the next step. A “more asserted, more mature” use, in the words of Thierry Crouzet, another famous disconnectee. Neither hyper- nor dis-connected: post-connected.

    I see the advent of post-connected users wary of addictive or invasive tools. Post-connected users are also well aware that a social network centered on the individual, rather than on the group, inevitably leads to narcissism. They see the internet as a means for more direct human relationships—not a thing that feeds on our continual attention.

    The internet pictured as monstrous should sadden us all, for it is one of mankind’s greatest inventions, one which has done so much for knowledge, education, and human rights. Besides, it isn’t addictive by nature; we have turned it into a drug.

    We are the drug dealers

    We love it if other people listen to us. Why else would you tweet?

    Psychologist James Pennebaker at the University of Texas at Austin interviewed by WSJ

    We, the web makers, have designed interactions which encourage selfishness and competition. We created tools that cause fatigue and stress. We practically invented hyper-connection.

    It is therefore our responsibility to design for post-connected users. If we’ve been powerful enough to create addiction, then we must surely have the resources to imagine post-connected user experiences. How? I’ll give you some leads in my next column.

    In the meantime, I would very much like to discuss this topic with you. Have you ever felt the urge to disconnect? Do you agree there is such a thing as post-connected users? Would you say addiction is the sign of a successful design? Your comments, criticism, and true stories are most welcome.

  • This week's sponsor: Proposify 

    Thanks to Proposify for sponsoring A List Apart this week! They know you don’t love writing proposals, so they built some tools to help your agency win more projects.

  • 10 Years Ago in ALA: Attribute Anarchy 

    WARNING: there are experimental elements and deeply controversial syntaxes ahead! Proceed at your own peril! You have been warned, and the website you save…could be your own. Ten years ago, right here in ALA, a wild-eyed hell-raiser going by “PPK” made a radical proposal: custom attributes in markup.

    In my opinion, using custom attributes to trigger the behavior layer … will help to separate behavior and structure and to write simple, efficient scripts.

    Besides, triggers can grow to be more complicated than just a “deploy behavior here” command. Sometimes you’ll want to add a value to the trigger.

    Well, okay. At the time it was radical. Here in the future, we have perfectly valid HTML5 data- attributes to contain all manner of information and act as behavioral triggers for our scripts.

    The front end of a website consists of three layers. XHTML forms the structural layer, which contains structural, semantic markup and the content of the site. To this layer you can add a presentation layer (CSS) and a behavior layer (JavaScript) to make your website more beautiful and user-friendly. These three layers should remain strictly separate. For instance, it should be possible to rewrite the entire presentation layer without touching either the structural or the behavior layer.

    All of this holds as true today as it did a decade ago. I know I’ve used data- attributes for both: to invoke custom behavior without touching the classes I use for styling—keeping my behavioral layers and presentation layers separate—and to pass relevant configuration information to said scripts. Picturefill 1’s data-srcset="1x source.jpg, 2x hd-source.jpg" comes to mind: we could define an attribute and write a script that dictates how the associated element should behave, all in one perfectly valid package.
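    To make that concrete, here is a minimal sketch of the pattern. Note that the data-hd-src attribute and the script are illustrative inventions, not Picturefill’s actual API: the element carries its configuration in a data- attribute, and a small script reads that attribute to decide how the element behaves.

      <img src="source.jpg" data-hd-src="hd-source.jpg" alt="A photo">

      <script>
        // For every element that opts in via the data- attribute,
        // swap in the high-resolution source on high-density screens.
        document.querySelectorAll("img[data-hd-src]").forEach(function (img) {
          if (window.devicePixelRatio > 1) {
            img.src = img.getAttribute("data-hd-src");
          }
        });
      </script>

    The styling classes never enter into it, so the presentation layer stays untouched.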

    The presence of the maxlength attribute alerts the script to check user input in this textarea, and it can find the maximum length of this specific textarea in the value of the attribute. As long as we’re at it we can port the “required” trigger to a custom attribute, too. required=“true”, for instance, though any value will do because this trigger just gives a general alert and doesn’t carry extra information.

    maxlength? required? These custom attributes that once so daringly flew in the face of conventional web standards are now part of the HTML5 standard.
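    For comparison, the native HTML5 equivalents of those once-custom triggers need no script at all; the browser enforces them:

      <textarea name="comment" maxlength="140" required></textarea>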

    Maybe it’s best that the web didn’t linger too long on our warning at the top of the page.


  • Rian van der Merwe on A View from a Different Valley: Managing and Making: It Doesn’t Have to Be One or the Other 

    We work in interesting times. We recognize and accept that if you want to move “up” at a company, you have to become a manager. So, to rise up in the ranks means doing less of the thing you’ll be more responsible for. For a design manager, this means more time in email and Evernote, less time in Sketch and Photoshop. That doesn’t make a lot of sense, but it’s the way it is.

    I’m not saying we don’t need managers—we desperately need good ones. But I started thinking about our blind acceptance of this cornerstone of modern business, and I wonder if there might be a way to create a system that values doing as much as managing—while also improving the skills of both groups.

    I moved into my first management role about six years ago. I can’t quite remember the motivation behind it, but it was some combination of company need and my desire to further my career (and a little bit of “I wonder if I can do it,” I guess). I also had the good fortune of having an excellent manager in one of my first jobs. It opened my eyes to the challenges and opportunities of management, and I wanted to contribute to that. It’s been a huge learning experience (I would say it was humbling, but hashtags have ruined that word forever) and I’m glad I did it.

    But a couple of years ago something about being a manager started to bother me. At first it was just a small voice in the back of my head: How can you be a good design manager if you don’t design any more? I tried to ignore it, but that voice grew louder over time, and eventually I had to deal with the question head on.

    The problem is, if you’re a manager, you have career opportunities. Manager turns into Senior Manager turns into Director turns into Senior Director, and so on. If you’re “just a designer,” the path is less clear. Sure, there are Senior and Lead roles out there, but they’re very rarely equated with real career progress. And that’s a problem. It forces some individual contributors to become managers even if they prefer to let someone else take the lead, and it creates a management culture that can become extremely out of touch with day-to-day design activities.

    So at the end of last year I made a change. Partly because I was tired, partly to test this theory, I stepped away from management and became “just a designer” again. At first it was weird. Where did all the meetings go? What is this flat surface that I get to sit and work at for most of the day? But then the weirdness subsided and it just got… enjoyable. I now spend most of my days designing products, talking about those designs, and helping teams implement them. I realized I’d fallen behind a little on design skills, so I went into a learning phase, and it was fun.

    What does this mean? Am I done with management? Is anyone who chooses a life of management doomed to heartache and despair? Absolutely not! If anything, going back to being an individual contributor has cemented my belief that good managers are as important as they are hard to find. And I certainly hope and plan to be in that role again in the future. Just not right now.

    So here’s how all of this comes together. I think we need a career system that encourages people to oscillate between individual contributor roles and manager roles. Maybe we provide “manager sabbaticals” where a manager becomes an individual contributor on a team for six to nine months. Maybe when a manager goes on vacation, an individual contributor takes on their role for a period of time (or for the duration of an entire project). I don’t know exactly what this looks like yet, but I think it’s important for us to figure it out.

    Being an individual contributor makes you a better manager because you understand the day-to-day frustrations of your team better, and it ensures that you keep your technical skills up to date. Being a manager makes you a better designer because you understand the needs of leadership teams better, which allows you to communicate more effectively. One feeds the other, so we shouldn’t be forced to “pick a track.”

    There are, of course, caveats. People shouldn’t be forced into management by the stigma that only management = career advancement. Some managers have no desire to become individual contributors again, and they shouldn’t have to. It’s about choice. If we encourage (and reward) people to have the freedom to explore different kinds of roles, it can only be a good thing for our industry—and, more importantly, for users.

  • Prioritizing Structure in Web Content Projects 

    Most web content projects have both structural and editorial aspects: for example, the information needs to be structured to support the new responsive design, and the current copy needs an update so it adheres to messaging and brand guidelines.

    I’m often asked which order is best: structure first and then rewrites, or the reverse? I never used to have a strong opinion, because it seemed to me like a chicken-and-egg problem. If the project starts with structure, I’m building content models off of bad information. If, instead, we start with rewrites, the writers don’t know what pieces we need to fill the models, because the models don’t exist yet. It felt like both directions were equally fraught, and I didn’t have any strong reasons to recommend one over the other.

    (Note that I’m not talking about starting without the editorial foundations of a project: understanding the business goals, establishing a message architecture, and knowing what the work is supposed to accomplish are core pieces of any project. I’m talking instead about rewriting poor content—editing and creating new copy based on those foundations.)

    Structure the content first, then do rewrites

    I recently finished up the second phase of a project that we organized to focus on structure first, and reasons to stick with this approach piled up in my lap like turkeys going to roost. I think that a structure-first approach does make sense for the majority of my projects, and here’s why.

    Content models are based on what content is for, not what it says

    On this particular project, the existing copy was horrible. Jargony, clichéd, and almost stubbornly unhelpful. How could I build a useful content model off of bad content?

    As I was working, I realized that the quality of the copy—even if it’s terrible—doesn’t really affect the models. I don’t build models off of the exact words in the content, but instead I build off of what purpose that copy serves. I don’t actually care if the restaurant description reads like teen poetry (sorry teens, sorry poets): it’s the restaurant description, and we need a short, teaser version and a long, full version. The banquet facilities should include well-lit photos taken within the last decade, and the captions should use the appropriate brand voice to describe how the rooms can be used. I don’t actually need to see decent photos or strong captions to build space for them into the models.
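    To illustrate, a content type for that restaurant content might be sketched something like this (the field names are hypothetical, not from the project):

      {
        "type": "restaurant",
        "fields": {
          "teaserDescription": { "kind": "text", "maxLength": 200 },
          "fullDescription": { "kind": "rich-text" },
          "banquetFacilities": [{
            "photo": { "kind": "image", "note": "well lit, taken within the last decade" },
            "caption": { "kind": "text", "note": "brand voice; how the room can be used" }
          }]
        }
      }

    Nothing in the model cares whether the copy that eventually fills those fields is good or bad; it only cares that each field’s purpose is served.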

    Structure decisions influence development and design direction

    A complex content model will help inform all kinds of site decisions, from CMS choice to data formatting. Developers can make better architecture decisions when they have a sense of what kinds of relationships exist between content types, and designers can organize a pattern library that matches the granularity of the content model. The earlier the structure work is done, the easier it is to build integrated design and development plans.

    Cramming bad content into strong models is an incredibly compelling argument for editorial intervention

    When projects are focused on the structural aspects of the work—we want to recombine content for different channels, or make a clever responsive experience using structured fields—people often start out convinced that the current content is decent enough to do the job. “Sure, it could probably use some spiffing up, but that’s just not in the cards right now.”

    I have never seen a more effective argument for the importance of editorial work than taking existing copy and seeing how inadequately it fills a model that we’ve already agreed meets our business goals.

    A model I built recently had a content type for holding gushy snippets about the business’s amazing customer service. When we went to move the existing content into the new models, the only copy we could find to migrate amounted to “free ice water” and “polite employees.” We had already agreed that telling the story of the brand experience was a key job of the new website, and seeing how thoroughly their current content failed to do that was the kick in the pants they needed to find budget for an editorial assist.

    Content models are easy to iterate

    Waterfall isn’t a great match for content development any more than it is for design and code, so editorial rewrites often trigger adjustments to the content models. I may split one large field into two smaller ones, or the writers will find a place where I left out an important piece of content altogether. Refining the models is an expected part of the process.

    On projects where editorial rewriting has been done first, though, I often end up with copy that, although now written beautifully, has no place in the model. In the course of structuring the information, we combined two pages into one, or are reusing the same description in three places, and so the editorial effort that went into fixing that copy is thrown out before it ever sees the light of day. That’s discouraging, and can lead to content creators feeling like I don’t value their time or their work.

    What works for you?

    It’s nice to have some strong reasoning behind my structure-first leaning, but of course my experiences may not translate to your project needs at all.

    If you’ve worked on a project that organized structure work first, what advantages or drawbacks did that process uncover? From a design and development perspective, are there pros or cons to either direction that aren’t covered here?

    If you’re a writer, does creating copy within a content model free or stifle your best work? If you prefer to start with editorial rewrites, what are the hidden benefits for the structural side of the project?

    I believe there are real benefits to taking a structure-first approach to organizing content activities, and I’d love to hear how and if that works for your projects as well.

  • The Specialist-Generalist Balance 

    A couple of years ago I hit a crisis point. There was a distinct divide between disciplines at my company; I had been labeled a “backend developer,” and it was starting to feel restrictive. The label wasn’t wrong: I spent most of my working hours writing server-side code. I enjoyed it, and I was good at it—but it wasn’t all that I could do.

    I’ve always considered myself to have a fairly generalist skill set, and slapping a label on me meant that I wasn’t allowed to work on anything other than that which fell under my remit as a backend developer. I felt typecast. And, unfortunately, it’s not a divide found solely at that company; it’s ubiquitous across the industry.

    So what’s the problem?

    Consider the following project scenario: someone in marketing has an idea. They discuss it with a designer, who mocks it up in Photoshop. The designer hands it over to the front-end developer, who complains that the designer hasn’t considered how difficult X is to implement without major JavaScript hacking. She finishes her work and tosses it over to the backend developer, who flips out because the front-end developer hasn’t given a single thought to how Y is going to work with the company’s CMS.

    Sound familiar?

    Creating narrow groups of specialists divides teams and restricts the way we work together. In his article “Development is Design,” Brad Frost describes this divide as a fence over which specialists throw their respective pieces for other specialists to catch and run with. It’s not uncommon to see individual teams of “specialists” all sitting apart from each other. The larger the company grows, the more specialist stations are added, each with their own tasks to complete, and mostly working in isolation—isolation that fosters unhealthy environments, restricts collaboration, and creates silos.

    The key is to find the right balance of specialists and generalists on your team—to use both to their advantage and to nurture healthy, productive environments. Ultimately, the question is: how can experts collaborate better together?

    Balancing your team

    The appeal of generalists

    In my formative years, I worked as a developer at a small software agency. There was complete freedom—absolute trust and no red tape. If one of the live sites had a bug, I had free rein to jump on the live server and peruse the logs, or check the configuration file for errors. I was not only allowed, but often required to do anything and everything. It was such a small company that there simply were no specialists.

    In this way, I picked up some rudimentary design skills; I learned to navigate my way around a server and a database; and I became fairly confident in developing on the client-side.

    This type of generalist approach to developing websites clearly has advantages: generalists learn how each component works with the others. They develop an understanding and appreciation of the whole process. They’re also good at just getting things done; there’s no waiting around for a specialist to do the database work for you.

    Generalists can apply their hands to most things, but they’re never going to master everything. Sometimes, having someone who roughly knows their way around something just isn’t enough.

    If you have a rock band made up of members who can play “Smoke On The Water” on every instrument, but you don’t have individuals who can belt out a Slash solo, or drum like John Bonham, then you’re never going to play to a sold-out house.

    Making the most of specialists

    Specialists are the experts in their field. They have spent their careers honing their skills on a given subject, and so it stands to reason that they’re going to be better at it than someone who doesn’t have their experience.

    But misusing them will result in barriers to strong team collaboration. For example, once, at a large software company, I was tasked with investigating why our team’s build had broken. I identified that the problem was a missing dependency reference in the build definition. So, easy fix, right? Just pull up the build definition and fix the dependencies—until I realized I didn’t have access. I couldn’t edit the build definition directly, and was told I needed a “configuration specialist” to implement the fix.

    What should have been a quick edit ended up taking hours while I waited for a specialist on another team to fix a problem that I knew how to solve. Unfortunately, this is a common scenario: rather than collaborating with the rest of the company, insular groups of specialists are given sole ownership over particular tasks.

    Specialists are best placed in roles where they work alongside other team members, rather than separately. As Henrik Kniberg from Spotify says, “It’s like a jazz band—although each musician is autonomous and plays their own instrument, they listen to each other.”

    Tear down the walls

    Removing obstacles to a high performance culture is how innovation happens throughout an organization.
    Adrian Cockcroft, Netflix

    Collaboration is the ultimate goal when forming a team, since it allows ideas to flow freely and encourages innovation. Creating specialist groups with total ownership and little to no cross-team communication will erect unnecessary barriers to collaboration. So how do we identify and remove these barriers?

    Open up bottlenecks

    I once worked with a company where the generalist development team outnumbered the specialists by fifteen to one. When developers required alterations to an automated build, they had to submit a ticket for a specialist to address. At one point, developers were submitting tickets faster than specialists could pick them up—resulting in a workflow bottleneck.

    If the developers had been able to manage the automated builds themselves, the bottleneck could have been avoided. The knowledge held in the configuration team could have been shared among the developers, creating a more generalist approach and eliminating a silo.

    To identify and open up your own bottlenecks, ask yourself:

    • What part of the process is the slowest, and why?
    • Are you relying on a single person to do all of your front-end development? Why?
    • Are there any other people in the team who have similar skills, or show an aptitude for learning those skills?
    • Do restrictive job titles prevent people from benefiting from each other’s skills and expertise?

    Encourage communication

    I’ve seen companies where software testers and developers were entirely independent teams. Testers were often only engaged at the end of the development process, when they received a test module based on the original requirements. But requirements can and do change during the development process—which, when teams operate completely independently, can lead to a lot of misunderstandings and lost productivity.

    Including the testers throughout the development process would have improved communication and performance. Instead, project releases suffered as a consequence of the teams’ separation.

    There are many ways to limit these kinds of divisions and foster communication on teams:

    • Try to arrange the workspace so project teams can sit together. If they can’t sit together, then make sure that they have at least one conversation about the project every day.
    • Remote working is a privilege, but it’s only possible if you make yourself available for discussions. A huge benefit of working in an office is being able to wander over to a colleague’s desk and just ask them something; remote working can make people seem unreachable. If you must work remotely, then make sure your colleagues feel comfortable contacting you.
    • Scrum is a great tool for encouraging communication, especially the daily stand-up, during which each team member describes what they’re working on and any problems they need help with.

    Fill in the skill gaps

    Does your team lack the skill necessary to complete a project or deliver it efficiently? Is the team unfamiliar with a particular approach or technology? Do they lack the confidence required to successfully overcome a problem? Use specialists as a means to train your staff:

    • Bring in a specialist from elsewhere in the company or, if the skills don’t exist internally, hire a consultant.
    • Don’t allow specialists to solve the problem in isolation. Give your team members the opportunity to work closely with them, to learn from their experience, and to begin building the skills they lack.
    • Encourage your specialists to conduct workshops. Workshops are also a nice way to build an interactive relationship between specialists and generalists; they open communication and foster a knowledge-sharing environment.

    Promote knowledge-sharing

    I once worked in a team that made a point of identifying silos. We were encouraged to work on the whole system and no single developer owned a specific area, though people had their preferences—I gravitated more towards client-side, while a colleague favored web services.

    When I admitted that I was unfamiliar with how the company’s internal web services functioned because I hadn’t worked on them for so long, my colleague and I decided to alternate between client-side and web-service work during the next sprint, thus sharing our knowledge.

    There are many ways to promote this kind of knowledge-sharing, which is fundamental to innovation and a collaborative culture.

    Brown-bags

    At my current company, we hold regular brown-bag lunches—everyone brings their own lunch to eat while a colleague gives an informal talk on a topic that they’re interested in. Brown-bags often spawn interesting discussions among participants: I can recall a few occasions where a technical feature or procedure has made its way into our formal processes following a fervent brown-bag.

    Scott Hanselman at Microsoft suggests that companies “host technical brown-bags at least twice a month and encourage everyone to present at least every year.” It’s a good opportunity to encourage a healthy debate among colleagues with whom you don’t necessarily collaborate on a regular basis.

    Guilds

    In his article “Scaling Agile at Spotify with Tribes, Squads, Chapters and Guilds” (PDF), Henrik Kniberg defines a guild as “a group of people that want to share knowledge, tools, code, and practices.” Spotify uses guilds to bridge gaps between teams across the organization. For example, a developer is likely to encounter a problem that another developer in the organization has already solved. Where’s the sense in duplicating work?

    Forming a guild allows common solutions to be communicated. It’s an opportunity to share experiences among teams.

    At my current company, each team has at least one tester; the testers also belong to a separate QA guild, which allows them to pool their knowledge. It has been a big success: testing procedures have been standardized across the teams, and technologies like Selenium have been introduced into the test stack.

    Internal open-source models

    Limit the perception of ownership by introducing internal open-source models. Give everyone the ability to contribute to your source code or designs by replacing ticket-based systems with a model similar to GitHub’s pull requests. If you’re competent and comfortable making a change to a codebase that sits within another team’s “area,” then why shouldn’t you? The other team can act as curators of the project by reviewing any code submissions and feedback—but guess what? Now you’re collaborating!

    Hack days

    Are the projects you’re working on feeling a little stale? Try entering a competition as a company, or use a hack day to get ideas moving again:

    • Arrange a company-wide game jam in the style of Ludum Dare, where the best game at the end of the hack day wins.
    • You don’t even need to restrict it to a day. Spotify holds regular hack weeks. You might even end up with something you can present to the business or a client.
    • The National Health Service holds annual hack days in the UK, which local digital professionals are encouraged to attend. They work to solve problems presented by NHS doctors and staff with whatever technology they have at hand. It’s incredibly empowering, and an amazing opportunity to give back to such an important organization.

    Hack days don’t have to be IT-related; encourage people outside of the development team to take part by following the NHS model. Hack days allow people to work with colleagues they wouldn’t normally work with, in a situation where fresh ideas are encouraged and innovation is rewarded.

    Go forth and collaborate

    Strong collaboration is crucial to building a successful team—and collaboration is fostered by breaking down barriers. Make good use of your specialists by integrating them with your generalists and positioning them to guide, teach, and instill passion in your teams.

  • A New Way to Listen 

    To develop empathy, you need to understand a person’s mind at a deeper level than is usual in your work. Since there are no telepathy servers yet, the only way to explore a person’s mind is to hear about it. Words are required. A person’s inner thoughts must be communicated, either spoken aloud or written down. You can achieve this in a number of formats and scenarios.

    Whether it is written or spoken, you are after the inner monologue. A recounting of a few example scenarios or experiences will work fine. You can get right down to the details, not of the events, but of what ran through this person’s mind during the events. In both written and spoken formats, you can ask questions about parts of the story that aren’t clear yet. Certainly, the person might forget some parts of her thinking process from these events, but she will remember the parts that are important to her.

    A person’s inner thought process consists of the whys and wherefores, decision-making and indecision, reactions and causation. These are the deeper currents that guide a person’s behavior. The surface level explanations of how things work, and the surface opinions and preferences, are created by the environment in which the person operates—like the waves on the surface of a lake. You’re not after these explanations, nor preferences or opinions. You’re interested in plumbing the depths to understand the currents flowing in her mind.

    To develop empathy, you’re also not after how a person would change the tools and services she uses if she had the chance. You’re not looking for feedback about your organization or your work. You’re not letting yourself ponder how something the person said can improve the way you achieve goals—yet. That comes later. For developing empathy, you are only interested in the driving forces of this other human. These driving forces are the evergreen things that have been driving humans for millennia. These underlying forces are what enable you to develop empathy with this person—to be able to think like her and see from her perspective.

    This chapter is about learning how to listen intently. While the word “listen” does not strictly apply to the written word, all the advice in this chapter applies to both spoken and written formats.

    This is a different kind of listening

    In everyday interactions with people, typical conversation does not go deep enough for empathy. You generally stay at the level where meanings are inferred and preferences and opinions are taken at face value. In some cultures, opinions aren’t even considered polite. So, in everyday conversation, there’s not a lot to go on to understand another person deeply. To develop empathy, you need additional listening skills. Primarily, you need to be able to keep your attention on what the person is saying and not get distracted by your own thoughts or responses. Additionally, you want to help the speaker feel safe enough to trust you with her inner thoughts and reasoning.

    There’s virtually no preparation you can do to understand this person in advance. There are no prewritten questions. You have no idea where a person will lead you in conversation—and this is good. You want to be shown new and interesting perspectives.

    You start off the listening session with a statement about an intention or purpose the person has been involved with. In formal listening sessions, you define a scope for the session—something broader than your organization’s offerings, defined by the purpose a person has. For example, if you’re an insurance company, you don’t define the scope to be about life insurance. Instead, you make it about life events, such as a death in the family.[1] Your initial statement would be something like, “I’m interested in everything that went through your mind during this recent event.” For listening sessions that are not premeditated, you can ask about something you notice about the person. If it’s a colleague, you can ask about what’s on her mind about a current project.

    Fall into the Mindset

    How often do you give the person you’re listening to your complete attention? According to Kevin Brooks, normally you listen for an opening in the conversation, so you can tell the other person what came up for you, or you listen for points in the other person’s story that you can match, add to, joke about, or trump.[2]

    It feels different to be a true listener. You fall into a different brain state—calmer, because you have no stray thoughts blooming in your head—but intensely alert to what the other person is saying. You lose track of time because you are actively following the point the other person has brought up, trying to comprehend what she means and if it relates to other points she’s brought up. Your brain may jump to conclusions, but you’re continually recognizing when that happens, letting it go, and getting a better grip on what the speaker really intends to communicate. You’re in “flow,” the state of mind described by Mihaly Csikszentmihalyi.[3] You are completely engaged in a demanding and satisfying pursuit.

    It’s a different frame of mind. You don’t want to be this focused on someone else all the time—you have to do your own thinking and problem-solving most of the time. But when needed, when helpful, you can drop into this focused mindset.

    Explore the Intent

    Developing empathy is about understanding another human, not understanding how well something or someone at work supports that person. Set aside this second goal for a bit later. For the time being, shift your approach to include a farther horizon—one that examines the larger purposes a person is attempting to fulfill.

    The key is to find out the point of what the person is doing—why, the reason, not the steps of how she does it. Not the tools or service she uses. You’re after the direction she is heading and all her inner reasoning about that direction. You’re after overarching intentions, internal debates, indecision, emotion, trade-offs, etc. You want the deeper level processes going through her mind and heart—the things that all humans think and feel, no matter if they are old or young, or you are conducting the session 500 years ago or 500 years in the future. These are the details that will allow you to develop empathy. Collecting a shallow layer of explanation or preferences does not reveal much about how this person reasons.

    To remind the speaker that you’re interested in answers explaining what is going on in her mind and heart, ask questions like:

    • “What were you thinking when you made that decision?”
    • “Tell me your thinking there.”
    • “What was going through your head?”
    • “What was on your mind?”

    If you suspect there might be an emotional reaction involved in her story that she hasn’t mentioned yet, ask: “How did you react?” Some people ask, “How did that make you feel?” but this question can introduce some awkwardness because it can sound too much like a therapist’s question. Additionally, some people or industries eschew talking about “feelings.” Choose the word that seems appropriate for your context.

    Avoid asking about any solutions. A listening session is not the place for contemplating how to change something. Don’t ask, “Can you think of any suggestions…?” If the speaker brings up your organization’s offering, that’s fine—because it’s her session. It’s her time to speak, not yours. But don’t expand upon this vein. When she is finished, guide her back to describing her thinking during a past occurrence.

    Make Sure You Understand

    It is all too easy to make assumptions about what the speaker means. You have your own life experience and point of view that constantly influence the way you make sense of things. You have to consciously check yourself and be ready to automatically ask the speaker:

    • “What do you mean?”
    • “I don’t understand. Can you explain your thinking to me?”

    Keep in mind that you don’t have the speaker’s context or life experience. You can’t know what something means to her, so ask. It takes practice to recognize when your understanding is based on something personal or on a convention.

    Sometimes, you will probe for more detail about the scene, but there’s nothing more to say, really. These kinds of dead-ends will come up, but they’re not a problem. Go ahead and ask this kind of “please explain what you mean” question a lot, because more often than not, this kind of question results in some rich detail.

    You don’t need to hurry through a listening session. There’s no time limit. It ends when you think you’ve gotten the deeper reasoning behind each of the things the speaker said. All the things the speaker thinks are important will surface. You don’t need to “move the conversation along.” Instead, your purpose is to dwell on the details. Find out as much as you can about what’s being said. Ignore the impulse to change topics. That’s not your job.

    Alternatively, you might suspect the speaker is heading in a certain direction in the conversation, and that direction is something you’re excited about and have been hoping she’d bring up. If you keep your mind open, if you ask her to explain herself, you might be surprised that she says something different than what you expected.

    It’s often hard to concede you don’t understand something basic. You’ve spent your life proving yourself to your teachers, parents, coworkers, friends, and bosses. You might also be used to an interviewer portraying the role of an expert with brilliant questions. An empathy listening session is completely different. You don’t want to overshadow the speaker at all. You want to do the opposite: demonstrate to her that you don’t know anything about her thinking. It’s her mind, and you’re the tourist.

    Sometimes it’s not a matter of assumptions, but that the speaker has said something truly mystifying. Don’t skip over it. Reflect the mystifying phrase back to the speaker. Ask until it becomes clearer. Don’t stop at your assumption. Teach yourself to recognize when you’ve imagined what the speaker meant. Train a reflexive response in yourself to dig deeper. You can’t really stop yourself from having assumptions, but you can identify them and then remember to explore further.

    Another way to explain this is that you don’t want to read between the lines. Your keen sense of intuition about what the speaker is saying will tempt you to leave certain things unexplored. Resist doing that. Instead, practice recognizing when the speaker has alluded to something with a common, casual phrase, such as “I knew he meant business” or “I looked them up.” You have a notion what these common phrases mean, but that’s just where you will run into trouble.

    If you don’t ask about the phrases, you will miss the actual thinking that was going through that person’s mind when it occurred. Your preconceived notions are good road signs indicating that you should dwell on the phrase a little longer, to let the speaker explain her thought process behind it.

    Footnotes

    • 1. If you’re a researcher, it helps to know that listening sessions are a form of generative research that is person-focused rather than solution-focused. Thus, it’s easy to remember to keep them from dwelling on how solutions might work for people.
    • 2. This was my epiphany from the UX Week 2008 workshop (PDF) by Kevin Brooks, PhD. Sadly, Kevin passed away from pancreatic cancer in 2014.
    • 3. Mihaly Csikszentmihalyi, widely referenced psychologist and author, Flow: The Psychology of Optimal Experience, New York: Harper Collins, 1991, and Finding Flow: The Psychology of Engagement with Everyday Life, New York: Harper Collins, 1997, plus four other book titles on Flow. Also see his TED Talk and YouTube presentations.
  • On Our Radar: In the Key of F 

    Welcome to a new kind of blog post from the A List Apart staff, where we share stories and ideas that caught our eye. This week was all about Fs—no, not the kind you mutter every time your boss says “put it above the fold.” Our staff’s been buzzing about Flipboard, feminism, and facilitating great events. Also, feelings—so many feelings.

    Here’s what’s on our radar:

    Tim Murtaugh, technical director:
    Flipboard has finally released a web-based version of their platform, and it’s generating some interesting conversations. Their choice to use canvas to render the entire site gives them total control over the user experience, but leaves the site completely inaccessible. Faruk Ateş expressed dismay, while Allen Tan offered another perspective.

    Lisa Maria Martin, issues editor:
    “Which Women in Tech?” by Nicole Sanchez was a great read—and it made me realize, quite shamefully, how little I’ve thought about my own role in building truly inclusive events. This was a clarion call to make sure my diversity work is intersectional, and to remember to “give up the mic wherever possible.”

    Jeffrey Zeldman, founder and publisher:
    “How to Organize a Conference: 18 Amazingly Useful Tips” by Louis Rosenfeld is exactly what you think it is. From “validate the need” to “programming is curation and design,” it’s 18 pieces of golden advice solicited from the founders of 18 great web, design, and UX conferences.

    A gif about our feelings:

    A gif of a cat looking unamused by a blizzard outside.
    We’re excited to share these links with you!
    Have we mentioned that many of our staff members are in Boston right now!

    What about you?
    What stories are drawing your attention? Got any posts, pictures, or code demos you want to share? (We love sharing!) Tweet us what’s on your radar—and what should be on ours.

  • Ask Dr. Web with Jeffrey Zeldman: The Love You Make 

    In our last installment, we talked about what to do when your work satisfies the client but doesn’t accurately reflect your abilities—e.g., how do you build a portfolio out of choices you wouldn’t have made? This time out, we’ll discuss choices you can (and should) make for yourself, free of any client-imposed restrictions.

    As an employer, how important do you feel open source contributions are in a modern portfolio?
    Dip My Toe
    In your opinion, what is best way to present your work online today? Sites like Dribbble? or custom portfolio? or something else?
    All A-Tizzy


    Dear Dip and Tizzy,

    The best thing any web designer or developer can do is learn to write and speak. The heyday of blogging may be over, but that’s no reason not to create a personal site where you share your best ideas (and occasionally, your biggest frustrations) as a professional.

    Design and development use different parts of the mind than verbal expression does. Spending day after day in Photoshop or Coda can get you into a wonderfully productive and inspired groove. But growth comes when you step away from that familiar, comforting environment where you already know you shine, and practice articulating ideas, arguments, and rationales about what you do and why.

    Daring to speak—unblocking your inner voice—can be scary, but it’s worth it. Only by writing my thoughts and speaking publicly do I actually understand what I’m thinking; only by sharing those verbalized thoughts with others can I begin to see their broader implications. The Web Standards Project would not have existed—and the web would be a very different place—if those of us who co-founded it hadn’t spent almost as much time articulating our ideas about the web as we did creating websites. And the same is true for everyone who works to improve our medium by sharing their ideas today.

    By daring to publicly speak and write, you will become better at selling your ideas to tough clients, better at evangelizing methodologies or causes to your peers, better at thinking and therefore at doing, and better at those all-important job interviews. I’m a sucker for design talent, but I’ve never hired anyone, however gifted, if they couldn’t talk, couldn’t argue, couldn’t sell, couldn’t put their passion into words a client could understand.

    I’ve also never hired a designer or developer who didn’t have a meaningful, living web presence of some kind—be it a blog or something more unexpected. I hired Jason Santa Maria in 2004 because of a blog post he wrote, and over a decade later, we still work together on meaningful projects like A Book Apart (the book arm of the magazine you’re now reading). Don’t get me wrong: communities like Dribbble are fantastic for sharing glimpses of your work, learning from others, and building a following. If you’re an illustrator, a Dribbble or Behance page and a personal portfolio will suffice. If you’re an exceptionally gifted illustrator, one whose work leaps off the screen, I might not even need that personal portfolio—Dribbble or Behance will be enough.

    But if you design, develop, or project manage websites and applications, or do other UX, strategy, or editorial work for the web, you need a voice—and a blog is a terrific place to start building one. (And once you’re comfortable writing on your blog, start reaching out to industry publications.)

    The other thing that really helps you stand apart from your peers is contributing to someone else’s project, or starting your own. If you’re a developer, I should be able to find you on GitHub; if you’re a designer, start or contribute to a project like Fonts In Use.

    You don’t have to believe in karma to know that, in this field at least, the more you put out, the more you get back. Even if you have the misfortune to work for a series of less-than-stellar clients, or at a shop or company that doesn’t promote your best work, you must never let those circumstances define you. As a designer, you are responsible for what you put out into the world. If your job sucks, design something for yourself; if everything you build is hidden behind corporate firewalls, contribute code to an open source project, link to it from a personal site, and write about it on your blog. That’s how others will discover and appreciate you. Rich Ziade’s studio designed million-dollar projects for banking institutions, and I never saw or heard of one of them. (Secrecy comes with that turf.) But I met Rich, and became his friend and fan, after he and his team released Readability, an app dedicated to un-sucking the online reading experience.

    Don’t wait for someone to offer you a dream job or a dream project. Shake what your momma gave you: create something, pay it forward.

    How do I know this advice is good for your career and our community? A List Apart began as a side-project of mine, back when I was designing less-than-stellar websites for clients I couldn’t sell good work to. And the rest, I believe, you know.

    Hope this helps, and see you again soon in a future installment of “Ask Dr. Web.”

    Have a question about professional development, industry culture, or the state of the web? This is your chance to pick Jeffrey Zeldman’s brain. Send your question to Dr. Web via Twitter (#askdrweb), Facebook, or email.

  • A List Apart: On Air 

    We keep busy here at A List Apart: publishing articles, columns, and blog posts; sharing our forays into open source; and coming up with features like email notifications for new content.

    Something was still missing, though.

    We’ve always prided ourselves more on our focus on the designer and developer community than on our technical acumen. Anyone can help make a better website, but we consider it a unique privilege—and responsibility—to be able to help developers become better developers. So, we’re trying something brand new: community-focused events where our readers can get to know A List Apart’s authors and staff.

    Our goal is to bring the feel of a local web development meetup to the web. We want to combine the best and brightest voices from ALA’s past with new voices and new perspectives on designing and building for the web. We want to discuss the events of the day—or hour. We want Q&A sessions where you ask us the tough questions; we want to know what questions don’t yet have answers, so we can figure them out with you.

    If this piques your interest, well, I have some good news: we’re kicking things off right away. We’ve assembled some of the best minds in web performance to talk about every facet of building smaller, faster websites—from getting buy-in at every level of your organization to the steps we can incorporate into our day-to-day work.

    Designing for Performance: Can We Have it All?

    Thursday, February 26, 2015
    1:00 – 2:00 p.m. EST

    Building a faster, leaner web means contending with a number of challenges, not all of which are strictly technical. “But,” your CEO argues, “our massive, high-resolution images are worth the wait.” How do we manage those kinds of expectations? How do we get our teams—and our bosses—as excited about building performant websites as we are? Most important, though: where do we get started, and how?

    Our panelists are here to answer all these questions for you, and then some. Get the details and register now.

    Lara Hogan

    Lara Hogan champions performance as a part of the overall user experience, and is the author of Designing for Performance (O’Reilly, 2014). She believes in striking a balance between aesthetics and speed, and in building performance into company culture. Lara speaks and tweets, and is currently senior engineering manager of performance at Etsy.

    Scott Jehl

    Scott Jehl releases projects on GitHub that focus on accessible, sustainable, and performance-oriented practices for cross-device development. He speaks at conferences around the world and recently authored Responsible Responsive Design (A Book Apart, 2014). Scott is a web designer and developer at Filament Group; his clients include the Boston Globe, LEGO, Global News, and eBay. He tweets early and often.

    Yesenia Perez-Cruz

    Yesenia Perez-Cruz is a Philadelphia-based designer working at Intuitive Company. She is also a speaker at international conferences, in demand for talks specializing in balancing performance with design, cross-team collaboration, and how to be flexible with your design process. She honed her design and user experience skills while working at Happy Cog for clients like Zappos, MTV, and Jose Garces. She also acts as an acquisitions scout for A List Apart.

    And as your humble moderator, I’ll be doing my best to stay out of their way.

    More events are in development, and we’ll be sure to keep you updated. We’re looking forward to talking with all of you.

  • Lyza Danger Gardner on Building the Web Everywhere: What Will Save Us from the Dark Side of CSS Pre-Processors? 

    Writing CSS by hand for a site or app of any considerable size seems quaint these days, in the way that shaping a piece of wood with an adze seems quaint. Admirable, perhaps, but even if it gives you a tangible connection to the exact outcome, the vestigial quirks, limitations, and tedium of that workflow make it feel archaic.

    Until a few years ago, this direct method was our only real option. We managed CSS by hand, and it got complicated and crazypants. So when pre-processors started showing up—Sass, LESS, Stylus—we clutched at them, giddy and grateful like sleep-deprived parents.

    Pre-processors to the rescue!

    Their fans know that pre-processors do a lot of stuff. Variables, functions, nesting, and calculations are part of the pre-processor assortment, but there’s often also support for concatenation, minification, source maps, and output formatting. Sass feels like an authoring tool, framework, configuration manager, and transform and build tool in one. They’ve become very popular—especially Sass, the juggernaut. Huzzah! Such power!
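
    To make that concrete, here’s a minimal SCSS sketch of those features (the selector and variable names are illustrative, not from any real project):

    $brand: #0074d9;
    $gutter: 20px;

    @mixin rounded($radius: 4px) {
      border-radius: $radius;
    }

    .nav {
      padding: $gutter / 2;             // calculation
      background: darken($brand, 10%);  // built-in color function
      a {                               // nesting compiles to ".nav a"
        color: $brand;
        @include rounded;               // mixin
      }
    }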

    Pre-processors… to the rescue?

    Yet as with other powerful things, Sass has a dark side. Its potential malevolence tends to manifest when it’s wielded without attention or deep understanding. In the recent article A Vision for our Sass, Felicity Evans points out some of the ways unmindful use of Sass can result in regrettable CSS.

    Pre-processors have a way of keeping us at arm’s length from the CSS we’re building. They place a cognitive burden on us: keeping up with what’s evolving in CSS itself, along with the tricks specific to our pre-processor. Sure, if we’re intrepid, we can keep on top of what comes out the other end. But not everyone does this, and it shows.

    Overzealous use of the @extend feature in Sass can create bloated stylesheet files bobbing in a swamp of repeated rules. Immoderate nesting can lead to abstruse, overlong, and unintentionally overspecific selectors. And just because you can use a Sass framework like Compass to easily whip up something artful and shiny, that doesn’t guarantee you have any sort of grip on how your generated CSS actually works.
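
    As a sketch of both pitfalls (illustrative selectors only), consider what a small bit of SCSS compiles to:

    // Every selector that extends %btn gets appended to every rule
    // where the placeholder appears (manageable here, a swamp at scale).
    %btn { padding: 10px 16px; }
    .buy-button  { @extend %btn; }
    .cancel-link { @extend %btn; }

    // Deep nesting compiles to long, unintentionally specific selectors.
    .page { .sidebar { .widget { a { color: blue; } } } }

    // Generated CSS:
    // .buy-button, .cancel-link { padding: 10px 16px; }
    // .page .sidebar .widget a { color: blue; }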

    Pre-processors… FTL?

    Diabolical output is one risk, and yet there are additional ways pre-processors can trip us up.

    Working with a pre-processor means writing source in its Domain-Specific Language (DSL). (You could also pen your source using entirely vanilla CSS, but that would be pretty pointless, as the power of pre-processing comes from operations on variables, mixins, and other features written in their particular syntax.) You feed this source to the pre-processor and out comes CSS ready for browsers. You couldn’t take your source and use it in a browser. It’s not ready yet.

    That means that the source is not entirely portable. So choosing a particular pre-processor may be a long-term commitment—Sass and other pre-processors can create a certain amount of lock-in.

    On a conceptual level, the breadth of pre-processors’ scope is significant enough that it can insinuate itself into the way we think and design. In that sense, it’s not a tool but a system. And this can get under the skin of people—especially devs—who thrive on separation of concerns.

    This is beginning to sound like an argument to ditch Sass and its brethren and return to the homespun world of handcrafted CSS. And yet that is a false dichotomy: pre-processors or nothing. There are other tools for managing your CSS, and I’m especially hopeful about a new(ish) category of tools called post-processors.

    Post-processors to the rescue!

    In contrast to pre-processors’ distinct syntaxes, post-processors typically feed on actual CSS. They can act like polyfills, letting you write to-spec CSS that will work someday and transforming it into something that will work in browsers today. Ideal CSS in, real-life CSS out.

    You may already be using a post-processor alongside your pre-processor without being aware of it. The popular autoprefixer tool is in fact a post-processor, taking CSS and adding appropriate vendor prefixes to make it work in as many browsers as possible.
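
    Here’s a before-and-after sketch of the kind of transformation autoprefixer performs; the exact prefixes emitted depend on its configured browser support:

    /* Input: to-spec CSS */
    .container {
      display: flex;
    }

    /* Output: the same rule with vendor-prefixed fallbacks */
    .container {
      display: -webkit-box;
      display: -webkit-flex;
      display: -ms-flexbox;
      display: flex;
    }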

    Many post-processors—especially those written with a plugin approach—do only one specific thing. One might polyfill for rem units. Another might autogenerate inline image data. You can pick and choose the modular plugins you need to transform your CSS.
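
    For instance, a rem-polyfilling plugin of the kind described above might transform the following (assuming a 16px root font size; pixrem is one real-world example of such a plugin):

    /* Input */
    .title { font-size: 1.5rem; }

    /* Output: a px fallback precedes the rem value for browsers
       that don't understand rem units */
    .title { font-size: 24px; font-size: 1.5rem; }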

    Post-processors typically edge out their pre- brethren in build-time speediness.

    And because they can be used as modular chunks, they can serve as a balm for the aforementioned separation-of-concerns violations.

    With this kind of authoring, we have a built-in necessity to stay current on the way new specs express themselves in actual CSS syntax, and that means post-processors’ transformations aren’t as inscrutable. Plugin authoring, too, is pegged to the same specs. Everyone is marching to the same, standards-driven beat.

    This is starting to feel like outright post-processor boosterism, isn’t it?

    Except.

    Post-processors… to the rescue?

    The case for post-processors isn’t entirely coherent. There isn’t even any consensus about the definition. The way I’m explaining post-processors is my own interpretation. Don’t take it as gospel.

    Real-world implementations don’t help clear the picture, either. Several modules written using postcss, a JavaScript framework for post-processors, involve custom syntax that doesn’t align with the definition I’m outlining here (valid CSS in, valid CSS out). By my definition, myth.io would be a post-processor, yet it’s described by its maintainers as a pre-processor. Maybe post-processors aren’t even a thing, or only exist in my fevered, idealistic imagination.

    Post-processors may hold more appeal for certain members of the web-building audience. Shaving some milliseconds off build time has more clout with some than with others. Modularity is one thing, but pre-processors can do so many things. It’s hard to wean ourselves from something that serves us so well.

    Taking a path paved with lean, modular post-processing plugins involves sacrifices. No more nesting. Instead of an endless horizon of mixin possibilities, you may be bound to CSS spec realities, like calc or CSS variables. One promising framework for rolling out post-processing plugins is postcss, but it’s young yet and its documentation is in a correspondingly awkward adolescent phase.

    Knowing your craft to the rescue!

    Remember that thing I said earlier about false dichotomies? Gotta remember that, because pre- and post-processors aren’t mutually exclusive.

    We happily use both in my office. Some of our more dev-y designers have taken a shine to the post-processing philosophy, while other designers remain pleased with the all-in-one, intuitive oomph Sass gives them. Both are right.

    Though each camp might have a different approach to tools, the important commonality they share is a deep understanding of the CSS that comes out the other side. Neither set of tools is a crutch for ignorance. Know your craft.

    Although there are different ways to get there, a thoughtful understanding of CSS is a prerequisite for continued success in building great things for the web. Whether you’re one to meditatively chip with an adze along a raw CSS stylesheet or you prefer to run it through a high-tech sawmill, you’re always better off understanding where you’re starting from and where you’re trying to go.

  • Style Guide Generator Roundup 

    Style guides are living documents of code that detail all the various elements and coded modules of your application. The term “pattern library” is often used when talking about these types of guides—Brad Frost has a great piece on differentiating between the various types of style guides. To learn more about creating style guides, see either Creating Style Guides here on A List Apart or Front-end Style Guides by Anna Debenham.

    In the past I’ve been quite old-fashioned and done the HTML for style guides by hand, but with a new project I wanted to try out a style guide generator. Style guide generators bring at least some automation to the creation of your guide, so that you don’t have to do everything by hand. They may actually create the guide itself using a process built on a task runner, Node, or a particular language. My client requested something straightforward, something with some type of auto-generation. So, off into the land of generators I went.

    Because they generate portions of the guide itself, maintenance is hopefully easier. Generators aren’t a silver bullet (they’re not completely automated), but even just getting part of the process done for you can make life easier down the road.

    There are a lot of different kinds of generators out there. Many of them are based on a workflow or a particular tool, and some have been ported over to several different workflows. So I’m breaking these down by workflow, but you’ll see several tools mentioned in more than one category.

    Node.js

    There are several generators that run off of Node, so if you like Node, you have some choices. KSS has a Node version, and there’s also Pattern Primer, StyleDocco, and StyleDown.

    Two of these, KSS for Node and Pattern Primer, are ports of generators that originally ran on something other than Node, while StyleDocco and StyleDown were both written in Node.

    KSS, StyleDocco, and StyleDown all use a combination of comments and markdown placed in your CSS files (or whatever files generate your CSS, such as Sass files). For example:

    /**
     * Button:
     * `.btn` - the main button style
     *
     *     @example
     *     button.btn Button
     */

    This is using StyleDown for the generation, just as an example—they all vary a bit in what you put into the files.

    I found in my research that all of these were easy to get up and running, the most complicated being Pattern Primer, because you actually break up the HTML into partials, whereas the others use just the comments to guide what markup will be generated in the final file.

    Some of these, such as StyleDown and StyleDocco, come with some nice out-of-the-box styling for the guide itself, so you can get something looking quite nice in very little time.

    Gulp and Grunt

    There are several style guide generators that can be used with the Gulp or Grunt task runners. Some of these are the very same ones mentioned above in the Node section, just packaged to work with the particular task runner.

    So here’s a list of the ones I’ve already discussed, but just made for Gulp or Grunt:

    But there are some generators that are unique to Gulp or Grunt and one of them looks quite amazing. Fabricator creates a great looking UI toolkit, so I recommend checking it out if you’re looking for a robust solution and you use Gulp.

    Grunt also has several generators that are not on Gulp, so here are some more to check out for that workflow:

    Using a task runner, much like using Node, can be great for a quick, lightweight solution. Many of these tools could be a great fit for certain projects, especially if you’re in client services and will be handing off the code at the end; these (or the Node solutions) would be fairly easy for the client to get up and running as well.

    Ruby

    The generators using Ruby (or PHP, for that matter) are often quite robust, generating something more akin to Fabricator, with navigation, nice styling, and flexibility. If you are already working on an app in Ruby, they make even more sense, but the style guides can be done as standalone projects as well.

    Hologram was developed by Trulia and has become a great solution for generating guides. It relies on YAML and markdown in comments in your CSS to generate a fantastic style guide. It has a great templating system, with some basic styles and navigation, that makes the generated guide easy to use.
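
    Hologram reads documentation from comments in your stylesheets; here’s a rough sketch of what one such comment looks like (treat the exact keys and fence name as an approximation of Hologram’s conventions rather than gospel):

    /*doc
    ---
    title: Button
    name: button
    category: Base
    ---

    A standard button.

    ```html_example
    <a class="btn" href="#">Click me</a>
    ```
    */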

    KSS, mentioned above in the Node section, uses the views in your framework of choice to generate the guide. Because it is a bit more wrapped up in the framework of the application itself, it may not be as quick to get up and running, but once you’ve done the work, it could be a great add-on to your current application and help you keep your UI in order.

    The Living Style Guide is another tool that runs off a Ruby gem, but it uses just Sass and markdown to compile everything together. You write markdown partials for each module, and the gem converts those templates to HTML and creates the guide. Although it’s distributed as a gem, it doesn’t have to be used with a Ruby project; it can run in other projects as well, and you can also integrate it into Rails.

    There is a version of The Living Style Guide for Gulp as well, if you would prefer to try that instead of the Ruby version.

    PHP

    Unlike many of the other generators already discussed, for these you’ll need to have a local server running to put it all together and see it in action. But some of these solutions have been around for a while and have been used in great projects.

    Barebones is more than just a style guide generator, but it does include one. Using partials and includes, the modules and components can be included on the style guide page.

    Pattern Primer runs in a similar way to Barebones. The patterns are partials of HTML that are then compiled into the main index page. One nice feature of Pattern Primer is that the patterns are already in the GitHub repo, so you have a great starting point for all the various elements you would likely use in your site or application. As you may have already noticed, this concept has been ported over to several other workflows.

    Pattern Lab generates a static site of the various patterns and modules used in a site and it is quite robust. It uses mustache templates, with JavaScript for viewing and PHP for building, but once it’s built, it’s static, which is really nice.

    Style Guide Boilerplate is another PHP-based generator, but it runs on a server, much like Pattern Primer. There are some initial patterns to help you see how everything works and you can go from there creating your own snippets to include in your final guide.

    That’s a lot of different choices for how to generate your style guide. I’m sure there are even more possibilities, so if I missed a tool that you really like, please let us know in the comments. You can also check out styleguides.io for even more information on style guides, and share additional generators or tools you use when creating style guides there, via GitHub or on their form if you aren’t comfortable making GitHub pull requests.

    If you are just getting started with style guides, then you should definitely check out styleguides.io. There’s a lot of information there, but the resources with stars are a good place to start. It can seem daunting with all the information there is out there, but just getting started and creating your first guide will show you how wonderful they can be in a workflow—hopefully one of these tools will make generating your first guide just a bit easier.

  • Reframing Accessibility for the Web 

    We need to change the way we talk about accessibility. Most people are taught that “web accessibility means that people with disabilities can use the Web”—the official definition from the W3C. This is wrong. Web accessibility means that people can use the web.

    Not “people with disabilities.” Not “blind people and deaf people.” Not “people who have cognitive disabilities” or “men who are color blind” or “people with motor disabilities.” People. People who are using the web. People who are using what you’re building.

    We need to stop invoking the internal stereotypes we have about who is disabled.

    We need to recognize that it is none of our business why our audience is using the web the way they’re using it.

    We can reframe accessibility in terms of what we provide, not what other people lack. When we treat all of our users as whole people, regardless of their abilities, then we are able to approach accessibility as just another solvable—valuable—technical challenge to overcome.

    Who are these “people with disabilities,” anyway?

    Let’s talk about stereotypes and prejudices.

    First, we need to acknowledge that most of us have a bias blind spot. Even if we think we don’t have stereotypes about “people with disabilities,” there’s a good chance that we do and just don’t realize it. (If you don’t, great for you! Play along anyway.)

    By definition, a stereotype is “a widely held but fixed and oversimplified image or idea of a particular type of person or thing.” Negative stereotypes of people with disabilities are common, and are a symptom of ableism: the belief that able-bodied people are the norm, and that people with disabilities should either strive to “be normal” or keep their distance.

    A blogger who goes by the name “Bookwormblues” describes this phenomenon in the essay, “I am not broken: the language of disability.”

    We seem to live in a world where the able-bodied among us are considered normal, and everyone else must strive to attain that level. That thinking floods the books we read, the way we view others, how we talk to each other, and the words we use. This mindset sets a ridiculous bar for people who, for whatever reason, might require an atypical way to get from point A to point B. The thing about ableism is that it’s everywhere, and it’s incredibly common, and we don’t even realize it.

    Let’s look at the language we use to describe people who need accessible websites.

    A number of influential websites, including WebAIM and Wikipedia, begin their discussions of accessibility issues with a categorized list of disabilities we need to develop for: visual, auditory, physical, speech, cognitive, and neurological. Granted, if you have no experience with what accessibility means or who it affects, this is a good starting point. Still, we need to recognize that we start our accessibility conversations by categorizing the ways “they” are not “us.”

    When we hold a prejudicial view toward a group we are not members of, we tend to see “them” as more alike than the groups we are in. In other words, we see the groups that “we” belong to as full of diverse individuals, but “they” are all alike. This is called out-group homogeneity.

    We have simplified the out-group of “people with disabilities” in terms of long-term, severe, life-altering circumstances. We tend to think of them as having obvious outward cues: the white cane and glasses of the blind, the wheelchair for the motor-impaired, the altered speech and huge hearing aids for the deaf. The long-suffering family members standing by, caretaking. The “superhuman” inspiration of a successful person with a disability.

    This is wrong, too.

    I can think of at least 26 ways to define a need for accessibility, many of which are invisible, temporary, or off the usual radar of accessibility scenarios.

    WebAIM discusses a phenomenon where a large group of people are asked if any of them have a visual disability. Very few audience members say that they do, even when a sizable number of them wear glasses or contacts. Despite the fact that they (and I) wear assistive technology to see, we don’t see ourselves as “people with a disability.”

    It may be more effective to see our differing levels of ability as a spectrum instead of a setting. There are people who will always self-identify as having a disability. There are other people who will never see themselves as disabled, despite needing accessibility technology such as glasses, canes, or track balls. In between, there are infinite combinations of needs, some of which last for mere moments, and others which last for the life of the person.

    If we make the choice to consider everyone “a person on the ability spectrum” instead of separating the “able-bodied” from the “disabled,” we stop treating people with different abilities as members of an out-group, and we start treating them as part of our own diverse in-group.

    Note: If you ask a large group of people with different kinds of disabilities what they want to be called, you will get a large number of answers. Some prefer “people with disabilities,” some prefer “disabled people,” some prefer that their specific situation be called out, and some would rather not mention it at all. For this essay, I chose “people with disabilities” because it’s what my friends call themselves. As always, you should ask people what they prefer, and respect them by using it.

    Why do we fixate on justifying the existence of people with disabilities?

    Tell me if you’ve heard this one before. A big web design change is about to go through, and someone on your team has just discovered a bug that will cause problems with accessibility. One of the decision-makers in charge of the budget asks, “Well, how important is it? I mean, how many blind people do we have using the site, anyway?”

    I’ve had many otherwise totally reasonable people justify not spending money to repair an accessibility issue because there aren’t enough of “them” to make it worthwhile. Strangely, these same people have no issue with spending more money on “expert users” or making their applications “feature-rich,” as long as it’s done for the “primary customer”—never realizing that a subset of those primary customers are people with disabilities. As uncomfortable as it makes us feel to admit it, when we make the decision not to support people with disabilities just because it’s expensive or hard, we’re being ableist.

    Our customers, our users, are all people. They all have the same customer needs—whatever those customer needs are. They all have money that spends the same way. They are all just like you, except in the ways that they are not. They all deserve equal amounts of our respect. The only difference between these groups of people is the attitude we take when serving them.

    Frankly, it’s none of our damn business why someone wants your website to work without a keyboard or a mouse, or on a screen reader or a Braille output. When you walk into the local grocery store, nobody greets you at the door and tells you that the store won’t work for you because you’re wearing glasses. Neither you nor your business should expect your audience to step up and voluntarily tell you what accessible technology they use, or why, or anything about their medical history, just so you can sell them socks, or a mutual fund, or a house.

    So let’s build accessible websites, and let’s talk about accessibility—but let’s talk about it in terms of feature sets and technology, not the quantity (or value) of one set of users over another. Let’s teach our business contacts and our managers and our tech leads and our Scrum masters that we’re building software for all of our users, and we’re no more going to give them a lousy experience because they have a disability than we would for their race, creed, or gender.

    Reframing accessibility as a technology challenge

    We cannot afford to let stereotypes and prejudices alter the perceived value of our audiences. Let’s stop quantifying people and instead start quantifying experiences. Let’s justify accessibility by putting the emphasis on the technology instead of the users.

    Let’s start by expanding our view of what can connect to the web. Start with hardware, the interface between the people and the bits. What might people be using? What combinations may occur? This research is as simple as building a spreadsheet.

    Creating a test matrix for accessibility

    Here’s how to methodically build a test matrix that defines an accessibility strategy by combining accessibility scenarios with other testing types. The result is that you’ll no longer need to justify your testing of accessibility issues based on the relative size or merits of the audience any more than you justify testing different screen sizes or different browsers. This matrix makes accessibility testing one more factor in the testing you do normally.

    1. Read this list of input devices for computers.
    2. Read this list of output devices for computers.
    3. Create a matrix of these devices in your favorite spreadsheet format. (If you normally write your test cases in a list or other format instead of a matrix, that’s fine too! Creating the matrix and then transferring the results into your other format will probably make the transition easier.)
    4. Arrange the matrix with computer inputs on the vertical axis and outputs on the horizontal axis.

    5. Cautiously determine the likelihood that a combination exists.

      • I have yet to hear of a laser rangefinder-input and plotter printer-output web-surfing technique, so that can be removed (at least until the makers read this article).
      • You may be tempted to cross out that Wii Remote and TV combination, but to do so is to ignore the growing use of consoles as web devices.
      • I thought mouse-input and screen reader-output didn’t make sense together and shouldn’t be in the test matrix, but then I discovered Find the Invisible Cow.
      • Remember that this is not a list of what your users use today; it’s a list of what exists. If your current website is so inaccessible that screen readers don’t work, well, of course your data is going to show you have no screen reader users. You drove them off!
    6. Look for opportunities to combine other elements of your test cases, such as different operating systems, browsers, or screen resolutions. It’s important to cover as much ground as you can with as few test cases as possible.
    7. Once your matrix is set, keep it as your template, and simply make a fresh copy from this each time you run the tests on a website.
    8. For each application on your site, identify whether it works, doesn’t work, or wouldn’t apply, noting the results in the spreadsheet.

    For example, if I were reading an article on a news site, with an accompanying video, I might see test results that look like this:

    A testing matrix that's been filled out with the results for an example website.

    Applying the matrix to personas

    This method can be easily integrated into personas or Agile user stories by ensuring that at least one of every block makes it into the description of at least one of your personas.

    For example, here’s a persona for a processing supervisor at a fictional pharmaceutical organization:

    Harold is a Processing Manager who has been with PharmaCo for sixteen years. He started out as a Processing Associate and worked his way up through the ranks, so he knows the roles and responsibilities of his team members as well as they do. Harold’s primary responsibility is to ensure that his team members are getting the mail queues emptied as quickly as possible. His secondary responsibility is helping his team shape their careers and move up in the company ranks, just as he did. He’s also responsible for two projects within the department, and he’s a member of the Diversity Committee.

    Harold spends most of his time in meetings. His team reaches him via email or chat if they need something. One week out of every four, he’s assigned to handle QA-rejected items and discuss them with the affected crew members. These come to him through WebApp3, but he has to use WebApp2 to contact the affected team member. Because Harold is on the go, he uses a tablet with a touchscreen as his primary computer.

    Harold began losing his vision about six years ago, but he’s not completely blind. He has the screen text on his tablet zoomed all the way up, but he’s prone to headaches if he tries to read for too long. Harold uses the screen reader built into the tablet to read him long emails or messages. He carries headphones with him so that he doesn’t disrupt other employees in his area.

    The key here is that the personas are not extra “disabled people” personas; they’re personas of people with real scenarios and real needs who, coincidentally, are using the accessibility features we built into our sites. Because every persona has a hook back into the test matrix, the temptation to cut Harold from the budget is much harder to give in to—cutting Harold means cutting the testing for WebApp3 to WebApp2, cutting the testing for the tablet screen size, and cutting the testing for a screen reader.

    Checking content against the matrix

    Finally, this method provides extra checks and balances against your content strategy. It simultaneously pushes your writers toward “concise, appropriate, useful content” and provides extra opportunities to review the content in multiple formats. It does so by ensuring that you view your content in different contexts—different layouts, different formats, different delivery methods. We see our content in a different light through each one. These reviews give us the opportunity to identify problems that a single review in a single location would not.

    If the content doesn’t make sense when it’s heard, then it probably isn’t clear when it’s read. If the content doesn’t make sense without its images, then it may need to be rewritten or rescripted.

    Testing beyond the obvious

    You will quickly find (as I did) that building accessible websites is more difficult than building inaccessible ones. Semantic HTML, well-designed standards-compliant pages, strong information architecture, a clear content voice, a good test suite, and a thorough understanding of accessibility issues are all required to make accessible websites. Hopefully, these are all goals you’re already striving for, and increasing your site’s accessibility is just one more reason to pursue them.

    Using a testing matrix and personas that include accessibility issues will also help prevent the “tunnel vision” that sometimes develops if we’re looking at only the most obvious accessibility issues. I once worked with a team to create an educational video about expenses, and we knew the videos would require transcripts to be accessible. We built the transcript, set it in the site architecture, and even added the charts and graphs from the video to the transcript so that it was easy to understand. Then, late in the project, a colorblind co-worker pointed out that he couldn’t differentiate the colors in the charts. We’d addressed the most obvious concern, and still managed to produce an inaccessible design. We’d fallen into the trap of thinking that “only the deaf” would have problems with a video.

    People over process, unless process enables people

    Accessibility isn’t about defining your audience and building to their needs. Accessibility is a trait of the website itself. The website is either usable on this hardware and software, or it isn’t—and as I’ve shown, this is something you can build into your testing process today. If your audience can’t access the information they need on the majority of input and output combinations, then you’re failing to meet their needs.

    It’s important that we center our discussions of accessibility on the hardware and software, instead of on the audiences that use that hardware and software. Our current process of defining our audiences’ needs based on their physical limitations too quickly degrades into the process of creating stereotypes, categorizing our audience members, and then deciding whose needs to satisfy based on the finances of the task.

    No person should be treated like their value to your business is based on the expense of building software that uses input and output methods you didn’t think of the first time. No person should have to justify why they’re using the hardware or software that allows them to be successful.

    People are people. They come in many shapes and forms and abilities. Computer interfaces are input and output hardware. They help people communicate with software. Websites are software that help people accomplish their goals, regardless of the hardware and software combination, regardless of the shapes and forms of their people. That is accessibility.

  • The Role of the Web, an Excerpt from Understanding Context 

    A big reason why digital networks became so ubiquitous was the advent of the World Wide Web. The Web became the petri dish in which the culture of “being digital” explosively grew. The Web meant that we didn’t have to worry about what server we were on or to which directories we had access. It meant that we could just make links and think about structure later.

    The principle driving the original development of the Web was to add a protocol (HTTP) to the Internet that facilitated open sharing. In the phrasing of its creators—in the Web’s founding document—its purpose was “to link and access information of various kinds as a web of nodes in which the user can browse at will.”1 When you give people the capability to create environments with more ease and flexibility than before, they will use it, even beyond its intended boundaries.

    The Web has now become something that has far outstripped what we see in dedicated “web browsers” alone. The characteristics of hyperlinks that once were only about linking one metaphorical “page” to another are now fueling all manner of APIs for easy, fluid syndication and mashing-up of information from many different sources. The spirit of the hyperlink means everything can be connected out of context to everything else. We can link enterprise resource management platforms with loading docks, map software with automobiles, and radio frequency ID (RFID) chips injected into pet dogs that include the dog’s records in licensing databases. Even our sneakers can broadcast on a global network how far we run, for anyone to see. The Web is now more a property of human civilization than a platform. It is infrastructure that we treat as if it were nature, like “shipping” or “irrigation.” HTTP could be retired as a network layer tomorrow, but from now on, people will always demand the ability to link to anything they please.

    Additionally, these technologies have allowed us to create a sort of space that’s made of bits, not atoms. This space is full of places that aren’t just supplementary or analog versions of physical environments; they are a new species of place that we visit through the glowing screens of our devices. Writing about one of those places—YouTube—cultural anthropologist Michael Wesch describes how users sitting in front of a webcam struggle to fully comprehend the context of what they’re doing when communicating on “the most public space in the world, entered from the privacy of our own homes”:

    The problem is not lack of context. It is context collapse: an infinite number of contexts collapsing upon one another into that single moment of recording. The images, actions, and words captured by the lens at any moment can be transported to anywhere on the planet and preserved (the performer must assume) for all time. The little glass lens becomes the gateway to a black hole sucking all of time and space—virtually all possible contexts—in upon itself.2

    The disorienting lack of pre-Web context one faces on YouTube is not confined to videos. We’re spending more and more of our lives inhabiting these places, whether it’s Facebook or a corporate intranet. If we measure reality by where meaningful human activity takes place, these places are not merely “virtual” anymore. They are now part of our public infrastructure.

    The contextual untethering the Web brought to computer networks is now leaking out into our physical surroundings. Structures we assume have stable meanings from day to day are shot through with invisible connections and actions that change those meanings in ways we often don’t understand. We live among active digital objects that adjust our room temperature, run our economies, decide on our financial fitness, route our trains and car traffic, and advise us where we should eat and sleep.

    As Rob Kitchin and Martin Dodge explain in Code/Space: Software and Everyday Life (MIT Press), “Software is being embedded in material objects, imbuing them with an awareness of their environment, and the calculative capacities to conduct their own work in the world with only intermittent human oversight.”3 These digital agents introduce rules of cause-and-effect into our environment that happen beyond our immediate perception, like a lever that switches far-away railroad tracks. Or, even more puzzling, we might pull a lever that does something different each time, based on some algorithm; or we watch as the algorithm pulls the lever itself, based on its own mysterious motivations.

    At the center of all this disruption is how we understand basic elements of our environment: What place am I in? What objects does it contain, and how do they work? Who am I, who can see me, and what am I doing? What used to be clear is now less so.

    Case Study: Facebook Beacon

    Some of the infrastructure we take for granted now was almost unimaginable only a decade ago. And perhaps no digital “place” is more ubiquitous in more people’s lives than Facebook. With billions of registered users, it’s become the “telephone network” of social interaction online.

    Back in 2007, Facebook launched a service it called Beacon, which tracked what users purchased on participating non-Facebook sites, and published that information to the recently introduced News Feeds seen by their Facebook “friends.” It took many people by surprise, and sparked a major controversy regarding online privacy.

    Facebook is an especially powerful example of context disruption, partly because of how it has shape-shifted the sort of place it is since it began as a closed network for Harvard students alone.

    In fact, much of Facebook’s architectural foundation was structured based on the assumption that a user’s network would be limited to people she had already met or could easily meet on her campus. The intrinsic cultural structures of one’s college provided natural boundaries that Facebook re-created in code form.

    Over time, Facebook grew rapidly to include other schools, then businesses, and then finally it was opened to the entire Web in 2006. Yet, it wasn’t until much later that it introduced any way of structuring one’s contacts into groups beyond the single bucket of “Friends,” as if everyone you could connect to was the equivalent of someone you met during freshman orientation.

    So, for users who had started their Facebook memberships back when their Friends included only their classmates, the sudden shift in context was often disorienting. With pictures of college parties still hanging in their galleries—meant for a social context of peers that would understand them—they were suddenly getting friend invitations from coworkers and family members. Facebook had obliterated the cultural boundaries that had normally kept these facets of one’s personality and personal life comfortably separate.

    Before Beacon, the introduction of the News Feed had already caused a lot of concern when users realized it was tracking what they did within Facebook itself and publishing an ongoing status report of those activities to their friends. Actions and changes that had once been quiet adjustments to their profile had been turned into News, published to everyone they knew.

    Take, for instance, changes in relationship status. Breaking up with a partner is an intimate, personal event that one might prefer to treat with some subtlety and care. Facebook’s structure made it seem as if users were changing relationship status within a particular place, separate from other places. Consequently, it was horrifying for users to discover that changing the setting in a drop-down list in their personal profiles was simultaneously announcing it to everyone they knew. Facebook broke the expectations of cause-and-effect that people bring to their environment.

    Just as users were getting used to how the News Feed worked, Beacon launched, publishing information about actions users were taking outside of Facebook. Suddenly, Facebook was indiscriminately notifying people of purchases (books about personal matters, medicines for private maladies, or surprise gifts for significant others) and other actions (playing on a video game site during the workday; signing up for a dating site), with confusing contextual clues about what was going on. For example, Figure 2.1 shows a small opt-out pop-up window that the system used, which was easy to overlook. In addition, it quickly defaulted to “Yes” and disappeared if you didn’t acknowledge it in time.4

    Screen capture of a Beacon opt-in message notifying the user that Fandango is sending purchase information to Facebook.
    Figure 2.1: The small Beacon opt-in message that would appear in the lower corner of the screen (from MoveOn.org).

    Unlike one’s Facebook profile, this was not information that was already available to your friends; this was information that, in the physical dimension, has always been assumed to be at least implicitly contained within a “store” or “site.”

    Graphic showing the Fandango site and Facebook as non-overlapping circles that users perceive as separate places.
    Figure 2.2: The user perceives the Fandango site as a separate environmental place and might not notice a small, ambient opt-in message.

    The result? User revolt, widespread controversy, and the eventual dismantling of the Beacon program. And to top it off, it prompted a $9.5 million class-action lawsuit that was finally settled in February 2013.5

    Facebook has notoriously and publicly struggled with these issues of place confusion since its founding. But what is true of Facebook is just as true of nearly every networked environment. Although Beacon was the metaphorical equivalent of having networked cameras and data feeds for your every action available for public consumption, that breakdown of context is no longer merely metaphorical. As our every action and purchase is increasingly picked up by sensors, cameras, brand-loyalty databases, and cloud-connected smartphones, Beacon’s misstep seems almost primitive in comparison.


  • Laura Kalbag on Freelance Design: How Big is Big Enough to Pick On? 

    I’m a firm believer in constructive criticism. As I said in a previous column, being professional in the way we give and receive criticism is a large part of being a designer.

    However, criticism of the work has to be separated from criticism of the person. It can be all too easy to look at your own work and think “This is rubbish, so I’m rubbish,” or have somebody else say “This isn’t good enough” and hear “You’re not good enough.” Unfortunately, it’s also easy to go from critical to judgmental when we’re evaluating other people’s work.

    Being able to criticize someone’s work without heaping scorn on them constitutes professionalism. I’ve occasionally been guilty of forgetting that: pumped up by my own sense of self-worth and a compulsion to give good drama to my followers on social networks, I’ve blurted unconstructive criticism into a text field and hit “send.”

    Deriding businesses and products is a day-to-day occurrence on Twitter and Facebook, one that’s generally considered acceptable since real live individuals aren’t under attack. But we should consider that businesses come in all sizes, from the one-person shop to the truly faceless multinational corporation.

    As Ashley Baxter wrote, we tend to jump on social networks as a first means of contact, rather than attempting to communicate our issues privately first. This naming and shaming perhaps stems from years of being let down by unanswered emails and being put on hold by juggernaut corporations. Fair enough: in our collective memory is an era when big business seemingly could ignore customer service without suffering many repercussions. Now that we as consumers have been handed the weapon of social media, we’ve become intent on no longer being ignored.

    When we’re out for some online humiliation, we often don’t realize how small our targets can be. Some businesses of one operate under a company name rather than a personal name. And yet people who might approach a customer service issue differently if faced with an individual will be incredibly abusive to “Acme Ltd.” Some choice reviews from an app I regularly use:

    Should be free

    Crap. Total rip off I want my money back

    Whoever designed this app should put a gun to there [sic] head. How complicated does if [sic] have to be…

    In the public eye

    We even have special rules that allow us to rationalize our behavior toward a certain class of individual. Somehow being a celebrity, or someone with many followers, means that cruel and unconstructive criticism doesn’t hurt—either because we mix up the special status of public figures in matters of libel with emotional invincibility, or because any hurt is supposed to be balanced out by positivity and praise from fans and supporters. Jimmy Kimmel’s Celebrities Read Mean Tweets shows hurt reactions veiled with humor. Harvard’s Q Guide allows students to comment anonymously on their professors and classes, so even Harvard profs get to read mean comments.

    Why do we do it?

    We love controversial declarations that get attention and give us all something to talk about, rally for, or rally against. Commentators who deliver incisive criticism in an entertaining way become leaders and celebrities.

    Snarky jokes and sarcastic remarks often act as indirect criticisms of others’ opinions of the business. It might not be the critic’s intention from the beginning, but that tends to be the effect. No wonder companies try so hard to win back public favor.

    Perhaps we’re quick to take to Twitter and Facebook to complain because we know that most companies will fall all over themselves to placate us. Businesses want to win back our affections and do damage control, and we’ve learned that we can get something out of it.

    We’re only human

    When an individual from a large company occasionally responds to unfair criticism, we usually become apologetic and reassure them that we have nothing personal against them. We need to remember that on the other side of our comments there are human beings, and that they have feelings that can be hurt too.

    If we can’t be fair or nuanced in our arguments on social media, maybe we should consider writing longform critical pieces where we have more space and time for thoughtful arguments. That way, we could give our outbursts greater context (as well as their own URLs for greater longevity and findability).

    If that doesn’t sound worthwhile, perhaps our outbursts just aren’t worth the bandwidth. Imagine that.

  • Variable Fonts for Responsive Design 

    Choosing typefaces for use on the web today is a practice of specifying static fonts with fixed designs. But what if the design of a typeface could be as flexible and responsive as the layout it exists within?

    The glass floor of responsive typography

    Except for low-level font hinting and proof-of-concept demos like the one Andrew Johnson published earlier this week, the glyph shapes in modern fonts are restricted to a single, static configuration. Any variation in weight, width, stroke contrast, etc.—no matter how subtle—requires separate font files. This concept may not seem so bad in the realm of print design, where layouts are also static. On the web, though, this limitation is what I refer to as the “glass floor” of responsive typography: while higher-level typographic variables like margins, line spacing, and font size can adjust dynamically to each reader’s viewing environment, that flexibility disappears for lower-level variables that are defined within the font. Each glyph is like an ice cube floating in a sea of otherwise fluid design.

    The “glass floor” of responsive typography
    The continuum of responsive design is severed for variables below the “glass floor” in the typographic hierarchy.

    Flattening of dynamic typeface systems

    The irony of this situation is that so many type families today are designed and produced as flexible systems, with dynamic relationships between multiple styles. As Erik van Blokland explained during the 2013 ATypI conference:

    If you design a single font, it’s an island. If you design more than one, you’re designing the relationships, the recipe.

    Erik is the author of Superpolator, a tool for blending type styles across multiple dimensions. Such interpolation saves type designers thousands of hours by allowing them to mathematically mix design variables like weight, width, x-height, stroke contrast, etc.

    Superpolator allows type designers to generate variations of a typeface mathematically by interpolating between a small number of master styles.

    The newest version of Superpolator even allows designers to define complex conditional rules for activating alternate glyph forms based on interpolation numbers. For example, a complex ‘$’ glyph with two vertical strokes can be automatically replaced with a simplified single-stroke form when the weight gets too bold or the width gets too narrow.

    Unfortunately, because of current font format limitations, all this intelligence and flexibility must be flattened before the fonts end up in the user’s hands. It’s only in the final stages of font production that static instances are generated for each interpolated style, frozen and detached from their siblings and parent except in name.

    The potential for 100–900 (and beyond)

    The lobotomization of dynamic type systems is especially disappointing in the context of CSS—a system that has variable stylization in its DNA. The numeric weight system that has existed in the CSS spec since it was first published in 1996 was intended to support a dynamic stylistic range from the get-go. This kind of system makes perfect sense for variable fonts, especially if you introduce more than just weight and the standard nine incremental options from 100 to 900. Håkon Wium Lie (the inventor of CSS!) agrees, saying:

    One of the reasons we chose to use three-digit numbers [in the spec for CSS font-weight values] was to support intermediate values in the future. And the future is now :)

    Beyond increased granularity for font-weight values, imagine the other stylistic values that could be harnessed with variable fonts by tying them to numeric values. Digital typographers could fine-tune typeface specifics such as x-height, descender length, or optical size, and even tie those values to media queries as desired to improve readability or layout.
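
    To imagine how that might look, here’s a purely hypothetical sketch; none of these intermediate weight values or axis properties existed in CSS at the time of writing:

    body {
      font-weight: 450; /* an intermediate value between 400 and 500 */
    }

    @media (max-width: 30em) {
      body {
        font-weight: 475;    /* subtly heavier for small screens */
        font-x-height: 540;  /* imagined axis property; not real CSS */
      }
    }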

    Toward responsive fonts

    It’d be hard to write about variable fonts without mentioning Adobe’s Multiple Master font format from the 1990s. It allowed smooth interpolation between various extremes, but the format was abandoned and is now mostly obsolete for typesetting by end users. We’ll get back to Multiple Master later, but for now it suffices to say that—despite a meager adoption rate—it was perhaps the most widely used variable font format in history.

    More recently, there have been a number of projects that touch on ideas of variable fonts and dynamic typeface adjustment. For example, Matthew Carter’s Sitka typeface for Microsoft comes in six size-specific designs that are selected automatically based on the size used. While the implementation doesn’t involve fluid interpolation between styles (as was originally planned), it does approximate the effect with live size-aware selections.

    The Sitka type system by Matthew Carter for Microsoft
    The Sitka type family, designed by Matthew Carter, automatically switches between optical sizes in Microsoft applications. From left to right: Banner, Display, Heading, Subheading, Text, Small. All shown at the same point size for comparison. Image courtesy of John Hudson / Tiro Typeworks.

    There are also some options for responsive type adjustments on the web using groups of static fonts. In 2014 at An Event Apart Seattle, my colleague Chris Lewis and I introduced a project called Font-To-Width, which takes advantage of large multi-width and multi-weight type families to fit pieces of text snugly within their containers. Our demo shows what I call “detect and serve” responsive type solutions: swapping static fonts based on the specifics of the layout or reading environment.

    One of the more interesting recent developments in the world of variable fonts was the publication of Erik van Blokland’s MutatorMath under an open source license. MutatorMath is the interpolation engine inside Superpolator. It allows for special kinds of font extrapolation that aren’t possible with Multiple Master technology. Drawing on masters for Regular, Condensed, and Bold styles, MutatorMath can calculate a Bold Condensed style. For an example of MutatorMath’s power, I recommend checking out some of the type tools that utilize it, like the Interpolation Matrix by Loïc Sander.

    Loïc Sander’s Interpolation Matrix tool harnesses the power of Erik van Blokland’s MutatorMath

    A new variable font format

    All of these ideas seem to be leading to the creation of a new variable font format. Though none of the aforementioned projects offers a complete solution on its own, there are definitely ideas from all of them that could be adopted. Proposals for variable font formats are starting to show up around the web, too. Recently on the W3C Public Webfonts Working Group list, FontLab employee Adam Twardoch made an interesting proposal for a “Multiple Master webfonts resurrection.”

    And while such a thing would help improve typographic control, it could also resolve a lot of technicalities related to serving fonts on the web. Currently, accessing variations of a typeface requires loading multiple files. With a variable font format, a set of masters could be packaged in a single file, allowing not only for more efficient files, but also for a vast increase in design flexibility.

    Consider, for example, how multiple styles from within a type family are currently served, compared to how that process might work with a variable font format.


    Static fonts vs. variable fonts

                                 With static fonts    With a variable font
    Number of weights            3                    Virtually infinite
    Number of widths             2                    Virtually infinite
    Number of masters            6                    4*
    Number of files              6                    1
    Data @ 120 kB/master**       720 kB               480 kB
    Download time @ 500 kB/s     1.44 sec             0.96 sec
    Latency @ 100 ms/file        0.6 sec              0.1 sec
    Total load time              2.04 sec             1.06 sec

    *It is actually possible to use three masters to achieve the same range of styles, but it is harder to achieve the desired glyph shapes. I opted to be conservative for this test.

    **This table presumes 120 kB per master for both static and variable fonts. In actual implementation, the savings for variable fonts compared with static fonts would likely be even greater due to reduction in repeated/redundant data and increased efficiency in compression.

    A variable font would mean less bandwidth, fewer round-trips to the server, faster load times, and decidedly more typographic flexibility. It’s a win across the board. (The still-untested variable here is how much time might be taken for additional computational processing.)

    But! But! But!

    You may feel some skepticism about a new variable font format. In anticipation of that, I’ll address the most obvious questions.

    This all seems like overkill. What real-world problems would be solved by introducing a new variable font format?

    This could address any problem where a change in the reading environment would inform the weight, width, descender length, x-height, etc. Usually these changes are implemented by changing fonts, but there’s no reason you shouldn’t be able to build those changes around some fluid and dynamic logic instead. Some examples:

    • Condensing the width of a typeface for narrow columns
    • Subtly tweaking the weight for light type on a dark background
    • Showing finer details at large sizes
    • Increasing the x-height at small sizes
    • Adjusting the stroke contrast for low resolutions
    • Adjusting the weight to maintain the same stem thickness across different sizes
    • Adjusting glyphs set on a circle according to the curvature of the baseline. (Okay, maybe that’s pushing it, but why should manhole covers and beer coasters have all the fun?)
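    Today, the first of those examples (condensing width for narrow columns) is usually approximated by swapping in a separate condensed cut at a breakpoint. A minimal sketch, with hypothetical family names:

    .article-body {
    	font-family: "Example Sans", sans-serif;
    }

    @media (max-width: 30em) {
    	.article-body {
    		font-family: "Example Sans Condensed", sans-serif;
    	}
    }

    A variable font format could replace that hard swap with a width that tracks the column fluidly.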

    Multiple Master was a failure. What makes you think variable fonts will take off now?

    For starters, the web now offers the capability for responsive design that print never could. Variable fonts are right at home in the context of responsive layouts. Secondly, we are already seeing real-world attempts to achieve similar results via “detect and serve” solutions. The world is already moving in this direction with or without a variable font format. Also, the reasons the Multiple Master format was abandoned include a lot of political and/or technical issues that are less problematic today. Furthermore, the tools to design variable typefaces are much more advanced and accessible now than in the heyday of Multiple Master, so type designers are better equipped to produce such fonts.

    How are we supposed to get fonts that are as compressed as possible if we’re introducing all of this extra flexibility into their design?

    One of the amazing things about variable fonts is that they can potentially reduce file sizes while simultaneously increasing design flexibility (see the “Static fonts vs. variable fonts” comparison).

    Most interpolated font families have additional masters between the extremes. Aren’t your examples a bit optimistic about the efficiency of interpolation?

    The most efficient variable fonts will be those that were designed from scratch with streamlined interpolation in mind. As David Jonathan Ross explained, some styles are better suited for interpolation than others.

    Will the additional processing power required for interpolation outweigh the benefits of variable fonts?

    Like many things today, especially on the web, it depends on the complexity of the computation, processing speed, rendering engine, etc. If interpolated styles are cached to memory as static instances, the related processing may be negligible. It’s also worth noting that calculations of comparable or higher complexity happen constantly in web browsers without any issues related to processing (think SVG scaling and animation, responsive layouts, etc.). Another relevant comparison would be the relatively minimal processing power and time required for Adobe Acrobat to interpolate styles of Adobe Sans MM and Adobe Serif MM when filling in for missing fonts.

    But what about hinting? How would that work with interpolation for variable fonts?

    Any data that is stored as numbers can be interpolated. With that said, some hinting instructions are better suited for interpolation than others, and some fonts are less dependent on hinting than others. For example, the hinting instructions are decidedly less crucial for “PostScript-flavored” CFF-based fonts that are meant to be set at large sizes. Some new hinting tables may be helpful for a variable font format, but more experimentation would be in order to determine the issues.

    If Donald Knuth’s MetaFont were used as a variable font model, it could be even more efficient because it wouldn’t require data for multiple masters. Why not focus more on a parametric type system like that?

    Parametric type systems like MetaFont are brilliant, and indeed can be more efficient, but in my observation the designs they yield are decidedly less impressive or useful for quality typography.

    What about licensing? How would you pay for a variable font that can provide a range of stylistic variation?

    This is an interesting question, and one that I imagine would be approached differently depending on the foundry or distributor. One potential solution might be to license ranges of stylistic variation. So it would cost less to license a limited weight range from Light to Medium (300–500) than a wide gamut from Thin to Black (100–900).

    What if I don’t need or want these fancy-pants variable fonts? I’m fine with my old-school static fonts just the way they are!

    There are plenty of cases where variable fonts would be unnecessary and even undesirable. In those cases, nothing would stop you from using static fonts.

    Web designers are already horrible at formatting text. Do we really want to introduce more opportunities for bad design choices?

    People said similar things about digital typesetting on the Mac, mechanical typesetting on the Linotype, and indeed the whole practice of typography back in Gutenberg’s day. I’d rather advance the state of the art with some growing pains than avoid progress on the grounds of snobbery.

    Okay, I’m sold. What should I do now?

    Experiment with things like Andrew Johnson’s proof-of-concept demo. Read up on MutatorMath. Learn more about the inner workings of digital fonts. Get in touch with your favorite type foundries and tell them you’re interested in this kind of stuff. Then get ready for a future of responsive typography.

  • Matt Griffin on How We Work: The People are the Work 

    Not long ago at the Refresh Pittsburgh meetup, I saw my good friend Ben Callahan give his short talk called Creating Something Timeless. In his talk, he used examples ranging from the Miles Davis sextet to the giant sequoias to try to get at how we—as makers of things that seem innately ephemeral—might make things that stand the test of time.

    And that talk got me thinking.

    Very few of the web things I’ve made over the years are still in existence—at least not in their original state. The evolution and flux of these things is something I love about the web. It’s never finished; there’s always a chance to improve or pivot.

    And yet we all want to make something that lasts. So what could that be?

    For me, it’s not the things I make, but the experience of making them. Every project we’ve worked on at Bearded has informed the next one, building on the successes and failures of its predecessors. The people on the team are the vessels for that accumulated experience, and together we’re the engine that makes better and better work each time.

    From that perspective it’s not the project that’s the timeless work, it’s us. But it doesn’t stop there, either. It’s also our clients. When we do our jobs well, we leave our clients and their teams more knowledgeable and capable, more empowered to use the web to further the goals of their organization and meet the needs of their users. So how do we give our clients more power to—ultimately—help themselves?

    Not content (kənˈtent) with content (ˈkäntent)

    Back in 2008 (when we started Bearded), one of our differentiators was that we built every site on a CMS. At the time, many agencies had not-insignificant revenue streams derived from updating their clients’ site content on their behalf.

    But we didn’t want to do that work, and our clients didn’t want to pay for it. Building their site on a CMS and training them to use it was a natural solution. It solved both of our problems, recurring revenue be damned! It gave our clients power that they wanted and needed.

    And there are other things like this that gnaw at me. Like site support.

    Ask any web business owner what they do for post-launch site support, and you’re likely to get a number of different answers. Most of those answers, if we’re honest with ourselves, will have a thread of doubt in their tone. That’s because none of the available options feel super good.

    We’ll do it ourselves!

    For years at Bearded we did our own site support. When there were upgrades, feature changes, or (gasp!) bugs, we’d take care of it. Even for sites that had launched years ago.

    But this created a big problem for us. We were only six people, and only three of us could handle those sorts of development tasks. Those three people also had all the important duties of building the backend features for all our new client projects. Does the word bottleneck mean anything to you? Because, brother, it does to me!

    Not only that but, just like updating content, this was not work we enjoyed (nor was it work our clients liked paying for, but we’ll get to that later).

    We’ll let someone else do it!

    The next thing we did was find a development partner that specialized in site support. If you’re lucky enough to find a good shop like this (especially in your area) hang on to them, my friend! They can be invaluable.

    This situation was great because it instantly relieved our bottleneck problem. But it also put us in a potentially awkward position, because it relied on someone else’s business to support our clients.

    If they started making decisions that I didn’t agree with, or they went out of business, I’d be in trouble and it could start affecting my client relationships. And without healthy client relationships, you’ve got nothing.

    But what else is there to do?

    We’ll empower our clients!

    For the last year or two, we’ve been doing something totally different. For most of our projects now, we’re not doing support—because we’re not building the whole site. Instead, we’ve started working closely with our clients’ internal teams, who build the site with us.

    We listen to them, pair with them, and train them. We bring them into our team, transfer every bit of knowledge we can throughout the whole project, and build the site together. At the end there’s no hand-off, because we’ve been handing off since day one. They don’t need site support because they know the site as well as we do, and can handle things themselves.

    It’s just like giving our clients control of their own content. We give them access to the tools they need, lend them our expertise, and give them the guidance they’ll need to make good decisions after we’re gone.

    At the end of it, we’ve probably built a website, but we’ve also done something more valuable: we’ve helped their team grow along with us. Just like us, they’re now better at what they do. They can take that knowledge and experience with them to their next projects, share that knowledge with other team members, and on, and on, and on.

    What we develop is not websites, it’s people. And if that’s not timeless work, what is?

     

  • Thoughtful Modularity 

    I spent most of the first week of December down at NASA’s Kennedy Space Center for the launch of Orion, NASA’s next-generation spacecraft. As part of NASA Social, I was lucky enough to get some behind-the-scenes tours, and to talk with scientists, engineers, astronauts, and even the Administrator himself.

    The day before launch, there was a two-hour event featuring the leaders of various NASA departments, with the discussion centered on Orion’s future missions—including the first (of hopefully many) crewed journeys to Mars. William Gerstenmaier, NASA’s Associate Administrator for Human Exploration and Operations, had some interesting comments about the technology that will get us there (55 minutes into the event):

    “Things will change over time. To think we know all the technology that will be in place and exactly how things will work, to try to project that 20 years in the future, that’s not a very effective approach. You need to be ready and if some new technology comes online or a new way of doing business is there, we’re ready to adapt, and we haven’t built an infrastructure that’s so rigid it can’t adapt and change to those pieces.”

    This is quite a shift in strategy for NASA. The Apollo and the Shuttle programs were riddled with rigidity. One engineer I talked with said that contractors received more than 12,000 requirements for the Shuttle’s development. Orion has just over 300—a clear move toward flexibility.

    It’s not only the unpredictable political minefield NASA plays in that pushes them toward modularity; it’s also that the final pieces making a crewed mission to Mars possible are still in the works. If something revolutionary is learned about the habitability of Mars from rovers there over the next few years, NASA needs to be able to incorporate those findings into the plans.

    Most of our projects operate on a smaller scale than a mission to Mars, but there’s a lot we can learn from this approach. NASA isn’t just planning for modularity in terms of how things interface with each other, they’re planning for it at the core level of each component. Rather than stopping at a common docking mechanism (a common API, of sorts), they’re building rockets, landers, and habitation modules that can be modified as breakthroughs are made.

    In a lot of ways, we’re already doing this in our work, whether we realize it or not. The rising focus on design systems and pattern libraries shows that we’ve got a knack for breaking something down to its smallest components. By building up from those small components, we’re able to swap out anything, big or small, at any time. If a button style isn’t working, it can be changed painlessly. If the entire header needs to be reconsidered after user testing, it’s self-contained enough to not disturb the rest of the site.

    With the help of new-age content management systems like Craft, we’re able to be more modular by decoupling the front-end interface from the way data is stored and managed on the backend. That means that either side can be upgraded, rewritten, or changed entirely, without the other being dependent on it.

    Tim Berners-Lee has been talking about modularity as a central principle of the web for quite a while:

    We must design for new applications to be built on top of [the web]. There will be more modules to come, which we cannot imagine now.

    NASA’s strategies echo his thoughts on the web: build modularly for the future while realizing that the future is unknown. New discoveries will be made, new things will be built, and technology will improve.

    If you think authentication is about to undergo a major revolution with the rise of cheap biometrics, you may plan and build your system differently than assuming it will always be password based. The goal is to build in an open-ended way, so as things change and progress, those innovations can be implemented with ease.

    That’s not an impossible task, either. Just take a look at Amazon—they haven’t had a major, sitewide redesign in more than a decade, and the industry has changed in massive ways. I’m sure the behind-the-scenes infrastructure has changed, but users haven’t been aware of it. Amazon has been tweaking and iterating for years, improving their product to their customers’ satisfaction.

    The implementation details of our work will always be in flux, but the goals of our product will remain the same. Be modular in implementation so that what you build can reap the benefits of the future.

     

  • A Vision for Our Sass 

    At a recent CSS meetup, I asked, “Who uses Sass in their daily workflow?” The response was overwhelmingly positive; no longer reserved for pet projects and experiments, Sass is fast becoming the standard way for writing CSS.

    This is great news! Sass gives us a lot more power over complex, ever-growing stylesheets, including new features like variables, control directives, and mixins that the original CSS spec (intentionally) lacked. Sass is a stylesheet language that’s robust yet flexible enough to keep pace with us.

    Yet alongside the wide-scale adoption of Sass (which I applaud), I’ve observed a steady decline in the quality of outputted CSS (which I bemoan). It makes sense: Sass introduces a layer of abstraction between the author and the stylesheets. But we need a way to translate the web standards—that we fought so hard for—into this new environment. The problem is, the Sass specification is expanding so much that any set of standards would require constant revision. Instead, what we need is a charter—one that sits outside Sass, yet informs the way we code.

    To see a way forward, let’s first examine some trouble spots.

    The symptoms

    One well-documented abuse of Sass’s feature-set is the tendency to heavily nest our CSS selectors. Now don’t get me wrong, nesting is beneficial; it groups code together to make style management easier. However, deep nesting can be problematic.

    For one, it creates long selector strings, which are a performance hit:

    body #main .content .left-col .box .heading { font-size: 2em; }
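    A selector like that is rarely typed out by hand; it’s usually the by-product of nesting that mirrors the markup, something like this hypothetical fragment:

    body {
    	#main {
    		.content {
    			.left-col {
    				.box {
    					.heading { font-size: 2em; }
    				}
    			}
    		}
    	}
    }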

    It can muck with specificity, forcing you to create subsequent selectors with greater specificity to override styles further up in the cascade—or, God forbid, resort to using !important:

    body #main .content .left-col .box .heading  [0,1,4,1]
    .box .heading  [0,0,2,0]

    Comparative specificity between two selectors.

    Last, nesting can reduce the portability and maintainability of styles, since selectors are tied to the HTML structure. If we wanted to repeat the .heading style for a box that wasn’t in the .left-col, we would need to write a separate rule to accomplish that.

    Complicated nesting is probably the biggest culprit in churning out CSS soup. Others include code duplication and tight coupling—and again, these are the results of poorly formed Sass. So, how can we learn to use Sass more judiciously?

    Working toward a cure

    One option is to create rules that act as limits and rein in some of that power. For example, Mario Ricalde uses an Inception-inspired guideline for nesting: “Don’t go more than four levels deep.”

    Rules like this are especially helpful for newcomers, because they provide clear boundaries to work within. But few universal rules exist; the Sass spec is sprawling and growing (as I write this, Sass is at version 3.4.5). With each new release, more features are introduced, and with them more rope with which to hang ourselves. A rule set alone would be ineffective.

    We need a proactive, higher-level stance toward developing best practices rather than an emphasis on amassing individual rules. This could take the form of a:

    • Code standard, or guidelines for a specific programming language that recommend programming style, practices, and methods.
    • Framework, or a system of files and folders of standardized code, which can be used as the foundation of a website.
    • Style guide, or a living document of code, which details all the various elements and coded modules of your site or application.

    Each approach has distinct advantages:

    • Code standards provide a great way of unifying a team and improving maintainability across a large codebase (see Chris Coyier’s Sass guidelines).
    • Frameworks are both practical and flexible, offering the lowest barrier to entry and removing the burden of decision. As every seasoned front-end developer knows, even deciding on a CSS class name can become debilitating.
    • Style guides make the relationship between the code and the output explicit by illustrating each of the components within the system.

    Each also has its difficulties:

    • Code standards are unwieldy. They must be kept up-to-date and can become a barrier to entry for new or inexperienced users.
    • Frameworks tend to become bloated. Their flexibility comes at a cost.
    • Style guides suffer from being context-specific; they are unique to the brand they represent.

    Unfortunately, while these methods address the technical side of Sass, they don’t get to our real problem. Our difficulties with Sass don’t stem from the specification itself but from the way we choose to use it. Sass is, after all, a CSS preprocessor; our Sass problem, therefore, is one of process.

    So, what are we left with?

    Re-examining the patient

    Every job has its artifacts, but problems arise if we elevate these by-products above the final work. We must remember that Sass helps us construct our CSS, but it isn’t the end game. In fact, if the introduction of CSS variables is anything to go by, the CSS and Sass specs are beginning to converge, which means one day we may do away with Sass entirely.

    What we need, then, is a solution directed not at the code itself but at us as practitioners—something that provides technical guidelines as we write our Sass, but simultaneously lifts our gaze toward the future. We need a public declaration of intentions and objectives, or, in other words, a manifesto.

    Sass manifesto

    When I first discovered Sass, I developed some personal guidelines. Over time, they formalized into a manifesto that I could then use to evaluate new features and techniques—and whether they’d make sense for my workflow. This became particularly important as Sass grew and became more widely used within my team.

    My Sass manifesto is composed of six tenets, or articles, outlined below:

    1. Output over input
    2. Proximity over abstraction
    3. Understanding over brevity
    4. Consolidation over repetition
    5. Function over presentation
    6. Consistency over novelty

    It’s worth noting that while the particular application of each article may evolve as the specification advances, the articles themselves should remain unchanged. Let’s cover each in a little more depth.

    1. Output over input

    The quality and integrity of the generated CSS is of greater importance than the precompiled code.

    This is the tenet from which all the others hang. Remember that Sass is one step in the process toward our goal, delivering CSS files to the browser. This doesn’t mean the CSS has to be beautifully formatted or readable (this will never be the case if you’re following best practices and minimizing CSS), but you must keep performance at the forefront of your mind.

    When you adopt new features in the Sass spec, you should ask yourself, “What is the CSS output?” If in doubt, take a look under the hood—open the processed CSS. Developing a deeper understanding of the relationship between Sass and CSS will help you identify potential performance issues and structure your Sass accordingly.

    For example, using @extend targets every instance of the selector. The following Sass

    .box {
    	background: #eee;
    	border: 1px solid #ccc;
    
    	.heading {
    	  font-size: 2em;
    	}
    }
    
    .box2 {
    	@extend .box;
    	padding: 10px;
    }


    compiles to

    .box, .box2 {
      background: #eee;
      border: 1px solid #ccc;
    }
    .box .heading, .box2 .heading {
      font-size: 2em;
    }
    
    .box2 {
      padding: 10px;
    }

    As you can see, not only has .box2 inherited from .box, but .box2 has also inherited from the instances where .box is used in an ancestor selector. It’s a small example, but it shows how you can arrive at some unexpected results if you don’t understand the output of your Sass.

    2. Proximity over abstraction

    Projects should be portable without over-reliance on external dependencies.

    Anytime you use Sass, you’re introducing a dependency—the simplest installation of Sass depends on Ruby and the Sass gem to compile. But keep in mind that the more dependencies you introduce, the more you risk compromising one of Sass’s greatest benefits: the way it enables a large team to work on the same project without stepping on one another’s toes.

    For instance, along with the Sass gem you can install a host of extra packages to accomplish almost any task you can imagine. The most common library is Compass (maintained by Chris Eppstein, one of Sass’s original contributors), but you can also install gems for grid systems and frameworks such as Bootstrap, right down to gems that help with much smaller tasks like creating a color palette and adding shadows.

    These gems provide a set of pre-built mixins that you can draw upon in your Sass files. Unlike the mixins you write inside your project files, a gem is written to your computer’s installation directory. Gems are used out-of-the-box, like Sass’s core functions; the only references to them in the project are an @import of the library and the @include calls that use its mixins.
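    For instance, with the Compass gem installed, a stylesheet can use mixins that are defined nowhere in the project’s own files. A minimal sketch (the selector is hypothetical):

    @import "compass/css3";

    .thumbnail {
    	@include border-radius(4px);
    }

    Nothing in the repository defines border-radius; it lives in the gem on the developer’s machine, which is exactly what makes the scenario below possible.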

    Here’s where gems get tricky. Let’s return to the scenario where a team is contributing to the same project: one team member, whom we’ll call John, decides to install a gem to facilitate managing grids. He installs the gem, includes it in the project, and uses it in his files; meanwhile another team member—say, Mary—pulls down the latest version of the repository to change the fonts on the website. She downloads the files, runs the compiler, but suddenly gets an error. Since Mary last worked on the project, John has introduced an external dependency; before Mary can do her work, she must debug the error and download the correct gem.

    You see how this problem can be multiplied across a larger team. Add in the complexity of versioning and inter-gem-dependency, and things can get very hairy. Best practices exist to maintain consistent environments for Ruby projects by tracking and installing the exact necessary gems and versions, but the simplest approach is to avoid using additional gems altogether.

    Disclaimer: I currently use the Compass library as I find its benefits outweigh the disadvantages. However, as the core Sass specification advances, I’m considering when to say goodbye to Compass.

    3. Understanding over brevity

    Write Sass code that is clearly structured. Always consider the developer who comes after you.

    Sass is capable of outputting super-compressed CSS, so you don’t need to be heavy-handed in optimizing your precompiled code. Further, unlike regular CSS comments, inline comments in Sass aren’t outputted to the final CSS.

    This is particularly helpful when documenting mixins, where the output isn’t always transparent:

    // Force overly long spans of text to truncate, e.g.:
    // @include truncate(100%);
    // Where $truncation-boundary is a measurement with a unit.
    
    @mixin truncate($truncation-boundary){
        max-width:$truncation-boundary;
        white-space:nowrap;
        overflow:hidden;
        text-overflow:ellipsis;
    }
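    Used on a hypothetical element, the call and its compiled output look like this (the inline comments above never reach the CSS, but the intent stays documented at the source):

    .card-title {
        @include truncate(100%);
    }

    compiles to

    .card-title {
        max-width: 100%;
        white-space: nowrap;
        overflow: hidden;
        text-overflow: ellipsis;
    }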

    However, do consider which parts of your Sass will make it to the final CSS file.

    4. Consolidation over repetition

    Don’t Repeat Yourself. Recognize and codify repeating patterns.

    Before you start any project, it’s sensible to sit down and try to identify all the different modules in a design. This is the first step in writing object-oriented CSS. Inevitably some patterns won’t become apparent until you’ve written the same (or similar) line of CSS three or four times.

    As soon as you recognize these patterns, codify them in your Sass.

    Add variables for recurring values:

    $base-font-size: 16px;
    $gutter: 1.5em;

    Use placeholders for repeating visual styles:

    %dotted-border { border: 1px dotted #eee; }

    Write mixins where the pattern takes variables:

    //transparency for image features
    @mixin transparent($color, $alpha) {
      $rgba: rgba($color, $alpha);
      $ie-hex-str: ie-hex-str($rgba);
      background-color: transparent;
      background-color: $rgba;
      filter:progid:DXImageTransform.Microsoft.gradient(startColorstr=#{$ie-hex-str},endColorstr=#{$ie-hex-str});
      zoom: 1;
    }

    If you adopt this approach, you’ll notice that both your Sass files and resulting CSS will become smaller and more manageable.

    5. Function over presentation

    Choose naming conventions that focus on your HTML’s function and not its visual presentation.

    Sass variables make it incredibly easy to theme a website. However, too often I see code that looks like this:

    $red-color: #cc3939; //red
    $green-color: #2f6b49; //green

    Connecting your variables to their appearance might make sense in the moment. But if the design changes, and the red is replaced with another color, you end up with a mismatch between the variable name and its value.

    $red-color: #b32293; //magenta
    $green-color: #2f6b49; //green

    A better approach is to name these color variables based on their function on the site:

    $primary-color: #b32293; //magenta
    $secondary-color: #2f6b49; //green

    Presentational classes with placeholder selectors

    What happens when we can’t map a visual style to a functional class name? Say we have a website with two call-out boxes, “Contact” and “References.” The designer has styled both with a blue border and background. We want to maximize the flexibility of these boxes but minimize any redundant code.

    We could choose to chain the classes in our HTML, but this can become quite restrictive:

    <div class="contact-box blue-box">
    <div class="references-box blue-box">

    Remember, we want to focus on function over presentation. Fortunately, using the Sass @extend method together with a placeholder class makes this a cinch:

    %blue-box {
    	background: #bac3d6;
    	border: 1px solid #3f2adf;
    }
    
    .contact-box {
    	@extend %blue-box;
    	...
    }
    .references-box {
    	@extend %blue-box;
    	...
    }

    This generates the following CSS, with no visible references to %blue-box anywhere, except in the styles that carry forward.

    .contact-box,
    .references-box {
    	background: #bac3d6;
    	border: 1px solid #3f2adf;
    }

    This approach keeps presentational class names out of our HTML, while still letting us use them in our Sass files in a descriptive way. Trying to devise functional names for common styles can have us reaching for terms like base-box, which is far less meaningful here.

    6. Consistency over novelty

    Avoid introducing unnecessary changes to the processed CSS.

    If you’re keen to introduce Sass into your workflow but don’t have any new projects, you might wonder how best to use Sass inside a legacy codebase. Sass fully supports CSS, so initially it’s as simple as changing the extension from .css to .scss.

    Once you’ve made this move, it may be tempting to dive straight in and refactor all your files, separating them into partials, nesting your selectors, and introducing variables and mixins. But this can cause trouble down the line for anyone who is picking up your processed CSS. The refactoring may not have affected the display of anything on your website, but it has generated a completely different CSS file. And any changes can be extremely hard to isolate.

    Instead, the best way to switch to a Sass workflow is to update files as you go. If you need to change the navigation, separate that portion into its own partial before working on it. This will preserve the cascade and make it much easier to pinpoint any changes later.
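    As a minimal sketch (file names hypothetical), that can be as simple as cutting the navigation rules into a partial and importing it at the exact point they were removed, so the cascade order is preserved:

    // _navigation.scss holds the rules cut from the main stylesheet

    // main.scss, at the spot where those rules used to live:
    @import "navigation";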

    The prognosis

    I like to think of our current difficulties with Sass as growing pains. They’re symptoms of the adjustments we must make as we move to a new way of working. And an eventual cure does exist, as we mature in our understanding of Sass.

    It’s my vision that this manifesto will help us get our bearings as we travel along this path: use it, change it, or write your own—but start by focusing on how you write your code, not just what you write.

  • Live Font Interpolation on the Web 

    We all want to design great typographic experiences. We also want to serve users on an increasing range of devices and contexts. But today’s webfonts tie our responsive sites and applications to inflexible type that doesn’t scale. As a result, our users get poor reading experiences and longer loading times from additional font weights.

    As typographers, designers, and developers, we can solve this problem. But we’ll need to work together to make webfonts more systemized and context-aware. Live webfont interpolation—the modification of a font’s design in the browser—exists today and can serve as an inroad for using truly responsive typography.

    An introduction to font interpolation

    Traditional font interpolation is a process used by type designers to generate new intermediary fonts from a series of master fonts. Master fonts represent key archetypal designs across different points in a font family. By using math to automatically find the in-betweens of these points, type designers can derive additional font variants/weights from interpolation instead of designing each one manually. We can apply the same concept to our webfonts to serve different font variants for our users. For example, the H letter (H glyph) in this proof of concept (currently for desktop browsers) has light and heavy masters in order to interpolate a new font weight.

    An interpolated H glyph using 50 percent of the light weight and 50 percent of the black weight. There can be virtually any number of poles and axes linked to combinations of properties, but in this example everything is being interpolated at once between two poles.

    Normally these interpolated type designs end up being exported as separate fonts. For example, the TheSans type family contains individual font files for Extra Light, Light, Semi Light, Plain, SemiBold, Bold, Extra Bold, and Black weights generated using interpolation.

    Individual font weights generated from interpolation from the TheSans type family.

    Interpolation can alter more than just font weight. It also allows us to change the fundamental structure of a font’s glyphs. Things like serifs (or lack thereof), stroke contrast/direction, and character proportions can all be changed with the right master fonts.

    A Noordzij cube showing an interpolation space with multiple poles and axes.

    Although generating fonts with standard interpolation gives us a great deal of flexibility, webfont files are still static in their browser environment. Because of this, we’ll need more to work with the web’s responsiveness.

    Web typography’s medium

    Type is tied to its medium. Both movable type and phototypesetting methods influenced the way that type was designed and set in their time. Today, the inherent responsiveness of the web necessitates flexible elements and relative units—both of which are used when setting type. Media queries are used to make more significant adjustments at different breakpoints.

    An approximation of typical responsive design breakpoints.

    However, fonts are treated as another resource that needs to be loaded, instead of a living, integral part of a responsive design. Changing font styles and swapping out font weights with media queries represent the same design compromises inherent in breakpoints.

    Breakpoints set by media queries often reflect the best-case design tradeoffs—often during a key breakpoint, like collapsing the navigation under a menu icon. Likewise, siloed font files often reflect best-case design tradeoffs—there’s no font in between The Mix Light and The Sans SemiLight.

    Enter live webfont interpolation

    Live webfont interpolation just means interpolating a font on the fly inside the browser instead of being exported as a separate file resource. By doing this, our fonts themselves can respond to their context. Because type reflows and is partially independent of a responsive layout, there’s less of a need to set abrupt points of change. Fonts can adhere to bending points—not just breaking points—to adapt type to the design.

    Live interpolation doesn’t have to adhere to any specific font weight or design.

    Precise typographic control

    With live font interpolation, we can bring the same level of finesse to our sites and applications that type designers do. Just as we take different devices into account when designing, type designers consider how type communicates and performs at small sizes, low screen resolutions, large displays, economical body copy, and everything in between. These considerations are largely dependent on the typeface’s anatomy, which requires live font interpolation to be changed in the browser. Properties like stroke weight and contrast, counter size, x-height, and character proportions all affect how users read. These properties are typically balanced across a type family. For example, the JAF Lapture family includes separate designs for text, display, subheads, and captions. Live font interpolation allows a single font to fit any specific role. The same font can be optimized for captions set at .8em, body text set at 1.2em, or H1s set at 4.8em in a light color.

    JAF Lapture Display (top) and JAF Lapture Text (bottom). Set as display type at 40 pixels, rendered on Chrome 38. Note how the display version uses thinner stroke weights and more delicate features that support its sharp, authoritative character without becoming too heavy at larger sizes. (For the best examples, compare live type in your own device and browser.)

    JAF Lapture Text. Set as body copy at 16 pixels, rendered on Chrome. Note how features like the increased character width, thicker stroke weights, and shorter ascenders and descenders make the text version more appropriate for smaller body copy set in paragraph blocks.

    JAF Lapture Display. Set as body copy at 16 pixels, rendered on Chrome.

    Live font interpolation also allows precise size-specific adjustments to be made for the different distances at which a reader can perceive type. Type can generally remove finer typographic details at sizes where they won’t be perceived by the reader—like on far-away billboards, or captions and disclaimers set at small sizes.

    Adaptive relationships

    Live font interpolation’s context-awareness builds inherent flexibility into the font’s design. A font’s legibility and readability adjustments can be linked to accessibility options. People with low vision who increase the default text size or zoom the browser can get type optimized for them. Fonts can start to respond to combinations of factors like viewport size, screen resolution, ambient light, screen brightness, and viewing distance. Live font interpolation offers us the ability to extend great reading experiences to everyone, regardless of how their context changes.

    Live font interpolation on the web today

    While font interpolation can be done with images or canvas, these approaches don’t allow text to be selectable, accessible via screen readers, or crawlable by search engines. SVG fonts offer accessible type manipulation, but they currently miss out on the properties that make a font robust: hinting and OpenType tables with language support, ligatures, stylistic alternates, and small caps. An SVG OpenType spec exists, but still suffers from limited browser support.

    Unlike SVG files, which are made of easily modifiable XML, font file formats (ttf, otf, woff2, etc.) are compiled as binary files, complicating the process of making live changes. Sets of information describing a font are stored in tables. These tables can range from things like a head table containing global settings for the font to a name table holding author’s notes. Different font file formats contain different sets of information. For example, the OpenType font format, a superset of TrueType, contains additional tables supporting more features and controls (per Microsoft’s OpenType spec):

    • cmap: Character to glyph mapping
    • head: Font header
    • hhea: Horizontal header
    • hmtx: Horizontal metrics
    • maxp: Maximum profile
    • name: Naming table
    • OS/2: OS/2 and Windows-specific metrics
    • post: PostScript information

    For live webfont interpolation, we need a web version of something like ttx, a tool for converting font files into a format we can read and parse.

    Accessing font tables

    Projects like jsfont and opentype.js allow us to easily access and modify font tables in the browser. Much like a game of connect-the-dots, each glyph (stored in the glyf table in OpenType) is made up of a series of points positioned on an x-y grid.

    A series of numbered points on an H glyph. The first x-y coordinate set determines where the first point is placed on the glyph’s grid and is relative to the grid itself. After the first point, all points are relative to the point right before it. Measurements are set in font design units.

    Interpolation involves the modification of a glyph to fall somewhere between master font poles—similar to the crossfading of audio tracks. In order to make changes to glyphs on the web with live webfonts, we need to compare and move individual points.

    The first points for the light and heavy H glyph have different x coordinates, so they can be interpolated.

    Interpolating a glyph via coordinates is essentially a matter of averaging points. More robust methods exist, but aren’t available for the web yet.
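    As a simple illustration of that averaging (the coordinates are hypothetical): a point at x = 100 in the light master and x = 160 in the heavy master, interpolated at 50 percent, lands at

    x = 100 + 0.5 × (160 − 100) = 130

    Sliding that percentage between 0 and 100 sweeps the glyph continuously from one master to the other.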

    Other glyph-related properties (like xMin and xMax) also must be interpolated in order to ensure the glyph bounding box is large enough to show the whole glyph. Additionally, padding—or bearings, in font terminology—can be added to position a glyph in its bounding box (the leftSideBearing and advanceWidth properties). This becomes important when considering the typeface’s larger system. Any combination of glyphs can end up adjacent to each other, so changes must be made considering their relationship to the typeface’s system as a whole.

    Glyph properties. Both xMin/xMax and advanceWidth must be scaled in addition to the glyph’s coordinate points.

    Doing it responsibly

    Our job is to give users the best experience possible—whether they’re viewing the design on a low-end mobile device, a laptop with high resolution, or distant digital signage. Both poorly selected and slowly loading fonts hinder the reading experience. With CSS @font-face as a baseline, fonts can be progressively enhanced with interpolation where appropriate. Users on less capable devices and browsers are best served with standard @font-face fonts.

    After the first interpolation and render, we can set a series of thresholds where re-renders are triggered, to avoid constant recalculations for insignificant changes (like every single change in width as the browser is resized). Element queries are a natural fit here (pun intended) because they’re based at the module level, which is where type often lives within layouts. Because information for interpolation is stored with JavaScript, there’s no need to load an entirely different font—just the data required for interpolation. Task runners can also save this interpolation data in JavaScript during the website or application build process, and caching can be used to avoid font recalculations when a user returns to a view a second time.

    Another challenge is rendering interpolated type quickly and smoothly. Transitioning in an interpolated font lined up with the original can minimize the visual change. Other techniques, like loading JavaScript asynchronously, or just caching the font for next time if the browser cannot load the font fast enough, could also improve perceived performance.

    As noted by Nick Sherman, all these techniques illustrate the need for a standardized font format that wraps everything up into a single sustainable solution. Modifying live files with JavaScript serves only as an inroad for future font formats that can adapt to the widely varied conditions they’re subjected to.

    Fonts that interpolate well

    Like responsive design, font interpolation requires considerations for the design at both extremes, as well as everything in the middle. Finch—the typeface in these examples—lends itself well to interpolation. David Jonathan Ross, Finch’s designer, explains:

    Interpolation is easiest when letter structure, contrast, and shapes stay relatively consistent across a family. Some typeface designs (like Finch) lend themselves well to that approach, and can get by interpolating between two extremes. However, other designs need more care and attention when traversing axes like weight or width. For example, very high-contrast or low-contrast designs often require separately-drawn poles between the extremes to help maintain the relationship between thick and thin, especially as certain elements are forced to get thin, such as the crossbar of the lowercase ’e’. Additionally, some designs get so extreme that letter shape is forced to change, such as replacing a decorative cursive form of lowercase ’k’ with a less-confusing one at text sizes, or omitting the dollar sign’s bar in the heaviest weights.

    Finch’s consistency across weights allows it to avoid a complex interpolation space—there’s no need for additional master fonts or rules to make intermediate changes between two extremes.

    Glyphs also don’t have to scale linearly across an axis’s delta. Guidelines like Lucas De Groot’s interpolation theory help us increase the contrast between near-middle designs, which may appear too similar to the user.

    A call to responsive typography

    We already have the tools to make this happen. For example, jsfont loads a font file and uses the DataView API to create a modifiable font object that it then embeds through CSS @font-face. The newer project opentype.js has active contributors and a robust API for modifying glyphs.

    As font designers, typographers, designers, and developers, the only way to take advantage of responsive typography on the web is to work together and make sure it’s done beautifully and responsibly. In addition to implementing live webfont interpolation in your projects, you can get involved in the discussion, contribute to projects like opentype.js, and let type designers know there’s a demand for interpolatable fonts.

  • Nishant Kothary on the Human Web: Logically Speaking 

    Whether you’re arguing for a design decision, or making the case for hiring another developer for your team, the advice I’ve heard over and over is that if you use logic (backed by user research or other data), you will prevail. I’ve rarely found that to be true in the real world, and that’s what I want to talk about today.

    But first, some math. You probably recognize this equation:

    result = target ÷ context

    It was introduced by Ethan in his 2009 article, Fluid Grids, and laid the foundation for the movement we now know as Responsive Web Design.

    If you remember your high-school algebra (or you’re into math), you’ll recognize Ethan’s equation as linear:

    y = m × x + b

    where y = result, m = target, x = 1/context, and b = 0

    It’s hard to overstate the applications of linear equations in computing. They form the basis of linear algebra, without which we wouldn’t be reading this column on a computer screen. Actually, the computer wouldn’t even exist.

    Yet the elegant little nuggets of logic that are linear equations only work when we take certain mathematical concepts for granted.

    For instance, that addition is commutative:

    a + b = b + a

    Or, that multiplication is associative:

    ( a × b ) × c = a × ( b × c )

    Or that most profound algebraic axiom, reflexivity:

    a = a

    Without these foundational laws of mathematics and logic, linear equations are about as useful as a dog taking a selfie. Not quite as entertaining, though.

    Dog selfie
    Reflexive dog is reflexive.

    What’s truly important to realize is that we all share precisely the same concepts related to mathematics. In the universe of mathematics, if you have an a and I have an a, they are both exactly the same and they are equal to each other. Thanks to this guarantee, planes can fly, iPhones can ring, and of course, websites can respond (browser inconsistencies notwithstanding).

    What we forget is that the certainty that “one thing always means the same thing no matter what” disappears almost entirely in the real world.

    And that’s why, like a linear equation in a universe without reflexivity, arguments backed only by logic tend to fall on deaf ears in the real world, where each one of us is governed by our own unique and personal laws of logic; where the speaker’s a quite literally can be the listener’s wtf.

    This is not to say that you should forgo logic in making your case. On the contrary, base your case on it. But if you’re not incorporating the most essential element of effective persuasion—an understanding of the other person’s universe, no matter how illogical it may seem to you—don’t be surprised when your case falls flat.

    Ironically, that’s the only logical outcome.

  • Pinpointing Expectations 

    In my work as a front-end developer, I’ve come to realize that expectations, and how you handle them, are one of the most integral parts of a project. Expectations are tricky things, especially because we don’t talk about them very much.

    Somehow, we always expect other people to just know what we’re thinking. Expectations have a tendency to shift and change, too. Sometimes they change mid-project because of new knowledge gained as you learn, research, and work. Other times an outside influence changes (say, a competitor comes out with a new feature or product), and the goals and expectations of your project shift along with it.

    Not talking about expectations causes a lot of headaches throughout a project. We aren’t mind readers, but clients and colleagues often expect us to be. Even when expectations aren’t articulated, there’s often frustration when you don’t meet them. This is why showing your work as often as possible and talking about it as you go can be a helpful way to make sure things are living up to expectations.

    So how do we handle this? We have to try as hard as we can to draw out the expectations at the beginning, learning what’s expected so that we can be prepared to meet those goals. We also have to check in throughout the project to see if things have changed.

    Recently, I was on a project that ran over by several months, dragging on longer than I and, I think, the client expected. I was getting a bit antsy. When would we wrap up? What was going on?

    As a freelancer, my schedule is important and things that throw it out of whack are hard on me. Sticking up for myself isn’t always easy, but the client’s schedule changed over the course of the project and it was my job to figure out how to make the project end successfully. I did the email thing, I asked all the questions, and frankly, I pushed a bit. After several emails, and some explanations on both sides, things were sorted out in a way that worked for everyone.

    This wasn’t a huge issue, but it could have grown larger if not acknowledged, talked about, and handled. Often it’s the small issues that can snowball into bigger ones down the road, so handling them early on saves everyone a lot of grief. Below, I go into more detail on how to get a handle on expectations early so issues either don’t come up, or they don’t blow up into something unmanageable.

    Managing expectations

    At the beginning of the project I ask for a detailed scope. The goal of this is to have everyone spell out what the end of the project looks like. When I’m done with my work, what will that work look like, what will the final project consist of? Ultimately, what is my deliverable?

    If the scope of a project is tricky to define, I ask a lot of questions to get us there:

    • What is the goal of the project?
    • What do you hope I’ll have done at the end?
    • How will we know it’s done?
    • How often will we meet to discuss the project when it’s in process?
    • Do you have a workflow you prefer for projects?
    • Are there milestones along the way (midpoints in the project), and what are they?
    • What is the design process and how does the development team fit into that?
    • How finalized do designs need to be before starting to work in code?
    • Do you iterate and design in code, or not?

    These are questions I ask of my clients, but they can also be useful discussion starters when working on new projects within teams.

    As a front-end developer, my final deliverable is often a template or page of a website, finished and ready for launch or integration. It could also be a report on ways to improve CSS for performance and maintainability, or a style guide and a cleaned-up codebase to show how the guide helped trim down the file sizes. Getting not only the final deliverable established, but also the process for getting there, helps everyone know not only what will be done, but how it will get done.

    Since I write code, I also make sure I know about the coding standards for the shared repo. I want to make sure I write, test, and do anything else the client expects, so that my code conforms to their standards when the project is over.

    When the expectations are unrealistic given the limits of code and timeline, I’m honest about limitations. We can do a lot with code, but we can’t do everything. Also, sometimes requests may be bad for accessibility or usability, so I’m not afraid to speak up and voice this to the team.

    If the project is longer than a week or two, I try very hard to send updates, making sure I’m communicating where we are in regards to the expectations outlined in the scope and contract. Often, a regular call or video chat will do. Should I start to get the feeling that things have changed (you know, that awkward email exchange or tense video call), then it’s my job to ask about it. To have successful projects we have to be willing to have the hard conversations. Sometimes, a quick email asking if everything is OK is enough, other times it takes another phone call or two to sort through things.

    I’m the first to admit it: some of this is hard. I sit at home in my office and worry at times. But whenever I’ve taken the bull by the horns and just asked what was going on, it’s always been worth it. Many times it proved to be something small, but other times it meant a course change for the project which saved time and effort on everyone’s part.

    To avoid small issues snowballing or larger issues cropping up, have a good plan at the beginning of every project for how to handle expectations. You need to first establish what they are by asking a lot of questions, even the obvious ones, and then make sure you communicate frequently along the way. Hopefully things won’t change too much, but you’ll be ready to deal with them when they do.