EW Resource

Newsfeeds

There's a huge range of newsfeeds on the Net bringing up-to-date information and content on a wide range of subjects.

Here are just a few relating to web development.



A List Apart: The Full Feed
  • Nishant Kothary on the Human Web: “Buy Him A Coffee” 

    My first job out of college was as a program manager. Program Manager is one of those job titles that sounds important because it implies that there exists a Program, and you have been anointed to Manage it. Who doesn’t want to be boss!

    As with all impressive-sounding things, program management job descriptions are littered with laughable bullets like:

    Must be proficient at influencing others without authority.

    Which may as well be written as:

    Life is.

    Or:

    Thing is Thing.

    Pretty much every freshman PM ignores that qualification, and interviewers rarely test for it. We take for granted that the ability to influence people is important (true), and that we are all acceptably good at it (false).

    For most of us, the first time our ability to influence people is truly tested is at our first job. And most of us fail that first test.

    When I first realized I was terrible at influencing people, I projected the problem outward and saw it as a product of the environment I worked in. “It’s not me, it’s them,” I’d tell my friends at work and my management chain. As I wrote in my first column, my boss would say to me, “It is what it is.” This would instantly make me want to either have at the world with an axe or drive my Outback straight up into the North Cascades, hike until I ran into a grizzly, give her cub a wet willy, and submit to the fateful paw of death.

    I also blamed my nature. If you are to believe the results of the informal quiz I took in Susan Cain’s Quiet: The Power of Introverts in a World That Can’t Stop Talking, my score of 18/20 suggests I am as introverted as they come. And while I come across as an extrovert now—behavior I’ve practiced over years—nothing about interacting with people feels natural to me. This is not to say that introverts (or I) dislike people. It’s more like what Dr. Seuss said about children, “In mass, [they] terrify me.”

    My first breakthrough came when a colleague at work saw me having a particularly difficult struggle to convince an individual from another team to expedite some work for mine, and suggested, “Buy him a coffee.” The kind of advice that feels like it fell out of a Dale Carnegie book into an inspirational poster of two penguins holding hands. PENGUINS DON’T EVEN HAVE HANDS. But I did it anyway because I was at my wit’s end.

    I met him at Starbucks, and picked up the tab for his latte. We grabbed some chairs and awkwardly, wordlessly stared at our coffees.

    Panicked at the mounting silence, I tried the first thing that came to mind. What I didn’t know then was that it’s a cornerstone technique of people who are good at influencing others: I asked him something about himself.

    “So, are you from Seattle?”

    “Indiana, actually.”

    “No way. I attended college in Indiana!”

    Soon enough, we realized we had far more in common than we’d expected, including cats that, judging by their attitudes, probably came from the same satanic litter. While I still wasn’t able to get him to commit to our team’s deadline, I did walk away with a commitment that he’d do his best to come close to it.

    More importantly, I’d inadvertently happened upon a whole new set of tools to help me achieve my goals. I didn’t realize it then, but I had just learned the first important thing about influencing people: it’s a skill—it can be learned, it can be practiced, and it can be perfected.

    I became aware of a deficit in my skillset, and eventually I started working on it proactively. It’s been a decade since that first coffee. While I’m still (and suspect, always will be) a work in progress, I have come a long way.

    You can’t learn how to influence people overnight, because (as is true for all sophisticated skills) there’s a knowledge component that’s required. It often differs from person to person, but it does take time and investment. Generally speaking, it involves filling gaps in your knowledge of humans: how we think, what motivates us, and as a result, how we behave. I keep a list of the books that helped me along the way, including Carnegie’s almost-century-old classic, How to Win Friends and Influence People. But as Carnegie himself wrote, “Knowledge isn’t power until it is applied.”

    What will ultimately decide whether you become someone who can influence others is your commitment to practice. Depending on your nature, it will either come easier to you, or be excruciatingly hard. But even if you’re an extrovert, it will take practice. There is no substitute for the field work.

    What I can promise you is that learning how to earn trust, be liked, and subsequently influence people will be a worthwhile investment not only for your career, but also for your life. And even if you don’t get all the way there—I am far from it—you’ll be ahead of most people for just having tried.

    So my advice to you is: instead of avoiding that curmudgeon at work, go buy them a coffee.

  • This week's sponsor: Craft 

    Time to look for a new CMS? Our sponsor Craft keeps the editing experience simple, flexible, and responsive.

  • Thinking Responsively: A Framework for Future Learning 

    Before the arrival of smartphones and tablets, many of us took a position of blissful ignorance. Believing we could tame the web’s inherent unpredictability, we prescribed requirements for access, prioritizing our own needs above those of users.

    As our prescriptions grew ever more detailed, responsive web design signaled a way out. Beyond offering a means of building device-agnostic layouts, RWD initiated a period of reappraisal; not since the adoption of web standards has our industry seen such radical realignment of thought and practice.

    In the five years since Ethan Marcotte’s article first graced these pages, thousands of websites have launched with responsive layouts at their core. During this time, we’ve experimented with new ways of working, and refined our design and development practice so that it’s more suited to a fluid, messy medium.

    As we emerge from this period of enlightenment, we need to consolidate our learning and consider how we build upon it.

    A responsive framework

    When we think of frameworks, we often imagine software libraries and other abstractions concerned with execution and code. But this type of framework distances us from the difficulties we face designing for the medium. Last year, when Ethan spoke about the need for a framework, he proposed one focused on our approach—a framework to help us model ongoing discussion and measure the quality and appropriateness of our work.

    I believe we can conceive this framework by first agreeing on a set of underlying design principles. You may be familiar with the concept. Organizations like GOV.UK and Google use them to define the characteristics of their products and even their organizations. Kate Rutter describes design principles as:

    …short, insightful phrases that act as guiding lights and support the development of great product experiences. Design principles enable you to be true to your users and true to your strategy over the long term. (emphasis mine)

    The long-term strategy of the web is to enable universal access to information and services. This noble goal is fundamental to the web’s continued relevance. Our design principles must operate in the service of this vision, addressing:

    • Our users: By building inclusive teams that listen to—and even work alongside—users, we can achieve wider reach.
    • Our medium: By making fewer assumptions about context and interface, focusing more on users’ tasks and goals, we can create more adaptable products.
    • Ourselves: By choosing tools that are approachable, simple to use, and open to change, we can elicit greater collaboration within teams.

    Reflecting diversity in our practice

    In surveying the landscape of web-enabled devices, attempting to categorize common characteristics can prove foolhardy. While this breadth and fluidity can be frustrating at times, device fragmentation is merely a reflection of human diversity and consumers exercising their right to choose.

    Until recently, empathy for consumers has largely been the domain of user experience designers and researchers. Yet while a badly designed interface can adversely affect a site’s usability, so can a poorly considered technology choice. We all have a responsibility to consider how our work may affect the resulting experience.

    Designing for everyone

    Universal design promotes the creation of products that are usable by anyone, regardless of age, ability, or status. While these ideas enjoy greater recognition among architects and product designers, they are just as relevant to our own practice.

    Consider OXO Good Grips kitchen utensils. In 1989, Sam Farber, inspired by his wife’s arthritis, redesigned the conventional vegetable peeler, replacing its metal handles with softer grips. Now anyone, regardless of strength or manual dexterity, could use this tool—and Farber’s consideration for aesthetics ensured broader appeal as well. His approach was applied to a range of products; Good Grips became an internationally recognized, award-winning brand, while Farber’s company, OXO, went on to generate significant sales.

    This work brought the concept of inclusive design into the mainstream. OXO remains an advocate of designing inherently accessible products, noting that:

    When all users’ needs are taken into consideration in the initial design process, the result is a product that can be used by the broadest spectrum of users. In the case of OXO, it means designing products for young and old, male and female, left- and right-handed and many with special needs.

    Many of the technologies and specifications we use already feature aspects of universal design. Beyond specifications like WAI-ARIA that increase the accessibility of dynamic interfaces, HTML has long included basic accessibility features: the alt attribute allows authors to add textual alternatives to images, while the object element allows fallback content to be provided if a particular media plug-in or codec isn’t available.
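
    As a rough illustration of those built-in fallbacks (the file names and link here are placeholders):

      <!-- alt provides a textual alternative to the image -->
      <img src="sales-chart.png" alt="Monthly sales, rising steadily from January to June">

      <!-- object falls back to its child content if the media can't be shown -->
      <object data="product-tour.mp4" type="video/mp4">
        <p>This video can't be played here. <a href="transcript.html">Read the transcript</a>.</p>
      </object>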

    Examples can also be found within the W3C and WHATWG. A key principle used in the design of HTML5 concerns itself with how authors should assess additions or changes to the specification. Called the priority of constituencies, it states that:

    In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

    We can use this prioritization when making choices on our own projects. While a client-side MVC framework might provide a degree of developer convenience, if it means users need to download a large JavaScript file before an application can be accessed, then we should look for an alternative approach.

    Bridging the gap

    When makers are attached to high-resolution displays and super-fast broadband connections, it can be difficult for them to visualize how users may experience the resulting product on a low-powered mobile device and an unreliable cellular network. The wider the gap between those making a product and those using it, the greater the likelihood that the former will make the wrong choice. We must prioritize getting closer to our users.

    User research and usability testing help us see how users interact with our products. Having different disciplines (developers, interface and content designers, product managers) participate can ensure this learning is widely shared. But we can always do more. Susan Robertson recently wrote about how spending a week answering support emails gave her new insights into how customers were using the application she was building:

    Rather than a faceless person typing away on a keyboard, users become people with names who want to use what you are helping to create.

    Having the entire team collectively responsible for the end experience means usability and accessibility considerations will remain key attributes of the final product—but what if that team is more inclusive, too? In her article “Universal Design IRL,” Sara Wachter-Boettcher notes that:

    [T]he best way to understand the audiences we design for is to know those audiences. And the best way to know people is to have them, with all their differences of perspective and background—and, yes, age and gender and race and language, too—right alongside us.

    Perhaps it’s no coincidence that as we learn more about the diversity of our customers, we’ve started to acknowledge the lack of diversity within our own industry. By striving to reflect the real world, we can build more empathetic and effective teams, and in turn, better products.

    Building on adaptable foundations

    By empathizing with users, we can make smarter choices. Yet the resulting decisions will need to travel across unreliable networks before being consumed by different devices with unknown characteristics. It’s hard to make decisions if you’re unable to predict the outcomes.

    By looking at websites through different lenses, we can uncover areas of constraint that offer the greatest degree of reach and adaptability. If an interface works on a mobile device, it’ll work on a larger display. If data can be interrogated when there’s no network, an unreliable connection will be of little hindrance. If content forms the basis of our design, information will be available regardless of styling. Optimizations based on more uncertain assumptions can be layered on afterwards, safe in the knowledge that we’ve provided fallbacks.
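
    A hedged CSS sketch of that layering (selectors and values are illustrative): a baseline that works everywhere, with the optimization applied only where the browser understands it.

      /* Baseline that works everywhere */
      .media-list > li { display: inline-block; width: 30%; }

      /* Optimization layered on top, with the fallback still in place */
      @supports (display: flex) {
        .media-list { display: flex; flex-wrap: wrap; }
        .media-list > li { flex: 1 1 30%; width: auto; }
      }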

    Interface first

    In 2009, Luke Wroblewski asked us to consider how interfaces could take advantage of mobile device capabilities before thinking about their manifestation in desktop browsers. Mobile-first thinking encourages us to focus: phone displays leave little room for extraneous interface or content, so we need to know what matters most. By asking questions about which parts of an interface are critical and which are not, we can decide whether those non-critical parts are loaded conditionally or lazily—or perhaps not at all.
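
    As a small, hedged sketch of that decision (the breakpoint and script URL are invented for illustration), a non-critical widget might only be requested when the viewport suggests there is room for it:

      // Load a non-critical enhancement only on wider viewports.
      if (window.matchMedia('(min-width: 60em)').matches) {
        var script = document.createElement('script');
        script.src = '/js/related-articles.js'; // placeholder path
        script.async = true;
        document.head.appendChild(script);
      }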

    Network first

    In 2013, in considering the realities of network reliability, Alex Feyerke proposed an offline-first approach. Rather than treat offline access as an edge case, we can create seamless experiences that work regardless of connectivity—by preemptively downloading assets and synchronizing data when online, and using aggressive caching and client-side computation when not. Others have suggested starting with URLs or an API-first approach, using these lenses to think about where content lives within a system. Each approach embraces the underlying fabric of the web to help us build more robust and resilient products.
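
    A minimal service worker sketch of that offline-first idea (the cache name and asset URLs are placeholders), caching core assets up front and answering requests from the cache before touching the network:

      // sw.js: a minimal offline-first sketch
      var CACHE = 'app-shell-v1';

      self.addEventListener('install', function (event) {
        // Preemptively download core assets while online.
        event.waitUntil(
          caches.open(CACHE).then(function (cache) {
            return cache.addAll(['/', '/css/site.css', '/js/app.js']);
          })
        );
      });

      self.addEventListener('fetch', function (event) {
        // Answer from the cache first, falling back to the network.
        event.respondWith(
          caches.match(event.request).then(function (cached) {
            return cached || fetch(event.request);
          })
        );
      });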

    Content first

    In 2011, Mark Boulton signaled a move away from a canvas-in approach to one where layouts are designed from the content out. By defining visual relationships based on page elements, and using ratios instead of fixed values, we can imbue a page with connectedness, independent of its dimensions.
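
    A hedged CSS sketch of that content-out thinking (the numbers are illustrative): relationships expressed as ratios of the type size rather than as fixed pixel values.

      /* Relationships expressed as ratios, not fixed pixel values */
      .article { max-width: 36em; }                 /* measure scales with the type size */
      .article h1 { font-size: 1.618em; }           /* heading sized relative to body copy */
      .article img { width: 100%; height: auto; }   /* media adapts to its container */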

    Recognizing that having content available before we design a page can be an unreasonable request, Mark later suggested we consider structure first, content always. This fits in well with the Core Model, an idea first introduced by Are Halland at the IA Summit in 2007. By asking questions about a site’s core content—what task it needs to perform, what information it should convey—we can help clients think more critically about their strategic objectives, business goals, and user needs. Ida Aalen recently noted:

    The core model is first and foremost a thinking tool. It helps the content strategist identify the most important pages on the site. It helps the UX designer identify which modules she needs on a page. It helps the graphic designer know which are the most important elements to emphasize in the design. It helps include clients or stakeholders who are less web-savvy in your project strategy. It helps the copywriters and editors leave silo thinking behind and create better content.

    Sharing the toolbox

    Having empathized with our users and navigated an unpredictable medium, we need to ensure that our decisions and discoveries are shared across teams.

    As responsive design becomes embedded within organizations, these teams are increasingly collaborative and cross-functional. Previously well-defined roles are beginning to merge, the boundaries between them blurring. Job titles and career opportunities are starting to reflect this change too: see “full-stack developer” or “product designer.” Tools that were once the preserve of specific disciplines are being borrowed, shared, and repurposed; prototyping an animation may require writing JavaScript, while building a modular component library may require understanding visual language and design theories.

    If the tools used are too opaque, and processes difficult to adopt, then opportunities for collaboration will diminish. Make a system too complex, and onboarding new members of a team will become difficult and time-consuming. We need to constantly make sure our work is accessible to others.

    Considerate code

    The growing use of front-end style guides is one example of a maturing relationship between disciplines. Rather than producing static, bespoke layouts, designers are employing more systematic design approaches. Front-end developers are taking these and building pattern libraries and modular components, a form of delivery that fits in better with backend development approaches.

    Component-driven development has seen a succession of tools introduced to meet this need. Tools like Less and Sass allow us to modularize, concatenate, and minify stylesheets, yet they can also introduce procedural functionality into CSS, a language deliberately designed to be declarative and easier to reason with. However, if consideration is given to other members of the team, this new functionality can be used to extend CSS’s existing declarative feature set. By using mixins and functions, we can embed a design language within code, and propagate naming conventions that are understood by the whole team.
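
    For instance, a hedged Sass sketch of that idea (the names and values are invented for illustration), where the code carries the team’s design vocabulary rather than magic numbers:

      // Design-language names rather than raw values
      $color-brand: #e35205;
      $space-unit: 1.5rem;

      @mixin heading-primary {
        font-family: "Source Sans Pro", sans-serif;
        font-size: 2rem;
        line-height: 1.25;
      }

      .page-header__title {
        @include heading-primary;     // the whole team shares this vocabulary
        color: $color-brand;
        margin-bottom: $space-unit;
      }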

    Common conventions

    Quite often, problems of process are not a limitation of technology, but an unwillingness to apply critical thought. Trying to solve technology problems by employing more technology ignores the fact that establishing conventions can be just as helpful, and easier for others to adopt.

    The BEM naming methodology helps CSS styles remain scoped, encapsulated, and easier to maintain, yet this approach has no dependency on a particular technology; it is purely a set of documented conventions. Had we foreseen the need, we could have been using BEM in 2005. A similar convention is that of CSS namespaces, as advocated by Harry Roberts. Using single-letter coded prefixes means everyone working on a project can understand the purpose of different classes, and know how they should be used.
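
    A hedged sketch of the two conventions combined (class names invented for illustration): BEM’s block__element--modifier structure plus Roberts-style prefixes such as c- for components and u- for utilities.

      /* Namespace prefix + BEM: c- marks a component, u- a utility */
      .c-card { padding: 1em; }
      .c-card__title { font-weight: bold; }          /* element inside the block */
      .c-card--featured { border: 2px solid #333; }  /* modifier of the block */

      .u-hidden { display: none !important; }        /* single-purpose utility */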

    A common complaint for those wishing to use software like preprocessors and task runners is that they often require knowledge of the command line. Tools tease new recruits with one-line install instructions, but the reality often involves much hair-pulling and shaving of yaks. To counter this, GitHub created Boxen, a tool that means anyone in their company can run a local instance of GitHub.com on their own computer by typing a single command. GitHub, and other companies like Bocoup and the Financial Times, also advocate using standard commands for installing, testing, and running new projects.
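
    As a hedged sketch of that convention (the script names are illustrative), every project exposes the same entry points no matter what runs underneath:

      script/setup     # install dependencies and bootstrap the project
      script/test      # run the test suite
      script/server    # start a local instance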

    Responsive principles, responsive to change

    Since responsive web design invited us to create interfaces that better meet the needs of users, it’s unsurprising that related discussion has increasingly focused on having greater consideration for users, the medium, and each other.

    If we want to build a web that is truly universal, then we must embrace its unpredictable nature. We must listen more closely to the full spectrum of our audience. We must see opportunities for optimization where we previously saw barriers to entry. And we must consider our fellow makers in this process by building tools that will help us navigate these challenges together.

    These principles should shape our approach to responsive design—and they, in turn, may need to adapt as well. This framework can guide us, but it, too, should be open to change as we continue to build, experiment, and learn.

  • Multimodal Perception: When Multitasking Works 

    Word on the street is that multitasking is impossible. The negative press may have started with HCI pioneer Clifford Nass, who published studies showing that people who identify as multitaskers are worse at context switching, worse at filtering out extraneous information, worse at remembering things over the short term, and have worse emotional development than unitaskers.

    With so much critical attention given to multitasking, it’s easy to forget that there are things our brains can do simultaneously. We’re quite good at multimodal communication: communication that engages multiple senses, such as visual-tactile or audio-visual. Understanding how we process mixed input can influence the design of multimedia presentations, tutorials, and games.

    When I began researching multimodal communication, I discovered a field brimming with theories. The discipline is still too new for much standardization to have evolved, but many studies of multimodality begin with Wickens’s multiple resource theory (MRT). And it’s that theory that will serve as a launch point for bringing multimodality into our work.

    Wickens’s multiple resource theory

    Luckily, Wickens saved us some heavy lifting by writing a paper summarizing the decades of research (PDF) he spent developing MRT. Its philosophical roots, he explains, are in the 1940s through 1960s, when psychologists theorized that time is a bottleneck; according to this view, people can’t process two things simultaneously. But, Wickens explains, such theories don’t hold up when considering “mindless” tasks, like walking or humming, that occupy all of a person’s time but nevertheless leave the person free to think about other things.

    Several works from the late 1960s and early 1970s redefine the bottleneck theory, proposing that what is limited is, in fact, cognitive processing power. Following this train of thought, humans are like computers with a CPU that can only deal with a finite amount of information at once. This is the “resource” part of MRT: the limitation of cognitive resources to deal with incoming streams of information. (MRT thus gives credence to the “mobile first” approach; it’s often best to present only key information up front because of people’s limited processing power.)

    The “multiple” part of the theory deals with how processing is shared between somewhat separate cognitive resources. I say somewhat separate because even for tasks using seemingly separate resources, there is still a cost of executive control over the concurrent tasks. This is again similar to computer multiprocessing, where running a program on two processors is not twice as efficient as running it on one, because some processing capacity must be allocated to dividing the work and combining the results.

    To date, Wickens and others have examined four cognitive resource divisions.

    Processing stage

    Perception and cognition share a structure separate from the structure used for responding. Someone can listen while formulating a response, but cannot listen very well while thinking very hard. Thus, time-based presentations need ample pauses to let listeners process the message. Video players should have prominent pause buttons; content should be structured to include breaks after key parts of a message.

    Visual channel

    Focal and ambient visual signals do not drain the same pool of cognitive resources. This difference may result from ambient vision seemingly requiring no processing at all. Timed puzzle games such as Tetris use flashing in peripheral vision to let people know that their previous action was successful—the row was cleared!—even while they’re focusing on the next piece falling.

    Processing code

    Spatial and verbal processing codes use resources based in separate hemispheres of the brain. This may account for the popularity of grid-based word games, which use both pools of resources simultaneously.

    Perceptual modes

    It’s easier to process two simultaneous streams of information if they are presented in two different modes—one visual and one auditory, for example. Wickens notes that this relative ease may result from the difficulties of scanning (between two visual stimuli) and masking (of one auditory stimulus by another) rather than from us actually having separate mental structures. Tower defense games are faster paced (and presumably more engaging) when accompanied by an audio component; players can look forward to the next wave of attackers while listening for warning signals near their tower base. Perceptual modes is the cognitive division most applicable to designing multimedia, so it’s the one we’ll look at further.

    A million and one other theories

    Now that we’ve covered Wickens’s multiple resource theory, let’s look at some of the other theories vying for dominance to explain how people understand multimodal information.

    The modality effect (PDF) focuses on the mode (visual, auditory, or tactile) of incoming information and states that we process incoming information in different modes using separate sensory systems. Information is not only perceived in different modes, but is also stored separately; the contiguity effect states that the simultaneous presentation of information in multiple modes supports learning by helping to construct connections between the modes’ different storage areas. An educational technology video, for instance, will be more effective if it includes an audio track to reinforce the visual information.

    This effect corresponds with the integration step of Richard Mayer’s generative theory of multimedia learning (PDF), which states that we learn by selecting relevant information, organizing it, and then integrating it. Mayer’s theory in turn depends upon other theories. (If you’re hungry for more background, you can explore Baddeley’s theory of working memory, Sweller’s cognitive load theory, Paivio’s dual-coding theory, and Penney’s separate stream hypothesis.) Dizzy yet? I remember saying something about how this field has too many theories…

    What all these theories point to is that people generally understand better, remember better, and suffer less cognitive strain if information is presented in multiple perceptual modes simultaneously. The theories provide academic support for incorporating video into your content, for example, rather than providing only text or text with supporting images (says, ahem, the guy writing only text).

    Visual-tactile vs. visual-auditory communication

    Theories are all well and good, but application is even better. You may well be wondering how to put the research on multimodal communication to use. The key is to recognize that certain combinations of modes are better suited to some tasks than to others.

    Visual-tactile

    Use visual-tactile presentation to support quick responses. It will:

    • reduce reaction time
    • increase performance (measured by completion time)
    • capture attention effectively (for an alert or notification)
    • support physical navigation (by vibrating more when you near a target, for example)

    Visual-auditory

    Use visual-auditory presentation to prevent errors and support communication. “Wait, visual-auditory?” you may be thinking. “I don’t want to annoy my users with sound!” It’s worth noting, though, that one of the studies (PDF) found that as long as sounds are useful, they are not perceived as annoying. Visual-auditory presentation will:

    Mode combination

    You might also select a combination of modes depending on how busy your users are:

    • Visual-tactile presentation is more effective with a high workload or when multitasking.
    • Visual-auditory presentation is more effective with a single task and with a normal workload.

    Multimodal tension

    A multimodal tug-of-war goes on between the split-attention effect and the redundancy effect. Understanding these effects can help us walk the line between baffling novices with split attention and boring experts with redundancy:

    • The split-attention effect states that sequential presentation in multiple modes is bad for memory, while simultaneous presentation is good. Simultaneity helps memorization because it is necessary to encode information in two modes simultaneously in order to store cross-references between the two in memory.
    • In contrast, presenting redundant information through multiple channels simultaneously can hinder learning by increasing cognitive load without increasing the amount of information presented. Ever try reading a long quote on a slide while a presenter reads the same thing aloud? The two streams of information undermine each other because of the redundancy effect.

    Which effect occurs is partially determined by whether users are novices or experts (PDF). Information that is necessary to a novice (suggesting that it should be presented simultaneously to avoid a split-attention effect) could appear redundant to an expert (suggesting that it should be removed to avoid a redundancy effect).

    Additionally, modality effects appear only when visual presentation time is limited. When people are allowed to set their own pace (examining visual information after the auditory presentation ends), the differences found in studies disappear. It is thus particularly important to add a secondary modality to your presentation if your users are likely to be in a hurry.

    Go forth, multiprocessing human, and prosper

    So the next time you hear someone talking about how multitasking is impossible, pause. Consider how multitasking is defined. Consider how multiprocessing may be defined separately. And recognize that sometimes we can make something simpler to learn, understand, or notice by making it more complex to perceive. Sometimes the key to simplifying presentation isn’t to remove information—it’s to add more.

    And occasionally, some things are better done one after another. The time has come for you to move on to the next processing stage. Now that you’ve finished reading this article, you have the mental resources to think about it.

  • On Our Radar: Pretty Advanced Machine Learning 

    Forgive me for stating the obvious, but some really fascinating tech is coming out of newsrooms right now. This month, Shan Wang has already written two great pieces on different ways the New York Times is integrating Slack into their newsroom—introducing us to Blossom, the bot that helps editors decide which stories to promote on social media, and showing how the team used Slack as a tool for live-blogging the first Republican presidential debates.

    A photo spread of the current and former Knight-Mozilla Fellows.
    The 26 current and former Knight-Mozilla Fellows. Image credit: OpenNews, licensed under CC 3.0.

    If you read that and wished it could be your day job, now’s the time to apply for the Knight-Mozilla OpenNews Fellowship. The deadline is this Friday at midnight EDT. —Marie Connelly, blog editor

    Your weekend reading

    1. Sociologist Tressie McMillan Cottom caught my attention with her piece on the many meanings of faves—and why Twitter changing the icon from a star to a heart is more than a graphic tweak (tl;dr: hearts make people feel funny). I’ve been seeing this more and more recently, from a period-tracking app that puts hearts onto intercourse days to Twitter adding a rainbow-heart hashflag if you tweeted #pride or #lovewins after the U.S. marriage equality ruling. If you ask me, these little icons are a big deal: as an industry, we’re overstepping the emotional boundaries of users, and forgetting who’s in charge. I should get to decide how I feel about something, not my digital products. —Sara Wachter-Boettcher, editor-in-chief
    2. I have a confession to make: I’m not a big preprocessor fan. (BTW, have you heard about my kid-free lawn?) They absolutely have their place, but my projects mainly involve small teams working with non-technical clients, so simplicity is the name of my game. Chris Coyier’s recent post, “The Trouble With Preprocessing Based on Future Specs,” does a good job of spelling out some of the more esoteric concerns with the approach some preprocessors take, and the problem with trying to predict the future of web specifications. —Tim Murtaugh, technical director
    3. I love Lyza Gardner’s recent talk on the importance of generalists—especially when she says, “Generalists are the people you hand off work to, knowing they don’t know how to do it, and knowing they’re going to get it done anyway.” Anyone recognize themselves in that? —Rose Weisburd, columns editor
    4. We’re so used to hearing everyone sing the praises of Apple as an innovator in the tech and design fields. So when I came across this article about Don Norman slamming them for sacrificing usability in favor of appearance, I was immediately interested. Don makes some really good points, shedding a different light on the computer giant’s ease-of-use practices. —Erin Lynch, production manager
    5. Like many, I learned to code on the web by building GeoCities sites. Cameron’s World is a web collage built from pieces of archived GeoCities pages. Visiting Cameron’s World feels like jumping in a time machine to an era when the web was less polished—but much more personal. —Yesenia Perez-Cruz, acquisitions scout

    Your must-see hashtag

    #WOCinTech

    Overheard in ALA Slack

    “It is nice to hear that someone else also feels compelled to recoil from warmth and goodness, as a Dracula might recoil from garlics.”

    Your Thursday gif

    A gif of a cat attacking an iPad and falling off a table.
  • This week's sponsor: Proposify 

    Save time writing and designing your next proposal with help from our sponsor, Proposify.

  • Building to Learn 

    I’ve spent my fair share of the last 10 years in the web world learning new things. We all have; it’s the one constant in this industry: things will change and you will need to change with them.

    And for people just starting out on the web, building and learning is pretty much continual. This summer, as a friend learns to build things on the web for the first time, I’ve thought more about how I learned, how I got started in this webbish world in which I find myself day in and day out.

    If you’re just getting started on the web, the biggest piece of advice I have for you is: build something that interests you. That may not sound that radical, but it’s truly one of the best ways I’ve found to learn something new, whether that’s here on the web or elsewhere in life. Many online code courses have you building something they’ve chosen, but if you don’t care about building their to-do list or whatever, you aren’t going to be as excited about the project. If you aren’t excited about it, you aren’t going to learn as much.

    When I was first learning HTML, CSS, and ColdFusion (yes, I learned ColdFusion as my first dynamic language to interact with a database), I made a small site about art. I went back to school to learn about the web because it was more marketable than the art degree I got in college, but art was something I was still very interested in and familiar with—plus, I had resources sitting around in my personal library for data and information. I remembered a small book on artwork through history I had on my shelf and used it as the basis for a small site. It was perfect because it gave me the database fields easily (such as the title, artist, date, and genre). The structure of the book made some of those decisions for me, so I could focus on the code. It was thrilling when I was done, when I could click around a site and pull up various art works. It wasn’t perfect, but I learned a lot about how to pull information from a database that’s still relevant in my work today.

    In addition to creating something that’s interesting to you, many folks suggest making something you would actually use in your day-to-day life. One thing I’ve seen is a meal planning application that someone made for themself and was able to use with their partner regularly. Maybe you’re interested in history and want to link publicly available photos to places in your city. What you make isn’t as important as the fact that you’re excited to make it. You want the finished product, so hopefully you’ll persevere through the tough parts.

    Of course, having to do something is also a great motivator for learning how to get it done. When I was on a project and we had to get a password meter working with JavaScript, a colleague coached me through it and I learned. When I was passed a new design to lay out and it seemed like flexbox would be a great solution, I learned flexbox as I got the site working in the browser.

    I’ve found when I start doing an online tutorial, if I’m not interested in what they’re having me build, I don’t learn as much. It’s not that I don’t understand what’s going on, but the concepts don’t stick with me. When I need to figure something out on the job or I’m building something that I want to use in my own life, though, I learn so much more. And when I need to use those same coding concepts that I learned to make that password meter in JavaScript, they come back to me more easily.

    I’ve already written about how learning new things in our industry can be overwhelming with all the options out there and all the change that is constantly happening. But when you slow down and focus on something you find useful or something you need to know how to do, you flip the equation. Instead of trying to get through a tutorial or lesson, you’re making something you want, which keeps you motivated as you figure out how to get there. When you need to remember those concepts down the road, it’s so much easier to retrace how you built a project you’re excited about and proud of instead of a cookie-cutter tutorial you can’t quite remember.

  • Rachel Andrew on the Business of Web Dev: Creating Process to Free up Time for Creativity 

    In The E-Myth Revisited, Michael Gerber encourages entrepreneurs to develop business systems, going as far as to suggest we should build our businesses as if we intended to create franchise operations. This idea of process and systems is a popular one—taken to an extreme by people such as Tim Ferriss in The 4-Hour Workweek. Many people working in creative fields rightly push back against this complete systemization of business activities. The act of creating something new can’t be written out as a series of steps. The individual creator, the artist, is important. That role can’t be filled by just anyone who is able to follow the instructions.

    On a recent Startups for the Rest of Us podcast, the discussion was about people versus process. This suggested that you can either value people and their creativity, or you can develop a very process-driven business, where you can slot in any person to step through the tasks. I think there is a middle ground. Documenting procedures for tasks that benefit from a structured approach can leave more flexibility when tackling creative work.

    What do we mean by process?

    When we turn a task into a process or system, we identify all of the steps we go through to perform that task and make that a checklist. My process for reconciling transactions in Xero is:

    1. Look at the incoming bank transaction.
    2. Find the receipt or invoice for this transaction.
    3. Add the details from the receipt into Xero.
    4. Check that, if tax has been applied, I have entered the correct rate, and that it is detailed on the receipt.
    5. Upload a PDF version of the receipt to Xero and also store it in Dropbox named with a reference for the account name it came out of.
    6. Mark the transaction as reconciled.

    This is not creative work; it’s a rote task. Following this practice around bank reconciliations has benefits:

    • I know that any reconciled transaction will have an associated invoice or receipt. If I need to produce that proof, I know where they are stored, as the process prevents mistakes caused by me forgetting what to do here.
    • It removes friction. I don’t need to expend any energy remembering what to do.
    • It makes it easy for me to outsource this task. If I’m bringing on board a new bookkeeper, I can explain this process and be able to see that it is being followed.

    By creating a process, I can get this task out of my head. Reconciling accounts isn’t my favorite job, but when I need to do it I can grab a coffee and step through each item on the checklist without spending any time thinking it through. I add these boring but important tasks to a special Context in OmniFocus that contains tasks to work through on those days when I’m having a hard time focusing for some reason.

    This is a fairly simple example, but the same method can be applied to more complex tasks in your business. For example, which steps need to be followed when you take on a project with a brand new client? You might include:

    • Creating a file (from a template) for their basic information, such as who to invoice, agreed terms and so on.
    • Sending them a contract to sign.
    • Saving the signed contract.
    • Sending an initial invoice.
    • Ensuring that payment terms have been agreed.
    • Setting them up on a collaboration tool, such as Basecamp or Slack.
    • Safely storing any login details you need for their hosting.

    Again, this is not creative work. How you go about it might differ from client to client. You might go through this list in person with one client and via email with another. Either way, the checklist ensures that you have agreed payment terms. It ensures you have a contract, and that you have received any assets they need to provide you up front (which could well save you time when you need something and the client is out of the office).

    Checklists and process can also help make jobs you find difficult far easier. Perhaps you dislike talking about money with new clients. Sending that email containing the estimate for a job can become an all-day procrastination affair! The checklist can take your mind off the potential result of the interaction and help you focus on performing the steps to get to that point.

    Leaving your brain free for creativity

    The power of process is that it can free up time and energy to do things that can never be reduced to a checklist. To-do lists stop your brain endlessly having to remember what you need to do today, and process documentation can work in the same way. You no longer need to remember what must happen to complete a certain task. Though often seen as a way to outsource parts of your business, putting process in place can also benefit the one- or two-person business.

    If you do go on to expand your business, having good processes can make it far easier to bring in permanent or temporary help. Those tried and tested checklists can be given to someone who can perform the rote tasks, leaving you to do the more interesting creative work.

    Most importantly, by not expending effort on the mundane you can leave more time free for the work you love to do—the work that can only be done by you. You can then feel free to put aside all thoughts of checklists and to-do lists and work in exactly the way you know enables your best accomplishments.

  • This week's sponsor: Bushel 

    Managing MacBooks and iPhones for your team? Our sponsor Bushel is here to help, so you can focus on work.

  • The Language of Modular Design 

    As many of us move away from designing pages toward designing systems, one concept keeps cropping up: modularity. We often hear about the benefits of a modular approach; modules are scalable, replaceable, reusable, easy to test, quick to put together—“They’re just like LEGO!”

    Modularity might appear to be a simple concept at first, but making it work for your team demands significant effort and commitment.

    The biggest challenges around modularity are all the decisions that need to be reached: when to reuse a module and when to design a new one, how to make modules distinct enough, how to combine them, how to avoid duplications with the modules other designers and teams create, and so on. When modularizing an existing design or building a new one, it’s not always clear where to begin.

    Start with language

    Language is fundamental to collaboration. In her book How to Make Sense of Any Mess, Abby Covert says that the biggest obstacle teams face is the lack of a shared language. To help establish that shared language, she suggests that we discuss, vet, and document our ontological decisions in the form of “controlled vocabularies.”

    In short, we should start with language, not interfaces.

    For about a year now, our team at FutureLearn, an open education platform, has been experimenting with a modular approach. I’d like to share a few ways we have tried to hone a shared language to help our team transition into modular design.

    Build a pattern library as a team

    One of our first experiments with modularity was an attempt to redesign our homepage. A visual designer created modular slices, and we then held a workshop where we tried to organize the modules into comps. That’s what we thought (perhaps naively) a “modular design process” would look like.

    Photograph of the team at FutureLearn organizing modules into comps.
    One of our first experiments with modularity.

    We ended up with three designs that eventually became fully functioning prototypes. But even though it was a useful exercise, the result we came up with wasn’t truly modular:

    • Modules weren’t clearly defined.
    • They didn’t have clear functions; the differences between them were often merely aesthetic.
    • We didn’t standardize and name them.
    • We didn’t put a lot of thought into how they would be reused.

    Ultimately, we decided not to use the resulting designs. These experiments were useful in propelling us into a modular mindset, but the defining step toward thinking modularly was going through the process of building a pattern library as a team, which took several months (and is still in progress).

    The atomic design methodology pioneered by Brad Frost served as our foundation. This is when we started looking closely at the UI, taking the interface apart, conducting inventories, and defining the core elements and patterns that we used to build new pages.

    Once we had a library, we were better prepared to think about design in terms of distinct reusable components. Until then—even after all of our experiments—we were still thinking in pages.

    Name things collaboratively, based on their high-level function

    Once you lay the foundation, it’s important to build on it by evolving the language as a team. An important part of that is naming the things you create.

    Imagine you have a simple component whose function is to persuade people to take a specific online course. What would you call it?

    Screenshot of a module promoting an online course in cyber security.
    UI component promoting an online course on FutureLearn.

    The name depends on the component’s function and how and where it appears in the interface. Some people, for example, might call it “image header” or “course promo.”

    James Britton, a well-known British educator, explains in Language and Learning that by conferring names on objects, we engage in a “process of bringing [them] into existence,” just like children who use language to “call into existence, to draw out of nothingness,” the world around them. Similarly, if an object in the interface doesn’t have a name—a name that makes sense to your team, and is known and used by people on your team—then it doesn’t exist as a concrete, actionable module to work with.

    Once you name an object, you shape its future. For example, if you give it a presentational name, its future will be limited, because it will be confined by its style. A functional name might work better. But functional names require more thought and are harder to arrive at, because function can be relative. For example, we almost called the component above “course poster” because it had an image of the course in the background and its function was to promote the course. It wasn’t a bad name (it was quite functional, in fact), but it was also limiting.

    Around the same time, for another project, a different designer introduced a component that looked (apart from minor variations in layout and typography) quite similar to our “course poster.”

    Screenshot of a module inviting learners to take part in a discussion on brain activity.
    UI component inviting learners to take part in a discussion.

    The function of the new component was to invite people to take part in a discussion. So the first thought that came to mind was to call it “discussion.” No one made a connection with “course poster” at first, because its name limited it to one specific function—promoting courses—and the function of “discussion” had little to do with it.

    If we had given those components the names we initially thought of (“course poster” and “discussion”), we would have ended up with two named modules that were almost identical but non-reusable. Such oversights can lead to duplications and inconsistencies—which undermine modularity.

    Although their functions may appear different at first, if you look at multiple uses of these components in context, or even imagine potential use cases, it’s easier to see that they both do analogous things: they serve as attention-seeking slices. They command the core calls to action on those pages. In other words: their high-level function is to focus users’ attention on the most important action.

    Screenshots of multiple billboard components in use.
    A billboard component in use.

    In the end, we created a single component called a “billboard.” Billboards are not restricted by their position on the page, or by their content, or by their appearance. They can appear either with an image as the background or as part of the content. What matters is the high-level function, and that this high-level function is understood in the same way by different people on the team.

    Screenshot of a billboard component with an image of a hamburger inserted between the headline and the call to action.
    Example of a billboard component with an image as part of the content.

    In the process of naming an element, you work out the function as a group and reach an agreement. It’s not so much about giving something a great name (although, of course, that’s an ideal to aspire to), but agreeing on the name. That determines how the element will be used and helps to ensure that it’s used consistently by everyone.

    Make design language part of everyday culture

    Naming things this way may take longer, at least initially, because it doesn’t yet feel habitual. It requires additional effort and commitment from the whole team to make the process more familiar.

    One way to make conversations about language happen is to create a physical space for them in the office.

    A photograph of two walls papered in printouts of modules and naming discussions.
    A space in our office where language conversations often take place.

    High-level functions are easier to define if you have the whole UI printed out and intelligible at a glance. It’s also a lot easier to spot any duplications and inconsistencies that way.

    Slack or other chat clients are also a viable way to have these discussions. For example, it can help to post new elements you’ve just introduced, or existing ones that you suspect are inconsistent or potential duplicates, and then try to work out their function and find a suitable name. When thinking of names, it helps to have a common point of reference; our team often borrows terms from other industries, such as publishing or architecture, to give us ideas.

    Screenshot from a Slack discussion about potential names, such as 'bracket' or 'triglyph,' for a module with three content chunks.
    A typical naming discussion on Slack.

    Keeping design language alive by making it part of our day-to-day conversations, whether in-person or remote, plays a key role in maintaining modularity within our team.

    Define CSS architecture at the design stage

    Needless to say, reaching consensus can be difficult. For example, we may disagree on whether we should reuse an existing module, customize it for a specific context, or create a new component.

    Several articles have been written about styling UI components based on context. But Harry Roberts has suggested that “having to change the cosmetics of a component in a certain context is a Design Smell”—a sign that your design is failing. How do you prevent this from happening?

    What helps us is trying to standardize elements at the design stage, before building them—in other words, starting with design language. This means that developers need to understand why things are designed in a certain way, and designers need to know how the modules are built so that they can make tweaks without having to create a different version of the module.

    Before writing the CSS, it’s useful for designers and developers to understand the purpose of every element they create. They might start by asking lots of questions: “Will this module always be full width? Why? Will it always include those buttons? Is the typography likely to change? Is the image background critical to the design? Are the horizontal rules part of the molecule?”

    Answering these questions helps to ensure that components follow through on design intent. For example, you might choose to specify some styles at the atomic level, instead of at the organism level, to make it easier to change those properties without changing the module itself.
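
    A hedged CSS sketch of that split (selectors invented for illustration): properties likely to vary live on the atom, so the organism arranges elements without restyling them.

      /* Atom: the button owns the properties that are likely to change */
      .button { background: #0b7261; border-radius: 3px; }
      .button--secondary { background: #666; }

      /* Organism: the signup banner lays out atoms, but doesn't override their styles */
      .signup-banner { display: flex; justify-content: space-between; }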

    Involve users in the design process

    Another critical aspect in establishing a shared understanding is involving people from different disciplines, as well as users, in the design process from the outset. When brainstorming and sketching together, we can’t help but talk about the design elements, so we inevitably make ontological decisions that help to strengthen and evolve the design language.

    Getting to the testing stage early is important, even if modules are presented as simple paper cards. Testing with ideas on cards is very different from our usual process, where we have a list of tasks and scenarios that we walk users through. Here, participants can pick up, move around, discuss, and scribble on the cards, actively becoming part of the design process. This gives us a chance to test our language choices and make sure that the functions we’ve defined make sense to our users.

    Three photographs of users reacting to large paper printouts of modules.
    User testing and participatory design with learners.

    Takeaways

    A well-established language foundation is a powerful tool that allows teams to synthesize their efforts around implementing modular design. But the way a shared design language evolves is a piecemeal, gradual, and organic process. Every person on the team plays a role in making it more coherent. Going through the process of building a pattern library as a team is an effective way to establish a language foundation. Using a solid methodology, like atomic design, can speed up the process.

    Naming things together is a useful habit for your team to develop, because in the process of trying to give something a name that makes sense, you work out its function and, most importantly, reach consensus. The agreed-upon name determines how the element will be built and encourages consistent usage across the team. As Abby Covert writes, “If you don’t get agreement up front, prepare for more work later.”

    Make an effort to refer to the elements by the name you agreed on—no matter how strange this might sound in everyday conversations. It takes more effort initially to call something a “whisper box” (yes, we have an element called “whisper box”) rather than “that thing with the lines and an icon in the middle.” But until you start referring to an element by its proper name, it doesn’t exist in your modular system as a solid, actionable block. Every time you use the name you agreed on, you strengthen the element you call on and evolve your design language.

    Put your language to the test outside of your team by using it throughout the company, with other teams, and with users. It’s always interesting to see what sticks—it’s a real kick when someone outside the product team starts using the name, too.

    Finally, remember that no language (aside from a few exceptions) exists in isolation. By evolving and strengthening your design language, you have an opportunity to contribute to the larger language of the web, and to help make it more consistent and coherent for everyone.

  • Sharing Our Work: Testing and Feedback in Design 

    When I was a younger, less experienced designer, I was uncomfortable showing work that wasn’t “done.” I thought that design was something I should show only in its glorious, final state.

    Many designers and design processes suffer from the same isolation problem: we don’t show our work to our users until the very end—when they’re stuck with it. We treat research as a box to check off, rather than an asset to use as a design evolves. We rely on personal opinions to make decisions instead of validating them with the people using the product.

    However, the more we share our work in progress, using a variety of testing methods at every stage of design, the more input we can get from the people the design is for. Multiple research methods ensure that we receive diverse feedback; and more diverse feedback helps our products better meet our users’ needs.

    Learning to share

    When I first came to Etsy, I was surprised to learn how much their design process focuses on iteration and testing. Designers show Etsy’s buyers and sellers early versions of new designs to get direct feedback.

    This doesn’t just happen once, either—research is integrated throughout the entire design process, from small conceptual tests to working prototypes to fully functioning features. At each point in the design process, we ask specific questions to help us move forward to the next phase of work.

    To answer these questions, we use different research and testing methods, tailored to the type of feedback we’re looking for. Each type of research has strengths and weaknesses. If we limited ourselves to one type of research, like usability testing, we wouldn’t catch everything. Gaps in the feedback would leave us to build something that didn’t totally align with what our users need.

    Here is how we use research at Etsy at each point in the design process to solicit different types of feedback (and the surprises we encounter along the way!).

    Definition

    At the beginning of a project, we define what we’re going to build. To formulate a project goal, we start by looking at high-level business goals, research from past user testing, and data on current Etsy usage. The direction we pick in this phase dictates what the next few months of work will look like, so user validation is particularly important here.

    To help choose our path, we create low-fidelity mockups and do concept testing with Etsy users who fit the target audience for the project. Rather than invest a lot of engineering time up front on building something we might not use, we often test clickable prototypes, which, while clunky, are cheap to create and involve zero engineering time. They’re mockups of an unbuilt interface, which also means we’re able to throw in blue sky features. Focusing on the best-case scenario of the feature lets us test whether sellers grasp the concept—realism and constraints can come later.

    My team at Etsy, Shop Management, recently tested concepts for a promotional tool for sellers. We had a rough idea of what we wanted to build, but it was important to see if sellers understood the feature and its benefits before we went too far. We recruited sellers to remotely walk through our prototype, asking them:

    • “What’s the purpose of this screen?”
    • “Tell me about what you just did there.”
    • “What’s the value of this tool?”
    • “How would you use a tool like this for your shop?”
    • “How would you describe this tool to a friend?”

    Even though the format of these sessions is similar to how usability testing is conducted, we’re not focused on usability feedback at this early stage; we’re more concerned with solidifying a direction. There might be gaping holes or implausible scenarios; in one version of our clickable prototype, I forgot to account for the iOS keyboard! But mixing up details like that is okay when the questions we’re asking are broad. Instead of focusing on the interface, we’re asking participants about the idea. Validating our direction as early as possible sets us up for success down the road.

    Design

    Once the concept has been validated, we dive into designing the new feature. Design constraints come into play, and we’re now tasked with solving some of the details that we punted on in earlier conceptual versions. As more constraints are applied and we get deeper into uncovering the specifics of what we’re creating, the research becomes more focused on the interface itself. This is where usability testing becomes our best friend.

    Last year, the Shop Management team redesigned the core of Etsy’s seller tools, the Listings Manager. The redesign was much needed; the interface was showing its age, and so was the technology behind it. Many useful new features had been added to the Listings Manager over the years, but they were added as their own pages instead of being integrated into existing workflows. Nothing was optimized for mobile, despite increased traffic to Etsy from mobile devices. We needed to rearchitect the Listings Manager with sellers’ workflows and technology best practices in mind.

    Redesigning such an integral part of a seller’s toolset was going to be tricky, though. Sellers are very sensitive to change because these tools are what they use every single day to run their businesses. So we conducted usability testing every two weeks to make sure our design decisions matched the way sellers wanted to work. And we used task-oriented questions during these sessions, like asking sellers to:

    • perform an action that existed in the old design, like editing a listing
    • find a familiar feature, like “quick edit,” in a new location
    • go through a common flow, like finding a listing that had expired and renewing it
    • use new design paradigms, like a gear dropdown for performing actions

    We tried a few clever ways of redesigning the Listings Manager interface; for example, we added a sidebar for editing listings, so sellers wouldn’t have to go to an entirely new page. But the sidebar totally bombed—it required way too much scrolling and wasn’t as useful as we anticipated. It was painful for us to watch sellers struggle to use our prototype. Thanks to usability testing, we immediately ditched the sidebar and moved on to a more practical interface.

    Development

    After weeks of iteration, we have a solidified design that works end to end. The product has bugs and missing features, needs a lot of polish, and is months away from launch—but this is when we want Etsy users to start kicking the tires and using it as a part of their normal workflows.

    Usability testing is great for feedback on specific tasks and first impressions, but because it’s a simulated environment, it can’t provide the answer to every question. We might still be trying to validate whether we successfully hit the original goals we set for our project, and for that we need the feedback of people who use the product regularly.

    To catch these types of issues and to vet new features on Etsy, we created beta tester groups that opt specific buyers and sellers in to early versions of our features. We call these “prototype groups,” and each one has a forum in which members can post feedback and talk to one another and to our product teams. The scale of prototype groups can range anywhere from a few dozen people to thousands; our largest prototype group to date has been 10,000 Etsy users. Having so many people use a pre-release version of a feature means that we’re able to catch edge cases, weird bugs, and gnarly user experience issues that bubble up before we release it to everyone.

    When we released the Listings Manager redesign to a prototype group, we wondered things like:

    • If we repeatedly got feedback on something in usability testing, were we able to successfully fix it, or was it still an issue?
    • Is it faster to edit listings in the new Listings Manager? If not, what are the biggest points of friction?
    • What tasks are sellers trying to perform when they switch back to use the old version of the Listings Manager? Why did they switch?

    We added some new image editing tools for sellers’ listing images, but noticed that the tool icons were crowding the images interface. Our solution for this was to roll the actions up into a small dropdown. When we put these updates through usability testing, nothing noteworthy came out about the new interface, so we moved forward with it.

    After sellers in the prototype group started using it, however, we saw consistent negative comments in the forums about the new dropdown. To create a new listing, sellers tend to copy an existing listing as a template, then edit attributes such as the photos and title. In the old design, editing photos was a straightforward flow, but the new dropdown added in two clicks, more reading, and extra mouse movement. We had created more friction for sellers adding new listings.

    The prototype group allowed us to catch issues like this because sellers were putting our product through realistic scenarios. We spent the next six months fixing problems that came directly out of the prototype group. Having a direct line of communication with our beta-testing sellers helped us find patterns, identify problems, and vet solutions before our full release.

    Release

    When a feature is fully functional and any design kinks we’ve uncovered have been smoothed over, we’ll often release it as an experiment: we’ll direct a portion of traffic to a different version of a page or a flow, then compare metrics like conversion or bounces. What people say anecdotally doesn’t always line up with what they actually do; data helps us understand how people are actually using a new interface.

    Etsy’s seller onboarding process is a great place to run experiments because new sellers don’t know how onboarding will work. We’re also able to analyze the long-term impact that onboarding has on a shop’s success. For example, our team noticed that it was taking sellers up to a month to complete the onboarding process and open up their shops, so we began a redesign project to decrease the amount of time it takes to open a shop on Etsy.

    At the outset of the project, we defined a number of metrics to determine the success of the redesign, including:

    • the time it took for a seller to go through the onboarding process
    • the percentage of shops that completed onboarding
    • key shop success metrics, such as the number of listings in a shop

    We designed another version of onboarding that was solely about getting the basics of a shop—shop name, items, payments—in place. Anything optional, like a decorative banner, return policy, or an About page, could be easily added after the shop opened. We were thrilled with the new interface and simple design, but we wanted proof that this was the right direction.

    It’s a good thing we tested the new onboarding against the old. When the results of the experiment came back, we saw that more people were completing the new onboarding (good!), but the number of listings per shop was down (bad!). We had over-optimized and made it too easy for new sellers to go through onboarding. The data from the experiment uncovered design flaws that we never would have found otherwise.

    Share early and often


    Becoming comfortable with showing unfinished design work isn’t easy. As designers, we’re tempted to exert control over our work. But by waiting until the very end, we’re assuming that our decisions are right. There’s so much that we can learn early on from the people who use our products. Ultimately, we want to be confident going into a launch that our users won’t feel surprised, confused, or ignored.

    Successful product launches are a direct result of continual research throughout a project. Using a variety of methods to get feedback at different points in the process will help surface a range of issues. Addressing these issues will bring your product that much closer to meeting your users’ needs. Don’t wait until design decisions are solidified to ask what your users think. If you ask questions at the right times along the way, you’ll be surprised by what you learn.

  • On Our Radar: Continued Change 

    The Ada Initiative, which has long supported women in technology through workshops, discussions, and networking, is shutting down this fall. We’re sad to see them go, but grateful for the valuable role they’ve played in our communities, particularly in advocating for codes of conduct.

    A woodcut image of Ada Lovelace.

    Best of luck to their founders and supporters in their new ventures, and to everyone who will carry on the organization’s work in other venues. Their training materials are (or soon will be) available under Creative Commons Attribution Sharealike licenses, and there are many resources for continued efforts in their heartfelt and gracious goodbye.

    Your weekend reading

    1. “It’s as though someone dumped a shipping container worth of LEGO on the floor and we’re working out what to make.” Ben Evans connects the dots on how smartphones change everything. —Jeffrey Zeldman, founder and publisher
    2. Planning and pricing complex software projects is hard, so I loved reading Darren Petersen’s take on how the Lullabot team estimates project budgets using ranges, confidence levels, and input from multiple people. I really appreciate that they shared their formula-filled spreadsheet so I can try out these concepts on my own projects! —Aaron Parkening, project manager
    3. In an article examining online accessibility for people with disabilities, s.e. smith calls the internet “one of our greatest post-ADA social failings.” New inventions like Dot, a Braille-based smartwatch, are fascinating and promising—but there’s a lot more work to be done (especially when we consider the range of disabilities that affect all of us). We need to think more inclusively from the start so that the internet is more than a “wasted promise.” —Lisa Maria Martin, issues editor
    4. “How come so few women are speaking at this conference?” The type community is starting to put pressure on this question—loudly and publicly. An important conversation unfolded recently on Twitter—the best place for such dialogue, in my opinion. —Caren Litherland, editor
    5. Last month, Chris Coyier invited people on Twitter to answer the question: “Front-end development is hard because _________.” The responses varied widely, and Geoff Graham helpfully brought them all together in a post on CSS-Tricks. —Anna Debenham, technical editor

    Your must-see hashtag

    #ilooklikeanengineer

    Overheard in ALA Slack

    “Look, maybe I am processing some feelings, okay? Maybe some browsers are emotionally unavailable.”

    Your Friday gif

    An animated gif from the movie 'Wet Hot American Summer,' showing a boy with glasses saying, 'You have definitely cast a level 5 charm spell on me.'
  • Lyza Danger Gardner on Building the Web Everywhere: The Tedium of Managing Code 

    There’s a place motivation goes to die for web developers. It’s when something we have to do is simultaneously very, very hard and very, very uninteresting. You know, the corner of Hard and Boring Streets in downtown Must-Ship-to-Clientsville. In my personal map of life, that’s the intersection where you’ll find anything that has to do with client-side JavaScript packaging and dependency management.

    I have some slides about this in recent talks. They always cause a stir. On one occasion, I had to pause for people to finish cheering. Not at me, but at the notion. There was, I think, relief and amusement at the shared recognition that bundling up and managing our client JavaScript is challenging and, at least for some of us, not our favorite thing to endure.

    What are we trying to do here?

    A first clue to the wide-flung nature of this beast is that I don’t even have a concise term for what I’m talking about. The goal, summarized, is to get our various bits of JavaScript put together in a way that a browser can use. This actually entails several things, as we’ll see, but it can feel like one general objective.

    We have left behind the rickety old days, when we wrote and downloaded scripts, tossed them in a directory and stuck them in <script> tags in our web pages. We’re making For-Real Things with JavaScript nowadays, and that comes with the baggage of needing to package, manage, and maintain our code, third-party code, and dependencies.

    Starting basic, our first need is to take our application code, package it into one (remember: staying basic here) file, and output it somewhere. Then we can reference the output package in a <script> tag.

    This may sound like an exercise in concatenation. But to get distracted by concatenation is to miss the actual thrust of what we need to do: make our code modular and handle dependencies.

    You can depend on this

    We write code, and as we write it, bits (I’m not going to say modules just yet because that comes in a minute—hang on) of our code need other bits of code from other places. Those needed things—dependencies—might be within our own codebase or external to it.

    A primary task is not just to smoosh all of our code together in a package, but to resolve and load the dependencies it needs as part of that process.

    That means we need to be able to reference the dependencies we need in a way the packaging tool understands, and the tool needs to know how to find them. Not only that, our code and the code of our dependencies need to be modularized in the right shape or our tool will rage quit.
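
    One common shape (the node-style modules that come up in a moment) expresses every dependency as a require() call, whether it points at our own code or at something installed from npm. A minimal sketch of my own, with purely illustrative filenames and the real lodash.debounce package standing in for any npm dependency:

    // greet.js: a bit of our own application code, written as a CommonJS module
    module.exports = function (name) {
      return 'Hello, ' + name + '!';
    };

    // app.js: the entry point, depending on our own module and on an npm package
    var greet = require('./greet');            // internal dependency (relative path)
    var debounce = require('lodash.debounce'); // external dependency installed from npm

    document.querySelector('#hello').textContent = greet('web');
    window.addEventListener('resize', debounce(function () {
      console.log('window resized');
    }, 250));

    A packaging tool can walk this dependency graph because everything is declared in a syntax it understands; the trouble starts when some of it isn’t.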

    Keeping it contained

    That is, our code and its dependencies need to use the appropriate module syntax. This is fine when everyone is in the same universe and gets along well, as in the case of pairing npm modules with the popular browserify tool.

    browserify can seem so simple and magical. require npm modules that you need in your code just like in node, then bundle it up and, whammo, it spits out a script that works in the browser. So far, so awesome.
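
    As a minimal sketch of that flow (the file names are placeholders, and the command-line equivalent would be something like browserify app.js -o dist/bundle.js):

    // build.js: bundle the entry point and everything it requires into one file
    var fs = require('fs');
    var browserify = require('browserify');

    browserify('./app.js')   // walk the require() graph starting from the entry point
      .bundle()              // emit one stream: our modules, their dependencies, a small runtime
      .pipe(fs.createWriteStream('./dist/bundle.js'));

    // ./dist/bundle.js is the file that finally goes into the <script> tag.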

    But code is modularized and written in different ways—AMD, UMD, CommonJS—or not at all. Some of it might be ES6 (JS 2015 to the cool kids), which we need to transpile.

    “Just shim it” and other things easier said than done

    There are methods for subduing or translating modules that are in the wrong shape for your packaging tool of choice—packaging tools can be extended or configured to shim wayward modules. But the overhead of managing for many different flavors of modules can be a tedious addition to an increasingly cumbersome process.

    Meanwhile, we have additional things to deal with. We also have to manage the source of code and dependencies. Not everything we need for the web comes from npm. Browser-targeted JavaScript has many sources: bower, CDNs, application code from your own repository, third-party code that isn’t managed at all. Fun.

    Another common scenario is including a core dependency from a CDN—a classic example is jQuery—within a <script> tag. We need to tell our tool that that dependency is already accounted for, and not to worry about it. And if we can get the config syntax right (grrrr, this one bites me every time), provide that jQuery dependency to our own modules as $ instead of jQuery. Et cetera.
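
    The exact spelling of that config differs by tool (browserify leans on browserify-shim, require.js on its paths and shim settings), but the idea is always the same: map the module name to the global that the CDN script already provides. Here is a sketch of the idea in a webpack config, one of the tools that comes up below, with placeholder paths:

    // webpack.config.js: the “jQuery is already on the page” case
    module.exports = {
      entry: './src/app.js',
      output: { path: __dirname + '/dist', filename: 'bundle.js' },
      // Don't bundle jQuery; at runtime, require('jquery') resolves to the
      // global jQuery object that the CDN <script> tag already created.
      externals: { jquery: 'jQuery' }
    };

    // In our own modules we can then write:
    //   var $ = require('jquery');
    // and get the CDN copy as $, without a second jQuery ending up in the bundle.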

    Yes, it’s all very possible

    At this point some, maybe many of you are squinting and thinking Come on, it’s not that hard. And, in the grand scheme of things, it’s not. I guess. It’s doable.

    But here’s the punchline. Or, at least, the point that makes me want to lie down on the floor for a while. Every single tool or system or combination of tools does each of the things I’ve talked about in a slightly different way. browserify, require.js, webpack, others. Whether that’s the expectation of AMD module syntax or a standalone config file or different command-line options, each solution has its own learning curve that proves remarkably unfun for me when I’d rather be, you know, implementing features. It sabotages my focus.

    And then we add more

    Any single aspect, like shimming for non-conformant module syntax, can be trivial in isolation, but typically I am at least one layer removed from the packaging by way of a build workflow abstraction. I’m usually looking at my packaging config through a murky lens of gulp or grunt. And often there are other bits at play, like a watch task to spawn packaging builds when relevant code has changed.
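
    Roughly, that layered setup might look something like this (a sketch only: gulp 3.x syntax with browserify and vinyl-source-stream, and placeholder paths):

    // gulpfile.js: a browserify bundle task wrapped in a gulp workflow, plus a watch task
    var gulp = require('gulp');
    var browserify = require('browserify');
    var source = require('vinyl-source-stream'); // adapts browserify's stream for gulp.dest

    gulp.task('scripts', function () {
      return browserify('./src/app.js') // entry point; resolves require() calls
        .bundle()                       // produce the single bundled output stream
        .pipe(source('bundle.js'))      // give the stream a filename gulp understands
        .pipe(gulp.dest('./dist'));
    });

    // Re-run the packaging step whenever relevant source files change.
    gulp.task('watch', ['scripts'], function () {
      gulp.watch('./src/**/*.js', ['scripts']);
    });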

    It’s a telling sign that the browserify task in my most recent gulp workflows is the only one I don’t fully have a handle on—it’s sourced from a boilerplate example. At one point I went through the code, line by line, and added my own comments, as a learning exercise. For five minutes, I had the whole system glowing and complete in my head. Now I look at the code and it is, once again, soup.

    But, ES6!

    ES6 (JS 2015) is a significant update to the JavaScript language and has its own, built-in module syntax. And that is great! Especially if we could now go blow up all of the existing code in the world and start over.

    Just this morning I was pondering the readme on babel-loader, an ES6 module transpiler and loader. We’ve got a project using webpack and we want to write our own stuff for it in ES6. But now, here I am again. How do I configure webpack correctly vis-a-vis babel-loader? How can I be certain that I can import non-ES6 dependencies into my freshly-minted ES6 modules?
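
    For reference, a minimal webpack-plus-babel-loader setup looks roughly like this (webpack 1-era syntax with placeholder paths; a sketch of the general shape, not any project’s actual config):

    // webpack.config.js: run our own .js files through Babel before bundling
    module.exports = {
      entry: './src/main.js',
      output: { path: __dirname + '/dist', filename: 'bundle.js' },
      module: {
        loaders: [
          {
            test: /\.js$/,           // every .js file...
            exclude: /node_modules/, // ...except installed packages...
            loader: 'babel-loader'   // ...gets compiled from ES6 down to ES5
            // (depending on the Babel version, an ES2015 preset may also need configuring)
          }
        ]
      }
    };

    As for importing non-ES6 dependencies: Babel compiles the ES6 import syntax down to require() calls, which webpack resolves like any other CommonJS dependency, so modules from npm can still be pulled into freshly written ES6 code.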

    The reality is that even when ES6 support becomes more widespread, there are going to be multiple, co-existing module syntaxes and package managers and unmanaged third-party code. The complexity is a sign that JavaScript has really come of age as the programming language of the web, but mastering this stuff takes some effort. Excuse me while I go off to debug the Uncaught ReferenceError: ufSkpO1xuIl19 is not defined exception that browserify just barked at me.

  • This week's sponsor: Booking.com 

    Love travel and design? Our sponsor Booking.com is looking for a new UX designer to join their team in Amsterdam. Apply today!

  • Love Your CMS. (No, Really!) 

    “Content management system.” The words are simple enough, but what exactly is a CMS? Is it a simple tool for editing a web page in a WYSIWYG box, or a robust system that keeps track of the historical versions of every sentence on the site? Is it a free piece of software you can install in an afternoon, or a massive purchase that involves contracts and licensing fees and schmoozy dinners with salespeople? Does it help you build a sleek responsive site, or does it thwart you at every turn?

    Tragically, the answer is that a CMS is, and does, all of these things. CMSes help and hinder; they inspire rapture and incite table-flipping. I’m thrilled to moderate the next ALA: On Air event, where Karen McGrane, Jeff Eaton, and Ryan Irelan will join me to discuss what they love about working in CMSes (administrative UX!), what drives them to frustration (decoupling!), and what meaty problems (integration with design systems!) they hope to dive into next.

    Event details

    This event is free and everyone is welcome—just sign up to receive the viewing instructions. Here are the details:

    Tuesday, August 25
    1–2 p.m. EDT
    via Google Hangout
    Register or get more details

    We’ll have 30 minutes of conversation between our panelists, and then open things up to questions from none other than YOU. We’ll also share the full video and transcript after the live show ends.

    Join our email list to get updates when new events are announced.

    Panelists

    Get More CMS Knowledge from Pantheon

    Our generous sponsor Pantheon wants you to love your CMS, too. That’s why they’ve created a platform for building, launching, and running Drupal and WordPress sites—all from a single, powerful dashboard.

    They’ve also put together a detailed guide to understanding one of the biggest trends in backend dev: the “headless CMS.” Don’t worry, it’s not a spooky story meant to scare off content editors. It’s an approach to decoupling your CMS interface from your front-end experience—and it can help you with everything from redesigning without re-implementing your CMS to finally curing your site of that bad case of div-itis.

    Learn more about going headless: how it works, what it takes to set up, and why you might want to try it. See Pantheon’s guide now.

  • Ask Dr. Web with Jeffrey Zeldman: If Ever I Should Leave You: Job Hunting For Web Designers and Developers 

    In our last installment, we discussed what to do when your boss is satisfied with third-party code that would make Stalin yak. This time out, we’ll discuss when, why, and how to quit your job.

    When is the right time to leave your first job for something new? How do you know you’re ready to take the plunge?
    Wet Behind The Ears

    Dear Wet Behind:

    From frying an egg to proposing marriage, you can never know for sure when it’s the right time to do anything—let alone anything as momentous as leaving your first job. First, search your heart: most times, you already know what you want to do. (Hint: if you’re thinking about leaving your job, you probably want to.) This doesn’t mean you should heedlessly stomp off to do what you want. Other factors must be carefully considered. But knowing what your heart wants is vital to framing a question that will provide your best answer.

    So ask yourself, do I want to leave? And if the answer is yes, ask yourself why. Are you the only girl in a boys’ club? Perhaps the only one with a real passion for the web? Are other folks, including your boss, dialing it in? Have you lost your passion for the work? Are you dialing it in? Is the place you work political? Do your coworkers or boss undervalue you? Have you been there two years or more without a raise or a promotion? Most vital of all, are you still learning on the job?

    Stagnation is fine for some jobs—when I was a dishwasher at The Earth Kitchen vegetarian restaurant, I enjoyed shutting off my brain and focusing on the rhythmic scrubbing of burnt pans, the slosh and swirl of peas and carrots in a soapy drain—but professionals, particularly web professionals, are either learning and growing or, like the love between Annie Hall and Alvy Singer, becoming a dead shark. If you’ve stopped learning on the job, it’s past time to look around.

    Likewise for situations where you face on-the-job discrimination. Or where you’re the only one who cares about designing and building sites and applications that meet real human needs, and of which you can truly be proud. Or where, after three years of taking on senior-level tasks, and making mature decisions that helped the company, you’re still seen as entry-level because you came in as an intern—and first impressions are forever. Or where you will never be promoted, because the person above you is young, healthy, adored by the owner, or has burrowed in like a tick.

    Some companies are smart enough to promote from within. These are the companies that tend to give you an annual professional development budget to attend conferences, buy books, or take classes; that encourage you to blog and attend meet-ups. Companies that ignore or actively discourage your professional growth are not places where you will succeed. (And in most cases, they won’t do that well themselves—although some bad places do attain a kind of financial success by taking on the same kinds of boring jobs over and over again, and hiring employees they can treat as chattel. But that ain’t you, babe.)

    It’s important, when answering these questions about your organization and your place within it, to be ruthlessly honest with yourself. If you work alongside a friend whose judgement you trust, ask her what she thinks. It is all too easy, as fallible human beings, to believe that we should be promoted before we may actually be ready; to think that people are treating us unfairly when they may actually be supporting and mentoring us; to ignore valuable knowledge we pick up on the job because we think we should be learning something different.

    If there’s no one at your workplace you can trust with these questions, talk to a solid friend, sibling, or love partner—one who is brave enough to tell you what you need to hear. Or check in with a professional—be they a recruiter, job counselor, yoga instructor, barista, or therapist. But be careful not to confide in someone who may have a vested interest in betraying your confidence. (For example, a recruiter who earns $100,000 per year in commissions from your company may not be the best person to talk to about your sense that said company grossly undervalues you.)

    Assuming you have legitimate reasons to move on, it’s time to consider those other factors: namely, have you identified the right place to move on to? And have you protected yourself and your family by setting aside a small financial cushion (at least three months’ rent in the bank) and lining up a freelance gig?

    Don’t just make a move to make a move—that’s how careers die. Identify the characteristics of the kind of place you want to work for. What kind of work do they do? If they are agencies, what do their former customers say about them? If friends work for them, what do they say about the place? What’s their company culture like? Do they boast a diverse workforce—diverse psychologically, creatively, and politically as well as physically? Is there a sameness to the kind of person they hire, and if so, will you fit in or be uncomfortable? If you’d be comfortable, might you be too comfortable (i.e. not learning anything new)? Human factors are every bit as important as the work, and, career-wise, more important than the money.

    If five of your friends work for your current employer’s biggest competitor, don’t assume you can walk across the street and interview with that competitor. The competitor may feel honor-bound to tell your boss how unhappy you are—and that won’t do you any good. Your boss might also feel personally betrayed if you take a job with her biggest competitor, and that might be burning a bridge.

    Don’t burn any bridges you don’t have to. After all, you never know who you might work for—or who you might want to hire—five years from now. Leaving on good terms is as important as securing the right next job. Word of mouth is everything in this business, and one powerful enemy can really hurt your career. More importantly, regardless of what they can do for or against your career, it’s always best to avoid hurting others when possible. After all, everyone you meet is fighting their own hard battle, so why add to their burdens?

    This isn’t to say you don’t have the right to work for anyone you choose who chooses you back. You absolutely have the right. Just be smart and empathetic about it.

    In some places, with some bosses, you can honestly say you’re looking for a new job, and the boss will not only understand, she’ll actually recommend you for a good job elsewhere. But that saintly a boss is rare—and if you work for one, are you sure you want to quit? Most bosses, however professional they may be, take it personally when you leave. So be discreet while job hunting. Once you decide to take a new job, let your boss know well ahead of time, and be honest but helpful if they ask why you’re leaving—share reasons that are true and actionable and that, if listened to, could improve the company you’re leaving.

    Lastly, before job hunting, line up those three months’ rent and that freelance gig. This protects you and your family if you work for a vindictive boss who fires employees he finds out are seeking outside jobs. Besides, having cash in the bank and a freelance creative challenge will boost your confidence and self-esteem, helping you do better in interviews.

    A good job is like a good friend. But people grow and change, and sometimes even the best of friends must part. Knowing when to make your move will keep you ahead of the curve—and the axe. Happy hunting!

  • This week's sponsor: Squarespace 

    Make a beautiful website with our sponsor, Squarespace. Keep it simple, or customize your HTML, CSS, and JavaScript with the Developer Platform—you can even get all your content through the JSON API. Get started today.

  • 2015 Summer Reading Issue 

    Summer is halfway over. Have you hidden out for a day of reading yet? Grab a shady spot and a picnic blanket (or just park it in front of the nearest AC unit), turn off your notifications, and unwrap this tasty treat: our 2015 summer reader.

    Refresh your mind, heart, and spirit with this curated list of articles, videos, and other goodies from the recent past—from A List Apart and across the web.

    Which web do we want?

    Is the web “a place to connect knowledge, people, and cats,” or do “hordes threaten all that we have built for one another”? Where will native-versus-web fights end up? And why are we all here, doing this work, anyhow?

    From us

    From elsewhere

    Toward an inclusive industry

    This web is what we make of it. We can use it to insult strangers in a comments field, or to fight for greater fairness and opportunity in our world. We are inspired by those who choose the path of inclusion:

    From us

    From elsewhere

    Trying out new techniques

    Today’s code is so complex no individual can master it all—but that also means there’s always something new to learn…like these new, niche, or just plain cool techniques.

    From us

    Speeding up

    Big, lumbering websites, endless load times, and crappy experiences on mobile? No, thanks! Here’s to those in the trenches of performance, fighting the good fight.

    From us

    From elsewhere

    Accessibility for everyone

    “The power of the Web is in its universality,” Tim Berners-Lee once said—and that means working for all kinds of people, with all kinds of abilities. Let’s stop leaving accessibility for last, and instead start from a place that embraces the needs of all our users.

    From us

    From elsewhere

    Working better, together

    What comes after static comps and toss-it-over-the-wall processes? We’re still figuring that out—but one thing’s for sure: people are at the center.

    From us

    From elsewhere

    Becoming mentors

    The more we teach, the more we learn—and the more our industry benefits. Discover the joys of mentoring and the future of web education.

    From us

    From elsewhere

    Getting content right

    In the beginning was content—and it’s the core of every experience we design and build. Connect the right person to the right content at the right time using strategy, design, and writing.

    From us

    From elsewhere

    The evolution of type

    If not yet ubiquitous, sophisticated typography on the web is now at least possible. It continues to evolve apace—virtually anything we could do in print, we can now do on screens.

    From us

    From elsewhere

  • Mark Llobrera · Professional Amateurs: Memory Management 

    When I was starting out as a web designer, one of my chief joys was simply observing how my mentors went about their job—the way they prepared for projects, the way they organized their work. I knew that it would take a while for my skills to catch up to theirs, but I had an inkling that developing a foundation of good work habits was something that would stay with me throughout my career.

    Many of those habits centered around creating a personal system for organizing all the associated bits and pieces that contributed to the actual code I wrote. These days as I mentor Bluecadet’s dev apprentices, I frequently get asked how I keep all this information in my head. And my answer is always: I don’t. It’s simply not possible for me. I don’t have a “memory palace” like you’d see onscreen in Sherlock (or described in Hilary Mantel’s Wolf Hall). But I have tried a few things over the years, and what follows are a few habits and tools that have helped me.

    Extend your memory

    Remember this: you will forget. It may not seem like it, hammering away with everything so freshly-imprinted in your mind. But you will forget, at least long enough to drive you absolutely batty—or you’ll remember too late to do any good. So the trick is figuring out a way to augment your fickle memory.

    The core of my personal memory system has remained fairly stable over the years: networked notes, lots of bookmarks, and a couple of buffer utilities. I’ve mixed and matched many different tools on top of those components, like a chef trying out new knives, but the general setup remains the same. I describe some OS X/iOS tools that I use as part of my system, but those are not a requirement and can be substituted with applications for your preferred operating system.

    Networked notes

    Think of these as breadcrumbs for yourself. You want to be able to quickly jot things down, true—but more importantly, you have to be able to find them once some time has passed.

    I use a loose system of text notes, hooked up to a single folder in Dropbox. I settled on text for a number of reasons:

    • It’s not strongly tied to any piece of software. I use nvALT to create, name, and search through most of my notes, but I tend to edit them in Byword, which is available on both OS X and iOS.
    • It’s easily searchable, it’s extremely portable, and it’s lightweight.
    • It’s easily backed up.
    • I can scan my notes at the file system level in addition to within an app.
    • It’s fast. Start typing a word in the nvALT search bar and it whittles down the results.

    I use a system of “tags” when naming my files, where each tag is preceded by an @ symbol, like so: @bluecadet. Multiple tags can be chained together, for example: @bluecadet @laphamsquarterly. Generally I use anywhere from one to four tags per note. Common ones are a project tag, or a subject (say, @drupal or @wordpress). So a note about setting up Drupal on a project could be named “@bluecadet @drupal @projectname Setup Notes.txt.” There are lots of naming systems. I used this nvALT 101 primer by Michael Schechter as a jumping-off point, but I found it useful to just put my tags directly into the filename. Try a few conventions out and see what sticks for you.
    Notes naming system screenshot
    My file naming system for text notes.

    What do I use notes for? Every time I run into anything on a project, whether it’s something that confuses me, or something I just figured out, I put that in a note. If I have a commonly-used snippet for a project (say, a deploy command), then I put that in a note too. I try to keep the notes short and specific—if I find myself adding more and more to a note I will often break it out into separate notes that are related by a shared tag. This makes it easier to find things when searching (or even just scanning the file directory of all the notes).

    Later on those notes could form the basis for a blog post, a talk, or simply a lunch-and-learn session with my coworkers.

    Scratch pad

    I have one special note that I keep open during the day, a “scratch pad” for things that pop into my brain while I’m focusing on a specific task. (Ironically, this is a tip that I read somewhere and failed to bookmark). These aren’t necessarily things that are related to what I’m doing at that moment—in fact, they might be things that could potentially distract me from my current task. I jot a quick line in the scratch pad and when I have a break I can follow up on those items. I like to write this as a note in nvALT instead of in a notebook because I can later copy-and-paste bits and pieces into specific, tagged notes.

    Bookmarking: Pinboard

    So notes cover my stuff, but what about everyone else’s? Bookmarks can be extremely useful for building up a body of links around a subject, but like my text notes they only started to have value when I could access them anywhere. I save my bookmarks to Pinboard. I used to use Delicious, but after its near-death, I imported my links to Pinboard when a friend gave me a gift subscription. I like that Pinboard gives you a (paid) option to archive your bookmarks, so you can retrieve a cached copy of a page if link rot has set in with the original.

    Anything that could potentially help me down the line gets tagged and saved. When I’m doing research in the browser, I will follow links off Google searches, skim them quickly, and bookmark things for later, in-depth reading. When I’m following links off Twitter I dump stuff to Pocket, since I have Pinboard set to automatically grab all my Pocket articles. Before I enabled that last feature, I had some links in Pocket and some in Pinboard, so I had to look for things in two separate places.

    Whatever system you use, make sure it’s accessible from your mobile devices. I use Pinner for iOS, which works pretty well with iOS 8’s share sheets. Every few days I sit down with my iPad and sift through the links that are auto-saved from Pocket and add more tags to them.

    Buffers: clipboard history and command line lookup

    These last two tips are both very small, but they’ve saved me so much time (and countless keystrokes) over the years, especially given how often cut-and-paste figures into my job.

    Find a clipboard history tool that works for you. I suggest using the clipboard history in your launcher application of choice (I use Launchbar since it has one built in, but Alfred has one as part of its Powerpack). On iOS I use Clips (although it does require an in-app purchase to store unlimited items and sync them across all your devices). Having multiple items available means less time spent moving between windows and applications—you can grab several items, and then paste them back from your history. I’m excited to see how the recently-announced multitasking features in iOS 9 help in this regard. (It also looks like Android M will have multiple window support.) If you don’t use a launcher, Macworld has a fairly recent roundup of standalone Mac apps.

    If you use the bash shell on the command line, CTRL+R is your friend: it will allow you to do a string search through your recent commands. Hit CTRL+R repeatedly to cycle through multiple matches in your command history. When you deal with repetitive terminal commands like I do (deploying to remote servers, for instance), it’s even faster than copying and pasting from a clipboard history. (zsh users: it looks like some key bindings are involved.)

    Finding your way

    I like to tell Bluecadet’s dev apprentices that they should pay close attention to the little pieces that form the “glue” of their mentor’s process. Developing a personal way of working that transcends projects and code can assist you through many changes in roles and skills over the course of your career.

    Rather than opting in to a single do-it-all tool, I’ve found it helpful to craft my own system out of pieces that are lightweight, simple, flexible, and low-maintenance. The tools I use are just layers on top of that system. For example, as I wrote this column I tested out two Markdown text editors without having to change how I organize my notes.

    Your personal system may look very different from the one I’ve described here. I have colleagues who use Evernote, Google Docs, or Simplenote as their primary tool. The common thread is that they invested some time and found something that worked for them.

    What’s missing? I still don’t have a great tool for compiling visual references. I’ve seen some colleagues use Pinterest and Gimmebar. I’ll close by asking: what are you using?

  • This week's sponsor: Bushel 

    If you manage and protect Apple devices at work, our sponsor Bushel is here to help make it easier.

  • Developing Empathy 

    I recently wrote about how to have empathy for our teammates when working to make a great site or application. I care a lot about this because being able to understand and relate to others is vital to creating teams that work well together and makes it easier for us to reach people we don’t know.

    I see a lot of talk about empathy, but I find it hard to take the more theory-driven talk and boil that down into things that I can do in my day-to-day work. In my last post, I talked about how I practice empathy with my team members, but after writing that piece I got to thinking about how I, as a developer in particular, can practice empathy with the users of the things I make as well.

    Since my work is a bit removed from the design and user experience layer, I don’t always have interactions and usability front of mind while coding. Sometimes I get lost in the code as I focus on making the design work across various screen sizes in a compact, modular way. I have to continually remind myself of ways I can work to make sure the application will be easy to use.

    To that end, there are things I’ve started thinking about as I code and even ways I’ve gone outside the traditional developer role to ensure I understand how people are using the software and sites I help make.

    Accessibility

    From a pure coding standpoint, I do as much as I can to make sure the things I make are accessible to everyone. This is still a work in progress for me, as I try to learn more and more about accessibility. Keeping the A11Y Project checklist open while I work means I can keep accessibility in mind. Because all the people who want to use what I’m building should be able to.

    In addition to focusing on what I can do with code to make sure I’m thinking about all users, I’ve also tried a few other things.

    Support

    In a job I had a few years ago, the entire team was expected to be involved with support. One of the best ways to understand how people were using our product was to read through the questions and issues they were having.

    I was quite nervous at first, feeling like I didn’t have the knowledge or experience to adequately answer user emails, but I came to really enjoy it. I was lucky to be mentored by my boss on how to write those support messages better, by acknowledging and listening to the people writing in, and hopefully, helping them out when I could.

    Just recently I spent a week doing support work for an application while my coworker was on vacation, reminding me yet again how much I learn from it. Since this was the first time I’d been involved with the app, I learned about the ways our users were getting tripped up, and saw pitfalls which I may never have thought about otherwise.

    As I’ve done support, I’ve learned quite a bit. I’ve seen browser and operating system bugs, especially on devices that I may not test or use regularly. I’ve learned that having things like receipts on demand and easy flows for renewal is crucial to paid application models. I’ve found out about issues when English may not be the users’ native language—internationalization is huge and also hard. Whatever comes up, I’m always reminded (in a good way!), that not everyone uses an application or computer in the same ways that I do.

    For developers specifically, support work also helps jolt us out of our worlds and reminds us that not everyone thinks the same way, nor should they. I’ve found that while answering questions, or having to explain how to do certain tasks, I come to realizations of ways we can make things better. It’s also an important reminder that not everyone has the technical know-how I do, so helping someone learn to use Fluid to make a web app behave more like a native app, or even just showing how to dock a URL in the OS X dock can make a difference. And best of all? When you do help someone out, they’re usually so grateful for it—it’s always great to get the happy email in response.

    Usability testing

    Another way I’ve found to get a better sense of what users are doing with the application is to sit in on usability testing when possible. I’ve only been able to do this once, but it was eye opening. There’s nothing better than watching someone use the software you’re making, or in my case, stumble through trying to use it.

    In the one instance where I was able to watch usability testing, I found it fascinating on several levels. We were testing a mobile website for an industry that has a lot of jargon. So, people were stumbling not just with the application itself, but also with the language—it wasn’t just the UI that caused problems, but the words the industry uses regularly that people didn’t understand. With limited space on a small screen, we’d shortened things up too much, and it was not working for many of the people trying to use the site.

    Since I’m not doing user experience work myself, I don’t get the opportunity to watch usability testing often, but I’m grateful for the time I was able to, and I’m hopeful that I’ll be able to observe it again in the future. Like answering support emails, it puts you on the front lines with your users and helps you understand how to make things better for them.

    Getting in touch with users, in whatever ways are available to you, makes a big difference in how you think about them. Rather than a faceless person typing away on a keyboard, users become people with names who want to use what you are helping to create, but they may not think exactly the same way you do, and things may not work as they expect.

    Even though many of us have roles where we aren’t directly involved in designing the interfaces of the sites and apps we build, we can all learn to be more empathetic to users. This matters. It makes us better at what we do and we create better applications and sites because of it. When you care about the person at the other end, you want to write more performant, accessible code to make their lives easier. And when the entire team cares, not just the people who interact with users most on a day-to-day basis, then the application can only get better as you iterate and improve it for your users.

  • Nishant Kothary on the Human Web: The Dominey Effect: For the Love of the Web, Learn Swift 

    I don’t remember the exact moment I fell in love with the web, but I distinctly remember the website that had a lot to do with it: whatdoiknow.org. It was the personal website of Todd Dominey, a web designer from Georgia (I wrote that from memory without the help of Google).

    By most colloquial measures, whatdoiknow.org wasn’t anything spectacular. But to me, it felt perfect: fixed two-column layout, black text, white background, great typography, lovely little details, and good writing. And, it had this background tile—check it out here, compliments of Wayback Machine (“Give it a second!” to load)—that tapped into some primordial part of the brain that erupts in a dopamine fireworks show at the sight of such things. I’m sure Π is somehow involved.

    It was 2001 (maybe 2002?), I was in college, and I was considering transferring out of computer science into interactive media when I found Dominey’s site. I immediately knew I wanted to make sites like that (even if I wasn’t sure why), so I submitted my CODO documentation, and walked across campus to the Computer Graphics department.

    The universe pushed back, of course.

    “Inadvisable,” advised my academic advisor at the time, because “how can you make money designing things?” It was a different time: B.S.—Before Steve. User experience was in its third trimester, and we’d just started feeling its first kicks. The argument against transferring was effectively the same one faced by liberal arts majors when they tell their parents they are going to major in English and minor in Sociology. The data suggested that I would be far less attractive to employers. But I was drawn in.

    I had no choice but to succumb to the first, but certainly not the last, Dominey moment of my professional life.

    And thus I was introduced to HTML and CSS. It was love at first sight. But unlike a lot of kids who found their home with standards-based front end web design, I’d just walked into a hyperlinked world of Dominey moments. And over the years, I clicked—maybe “tapped” is the more appropriate word for our generation—and followed many of them.

    HTML & CSS naturally led me to the world of graphic design. Photoshop and Illustrator entered my life and elated me. Then I wanted a blog. Moveable Type. This in turn led to CGI scripting, i.e. ASP and PHP. JavaScript entered the party as DHTML. And eventually, Flash—which renewed my interest in programming and mathematics, so that I went back to my old CS department and convinced them to let me finish the degree I’d abandoned while I finished up the other one.

    One by one, the domineys were falling.

    What’s fascinating about these moments, in hindsight, is they were inextricably linked. And much like the web, even when the links disappeared into the horizon as I moved to the next, they affected my career trajectory for the better. It feels magical that my ability to produce letterpress business cards (a Dominey moment) could have any bearing on my public relations skills for convincing the web community that Internet Explorer had had a heart transplant (a part of one of my past jobs). But there’s nothing really magical about that, is there?

    After all, feeling excitement for something new, learning it, getting somewhat good at it, and broadening your horizons can positively affect your career, no matter what you do (h/t to every post written about the benefits of side projects).

    Signal vs. signal

    All that said, the highs from experiencing these moments were inevitably followed by their characteristic comedowns: a mixture of fear, challenge, prejudice, and even dogma. Take my foray into Flash, for instance.

    For a standards-based web guy like me, embracing Flash felt like an either-or proposition as I looked around for mentorship. This phenomenon was a byproduct of the Flash vs. Web debate. You just didn’t come across web standardistas who were equally and openly into Flash—Shaun Inman and Dominey (who created SlideShowPro, a ubiquitous Flash-based slideshow app for a time) were prominent exceptions. Unsurprisingly, what Gruber writes about Apps vs. Web applies almost word for word to Flash vs. Web: “If you expand your view of ‘the web’ from merely that which renders inside the confines of a web browser to instead encompass all network traffic sent over HTTP/S, the explosive growth of native mobile apps is just another stage in the growth of the web. Far from killing it, native apps have made the open web even stronger.”

    When you take these sorts of necessary but paralyzing debates and couple them with the insecurity you feel in answering the tap of a Dominey moment, it doesn’t take much to talk yourself out of it. And that problem never really goes away.

    Even today, having quit my job to go out on my own, the pushback is strong. What has changed though, thanks to a healthy amount of experience, reading, thinking, and counsel, is my ability to negotiate the arguments from others and myself as I embrace my next moment, inspired by the ongoing app revolution and the pleasure I derive from developing in Apple’s new language: Swift.

    My ability to steer around the doldrums of doubt wasn’t always there, though. Long ago, and often along the way as well, I needed a little nudge from a friend or mentor to get me going.

    So finally, on the topic of apps and Swift: let me give you a quick nudge.

    A Swift nudge

    If you’re a web programmer (or a budding programmer of any kind, or just someone who wants to get into programming), and looking at an app on your device (Instagram, Pinterest, Paper, and iMovie come to mind for me) made you think, “I want to build something like that,” I have this to say to you: learn Swift today.

    Don’t think twice about it.

    This must seem like a bizarre recommendation to make to a group of “people who make websites.” But I’ve never quite taken that tagline literally, and I know far too many of you feel the same (whether you’ll admit it in public or not). We are people who make the web, and as luck would have it we are people who particularly understand designing for mobile.

    After immersing myself in it for a year, I find Swift to be deeply web in its soul. From its expressive, functional syntax and its interpretive playgrounds to its runtime performance and recent foray into open source, Swift is the web developer’s compiled language (with the mature and convenient safeguards characteristic of the compiled environment).

    Everything you need to get you started is here. It’s free. And the community is amazing.

    As you may expect, the universe will push back on you embracing this moment. It will manifest in myriad ways—from the age old question of web vs. native to debates about Swift performance, syntax, and Apple’s intentions.

    Ignore that pushback. (Or don’t: do your due diligence, and form your own opinion.)

    But whatever you do, if you’ve got that thumping feeling, don’t ignore it. And try to not forget that you’re here because long ago you too embraced a Dominey moment.

    As far as I can tell, it’s worked out pretty well for all of us.

    Footnotes

    • 1. Full disclosure: I do not know nor have I ever met Todd Dominey (but I’d buy him a drink anytime).