[syndicated profile] csstricks_feed

Posted by Andrés Galante

Una Kravets is absolutely right. In modern CSS development, there are so many things to learn. For someone starting out today, it's hard to know where to start.

Here is a list of things I wish I had known if I were to start all over again.

1. Don't underestimate CSS

It looks easy. After all, it's just a set of rules that selects an element and modifies it based on a set of properties and values.

CSS is that, but also so much more!

A successful CSS project requires an impeccable architecture. Poorly written CSS is brittle and quickly becomes difficult to maintain. It's critical that you learn how to organize your code in order to create maintainable structures with a long lifespan.

But even an excellent code base has to deal with the insane amount of devices, screen sizes, capabilities, and user preferences. Not to mention accessibility, internationalization, and browser support!

CSS is like a bear cub: cute and harmless, but as it grows, it'll eat you alive.

  • Learn to read code before writing and delivering code.
  • It's your responsibility to stay up to date with best practices. MDN, W3C, A List Apart, and CSS-Tricks are your sources of truth.
  • The web has no shape; each device is different. Embrace diversity and understand the environment we live in.

2. Share and participate

Sharing is so important! How I wish someone had told me that when I started. It took me ten years to understand the value of sharing; when I did, it completely changed how I viewed my work and how I collaborate with others.

You'll be a better developer if you surround yourself with good developers, so get involved in open source projects as soon as you can. The CSS community is full of kind and generous developers.

Share everything you learn. The path is as important as the end result; even the tiniest things can make a difference to others.

  • Learn Git. Git is the language of open source and you definitely want to be part of it.
  • Get involved in an open source project.
  • Share! Write a blog, documentation, or tweets; speak at meetups and conferences.
  • Find an accountability partner, someone that will push you to share consistently.

3. Pick the right tools

Your code editor should be an extension of your mind.

It doesn't matter if you use Atom, VSCode, or old-school Vim; the better you shape your tool to your thought process, the better developer you'll become. You'll not only gain speed but also maintain an uninterrupted train of thought that results in more fluid ideas.

The terminal is your friend.

There is a lot more to being a CSS developer than actually writing CSS. Building your code, compiling, linting, formatting, and browser live refresh are only a small part of what you'll have to deal with on a daily basis.

  • Research which IDE is best for you. There are high-performance text editors like Vim and easier-to-use options like Atom or VSCode.
  • Learn your way around the terminal and the command line as soon as possible. The short book "Working the Command Line" is a great starting point.

4. Get to know the browser

The browser is not only your canvas, but also a powerful inspector to debug your code, test performance, and learn from others.

Learning how the browser renders your code is an eye-opening experience that will take your coding skills to the next level.

Every browser is different; get to know those differences and embrace them. Love them for what they are. (Yes, even IE.)

  • Spend time looking around the inspector.
  • You won't be able to own every single device; get a BrowserStack or CrossBrowserTesting account. It's worth it.
  • Install every browser you can and learn how each one of them renders your code.

5. Learn to write maintainable CSS

It'll probably take you years, but if there is just one single skill a CSS developer should have, it is to write maintainable structures.

This means knowing exactly how the cascade, the box model, and specificity work. Master CSS architecture models; learn their pros and cons and how to implement them.

Remember that a modular architecture leads to independent modules, good performance, accessible structures, and responsive components (AKA: CSS happiness).

The future looks bright

Modern CSS is amazing. Its future is even better. I love CSS and enjoy every second I spend coding.

If you need help, you can reach out to me or probably any of the CSS developers mentioned in this article. You might be surprised by how kind and generous the CSS community can be.

What do you think about my advice? What other advice would you give? Let me know what you think in the comments.


5 things CSS developers wish they knew before they started is a post from CSS-Tricks

Designing Websites for iPhone X

Sep. 25th, 2017 12:19 pm
[syndicated profile] csstricks_feed

Posted by Robin Rendle

We've already covered "The Notch" and the options for dealing with it from an HTML and CSS perspective. There is a bit more detail available now, straight from the horse's mouth:

Safe area insets are not a replacement for margins.

... we want to specify that our padding should be the default padding or the safe area inset, whichever is greater. This can be achieved with the brand-new CSS functions min() and max() which will be available in a future Safari Technology Preview release.

@supports(padding: max(0px)) {
    .post {
        padding-left: max(12px, constant(safe-area-inset-left));
        padding-right: max(12px, constant(safe-area-inset-right));
    }
}

It is important to use @supports to feature-detect min and max, because they are not supported everywhere, and due to CSS’s treatment of invalid variables, to not specify a variable inside your @supports query.

Jeremy Keith's hot takes have been especially tasty, like:

You could add a bunch of proprietary CSS that Apple just pulled out of their ass.

Or you could make sure to set a background colour on your body element.

I recommend the latter.

And:

This could be a one-word article: don’t.

More specifically, don’t design websites for any specific device.

Although if this pushes support forward for min() and max() as generic functions, that's cool.

Direct Link to Article


Designing Websites for iPhone X is a post from CSS-Tricks

Marvin Visions

Sep. 24th, 2017 11:53 pm
[syndicated profile] csstricks_feed

Posted by Robin Rendle

Marvin Visions is a new typeface designed in the spirit of those letters you'd see in scruffy old '80s sci-fi books. The specimen site has a really beautiful layout that's worth exploring, and the write-up on the design process behind the work is worth reading too.

Direct Link to Article


Marvin Visions is a post from CSS-Tricks

[syndicated profile] csstricks_feed

Posted by Kaloyan Kosev

Recently I had the experience of reviewing a project and assessing its scalability and maintainability. There were a few bad practices here and there, and a few strange pieces of code lacking meaningful comments. Nothing uncommon for a relatively big (legacy) codebase, right?

However, there was something that I kept finding: a pattern that repeated itself throughout this codebase and a number of other projects I've looked through. It could all be summarized as a lack of abstraction. Ultimately, this was the cause of the maintenance difficulty.

In object-oriented programming, abstraction is one of the three central principles (along with encapsulation and inheritance). Abstraction is valuable for two key reasons:

  • Abstraction hides certain details and only shows the essential features of the object. It tries to reduce and factor out details so that the developer can focus on a few concepts at a time. This approach improves the understandability as well as the maintainability of the code.
  • Abstraction helps us to reduce code duplication. Abstraction provides ways of dealing with crosscutting concerns and enables us to avoid tightly coupled code.

The lack of abstraction inevitably leads to problems with maintainability.

Often I've seen colleagues that want to take a step further towards more maintainable code, but they struggle to figure out and implement fundamental abstractions. Therefore, in this article, I'll share a few useful abstractions I use for the most common thing in the web world: working with remote data.

It's important to mention that, just like everything in the JavaScript world, there are tons of different approaches to implementing a similar concept. I'll share my approach, but feel free to upgrade it or tweak it based on your own needs. Or, even better, improve it and share it in the comments below! ❤️

API Abstraction

It's been a while since I've had a project that doesn't use an external API to receive and send data. That's usually one of the first and most fundamental abstractions I define. I try to store as much API-related configuration and settings there as possible, like:

  • the API base URL
  • the request headers
  • the global error handling logic
const API = {
    /**
     * Simple service for generating different HTTP codes. Useful for
     * testing how your own scripts deal with varying responses.
     */
    url: 'http://httpstat.us/',

    /**
     * fetch() will only reject a promise if the user is offline,
     * or some unlikely networking error occurs, such as a DNS lookup failure.
     * However, there is a simple `ok` flag that indicates
     * whether an HTTP response's status code is in the successful range.
     */
    _handleError(_res) {
        return _res.ok ? _res : Promise.reject(_res.statusText);
    },

    /**
     * Get abstraction.
     * @return {Promise}
     */
    get(_endpoint) {
        return window.fetch(this.url + _endpoint, {
            method: 'GET',
            headers: new Headers({
                'Accept': 'application/json'
            })
        })
        .then(this._handleError)
        .catch( error => { throw new Error(error) });
    },

    /**
     * Post abstraction.
     * @return {Promise}
     */
    post(_endpoint, _body) {
        return window.fetch(this.url + _endpoint, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: _body
        })
        .then(this._handleError)
        .catch( error => { throw new Error(error) });
    }
};

In this module, we have two public methods, get() and post(), which both return a Promise. Everywhere we need to work with remote data, instead of directly calling the Fetch API via window.fetch(), we use our API module abstraction - API.get() or API.post().

Therefore, the Fetch API is not tightly coupled with our code.
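
For example, a consumer module might use it like this (a quick usage sketch with a hypothetical users endpoint; note that, with the module as defined above, get() resolves with the raw Response object):

API.get('users')
    .then(response => response.json())
    .then(data => console.log(data))
    .catch(error => console.error(error));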

Let's say that down the road we read Zell Liew's comprehensive summary of using Fetch and realize that our error handling is not as advanced as it could be. We want to check the content type before we proceed with our logic any further. No problem. We modify only our API module; the public methods API.get() and API.post() we use everywhere else continue to work just fine.

const API = {
    /* ...  */

    /**
     * Check whether the content type is correct before you process it further.
     */
    _handleContentType(_response) {
        const contentType = _response.headers.get('content-type');

        if (contentType && contentType.includes('application/json')) {
            return _response.json();
        }

        return Promise.reject('Oops, we haven\'t got JSON!');
    },

    get(_endpoint) {
        return window.fetch(this.url + _endpoint, {
            method: 'GET',
            headers: new Headers({
                'Accept': 'application/json'
            })
        })
        .then(this._handleError)
        .then(this._handleContentType)
        .catch( error => { throw new Error(error) })
    },

    post(_endpoint, _body) {
        return window.fetch(this.url + _endpoint, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: _body
        })
        .then(this._handleError)
        .then(this._handleContentType)
        .catch( error => { throw new Error(error) })
    }
};

Let's say we decide to switch to zlFetch, the library which Zell introduces that abstracts away the handling of the response (so you can skip ahead to handling both your data and errors without worrying about the response). As long as our public methods return a Promise, there's no problem:

import zlFetch from 'zl-fetch';

const API = {
    /* ...  */

    /**
     * Get abstraction.
     * @return {Promise}
     */
    get(_endpoint) {
        return zlFetch(this.url + _endpoint, {
            method: 'GET'
        })
        .catch( error => { throw new Error(error) })
    },

    /**
     * Post abstraction.
     * @return {Promise}
     */
    post(_endpoint, _body) {
        return zlFetch(this.url + _endpoint, {
            method: 'post',
            body: _body
        })
        .catch( error => { throw new Error(error) });
    }
};

Let's say that, down the road, for whatever reason, we decide to switch to jQuery Ajax for working with remote data. Once again, not a huge deal, as long as our public methods return a Promise. The jqXHR objects returned by $.ajax() as of jQuery 1.5 implement the Promise interface, giving them all the properties, methods, and behavior of a Promise.

const API = {
    /* ...  */

    /**
     * Get abstraction.
     * @return {Promise}
     */
    get(_endpoint) {
        return $.ajax({
            method: 'GET',
            url: this.url + _endpoint
        });
    },

    /**
     * Post abstraction.
     * @return {Promise}
     */
    post(_endpoint, _body) {
        return $.ajax({
            method: 'POST',
            url: this.url + _endpoint,
            data: _body
        });
    }
};

But even if jQuery's $.ajax() didn't return a Promise, you can always wrap anything in a new Promise(). All good. Maintainability++!
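
To illustrate the point (a hypothetical sketch, not code from the article), even a callback-based transport like XMLHttpRequest can be wrapped so that the module keeps honoring its Promise-based contract:

const API = {
    url: 'http://httpstat.us/',

    /**
     * Get abstraction on top of a callback-based transport.
     * @return {Promise}
     */
    get(_endpoint) {
        return new Promise((resolve, reject) => {
            const xhr = new XMLHttpRequest();

            xhr.open('GET', this.url + _endpoint);
            // Resolve on successful responses, reject otherwise or on network errors.
            xhr.onload = () =>
                xhr.status < 400 ? resolve(xhr.responseText) : reject(xhr.statusText);
            xhr.onerror = () => reject(xhr.statusText);
            xhr.send();
        });
    }
};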

Now let's abstract away the receiving and storing of the data locally.

Data Repository

Let's assume we need to fetch the current weather. The API returns the temperature, the feels-like temperature, the wind speed (m/s), the pressure (hPa), and the humidity (%). A common pattern, in order to keep the JSON response as slim as possible, is to compress attribute names down to their first letter. So here's what we receive from the server:

{
    "t": 30,
    "f": 32,
    "w": 6.7,
    "p": 1012,
    "h": 38
}

We could go ahead and use API.get('weather').t and API.get('weather').w wherever we need it, but that doesn't look semantically awesome. I'm not a fan of the one-letter-not-much-context naming.

Additionally, let's say we don't use the humidity (h) and the feels-like temperature (f) anywhere. We don't need them. Actually, the server might return a lot of other information, but we may want to use only a couple of parameters. Not restricting what our weather module actually needs (and stores) could grow into a big overhead.

Enter repository-ish pattern abstraction!

import API from './api.js'; // Import it into your code however you like

const WeatherRepository = {
    _normalizeData(currentWeather) {
        // Take only what our app needs and nothing more.
        const { t, w, p } = currentWeather;

        return {
            temperature: t,
            windspeed: w,
            pressure: p
        };
    },

    /**
     * Get current weather.
     * @return {Promise}
     */
    get(){
        return API.get('/weather')
            .then(this._normalizeData);
    }
}

Now, throughout our codebase, we can use WeatherRepository.get() and access meaningful attributes like .temperature and .windspeed. Better!

Additionally, via _normalizeData(), we expose only the parameters we need.

There is one more big benefit. Imagine we need to wire up our app with another weather API. Surprise, surprise: this one's response attribute names are different:

{
    "temp": 30,
    "feels": 32,
    "wind": 6.7,
    "press": 1012,
    "hum": 38
}

No worries! With our WeatherRepository abstraction, all we need to tweak is the _normalizeData() method. Not a single other module (or file) has to change.

const WeatherRepository = {
    _normalizeData(currentWeather) {
        // Take only what our app needs and nothing more.
        const { temp, wind, press } = currentWeather;

        return {
            temperature: temp,
            windspeed: wind,
            pressure: press
        };
    },

    /* ...  */
};

The attribute names of the API response object are not tightly coupled with our codebase. Maintainability++!

Down the road, say we want to display the cached weather info if the currently fetched data is not older than 15 minutes. So, we choose to use localStorage to store the weather info, instead of doing an actual network request and calling the API each time WeatherRepository.get() is referenced.

As long as WeatherRepository.get() returns a Promise, we don't need to change the implementation in any other module. All other modules which want to access the current weather don't (and shouldn't) care how the data is retrieved - if it comes from the local storage, from an API request, via Fetch API or via jQuery's $.ajax(). That's irrelevant. They only care to receive it in the "agreed" format they implemented - a Promise which wraps the actual weather data.

So, we introduce two "private" methods: _isDataUpToDate(), to check whether our data is older than 15 minutes, and _storeData(), to simply store our data in the browser storage.

const WeatherRepository = {
    /* ...  */

    /**
     * Checks whether the data is up to date or not.
     * @return {Boolean}
     */
    _isDataUpToDate(_localStore) {
        const isDataMissing =
            _localStore === null || Object.keys(_localStore.data).length === 0;

        if (isDataMissing) {
            return false;
        }

        const { lastFetched } = _localStore;
        const outOfDateAfter = 15 * 60 * 1000; // 15 minutes

        const isDataUpToDate =
            (new Date().valueOf() - lastFetched) < outOfDateAfter;

        return isDataUpToDate;
    },

    _storeData(_weather) {
        window.localStorage.setItem('weather', JSON.stringify({
            lastFetched: new Date().valueOf(),
            data: _weather
        }));

        // Return the data so the promise chain in get() resolves with it.
        return _weather;
    },

    /**
     * Get current weather.
     * @return {Promise}
     */
    get(){
        const localData = JSON.parse( window.localStorage.getItem('weather') );

        if (this._isDataUpToDate(localData)) {
            // Resolve with the cached weather data itself, not the storage wrapper.
            return new Promise(_resolve => _resolve(localData.data));
        }

        return API.get('/weather')
            .then(this._normalizeData)
            .then(this._storeData);
    }
};

Finally, we tweak the get() method: in case the weather data is up to date, we wrap it in a Promise and we return it. Otherwise - we issue an API call. Awesome!

There could be other use-cases, but I hope you got the idea. If a change requires you to tweak only one module - that's excellent! You designed the implementation in a maintainable way!

If you decide to use this repository-ish pattern, you might notice that it leads to some code and logic duplication, because all data repositories (entities) you define in your project will probably have methods like _isDataUpToDate(), _normalizeData(), _storeData() and so on...
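
To make that duplication concrete, here's a rough, hypothetical sketch (not how SuperRepo is implemented) of pulling the shared logic into a small factory that each repository merely configures:

// Hypothetical sketch: a tiny factory that centralizes the caching logic, so each
// repository only supplies a request function and a data-normalization step.
function createRepository({ storageKey, outOfDateAfter, request, normalize }) {
    return {
        _isDataUpToDate(_localStore) {
            if (!_localStore || !_localStore.data) {
                return false;
            }

            return (new Date().valueOf() - _localStore.lastFetched) < outOfDateAfter;
        },

        _storeData(_data) {
            window.localStorage.setItem(storageKey, JSON.stringify({
                lastFetched: new Date().valueOf(),
                data: _data
            }));

            return _data;
        },

        get() {
            const localData = JSON.parse( window.localStorage.getItem(storageKey) );

            if (this._isDataUpToDate(localData)) {
                return Promise.resolve(localData.data);
            }

            return request()
                .then(normalize)
                .then(data => this._storeData(data));
        }
    };
}

// Usage (hypothetical): the weather repository becomes a thin configuration object.
const WeatherRepository = createRepository({
    storageKey: 'weather',
    outOfDateAfter: 15 * 60 * 1000, // 15 minutes
    request: () => API.get('/weather'),
    normalize: ({ t, w, p }) => ({ temperature: t, windspeed: w, pressure: p })
});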

Since I use it heavily in my projects, I decided to create a library around this pattern that does exactly what I described in this article, and more!

Introducing SuperRepo

SuperRepo is a library that helps you implement best practices for working with and storing data on the client-side.

/**
 * 1. Define where you want to store the data,
 *    in this example, in the LocalStorage.
 *
 * 2. Then - define a name of your data repository,
 *    it's used for the LocalStorage key.
 *
 * 3. Define when the data will get out of date.
 *
 * 4. Finally, define your data model, set custom attribute name
 *    for each response item, like we did above with `_normalizeData()`.
 *    In the example, server returns the params 't', 'w', 'p',
 *    we map them to 'temperature', 'windspeed', and 'pressure' instead.
 */
const WeatherRepository = new SuperRepo({
  storage: 'LOCAL_STORAGE',                // [1]
  name: 'weather',                         // [2]
  outOfDateAfter: 5 * 60 * 1000, // 5 min  // [3]
  request: () => API.get('weather'),       // Function that returns a Promise
  dataModel: {                             // [4]
      temperature: 't',
      windspeed: 'w',
      pressure: 'p'
  }
});

/**
 * From here on, you can use the `.getData()` method to access your data.
 * It will first check if our data is outdated (based on the `outOfDateAfter`).
 * If so - it will do a server request to get fresh data,
 * otherwise - it will get it from the cache (Local Storage).
 */
WeatherRepository.getData().then( data => {
    // Do something awesome.
    console.log(`It is ${data.temperature} degrees`);
});

The library does the same things we implemented before:

  • Gets data from the server (if it's missing or out of date on our side) or otherwise - gets it from the cache.
  • Just like we did with _normalizeData(), the dataModel option applies a mapping to our raw data. This means:
    • Throughout our codebase, we will access meaningful and semantic attributes like .temperature and .windspeed instead of .t and .w.
    • We expose only the parameters we need and simply don't include any others.
    • If the response attribute names change (or you need to wire up another API with a different response structure), you only need to tweak it here - in only one place in your codebase.

Plus, a few additional improvements:

  • Performance: if WeatherRepository.getData() is called multiple times from different parts of our app, only 1 server request is triggered.
  • Scalability:
    • You can store the data in the localStorage, in the browser storage (if you're building a browser extension), or in a local variable (if you don't want to store data across browser sessions). See the options for the storage setting.
    • You can initiate an automatic data sync with WeatherRepository.initSyncer(). This will initiate a setInterval, which will count down to the point when the data is out of date (based on the outOfDateAfter value) and will trigger a server request to get fresh data, as shown in the short example below. Sweet.
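
A short usage sketch based on the method names mentioned above (treat the exact behavior as an assumption and check the documentation for details):

// Keep the 'weather' repository automatically in sync: initSyncer() re-fetches
// the data whenever it goes out of date (based on outOfDateAfter), while
// getData() keeps serving the cached copy.
WeatherRepository.initSyncer();

WeatherRepository.getData().then( data => {
    console.log(`It is ${data.temperature} degrees`);
});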

To use SuperRepo, install (or simply download) it with NPM or Bower:

npm install --save super-repo

Then, import it into your code via one of the 3 methods available:

  • Static HTML:
    <script src="/node_modules/super-repo/src/index.js"></script>
  • Using ES6 Imports:
    // If transpiler is configured (Traceur Compiler, Babel, Rollup, Webpack)
    import SuperRepo from 'super-repo';
  • … or using CommonJS Imports
    // If module loader is configured (RequireJS, Browserify, Neuter)
    const SuperRepo = require('super-repo');

And finally, define your SuperRepositories :)

For advanced usage, read the documentation I wrote. Examples included!

Summary

The abstractions I described above could be one fundamental part of the architecture and software design of your app. As your experience grows, try to think about and apply similar concepts not only when working with remote data, but in other cases where they make sense, too.

When implementing a feature, always try to discuss change resilience, maintainability, and scalability with your team. Future you will thank you for that!


The Importance Of JavaScript Abstractions When Working With Remote Data is a post from CSS-Tricks

[syndicated profile] hacks_mozilla_feed

Posted by Dietrich Ayala

I’ve been building extensions for Firefox since 2005. I’ve integrated bookmark services (which got me a job at Mozilla!), fixed the default theme, enhanced the developer tools, tweaked Github, optimized performance, eased tagging, bookmarked all Etherpads, fixed Pocket and many other terrible wonderful things.

I’ve written XUL extensions, Jetpacks, Jetpacks (second kind), SDK add-ons and worked on the core of most of those as well. And now I’ve seen it all: Firefox has the WebExtensions API, a new extension format designed with a goal of browser extensibility without sacrificing performance and security.

In Firefox 57, the WebExtensions API will be the only supported extension format. So it’s time to move on. I can no longer frolic in the luxury of the insecure performance footgun APIs that the legacy extension formats allowed. It’s time to migrate the extensions I really just can’t live without.

I started with Always Right, one of the most important extensions for my daily browser use. I recorded this migration, complete with hiccups, missteps and compromises. Hopefully this will help you in your extension migration odyssey!

I’m Always Right

Always Right is a simple Firefox extension which opens all new tabs immediately to the right of the current tab, regardless of how the tab is opened.

This is great for a couple of reasons:

  • Tab opening behavior is predictable – it behaves the same 100% of the time. The default behavior in Firefox is determined by a number of factors, too complex to list here. Suffice it to say that changing Firefox’s default tab-opening behavior is like chasing hornets in a hurricane.
  • Related tabs are grouped. When I have a thought about something I’m busy doing, and want to start a new search or open a tab related to it, I open a new tab. This addon makes sure that tab is grouped with the current tabs. The default behavior opens that tab at the end of the tab strip, which results in two separate clusters of tabs related to the same task.

Conceptually, Always Right is a simple extension but ultimately required a complete rewrite in order to migrate to the new WebExtensions API format. The majority of the rewrite was painless and fast, but as is our bane as developers, the last few bits took the most time and were terribly frustrating.

Migration Overview

The overall concept hasn’t changed: An extension built with the new APIs is still a zip file containing a manifest and all your code and asset files, just like every other extension format before it.

The major pieces of migration are:

  • Renaming and migrating to the new manifest format.
  • Rewrite the code to use the new WebExtensions APIs.
  • Use the new web-ext CLI tool for packaging.

Migrating the Manifest

The first step is to migrate your manifest file, beginning by renaming package.json to manifest.json.

Here’s an image that shows the differences between the old file and the new file:

The most important change is to add the property manifest_version and give it a value of 2. With the manifest_version, name and version fields in place, you now have all the required properties for a valid extension. Everything else is optional.

However, since you’re updating an extension that already exists, you need to do a couple of other things.

  • The id property is necessary in order for addons.mozilla.org (AMO) to match the new add-on with the old one. Remove the top-level id field, and copy its value into the applications/gecko/id field.
  • If you used the main property, you'll now specify your entry point file by the specific functionality, such as background scripts, content_scripts, browser_action (toolbar buttons), page_action and options_ui. In my extension, I need to listen to tab events, so I used the background property to load a script.

  • The permissions property is used, but differently. The value is now an array instead of an object, and any values you had are likely not supported anymore, and will need to be replaced. You can read about the supported permissions keys and values on the manifest.json permissions page on MDN.

There are more optional fields not covered here. Read about the other fields in the new manifest.json docs, and for reference here are the old package.json docs.
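
To make those changes concrete, here's a rough, minimal manifest.json sketch (hypothetical values; the id, name, and file names are placeholders, not the actual Always Right manifest):

{
  "manifest_version": 2,
  "name": "Always Right",
  "version": "2.0.0",
  "description": "Opens new tabs immediately to the right of the current tab.",
  "applications": {
    "gecko": {
      "id": "always-right@example.com"
    }
  },
  "background": {
    "scripts": ["index.js"]
  },
  "permissions": [
    "tabs"
  ]
}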

Migrating the Functionality

The flow of Always Right is that it listens to an event that specifies that a new tab has been opened, and then modifies the index of that tab such that it is immediately to the right of the currently active tab.

I first moved my /lib/main.js file to /index.js and specified it as a background script in the manifest.json file as noted above.

I then migrated the code in /index.js from the old SDK tabs API to the WebExtensions tabs API. The old SDK code used the tabs.open event and the new code uses the WebExtensions tabs.onCreated event.

For example, this:

window.tabs.on('open', function(newTab) {
    // do stuff
});

Turned into this:

browser.tabs.onCreated.addListener(function(newTab) {
    // do stuff
});

A more interesting conversion was how to get access to the currently active tab.

In the SDK, you could simply access window.tabs.activeTab, but in the new world of extensions you’ll need to do this:

browser.tabs.query({currentWindow: true, active: true}).then(function(tabs) {
    // do stuff with the active tab
    var activeTab = tabs[0];
});

Those were the main changes. Application logic and code flow stayed pretty much the same as before. However, since it's a new API with different behaviors, a few things came up and I had to make the following adjustments (a combined sketch follows the list):

  • The WebExtensions API code initializes before the active tab is retrievable, so I needed to add checks for the active tab and no-op if it wasn’t available yet.
  • The SDK tabs API didn’t fire when the user reopened a previously-closed tab, but the WebExtensions API does. So I had to add checks to make sure that I didn’t relocate these tabs.

  • Another behavior change is that placing a tab adjacent to a pinned tab that’s not the last pinned tab puts it at the end of the tab strip, instead of just putting it at the end of the pinned tabs, like the SDK API did. So now I get all tabs and iterate over them until I find the last pinned tab, and place the tab there manually.
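
Putting the pieces above together, here's a hedged sketch of the core relocation logic (simplified; it omits the restored-tab and pinned-tab handling and is not the add-on's exact code):

browser.tabs.onCreated.addListener(function(newTab) {
    browser.tabs.query({currentWindow: true, active: true}).then(function(tabs) {
        var activeTab = tabs[0];

        // The active tab may not be retrievable yet (e.g. during startup); no-op then.
        if (!activeTab || activeTab.id === newTab.id) {
            return;
        }

        // Move the new tab immediately to the right of the active tab.
        browser.tabs.move(newTab.id, { index: activeTab.index + 1 });
    });
});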

I also had to ship with some behavior that’s less than ideal and not fixable yet:

  • The WebExtensions API executes tabs.onCreated listeners after the tab is added to the tab strip. This means that with Always Right, if you have a lot of tabs (say, hundreds), you can actually see the tab added to the far end of the tabstrip and then whiz back over to the right of the active tab. It’s dizzying.
  • Compounding the problem above, the tab strip scrolls to the new tab, so the currently active tab is scrolled out of view.

Testing and Debugging

The new, improved developer workflow for testing in an existing profile is entirely different from the Firefox SDK.

To install your add-on for testing, open a new tab and navigate to about:debugging (see screenshot below). Click the Load Temporary Add-on button and select any file from your extension's source directory. Your extension will be installed!

But what if you see an error there? What if the functionality is not working as expected? You can debug your extension using the Browser Toolbox. Read here how to configure and open the toolbox.

Later in the development process I found the web-ext CLI which is great. Use web-ext run like you would cfx/jpm. It opens in a temporary profile.

Publishing

Once my changes were finished and tested, I relied on my new-found friend web-ext and bundled a zip file with web-ext build. The zip file is found in the web-ext-artifacts subdirectory, named with the new version number. I uploaded the file the same as always on addons.mozilla.org, and the extension passed validation. I waited less than a day and my extension was reviewed and live!

VICTORY! Well, not total victory: Immediately the bugs came in. In a spectacular display of generous hearts, my users reported the bugs by giving the add-on 5 star reviews and commenting about the new version being broken. 😅

I’ve fixed most of the reported issues, so my users and I can now ride happily off into the sunset together.

Looking for help? There's detailed documentation on migrating your add-ons on MDN; check out the legacy extension porting page.

You can see the source code to this add-on in the Always Right repo on Github. If you see any more bugs, let me know!

And if you’d like to try the extension out, you can install Always Right from addons.mozilla.org.

Participation

All code is a work in progress, and participating in the development process by reporting bugs is the easiest way to get things fixed. If you encounter bugs in the WebExtensions APIs, file them in the WebExtensions components in Bugzilla.

The bugs I filed while migrating this extension:

[syndicated profile] csstricks_feed

Posted by Eduardo Bouças

When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people's browsers as HTML pages. Over the years, countless products used that simple model to offer all-in-one solutions for content management and delivery on the web.

Fast-forward a decade or so and developers are presented with a very different reality. With such a vast landscape of devices consuming digital content, it's now imperative to consider how content can be delivered not only to web browsers, but also to native mobile applications, IoT devices, and other mediums yet to come.

Even within the realms of the web browser, things have also changed: client-side applications are becoming more and more ubiquitous, with challenges to content delivery that didn't exist in traditional server-rendered pages.

The answer to these challenges almost invariably involves creating an API — a way of exposing data in such a way that it can be requested and manipulated by virtually any type of system, regardless of its underlying technology stack. Content represented in a universal format like JSON is fairly easy to pass around, from a mobile app to a server, from the server to a client-side application and pretty much anything else.

Embracing this API paradigm comes with its own set of challenges. Designing, building and deploying an API is not exactly straightforward, and can actually be a daunting task to less experienced developers or to front-enders that simply want to learn how to consume an API from their React/Angular/Vue/Etc applications without getting their hands dirty with database engines, authentication or data backups.

Back to Basics

I love the simplicity of static sites and I particularly like this new era of static site generators. The idea of a website using a group of flat files as a data store is also very appealing to me; combined with something like GitHub, it means the possibility of having a data set available as a public repository on a platform that allows anyone to easily contribute, with pull requests and issues being excellent tools for moderation and discussion.

Imagine having a site where people find a typo in an article and submit a pull request with the correction, or accepting submissions for new content with an open forum for discussion, where the community itself can filter and validate what ultimately gets published. To me, this is quite powerful.

I started toying with the idea of applying these principles to the process of building an API instead of a website — if programs like Jekyll or Hugo take a bunch of flat files and create HTML pages from them, could we build something to turn them into an API instead?

Static Data Stores

Let me show you two examples that I came across recently of GitHub repositories used as data stores, along with some thoughts on how they're structured.

The first example is the ESLint website, where every single ESLint rule is listed along with its options and associated examples of correct and incorrect code. Information for each rule is stored in a Markdown file annotated with a YAML front matter section. Storing the content in this human-friendly format makes it easy for people to author and maintain, but not very simple for other applications to consume programmatically.

The second example of a static data store is MDN's browser-compat-data, a compendium of browser compatibility information for CSS, JavaScript and other technologies. Data is stored as JSON files, which, in contrast to the ESLint case, are a breeze to consume programmatically but a pain for people to edit, as JSON is very strict and human errors can easily lead to malformed files.

There are also some limitations stemming from the way data is grouped together. ESLint has a file per rule, so there's no way to, say, get a list of all the rules specific to ES6, unless they chuck them all into the same file, which would be highly impractical. The same applies to the structure used by MDN.

A static site generator solves these two problems for normal websites — they take human-friendly files, like Markdown, and transform them into something tailored for other systems to consume, typically HTML. They also provide ways, through their template engines, to take the original files and group their rendered output in any way imaginable.

Similarly, the same concept applied to APIs — a static API generator? — would need to do the same, allowing developers to keep data in smaller files, using a format they're comfortable with for an easy editing process, and then process them in such a way that multiple endpoints with various levels of granularity can be created, transformed into a format like JSON.

Building a Static API Generator

Imagine an API with information about movies. Each title should have information about the runtime, budget, revenue, and popularity, and entries should be grouped by language, genre, and release year.

To represent this dataset as flat files, we could store each movie and its attributes as a text file, using YAML or any other data serialization language.

budget: 170000000
website: http://marvel.com/guardians
tmdbID: 118340
imdbID: tt2015381
popularity: 50.578093
revenue: 773328629
runtime: 121
tagline: All heroes start somewhere.
title: Guardians of the Galaxy

To group movies, we can store the files within language, genre and release year sub-directories, as shown below.

input/
├── english
│   ├── action
│   │   ├── 2014
│   │   │   └── guardians-of-the-galaxy.yaml
│   │   ├── 2015
│   │   │   ├── jurassic-world.yaml
│   │   │   └── mad-max-fury-road.yaml
│   │   ├── 2016
│   │   │   ├── deadpool.yaml
│   │   │   └── the-great-wall.yaml
│   │   └── 2017
│   │       ├── ghost-in-the-shell.yaml
│   │       ├── guardians-of-the-galaxy-vol-2.yaml
│   │       ├── king-arthur-legend-of-the-sword.yaml
│   │       ├── logan.yaml
│   │       └── the-fate-of-the-furious.yaml
│   └── horror
│       ├── 2016
│       │   └── split.yaml
│       └── 2017
│           ├── alien-covenant.yaml
│           └── get-out.yaml
└── portuguese
    └── action
        └── 2016
            └── tropa-de-elite.yaml

Without writing a line of code, we can get something that is kind of an API (although not a very useful one) by simply serving the `input/` directory above using a web server. To get information about a movie, say, Guardians of the Galaxy, consumers would hit:

http://localhost/english/action/2014/guardians-of-the-galaxy.yaml

and get the contents of the YAML file.

Using this very crude concept as a starting point, we can build a tool — a static API generator — to process the data files in such a way that their output resembles the behavior and functionality of a typical API layer.

Format translation

The first issue with the solution above is that the format chosen to author the data files might not necessarily be the best format for the output. A human-friendly serialization format like YAML or TOML should make the authoring process easier and less error-prone, but the API consumers will probably expect something like XML or JSON.

Our static API generator can easily solve this by visiting each data file and transforming its contents to JSON, saving the result to a new file with the exact same path as the source, except for the parent directory (e.g. `output/` instead of `input/`), leaving the original untouched.
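
As a rough illustration of this step (a hypothetical sketch, not the static-api-generator implementation; it assumes the js-yaml package and a reasonably recent Node.js version):

const fs = require('fs')
const path = require('path')
const yaml = require('js-yaml')

// Walk the input tree and mirror every YAML file as a JSON file in the output tree.
function translateTree (inputDir, outputDir) {
  fs.readdirSync(inputDir).forEach(entry => {
    const inputPath = path.join(inputDir, entry)

    if (fs.statSync(inputPath).isDirectory()) {
      translateTree(inputPath, path.join(outputDir, entry))

      return
    }

    const data = yaml.load(fs.readFileSync(inputPath, 'utf8'))
    const outputPath = path.join(outputDir, entry.replace(/\.ya?ml$/, '.json'))

    fs.mkdirSync(outputDir, { recursive: true })
    fs.writeFileSync(outputPath, JSON.stringify(data, null, 2))
  })
}

translateTree('input', 'output')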

This results in a 1-to-1 mapping between source and output files. If we now served the `output/` directory, consumers could get data for Guardians of the Galaxy in JSON by hitting:

http://localhost/english/action/2014/guardians-of-the-galaxy.json

whilst still allowing editors to author files using YAML or other formats.

{
  "budget": 170000000,
  "website": "http://marvel.com/guardians",
  "tmdbID": 118340,
  "imdbID": "tt2015381",
  "popularity": 50.578093,
  "revenue": 773328629,
  "runtime": 121,
  "tagline": "All heroes start somewhere.",
  "title": "Guardians of the Galaxy"
}

Aggregating data

With consumers now able to consume entries in the best-suited format, let's look at creating endpoints where data from multiple entries are grouped together. For example, imagine an endpoint that lists all movies in a particular language and of a given genre.

The static API generator can generate this by visiting all subdirectories on the level being used to aggregate entries, and recursively saving their sub-trees to files placed at the root of said subdirectories. This would generate endpoints like:

http://localhost/english/action.json

which would allow consumers to list all action movies in English, or

http://localhost/english.json

to get all English movies.

{  
   "results": [  
      {  
         "budget": 150000000,
         "website": "http://www.thegreatwallmovie.com/",
         "tmdbID": 311324,
         "imdbID": "tt2034800",
         "popularity": 21.429666,
         "revenue": 330642775,
         "runtime": 103,
         "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
         "title": "The Great Wall"
      },
      {  
         "budget": 58000000,
         "website": "http://www.foxmovies.com/movies/deadpool",
         "tmdbID": 293660,
         "imdbID": "tt1431045",
         "popularity": 23.993667,
         "revenue": 783112979,
         "runtime": 108,
         "tagline": "Witness the beginning of a happy ending",
         "title": "Deadpool"
      }
   ]
}

To make things more interesting, we can also make it capable of generating an endpoint that aggregates entries from multiple diverging paths, like all movies released in a particular year. At first, it may seem like just another variation of the examples shown above, but it's not. The files corresponding to the movies released in any given year may be located at an indeterminate number of directories — for example, the movies from 2016 are located at `input/english/action/2016`, `input/english/horror/2016` and `input/portuguese/action/2016`.

We can make this possible by creating a snapshot of the data tree and manipulating it as necessary, changing the root of the tree depending on the aggregator level chosen, allowing us to have endpoints like http://localhost/2016.json.

Pagination

Just like with traditional APIs, it's important to have some control over the number of entries added to an endpoint — as our movie data grows, an endpoint listing all English movies would probably have thousands of entries, making the payload extremely large and consequently slow and expensive to transmit.

To fix that, we can define the maximum number of entries an endpoint can have, and every time the static API generator is about to write entries to a file, it divides them into batches and saves them to multiple files. If there are too many action movies in English to fit in:

http://localhost/english/action.json

we'd have

http://localhost/english/action-2.json

and so on.

For easier navigation, we can add a metadata block informing consumers of the total number of entries and pages, as well as the URL of the previous and next pages when applicable.

{  
   "results": [  
      {  
         "budget": 150000000,
         "website": "http://www.thegreatwallmovie.com/",
         "tmdbID": 311324,
         "imdbID": "tt2034800",
         "popularity": 21.429666,
         "revenue": 330642775,
         "runtime": 103,
         "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?",
         "title": "The Great Wall"
      },
      {  
         "budget": 58000000,
         "website": "http://www.foxmovies.com/movies/deadpool",
         "tmdbID": 293660,
         "imdbID": "tt1431045",
         "popularity": 23.993667,
         "revenue": 783112979,
         "runtime": 108,
         "tagline": "Witness the beginning of a happy ending",
         "title": "Deadpool"
      }
   ],
   "metadata": {  
      "itemsPerPage": 2,
      "pages": 3,
      "totalItems": 6,
      "nextPage": "/english/action-3.json",
      "previousPage": "/english/action.json"
   }
}
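
A minimal sketch of how this pagination could work (hypothetical; not the library's actual implementation):

// Split a list of aggregated entries into pages, each carrying the metadata
// block shown above. `basePath` would be something like '/english/action'.
function paginate (results, itemsPerPage, basePath) {
  const pages = Math.ceil(results.length / itemsPerPage)
  const pathFor = page => (page === 1 ? `${basePath}.json` : `${basePath}-${page}.json`)

  return Array.from({ length: pages }, (_, index) => {
    const page = index + 1

    return {
      path: pathFor(page),
      payload: {
        results: results.slice(index * itemsPerPage, (index + 1) * itemsPerPage),
        metadata: {
          itemsPerPage: itemsPerPage,
          pages: pages,
          totalItems: results.length,
          nextPage: page < pages ? pathFor(page + 1) : null,
          previousPage: page > 1 ? pathFor(page - 1) : null
        }
      }
    }
  })
}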

Sorting

It's useful to be able to sort entries by any of their properties, like sorting movies by popularity in descending order. This is a trivial operation that takes place at the point of aggregating entries.
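
For instance (a hypothetical snippet, applied to the aggregated entries before they are written out):

// Sort aggregated entries by popularity, highest first.
const sortByPopularity = entries =>
  entries.slice().sort((a, b) => b.popularity - a.popularity)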

Putting it all together

Having done all the specification work, it was time to build the actual static API generator app. I decided to use Node.js and to publish it as an npm module so that anyone can take their data and get an API off the ground effortlessly. I called the module static-api-generator (original, right?).

To get started, create a new folder and place your data structure in a sub-directory (e.g. `input/` from earlier). Then initialize a blank project and install the dependencies.

npm init -y
npm install static-api-generator --save

The next step is to load the generator module and create an API. Start a blank file called `server.js` and add the following.

const API = require('static-api-generator')
const moviesApi = new API({
  blueprint: 'source/:language/:genre/:year/:movie',
  outputPath: 'output'
})

In the example above we start by defining the API blueprint, which is essentially naming the various levels so that the generator knows whether a directory represents a language or a genre just by looking at its depth. We also specify the directory where the generated files will be written to.

Next, we can start creating endpoints. For something basic, we can generate an endpoint for each movie. The following will give us endpoints like /english/action/2016/deadpool.json.

moviesApi.generate({
  endpoints: ['movie']
})

We can aggregate data at any level. For example, we can generate additional endpoints for genres, like /english/action.json.

moviesApi.generate({
  endpoints: ['genre', 'movie']
})

To aggregate entries from multiple diverging paths of the same parent, like all action movies regardless of their language, we can specify a new root for the data tree. This will give us endpoints like /action.json.

moviesApi.generate({
  endpoints: ['genre', 'movie'],
  root: 'genre'
})

By default, an endpoint for a given level will include information about all its sub-levels — for example, an endpoint for a genre will include information about languages, years and movies. But we can change that behavior and specify which levels to include and which ones to bypass.

The following will generate endpoints for genres with information about languages and movies, bypassing years altogether.

moviesApi.generate({
  endpoints: ['genre'],
  levels: ['language', 'movie'],
  root: 'genre'
})

Finally, type npm start to generate the API and watch the files being written to the output directory. Your new API is ready to serve - enjoy!

Deployment

At this point, this API consists of a bunch of flat files on a local disk. How do we get it live? And how do we make the generation process described above part of the content management flow? Surely we can't ask editors to manually run this tool every time they want to make a change to the dataset.

GitHub Pages + Travis CI

If you're using a GitHub repository to host the data files, then GitHub Pages is a perfect contender to serve them. It works by taking all the files committed to a certain branch and making them accessible on a public URL, so if you take the API generated above and push the files to a gh-pages branch, you can access your API on http://YOUR-USERNAME.github.io/english/action/2016/deadpool.json.

We can automate the process with a CI tool, like Travis. It can listen for changes on the branch where the source files will be kept (e.g. master), run the generator script and push the new set of files to gh-pages. This means that the API will automatically pick up any change to the dataset within a matter of seconds – not bad for a static API!

After signing up to Travis and connecting the repository, go to the Settings panel and scroll down to Environment Variables. Create a new variable called GITHUB_TOKEN and insert a GitHub Personal Access Token with write access to the repository – don't worry, the token will be safe.

Finally, create a file named `.travis.yml` on the root of the repository with the following.

language: node_js

node_js:
  - "7"

script: npm start

deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN
  on:
    branch: master
  local_dir: "output"

And that's it. To see if it works, commit a new file to the master branch and watch Travis build and publish your API. Ah, GitHub Pages has full support for CORS, so consuming the API from a front-end application using Ajax requests will work like a breeze.

You can check out the demo repository for my Movies API and see some of the endpoints in action:

Going full circle with Staticman

Perhaps the most blatant consequence of using a static API is that it's inherently read-only – we can't simply set up a POST endpoint to accept data for new movies if there's no logic on the server to process it. If this is a strong requirement for your API, that's a sign that a static approach probably isn't the best choice for your project, much in the same way that choosing Jekyll or Hugo for a site with high levels of user-generated content is probably not ideal.

But if you just need some basic form of accepting user data, or you're feeling wild and want to go full throttle on this static API adventure, there's something for you. Last year, I created a project called Staticman, which tries to solve the exact problem of adding user-generated content to static sites.

It consists of a server that receives POST requests, submitted from a plain form or sent as a JSON payload via Ajax, and pushes data as flat files to a GitHub repository. For every submission, a pull request will be created for your approval (or the files will be committed directly if you disable moderation).

You can configure the fields it accepts, add validation, spam protection and also choose the format of the generated files, like JSON or YAML.

This is perfect for our static API setup, as it allows us to create a user-facing form or a basic CMS interface where new genres or movies can be added. When a form is submitted with a new entry, we'll have:

  • Staticman receives the data, writes it to a file and creates a pull request
  • As the pull request is merged, the branch with the source files (master) will be updated
  • Travis detects the update and triggers a new build of the API
  • The updated files will be pushed to the public branch (gh-pages)
  • The live API now reflects the submitted entry.

Parting thoughts

To be clear, this article does not attempt to revolutionize the way production APIs are built. More than anything, it takes the existing and ever-popular concept of statically-generated sites and translates them to the context of APIs, hopefully keeping the simplicity and robustness associated with the paradigm.

In times where APIs are such fundamental pieces of any modern digital product, I'm hoping this tool can democratize the process of designing, building and deploying them, and eliminate the entry barrier for less experienced developers.

The concept could be extended even further, introducing concepts like custom generated fields, which are automatically populated by the generator based on user-defined logic that takes into account not only the entry being created, but also the dataset as a whole – for example, imagine a rank field for movies where a numeric value is computed by comparing the popularity value of an entry against the global average.

If you decide to use this approach and have any feedback/issues to report, or even better, if you actually build something with it, I'd love to hear from you!



Creating a Static API from a Repository is a post from CSS-Tricks

[syndicated profile] csstricks_feed

Posted by Chris Coyier

(This is a sponsored post.)

Storyblocks is giving CSS-Tricks followers 7 days of complimentary downloads! Choose from over 400,000 stock photos, icons, vectors, backgrounds, illustrations, and more from the Storyblocks Member Library. Grab 20 downloads per day for 7 days. Also, save 60% on millions of additional Marketplace images, where artists take home 100% of sales. Everything you download is yours to keep and use forever—royalty-free. Storyblocks regularly adds new content so there’s always something fresh to see. All the stock your heart desires! Get millions of high-quality stock images for a fraction of the cost. Start your 7 days of complimentary downloads today!

Direct Link to Article


​No Joke…Download Anything You Want on Storyblocks is a post from CSS-Tricks

[syndicated profile] alistapart_feed

The notion that lossy image quality is subjective is not an unreasonable hypothesis. There are many factors that play into how humans perceive quality: screen size, image scaling, and yes, even performance.

Many research projects have tackled this subject, but I’ve recently launched a survey that attempts to understand how people perceive image quality in a slightly different way: in the context of performance.

This image quality assessment serves up 25 different specimens, each of which is presented in a random lossy quality setting between 5 and 100, in both JPEG and WebP formats. As participants complete the survey, navigation, resource and paint timings are collected (when available) from the browser, as well as other client details such as a device’s resolution, pixel density, and many other pertinent details.

The real work of gathering data begins. This is where you can help out. If you have five to ten minutes to spare, please head over to https://imagesurvey.site and participate. When the survey is finished, I'll post the raw data and write an article (or two) on the findings. If further experimentation is required, that will be pursued as well. I don't know what we'll find out, but we'll find out together with your input. So please participate!

Thank you!

Note: If you have feedback for how to improve the survey, feel free to comment! Just be aware that your feedback can’t be implemented in this run of the survey, but it could be useful in constructing any follow-up surveys.

[syndicated profile] csstricks_feed

Posted by Geoff Graham

Campaign Monitor has completely updated its guide to CSS support in email. Although there was a four-year gap between updates (and this thing has been around for 10 years!), it's continued to be something I reference often when designing and developing for email.

Calling this an update is underselling the work put into this. According to the post:

The previous guide included 111 different features, whereas the new guide covers a total of 278 features.

Adding reference and testing results for 167 new features is pretty amazing. Even recent features like CSS Grid are included — and, spoiler alert, there is a smidgeon of Grid support out in the wild.

This is an entire redesign of the guide and it's well worth the time to sift through it for anyone who does any amount of email design or development. Of course, testing tools are still super important to the overall email workflow, but a guide like this helps in making good design and development decisions up front that should make testing more about... well, testing, rather than discovering what is possible.

Direct Link to Article


The All-New Guide to CSS Support in Email is a post from CSS-Tricks

[syndicated profile] csstricks_feed

Posted by Chasen Le Hara

You've been convinced of the benefits the modlet workflow provides and you want to start building your components with their own test and demo pages. Whether you're starting a new project or updating your current one, you need a module loader and bundler that doesn't require build configuration for every test and demo page you want to make.

StealJS is the answer. It can load JavaScript modules in any format (AMD, CJS, etc.) and load other file types (Less, TypeScript, etc.) with plugins. It requires minimal configuration and, unlike webpack, it doesn't require a build to load your dependencies in development. Last but not least, you can use StealJS with any JavaScript library or framework, including CanJS, React, Vue, etc.

In this tutorial, we're going to add StealJS to a project, create a component with Preact, create an interactive demo page, and create a test page.

Article Series:

  1. The Key to Building Large JavaScript Apps: The Modlet Workflow
  2. Improve Your Development Workflow with StealJS (You are here!)

1. Creating a new project

If you already have an existing Node.js project: great! You can skip to the next section where we add StealJS to your project.

If you don't already have a project, first make sure you install Node.js and update npm. Next, open your command prompt or terminal application to create a new folder and initialize a `package.json` file:

mkdir steal-tutorial
cd steal-tutorial
npm init -y

You'll also need a local web server to view static files in your browser. http-server is a great option if you don't already have something like Apache installed.
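
If you go the http-server route, getting it running takes just a couple of commands (the port flag is optional, and any static server pointed at the project folder works just as well):

npm install --global http-server
http-server -p 8080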

2. Add StealJS to your project

Next, let's install StealJS. StealJS is made up of two main packages: steal (for module loading) and steal-tools (for module bundling). In this article, we're going to focus on steal. We're also going to use Preact to build a simple header component.

npm install steal preact --save

Next, let's create a `modlet` folder with some files:

mkdir header && cd header && touch demo.html demo.js header.js test.html test.js && cd ..

Our `header` folder has five files:

  • demo.html so we can easily demo the component in a browser
  • demo.js for the demo's JavaScript
  • header.js for the component's main JavaScript
  • test.html so we can easily test the component in a browser
  • test.js for the test's JavaScript

Our component is going to be really simple: it's going to import Preact and use it to create a functional component.

Update your `header.js` file with the following:

import { h } from "preact";

export default function Header(props) {
  return (
    <header>
      <h1>{props.title}</h1>
    </header>
  );
}

Our component will accept a title property and return a header element. Right now we can't see our component in action, so let's create a demo page.

3. Creating a demo page

The modlet workflow includes creating a demo page for each of your components so it's easier to see your component while you're working on it without having to load your entire application. Having a dedicated demo page also gives you the opportunity to see your component in multiple scenarios without having to view those individually throughout your app.

Let's update our `demo.html` file with the following so we can see our component in a browser:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Header Demo</title>
  </head>
  <body>
    <form>
      <label>
        Title
        <input autofocus id="title" type="text" value="Header component" />
      </label>
    </form>
    <div id="container"></div>
    <script src="../node_modules/steal/steal.js" main="header/demo"></script>
  </body>
</html>

There are three main parts of the body of our demo file:

  • A form with an input so we can dynamically change the title passed to the component
  • A #container for the component to be rendered into
  • A script element for loading StealJS and the demo.js file

We've added a main attribute to the script element so that StealJS knows where to start loading your JavaScript. In this case, main="header/demo" points to `header/demo.js`, which is going to be responsible for adding the component to the DOM and listening for the value of the input to change.

Let's update `demo.js` with the following:

import { h, render } from 'preact';
import Header from './header';

// Get references to the elements in demo.html
const container = document.getElementById('container');
const titleInput = document.getElementById('title');

// Use this to render our demo component
function renderComponent() {
  render(<Header title={titleInput.value} />, container, container.lastChild);
}

// Immediately render the component
renderComponent();

// Listen for the input to change so we re-render the component
titleInput.addEventListener('input', renderComponent);

In the demo code above, we get references to the #container and input elements so we can append the component and listen for the input's value to change. Our renderComponent function is responsible for re-rendering the component; we immediately call that function when the script runs so the component shows up on the page, and we also use that function as a listener for the input's value to change.

There's one last thing we need to do before our demo page will work: set up Babel for Preact by loading the transform-react-jsx Babel plugin. You can configure Babel with StealJS by adding this to your `package.json` (from Preact's docs):

  ...

  "steal": {
    "babelOptions": {
      "plugins": [
        ["transform-react-jsx", {"pragma": "h"}]
      ]
    }
  },

  ...

Now when we load the `demo.html` page in our browser, we see our component and a form to manipulate it:

Great! With our demo page, we can see how our component behaves with different input values. As we develop our app, we can use this demo page to see and test just this component instead of having to load our entire app to develop a single component.

4. Creating a test page

Now let's set up some testing infrastructure for our component. Our goal is to have an HTML page we can load in our browser to run just our component's tests. This makes it easier to develop the component because you don't have to run the entire test suite or litter your test code with .only statements that will inevitably be forgotten and missed during code review.

We're going to use QUnit as our test runner, but you can use StealJS with Jasmine, Karma, etc. First, let's install QUnit as a dev-dependency:

npm install qunitjs --save-dev

Next, let's create our `test.html` file:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <title>Header Test</title>
  </head>
  <body>
    <div id="qunit"></div>
    <div id="qunit-fixture"></div>
    <script src="../node_modules/steal/steal.js" main="header/test"></script>
  </body>
</html>

In the HTML above, we have a couple of div elements for QUnit and a script element to load Steal and set our `test.js` file as the main entry point. If you compare this to what's on the QUnit home page, you'll notice it's very similar except we're using StealJS to load QUnit's CSS and JavaScript.

Next, let's add this to our `test.js` file:

import { h, render } from 'preact';
import Header from './header';
import QUnit from 'qunitjs';
import 'qunitjs/qunit/qunit.css';

// Use the fixture element in the HTML as a container for the component
const fixtureElement = document.getElementById('qunit-fixture');

QUnit.test('hello test', function(assert) {
  const message = 'Welcome to your first StealJS and React app!';

  // Render the component
  const rendered = render(<Header title={message} />, fixtureElement);

  // Make sure the right text is rendered
  assert.equal(rendered.textContent.trim(), message, 'Correct title');
});

// Start the test suite
QUnit.start();

You'll notice we're using Steal to import QUnit's CSS. By default, StealJS can only load JavaScript files, but you can use plugins to load other file types! To load QUnit's CSS file, we'll install the steal-css plugin:

npm install steal-css --save-dev

Then update Steal's `package.json` configuration to use the steal-css plugin:

{
  ...
  "steal": {
    "babelOptions": {
      "plugins": [
        ["transform-react-jsx", {"pragma": "h"}]
      ]
    },
    "plugins": [
      "steal-css"
    ]
  },
  ...
}

Now we can load the test.html file in the browser:

Success! We have just the tests for that component running in our browser, and QUnit provides some additional filtering features for running specific tests. As you work on the component, you can run just that component's tests, providing you earlier feedback on whether your changes are working as expected.

Additional resources

We've successfully followed the modlet pattern by creating individual demos and test pages for our component! As we make changes to our app, we can easily test our component in different scenarios using the demo page and run just that component's tests with the test page.

With StealJS, a minimal amount of configuration was required to load our dependencies and create our individual pages, and we didn't have to run a build each time we made a change. If you're intrigued by what else it has to offer, StealJS.com has information on more advanced topics, such as building for production, progressive loading, and using Babel. You can also ask questions on Gitter or the StealJS forums!

Thank you for taking the time to go through this tutorial. Let me know what you think in the comments below!

Article Series:

  1. The Key to Building Large JavaScript Apps: The Modlet Workflow
  2. Improve Your Development Workflow with StealJS (You are here!)

The Modlet Workflow: Improve Your Development Workflow with StealJS is a post from CSS-Tricks

In which our hero slips and falls

Sep. 20th, 2017 01:27 pm
[syndicated profile] paciellogroup_feed

Posted by Billy Gregory

A couple of weeks ago I had what I jokingly referred to as “an old guy moment”. I slipped and fell in the shower. I realize that my age has nothing to do with this; it could happen to anyone, at any time, at any age, and it happened to me. If you've seen me present before, you've probably heard me mention that if you want to find examples of bad UX, look no further than the nearest bathroom. Never before has this been more painfully obvious to me.

Allow me to set the scene. I had just finished showering and, as I have always done, I stepped out of the tub with one foot and reached for my towel hanging within reach on the door. The difference this time is that the foot still in the tub slipped and rotated somewhere between 45 to 90 degrees, made a sickening “thunk thunk” as it popped, and I went down… hard.

I quickly realized that I absolutely, 100%, could not get up. My knee was done, and I had to call my girlfriend to come help me up and out of the tub. This was me at my absolute most vulnerable, remember I had just finished showering, and I totally stopped showering in my bathing suit sometime after 11th grade. I did not want to call for help, but I had no choice. I was stuck.

Fast forward a week and a half. I was moving around OK by then, but since my fall I had been really nervous about falling again. So nervous, in fact, that I had taken to showering sitting down in the tub, or just taking baths. I realized I had to shower properly, but I couldn't follow the same process that caused me to fall the first time; it had been user tested, was decidedly flawed, and needed to be fixed.

A decision tree showing two user paths. One where the towel hanging on the door causes the user to slip. The other showing the towel hanging on the shower curtain reduces injury

I thought about what I could change. Remodeling was out of the question; I rent my place and I can't spend that kind of money. So I looked at what went wrong: the towel was too far away from where it was needed, so I moved it. Now, the towel didn't make me fall; reaching for it while half-standing in a slippery tub did. So I bought a bathmat. I'm not the only person who showers at my house; my girlfriend and two daughters are also huge fans of not being disgusting. Thing is, I'm the only one that fell, so I bought the mat for me. Guess what? This thing I put there for my own needs ended up being used by everyone. They didn't need it, but they sure loved it once it was there.

Does any of this sound familiar?

My towel, a control, was located too far away and it caused a user error. When it was moved closer to where it was needed, the user stopped falling and dislocating their knee.

The bathmat was a feature built for one particular user that made everyone’s experience more enjoyable. By tweaking the foundation that the experience was built on, it made for a safer and quicker experience for everyone. Whether the other users noticed or not, they were most likely aware that the tub was slippery and couldn’t get the job done as quickly as they could once the mat was installed.

Oh, and for those of you who screamed “Why didn’t you have a bathmat?!”, sometimes we don’t use the right elements at the right time when we’re in a rush.

And that's all it took. In order to make things more accessible, I made a couple of slight design changes. These changes didn't really affect the look or feel, annoy other users, or cost a ton of money, but they did improve the experience and reduced user “drop off” by at least 25%. And that's all this user “kneeded”.

[syndicated profile] paciellogroup_feed

Posted by Heydon

Infusion's logo, picturing a teacup with a teabag's label folded over the rim.

As accessibility and inclusive design specialists, we are paid for our expertise. But the form that expertise takes and how it is delivered varies depending on different clients’ needs. Some look for high-level reports, while others prefer issues to be addressed independently, in the midst of ongoing development.

As clients have gotten wise to the benefit of designing for accessibility from the ground up, the demand for our expertise at earlier stages of product lifecycles has increased. Accordingly, we’ve been investigating and developing new kinds of services and deliverables; ones more suited to design than design critique.

One area that has been of particular interest has been pattern libraries. Often, design issues can only be effectively addressed by creating entirely new patterns and components to solve those problems. These solutions can be detailed in reports, or in working code. A good pattern library contains both working code and its documentation — all in the same place.

To manage pattern libraries effectively, we needed a system for building and hosting them. This is why we built Infusion, and we’re happy to share it as a free and open tool with you.

Documentation sites as PWAs

Infusion is a high performance generator of rich and accessible documentation sites in the form of progressive web applications (PWAs). That means the documentation sites you create can be saved to your devices and read offline. Because Infusion instances exist as Github repositories, this is possible out of the box. When you commit and push changes to your content, your Github Pages site is updated and a new service worker is installed.
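
Infusion sets this up for you, but as a rough illustration of the underlying mechanism (a generic sketch, not Infusion's actual service worker), offline reading boils down to registering a worker that caches pages as they are fetched and falls back to the cache when the network is unavailable:

// In the documentation pages: register the worker.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// In sw.js: try the network first, keep a copy, and serve from the cache offline.
self.addEventListener('fetch', event => {
  if (event.request.method !== 'GET') return;
  event.respondWith(
    fetch(event.request)
      .then(response => {
        const copy = response.clone();
        caches.open('docs-v1').then(cache => cache.put(event.request, copy));
        return response;
      })
      .catch(() => caches.match(event.request))
  );
});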

Infusion Docs icon, pictured on an Android homescreen

Sites made with Infusion can be saved to the homescreen of your Android device, and read offline. The example pictured is for Infusion’s own documentation.

More than just markdown

“Pattern” files in Infusion are written in markdown. Markdown is the preferred format of static site generators because, as a text format, it is easy to version control. But Infusion allows you to include much richer content in your markdown via special shortcodes. Included are shortcodes for expandable sections, file tree diagrams, WCAG references, browser support tables, and even working code demos encapsulated with Shadow DOM. These are all documented in Infusion's own documentation site, which is built with Infusion.

Accessibility dogfooding

Infusion's name comes from the notion of “infusing” products and interfaces with inclusive design, transforming them irreversibly for the better. Regardless of what you are documenting using Infusion, the documentation you'll be creating meets the highest accessibility standards and incorporates a wealth of inclusive design features.

For example, you might use Infusion to document a JavaScript library you’ve written. You can be confident that people using the documentation by keyboard will be able to read long code examples. That’s because Infusion makes scrollable regions focusable by keyboard. Infusion sites are lightweight, responsive, readable, and clearly structured both visually and semantically.
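
The scrollable-regions technique is worth a note of its own. A generic sketch of the idea (not Infusion's actual source) is to give overflowing code blocks a tabindex so keyboard users can focus and scroll them, plus a role and label so screen readers announce them sensibly:

// Make overflowing code examples keyboard-focusable and announceable.
document.querySelectorAll('pre').forEach(pre => {
  if (pre.scrollWidth > pre.clientWidth || pre.scrollHeight > pre.clientHeight) {
    pre.setAttribute('tabindex', '0');
    pre.setAttribute('role', 'region');
    pre.setAttribute('aria-label', 'Code sample');
  }
});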

An anti-aesthetic

Infusion's visual design is almost entirely black and white. In fact, the only time it breaks into greyscale is for code syntax highlighting. The sparse, neutral aesthetic was chosen so that it would not come into conflict with the branding and visual design of the subject content. Should you find a darker design kinder to the eyes, Infusion also offers a white-on-black “dark mode”, available from the footer of any instance.


If you are interested in using Infusion, please be our guest and fork the repository before following the installation and setup instructions. Below are all the resources you’ll need:

[syndicated profile] paciellogroup_feed

Posted by Matthew Atkinson

The last post in this series ends by saying that the world of academic research has a lot to offer in terms of innovative approaches to adaptation that can improve accessibility for people with various impairments—either due to disability or situation—and even multiple impairments at the same time.

But how can conflicting requirements for very large fonts and touch-target areas on a small-screen device be realised? How can the application fit into these seemingly irreconcilable constraints?

The SUPPLE project, led by Krzysztof Gajos, is a prime example, and has produced algorithms that can automatically generate user interfaces from more abstract specifications (i.e., the types of data to be presented to, or needed from, the user). For example, if a numerical input is required, this might be expressed using a textbox in a traditional desktop app, or a spinner control with a sensible default value on a mobile where keyboard entry is more time-consuming. Buttons on mobile may be larger, with less spacing between them, to make the touch targets larger.

But, there is much more to it than this: because the UI/content is specified in the abstract, its structure can be adapted where necessary. If all of the controls in a settings pane won’t fit onto the small screen of a mobile phone, then they can be “folded” into a tab panel, with related controls on separate tab panels, and with the most commonly-accessed controls on the first panel. Further: the font size (and other graphical properties) of the controls can be adjusted too. If making the font size larger means some controls won’t fit on the screen, scrolling or the aforementioned folding (or some combination of the two, depending on which would be easiest for this individual user to navigate) can be used to re-arrange them.

This allows content and interfaces to adapt to the user, regardless of the constraints imposed upon them by situational barriers (e.g. they’re using a phone rather than a full computer) or impairments (e.g. vision or motor difficulties).  It’s also important to note that people’s preferences are taken into consideration, too—if you prefer tab panels to scrolling, or sliders to entering numbers, then those controls will be used when possible.  If such an approach were adopted for apps and web content, we could have just the adaptations we need, without the compromises of assistive technologies that mimic physical interventions such as magnifiers.

Tree representing the decisions that the system could make regarding different combinations of adaptations that are available at a given moment.
An example decision tree showing different adaptations that could be applied to a user interface in combination, to achieve radically different results. The adaptations include: using visible text, or spoken words; folding content with tabs; shortening/summarising content so it takes less space (and time to browse) and leaving the content as-is.

Because of the abstract way in which interfaces are specified, one also wonders if they could be adapted for voice/conversational interaction, too—it would not take much additional effort to support such platforms…

How do we get there?

First of all, it would be great if platform developers and device manufacturers integrated novel approaches such as these into their systems.  More than this, though, some common means to describe users’ needs is also required, that will work ubiquitously.  Projects such as the Global Public Inclusive Infrastructure (GPII) aim to achieve this by having a standard set of user preferences that live in the cloud and can be served up to devices from ATMs to computers and TVs.  When such devices adopt techniques, such as those above, to help them adapt, then services such as GPII could help deliver the information about people, their preferences and capabilities, that would support that adaptation.

There's one final barrier that has only been touched upon above: the lack of awareness people have of their own accessibility needs and of the support that is already (or could be, as above) on offer to them. Many don't identify with the term "accessibility" at all. Also, even if they had a set of preferences established, when starting to use a totally new device for the first time, those preferences may not translate fully to the new environment (multi-touch gestures aren't usually applied to an ATM, and keyboards are not usually attached to phones). Therefore, it seems natural to work out what adaptations a user needs based on their capabilities as a human: their visual acuity, various measures of dexterity, and so on. If we could get even a rough idea of these factors, then devices and environments could be set up for someone in an accessible manner before they have even used them.

Summary

Current assistive technologies have been and remain instrumental in assuring our independence.  However, we can make such technologies more relevant to the mainstream by capitalising upon current trends, and by learning from novel techniques developed by research projects.

We will be visiting some more novel and interesting assistive technologies in forthcoming posts.

[syndicated profile] csstricks_feed

Posted by Chris Coyier

Philip Walton suggests making two copies of your production JavaScript. Easy enough to do with a Babel-based build process.

<!-- Browsers with ES module support load this file. -->
<script type="module" src="main.js"></script>

<!-- Older browsers load this file (and module-supporting -->
<!-- browsers know *not* to load this file). -->
<script nomodule src="main-legacy.js"></script>

He put together a demo project for it all and you're looking at 50% file size savings. I would think there would be other speed improvements as well, by using modern JavaScript methods directly.
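
As a rough sketch of what the two builds might look like (assuming babel-preset-env; Philip's demo project is the definitive reference), the modern bundle only transpiles what ES-module-capable browsers need, while the legacy bundle compiles down for everything else:

// Options for the modern build: browsers that support ES modules.
const modernBabelOptions = {
  presets: [
    ['env', { targets: { esmodules: true }, modules: false }]
  ]
};

// Options for the legacy build: compile down to ES5 for older browsers.
const legacyBabelOptions = {
  presets: [
    ['env', { targets: { browsers: ['> 1%', 'last 2 versions'] }, modules: false }]
  ]
};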

Deploying ES2015+ Code in Production Today is a post from CSS-Tricks

[syndicated profile] csstricks_feed

Posted by Chasen Le Hara

You're a developer working on a "large JavaScript application" and you've noticed some issues on your project. New team members struggle to find where everything is located. Debugging issues is difficult when you have to load the entire app to test one component. There aren't clean API boundaries between your components, so their implementation details bleed one into the next. Updating your dependencies seems like a scary task, so your app doesn't take advantage of the latest upgrades available to you.

One of the key realizations we made at Bitovi was that "the secret to building large apps is to never build large apps." When you break your app into smaller components, you can more easily test them and assemble them into your larger app. We follow what we call the "modlet" workflow, which promotes building each of your components as their own mini apps, with their own demos, documentation, and tests.

Article Series:

  1. The Key to Building Large JavaScript Apps: The Modlet Workflow (You are here!)
  2. The Modlet Workflow: Improve Your Development Workflow with StealJS

Following this pattern will:

  • Ease the on-boarding process for new developers
  • Help keep your components' docs and tests updated
  • Improve your debugging and testing workflow
  • Enforce good API design and separation of concerns
  • Make upgrades and migrations easier

Let's talk about each of these benefits one by one to see how the modlet workflow can help your development team be more effective.

Ease the onboarding process for new developers

When a new developer starts on your project, they might be intimidated by the number of files in your app's repository. If the files are organized by type (e.g. a CSS folder, a JS folder, etc.), then they're going to be searching across multiple folders to find all the files related to a single component.

The first step to following the modlet workflow is to create folders for each of your components. Each folder, or modlet, should contain all of the files for that component so anyone on your team can find the files they need to understand and develop the component, without having to search the entire project.

Additionally, we build modlets as their own mini apps by including at least the following files in their folders:

  • The main source files (JavaScript, stylesheets, templates, etc.)
  • A test JavaScript file
  • A markdown or text file for docs (if they're not inline with your code)
  • A test HTML page
  • A demo HTML page

Those last two files are crucial to following the modlet workflow. First, the test HTML page is for loading just the component's tests in your browser; second, the demo HTML page lets you see just that component in your browser without loading the entire app.

Improve your debugging and testing workflow

Creating demo and test HTML pages for each component might seem like overkill, but they will bring some great improvements to your development workflow.

The demo HTML page:

  • Lets you quickly see just that component without loading the entire app
  • Gives you a starting place for reproducing bugs (and reduces the surface area)
  • Offers you an opportunity to demo the component in multiple scenarios

That last item can be leveraged in a couple ways. I've worked on projects where we've:

  • Had multiple instances of the same component on a single page so we could see how it behaved in a few key scenarios
  • Made the demo page dynamic so we could play with dozens of variables to test a component

Last but not least, debugging issues will be easier because the component is isolated from the rest of the app. If you can reproduce the issue on the component's demo page, you can focus your attention and not have to consider unrelated parts of your app.

The test HTML page gives you similar benefits to the demo HTML page. When you can run just a single component's tests, you:

  • Don't need to litter your test code with .only statements that will inevitably be forgotten and missed during code review
  • Can make changes to the component and focus on just that component's tests before running the app's entire test suite

Enforce good API design and separation of concerns

The modlet workflow also promotes good API design. By using each component in at least two places (in your app and on the demo page), you will:

  1. Consider exactly what's required by your component's API
  2. Set clear boundaries between your components and the rest of your app

If your component's API is intuitive and frictionless, it'll be painless to create a demo page for your component. If too much "bootstrapping" is required to use the component, or there isn't a clean separation between the component and how it's used, then you might reconsider how it's architected.

With your component's API clearly defined, you set yourself up for being able to take your component out of its original repository and make it available in other applications. If you work in a large company, a shared component library is really helpful for being able to quickly develop projects. The modlet workflow encourages you to do that because each of your components already has its own demos, docs, and tests!

Help keep your components' docs and tests updated

A common issue I've seen on projects that don't follow the modlet workflow is that docs and tests don't get updated when the main source files change. When a team follows the modlet workflow, everyone knows where to look for each component's docs and tests: they're in the same folder as the component's source code!

This makes it easier to identify missing docs and tests. Additionally, the files being in the same folder serve as a reminder to every developer on the team to update them when making changes to that component.

This is also helpful during code review. Most tools list files by their name, so when you're reviewing changes for a component, you're reminded to make sure the docs and tests were updated too. Additionally, flipping between the implementation and tests is way easier because they'll be close to each other.

Make upgrades and migrations easier

Last but not least, following the modlet workflow can help you upgrade your app to new versions of your dependencies. Let's consider an example!

A new major version of your JavaScript framework of choice is released and you're tasked with migrating your app to the new version. If you're following the modlet workflow, you can start your migration by updating the components that don't use any of your other components:

The individual demo and test pages are crucial to making this upgrade. You can start by making the tests pass for your component, then double check it visually with your demo page.

Once those components work, you can start upgrading the components that depend on those:

You can follow this process until you get all of your app's components working. Then, all that's left is to test the actual app, which will be far less daunting because you know the individual components are working.

Large-scale migrations are easier when components are contained and well defined. As we discussed in an earlier section, the modlet workflow encourages clear API boundaries and separation of concerns, which makes it easier to test your components in isolation, making an entire app upgrade less intimidating.

Start using the modlet workflow in your app today

You can get started with the modlet workflow today—first, if your team is still organizing files by type, start grouping them by component instead. Move the test files to your component folders and add some HTML pages for demoing and testing your component. It might take your team a little bit of effort to transition, but it'll be worth it in the long run.

Some of the suggestions in this article might seem intimidating because of limitations in your tooling. For example, if you use a module loader & bundler that requires you to create a separate bundle for each individual page, adding two HTML pages per component would require an intimidating amount of build configuration.

In the next article in this series, we'll discuss how you can use a module loader and bundler called StealJS to load the dependencies for each of your components without a separate build for each HTML page.

Let me know what you think in the comments! If you follow a similar organization technique, then let me know what's worked well and what hasn't worked for you.

Article Series:

  1. The Key to Building Large JavaScript Apps: The Modlet Workflow (You are here!)
  2. The Modlet Workflow: Improve Your Development Workflow with StealJS

The Key to Building Large JavaScript Apps: The Modlet Workflow is a post from CSS-Tricks

[syndicated profile] alistapart_feed

API documentation is the number one reference for anyone implementing your API, and it can profoundly influence the developer experience. Because it describes what services an application programming interface offers and how to use those services, your documentation will inevitably create an impression about your product—for better or for worse.

In this two-part series I share what I’ve learned about API documentation. This part discusses the basics to help you create good API docs, while in part two, Ten Extras for Great API Documentation, I’ll show you additional ways to improve and fine-tune your documentation. 

Know your audience

Knowing who you address with your writing and how you can best support them will help you make decisions about the design, structure, and language of your docs. You will have to know who visits your API documentation and what they want to use it for. 

Your API documentation will probably be visited and used by the following audiences. 

Developers

Based on their skills, experience, and role in projects, developers will generally be the largest and most diverse group. They’ll be using your docs in different ways.

At Pronovix, we started conducting developer portal workshops with our clients to help them learn more about what developers need and how to best support their work—and what they’re really looking for in API documentation. This is also supported by solid research, such as the findings published in Stephanie Steinhardt’s article following a two-year research program at Merseburg University of Applied Sciences.

Newcomers: Developers lacking previous experience with your API tend to need the most support. They will take advantage of quickstart guides that encourage them to start using your API—clear, concise, step-by-step tutorials for the most important topics, and sample code and examples to help them understand how to use it in real projects. If you can make onboarding pleasant for newcomers, they will be more likely to devote themselves to learning every nuance of your API.

External developers: Developers already working with your API will come back repeatedly to your docs and use them as reference material. They will need quick information on all the functionality your API offers, structured in an easy to understand way to help them quickly find what they need.

Debuggers: Developers using your API will encounter errors from time to time and use your documentation to analyze the responses and errors that crop up.

Internal developers: API providers tend to focus so much on their external audience that they forget about their own developers; internal teams working on the API will use the API documentation, as well.

These are just the most common use cases.

Decision makers

Decision makers like CTOs and product managers will also check out your API documentation and evaluate your API. They need to determine whether your API will be a good fit for their project or not, so it’s crucial to your business that this group can easily and quickly find what they’re looking for.

Other audiences

Although not as common, journalists, technical writers, support staff, developer evangelists, and even your competition might read your API documentation. 

Remember the purpose of documentation

The foundation of your API documentation is a clear explanation of every call and parameter.

As a bare minimum, you should describe in detail:

  • what each call in your API does
  • each parameter and all of its possible values, including its type, formatting, rules, and whether or not it is required.

Context-based structure

People won’t read your API documentation in order, and you can’t predict which part they will land on. This means you have to provide all the information they need in context. So, following the best practices of topic-based authoring, you should include all necessary and related information in the explanation of each call.

Context.IO, for example, did a great job documenting each of their API calls separately with detailed information on parameters and their possible values, along with useful tips and links to related topics.

Examples

In order to be able to implement your API, developers need to understand it along with the domain it refers to (e.g., ecommerce). Real world examples reduce the time they need to get familiar with your product, and provide domain knowledge at the same time.

Add the following to the description of each call:

  • an example of how the call is made
  • an explanation of the request
  • sample responses

Studies have shown that some developers like to delve into coding immediately when getting to know a new API; they start working from an example. Analysis of eye-tracking records showed that visual elements, like example code, caught the attention of developers who were scanning the page rather than reading it line by line. Many looked at code samples before they started reading the descriptions.

Using the right examples is a surefire way to improve your API docs. I’ll explore ways to turn good API docs into great ones using examples in my upcoming post “Ten Extras for Great API Documentation”.

Error messages

When something goes wrong during development, fixing the problem without detailed documentation can become a frustrating and time-consuming process. To make this process as smooth as possible, error messages should help developers understand:

  • what the problem is;
  • whether the error stems from their code or from the use of the API;
  • and how to fix the problem.

All possible errors—including edge cases—should be documented with error-codes or brief, human-readable information in error messages. Error messages should not only contain information related to that specific call, but also address universal topics like authentication or HTTP requests and other conditions not controlled by the API (like request timeout or unknown server error).

This post from Box discusses best practices for server-side error handling and communication, such as returning an HTTP status code that closely matches the error condition, human-readable error messages, and machine-readable error codes.
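
For instance, an error response that follows those practices might look something like this (a generic sketch; the field names and values are illustrative, not any particular provider’s format):

// A hypothetical error payload: HTTP status, machine-readable code,
// human-readable message, and a pointer to the relevant docs.
const errorResponse = {
  status: 429,
  code: 'rate_limit_exceeded',
  message: 'Too many requests were sent. Retry after 30 seconds.',
  documentation_url: 'https://api.example.com/docs/errors#rate_limit_exceeded'
};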

Quickstart guide

Newcomers starting to implement your API face many obstacles:

  • They are at the beginning of a steep learning curve
  • They might not be familiar with the structure, domain, and ideas behind your API
  • It’s difficult for them to figure out where to start.

If you don’t make the learning process easier for them, they can feel overwhelmed and refrain from delving into your API. 

Many developers learn best by doing, so a quickstart guide is a great option. The guide should be short and simple, aimed at newcomers, and list the minimum number of steps required to complete a meaningful task (e.g., downloading the SDK and saving one object to the platform). Quickstart guides usually have to include information about the domain and introduce domain-related expressions and methods in more detail. It’s safest to assume that the developer has never before heard of your service.

Stripe’s and Braintree’s quickstart guides are great examples; both provide an overview of the most likely tasks you’ll want to perform with the API, as well as link you to the relevant information. They also contain links to contact someone if you need help.

Tutorials

Tutorials are step-by-step walkthroughs covering specific functionality developers can implement with your API, like SMS notifications, account verification, etc.

Tutorials for APIs should follow the best practices for writing any kind of step-by-step help. Each step should contain all the information needed at that point—and nothing more. This way users can focus on the task at hand and won’t be overloaded with information they don’t need.

The description of steps should be easy to follow and concise. Clarity and brevity support the learning process, and are a best practice for all kinds of documentation. Avoid jargon, if possible; users will be learning domain-related language and new technology, and jargon can instill confusion. Help them by making all descriptions as easy to understand as possible. 

The walkthrough should be the smallest possible chunk that lets the user finish a task. If a process is too complex, think about breaking it down into smaller chunks. This makes sure that users can get the help they need without going through steps they’re not interested in.

Screenshot of Twilio's tutorials page
Twilio’s tutorials explain the most-likely use cases with sample apps in a wide variety of programming languages and frameworks.

Universal topics

To implement your API, there are some larger topics that developers will need to know about, for example:

  • Authentication. Handled differently by each type of API, authentication (e.g., OAuth) is often a complicated and error-prone process. Explain how to get credentials, how they are passed on to the server, and show how API keys work with sample code (see the sketch after this list).
  • Error handling. For now, error handling hasn’t been standardized, so you should help developers understand how your API passes back error information, why an error occurs, and how to fix it.
  • HTTP requests. You may have to document HTTP-related information as well, like content types, status codes, and caching.
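
Something as small as the following goes a long way in an authentication section (a hypothetical sketch; the endpoint, header, and key are placeholders, not a real service):

// Pass the API key with every request, for example in an Authorization header.
fetch('https://api.example.com/v1/orders', {
  headers: {
    Authorization: 'Bearer YOUR_API_KEY'
  }
})
  .then(response => response.json())
  .then(orders => console.log(orders));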

Dedicate a separate section to explaining these topics, and link to this section from each related API call. This way you can make sure that developers clearly see how your API handles these topics and how API calls change behavior based on them. 

Layout and navigation

Layout and navigation are essential to user experience, and although there is no universal solution for all API docs, there are some best practices that help users interact with the material.

Dynamic layout

Most good examples of API documentation use a dynamic layout, as it makes navigating extensive documentation for specific topics easier than a static layout does. Starting with a scalable dynamic layout will also make sure you can easily expand your docs as needed.

Single page design

If your API documentation isn’t huge, go with a single page design that lets users see the overall structure at first sight. Introduce the details from there. Long, single page docs also make it possible for readers to use the browser’s search functionality.

Screenshot of Stripe's API reference page
Stripe managed to present extensive documentation in an easy to navigate single page.

Persistent navigation

Keep navigation visible at all times. Users don’t want to scroll looking for a navigation bar that disappeared.

Multi-column layout

2- or 3-column layouts have the navigation on the left and information and examples on the right. They make comprehension easier by showing endpoints and examples in context.

Screenshot of Clearbit's API reference page
Clearbit’s three-column layout displays persistent navigation (table of contents) on the left, references in the middle, and code examples on the right.

Syntax highlighter

Improving the readability of samples with syntax highlighting makes the code easier to understand.

Screenshot of Plaid's API documentation page
The syntax highlighter in action on Plaid’s API documentation site.

If you’d like to start experimenting with a layout for your docs, you might want to check out some free and open source API documentation generators.

To learn about the pros and cons of different approaches to organizing your API docs in the context of developer portals, check out this excellent article by Nordic APIs.

Editing

All writing that you publish should go through an editing process. This is common sense for articles and other publications, but it’s just as essential for technical documentation.

The writers of your API docs should aim for clarity and brevity, confirm that all the necessary information is there, and that the structure is logical and topics aren’t diluted with unnecessary content. 

Editors should proofread your documentation to catch grammar mistakes, errors, and any parts that might be hard to read or difficult to understand. They should also check the docs against your style guide for technical documentation and suggest changes, if needed.

Once a section of documentation is ready to be published, it’s a good idea to show it to people in your target audience, especially any developers who haven’t worked on the documentation themselves. They can catch inconsistencies and provide insight into what’s missing.

Although the editing process can feel like a burden when you have to focus on so many other aspects of your API, a couple of iterations can make a huge difference in the final copy and the impression you make.

Keep it up-to-date

If your API documentation is out of date, users will get frustrated by bumping into features that aren’t there anymore and new ones that lack documentation. This can quickly diminish the trust you established by putting so much work into your documentation in the first place.

When maintaining your API docs, you should keep an eye on the following aspects:

  • Deprecated features. Remove documentation for deprecated features and explain why they were deprecated.
  • New features. Document new features before launch, and make sure there’s enough time planned for the new content to go through the editorial process.
  • Feedback. Useful feedback you get from support or analytics should be reflected in your docs. Chances are you can’t make your docs perfect on the first try, but based on what users are saying, you can improve them continuously.

For all this to work, you will have to build a workflow for maintaining your documentation. Think about checkpoints and processes for the above-mentioned aspects, editing, and publication. It also helps if you can set up a routine for reviewing your docs regularly (e.g. quarterly).

Following these best practices, you can build a solid foundation for your API documentation that can be continuously improved upon as you gain more insight into how users interact with it. Stay tuned for part two, where I give you some tips on how to turn good API docs into amazing ones.

[syndicated profile] csstricks_feed

Posted by Chris Coyier

Mattias Geniar:

A lot of (web) developers use a local .dev TLD for their own development. ... In those cases, if you browse to http://site.dev, you'll be redirect[ed] to https://site.dev, the HTTPS variant.

That means your local development machine needs to:

  • Be able to serve HTTPs
  • Have self-signed certificates in place to handle that
  • Have that self-signed certificate added to your local trust store (you can't dismiss self-signed certificates with HSTS, they need to be 'trusted' by your computer)

This is probably generally A Good Thing™, but it is a little obnoxious to be forced into it on Chrome. They knew exactly what they were doing when they snatched up the .dev TLD. Isn't HSTS based on the entire domain though, not just the TLD?

Chrome to force .dev domains to HTTPS via preloaded HSTS is a post from CSS-Tricks

React + Dataviz

Sep. 18th, 2017 04:20 pm
[syndicated profile] csstricks_feed

Posted by Chris Coyier

There is a natural connection between Data Visualization (dataviz) and SVG. SVG is a graphics format based on geometry and geometry is exactly what is needed to visually display data in compelling and accurate ways.

SVG has the "visualization" part covered, but SVG is more declarative than programmatic. Writing code that digests data and turns it into SVG visualizations is a job well suited for JavaScript. Typically, that means D3.js ("Data-Driven Documents"), which is great at pairing data and SVG.

You know what else is good at dealing with data? React.

The data that powers dataviz is commonly JSON, and "state" in React is JSON. Feed that JSON data to a React component as state, and it will have access to all of it as it renders; notably, it will re-render when that state changes.
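
Here's a tiny illustration of that connection (a sketch with made-up data, independent of the libraries below): a component maps an array held in state or props straight to SVG rectangles, and any change to that data re-renders the chart.

import React from 'react';

// Render an array of numbers as a simple SVG bar chart.
function BarChart({ data, barWidth = 20, height = 100 }) {
  const max = Math.max(...data);
  return (
    <svg width={data.length * barWidth} height={height}>
      {data.map((value, i) => {
        const barHeight = (value / max) * height;
        return (
          <rect
            key={i}
            x={i * barWidth}
            y={height - barHeight}
            width={barWidth - 2}
            height={barHeight}
          />
        );
      })}
    </svg>
  );
}

// Usage: <BarChart data={[4, 8, 15, 16, 23, 42]} />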

React + D3 + SVG = Pretty good for dataviz

I think that idea has been in the water the last few years. Fraser Xu was talking about it a few years ago:

I like using React because everything I use is a component, that can be any component written by myself in the project or 3rd party by awesome people on NPM. When we want to use it, just import or require it, and then pass in the data, and we get the visualization result.

That components thing is a big deal. I've recently come across some really good libraries smooshing React + D3 together, in the form of components. So instead of leveraging those libraries but still essentially hand-rolling the actual dataviz components yourself, these libraries provide a bunch of components that are ready to be fed data and rendered.

nivo

nivo provides a rich set of dataviz components, built on top of the awesome d3 and Reactjs libraries.

Victory

Victory is a set of modular charting components for React and React Native. Victory makes it easy to get started without sacrificing flexibility. Create one of a kind data visualizations with fully customizable styles and behaviors. Victory uses the same API for web and React Native applications for easy cross-platform charting.

react-vis

[react-vis is] a composable charting library

Recharts

A composable charting library built on React components

React D3

A Javascript Library For Building Composable And Declarative Charts. A new solution for building reusable components for interactive charts.


React + Dataviz is a post from CSS-Tricks

A Rube Goldberg Machine

Sep. 18th, 2017 03:39 pm
[syndicated profile] csstricks_feed

Posted by Chris Coyier

Ada Rose Edwards takes a look at some of the newer browser APIs and how they fit together:

These new APIs are powerful individually but also they complement each other beautifully, CSS custom properties being the common thread which goes through them all as it is a low level change to CSS.

The post itself is a showcase of them.

Speaking of new browser APIs, that was a whole subject on ShopTalk a few weeks back.

A Rube Goldberg Machine is a post from CSS-Tricks

[syndicated profile] csstricks_feed

Posted by Robin Rendle

I often see a lot of questions from folks asking about fallbacks in CSS Grid and how we can design for browsers that just don’t support these new-fangled techniques yet. But from now on I'll be sending them this post by HJ Chen. It digs into how we can use @supports and how we ought to ensure that our layouts don't break in any browser.

Basic grid layout with fallbacks using feature queries is a post from CSS-Tricks
