On variable naming when teaching

One of the hardest things a programmer has to do on a daily basis is naming things. Anything we name will stay with us for a while, and it’s very likely that other programmers will have to use the thing we just named as well. So naming something properly is very important. It’s often said that the two hardest problems in computer science are cache invalidation and naming things. I tend to agree with that statement.

Many times when I’ve struggled with a piece of code, better naming could have made my struggle easier. A function name like fetchData might make sense at the time of writing. But a few weeks later you look at that code and you start wondering: what data does that function fetch, exactly? Couldn’t it have been named fetch just as well? I mean, the fact that it fetches data is implied. Or is it? Almost always there’s room for discussion about the naming of things in code. However, that’s not the point I want to make in this post. This post is about naming things in the context of tutorials and books; in other words, code snippets that have a teaching purpose.

When you write code that is intended for teaching, you should keep two things in mind.

  1. You want to get a certain point or concept across to the reader. The code should make this point as clear as possible.
  2. You want to stick to best practices so you’re not teaching bad habits.

I have found that these two goals can easily be in conflict.

An example

Let me show you an example I have found in the Functional Swift book by the folks over at objc.io.
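The book’s snippet isn’t reproduced here, but based on the discussion that follows it looks roughly like this (treat this as a sketch, not the book’s exact code):

```swift
// A generic pair type: T and U are placeholders for any two types.
struct Times<T, U> {
    let fst: T
    let snd: U
}
```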

These four lines won’t look very scary to somebody who is familiar with generics in Swift. They will know that T and U are just placeholders for types. Any name could go there; we could have defined them as A and B or Hello and World just as well. However, the convention is that generic type naming starts at T and works its way up the alphabet from there, which is why this snippet uses T and U as type names for the generic portion of this struct. For beginners this might be confusing, so you could argue that more descriptive type names would be better. For example, FirstType and SecondType are a lot clearer. They don’t follow convention though, so picking between the two can prove to be quite tough, and in my opinion it depends on the point you’re trying to get across. In the above snippet the concept of generics was already explained in previous chapters, so T and U are just fine here. If this snippet were about explaining generics, it might have been better to help the reader out a little by breaking convention for the sake of readability, and introducing the proper way after explaining how generics work.

What does bother me about the way the Times struct is defined is the way fst and snd are named. The author chose to sacrifice readability in order to save a few keystrokes. In production code this happens all the time. Loops like for u in users or [obj.name for obj in res] are not uncommon in Swift and Python. One might even argue that using short names like this is some sort of convention, and while that might be true, if you’re explaining something in code you do not want the reader to have a single doubt about what something does because of its name. For example, fst and snd in the Times struct could have been named first and second, or left and right. A loop like for u in users could be for user in users. [obj.name for obj in res] could be clarified by writing [user.name for user in fetched_users]. These more verbose versions might not be fully in line with what’s common in production code, but when you’re using code to explain something you need to make sure that your code is as readable as possible.

In conclusion

Naming things is hard, there’s no doubt about it. What might be clear one day could look like gibberish the next. What might be obvious to me could be nonsense to you. Conventions help ease the pain. If everybody uses the same rules for naming things, it becomes a little easier for programmers to come up with good names. However, we should not forget that people who read our code to learn more about a certain topic might not be fully aware of certain conventions. Or they might not be very good at understanding what i, j and k mean when we’re nesting loops. And let’s be honest, those single-letter variables lack all kinds of meaning. Even though it’s convention and we all do it, it’s just not a good convention to follow when teaching. At least not all of the time.

Next time you write code that’s intended for explaining something, ask yourself if breaking a convention will make your snippet simpler or easier to follow. If the answer is yes, it just might be a good idea to break the convention and save your readers some brainpower.

Apple has launched Safari Technology Preview (and that’s great news).

For a long time web developers have been complaining about the lack of updates (and modern features / APIs) for Safari. With the current release cycle we get a major new version of Safari with every major OS release, which only happens once a year. This release cycle and the lack of new features made some people go as far as calling Safari the new IE.

Now Apple has launched Safari Technology Preview. Developers can use this browser to try out new web features well before they land in the consumer version of Safari. The developer version is based on the WebKit nightly builds and will contain the latest and greatest features that were added to the WebKit platform. According to The Next Web, Apple will be updating the Technology Preview approximately every two weeks, which is far more often than the main browser is updated.

When comparing this browser with Chrome Canary, we should consider Canary more of a playground: features are going to change a lot before making it to the Chrome browser, or could disappear completely. The Safari Technology Preview is intended as a browser that provides people with features that are intended to ship (and are mostly ready for production).

So why is this great news?

In the title of this post I mentioned that I think the Safari Technology Preview is great news. The reason is that if Apple sticks to its plan and updates this Preview every two weeks, it might be a sign that Apple intends to ship more updates for the main Safari browser as well. The Technology Preview might just be Apple’s way of experimenting with a faster release cycle for its consumer product. Also, if developers start developing for the Preview platform they are testing new features out in the wild, which just might give Apple the confidence to release new features to end users more rapidly once they know a certain feature works well in real life.

All in all I really like this move from Apple and it might make Safari a more solid, robust, modern and up-to-date browser than it is today.

Build a simple web scraper with node.js

Recently I released my first personal iOS app into the wild. The app is called unit guide for Starcraft 2 and it provides Starcraft 2 players with up-to-date and accurate information for every unit in the game. Instead of manually creating a huge JSON file, I wrote a web scraper in node.js that allows me to quickly extract all the data I need and output it in a JSON format. In this post I will explain how you can build something similar using techniques that are familiar to most web developers.

Step 1: preparing

Before you get started you’re going to want to install some dependencies. The ones I have used are: request, cheerio and promise. Installing them will work like this:
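Assuming you already have node and npm available, installing all three from your project folder looks like this:

```shell
npm install request cheerio promise
```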

If you don’t have npm installed yet then follow the instructions here to install node and npm.

Once you have all the dependencies, you’re going to need a webpage that you will scrape. I picked the Starcraft 2 units overview page as a starting point. You can pick any page you want, as long as it contains some data that you want to extract into a JSON file.

Step 2: loading the webpage

In order to start scraping the page, we’re going to need to load it up. We’ll be using request for this. Note that this will simply pull down the html; for my use case that was enough, but if you need the webpage to execute javascript in order to get the content you need, you might want to have a look at phantomjs. It’s a headless browser that allows javascript execution. I won’t go into it right now as I didn’t need it for my project.

Downloading the html using request is pretty straightforward, here’s how you can load a webpage:
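A minimal version of that, with a placeholder URL (the real script pointed at the Starcraft 2 units overview page):

```javascript
var request = require('request');

request('http://example.com/units/', function(error, response, html) {
    if (error || response.statusCode !== 200) {
        console.error('could not load the page');
        return;
    }

    // html is the raw html of the page, as a string
    console.log(html.length);
});
```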

Getting the html was pretty easy, right? Now that we have the html we can use cheerio to convert the html string into a DOM-like object that we can query with css-style selectors. All we have to do is include cheerio in our script and use it like this:
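Continuing with the html string from the request callback, a hedged sketch (a tiny inline string keeps this example self-contained):

```javascript
var cheerio = require('cheerio');

// `html` would normally come from the request callback above
var html = '<html><head><title>Units</title></head><body></body></html>';

var $ = cheerio.load(html);

// $ can now be queried with css-style selectors, jQuery-style
console.log($('title').text()); // "Units"
```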

That’s it. We now have an object that we can query for data pretty easily.

Step 3: finding and extracting some content

Now that we have the entire webpage loaded up and we can query it, it’s time to look for content. In my case, I was looking for references to pages that contain the actual content I wanted to extract. The easiest way to find out what you should query the DOM for is to use the “inspect element” feature of your browser. It will give you an overview of all the html elements on the page and where they are in the page’s hierarchy. Here’s part of the hierarchy I was interested in:

(Screenshot: the DOM hierarchy in the browser’s element inspector, showing the table-lotv element and its unit-datatable children.)

You can see an element that has the class table-lotv in the hierarchy. This element has three children with the class unit-datatable. The contents of this unit-datatable are of interest for me because somewhere in there I can find the names of the units I want to extract. To access these data tables and extract the relevant names you could use a query selector like this:
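A sketch of that query; the variable names are mine:

```javascript
$('.table-lotv .unit-datatable').each(function(index, dataTable) {
    // the race name lives in a span inside the .title-bar element
    var raceName = $(dataTable).find('.title-bar span').text();
    console.log(raceName); // "Terran", "Protoss" or "Zerg"
});
```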

In the above snippet $('.table-lotv .unit-datatable') selects all of the data tables. When I loop over these I have access to the individual dataTable objects. Inside of these objects I found the race name (Terran, Protoss or Zerg), which is contained in a span element inside an element with the class title-bar. Extracting the name isn’t enough for my use case though. I also want to scrape each unit’s page, and after doing that I want to write all of the data to a JSON file at once. To do this I used promises. This is a great fit because I can easily create an array of promise objects and wait for all of them to be fulfilled. Let’s see how that’s done, shall we?

Step 4: build your list of promises

While we’re looping over the dataTable objects we can create some promises that will need to be fulfilled before we output the big JSON file we’re aiming for. Let’s look at some code:
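A sketch of how that fits together. The URL, file name and the exact data extracted are placeholders, and scrapeUnits is the promise-returning function discussed in step 5:

```javascript
var request = require('request');
var cheerio = require('cheerio');
var Promise = require('promise');
var fs = require('fs');

request('http://example.com/units/', function(error, response, html) {
    if (error) return console.error(error);

    var $ = cheerio.load(html);
    var promises = [];

    $('.table-lotv .unit-datatable').each(function(index, dataTable) {
        var raceName = $(dataTable).find('.title-bar span').text();

        // collect the unit names in this data table
        var unitNames = [];
        $(dataTable).find('a').each(function(i, link) {
            unitNames.push($(link).text());
        });

        // scrapeUnits returns a promise (see step 5)
        promises.push(scrapeUnits(raceName, unitNames));
    });

    // wait for every promise to be fulfilled, then write the JSON file
    Promise.all(promises).then(function(promiseResults) {
        var data = {};
        promiseResults.forEach(function(result) {
            data[result.race] = result.units;
        });

        fs.writeFile('./units.json', JSON.stringify(data, null, 2), function(err) {
            if (err) return console.error(err);
            console.log('done!');
        });
    });
});
```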

Okay, so in this snippet I added Promise to the requirements. Inside of the request callback I created an empty array of promises. When looping over the data tables I insert a new promise, which is returned by the scrapeUnits function (I’ll get to that function in the next snippet). After looping through all of the data tables I use the Promise.all function to wait until all promises in my promises array are fulfilled. When they are fulfilled I use the results of these promises to populate a data object (which is our JSON data). The function we pass to the then handler for Promise.all receives one argument: an array containing the result of each promise we put in the promises array. If the promises array contains three elements, then so will promiseResults. Finally I write the data to disk using fs, which is also added in the requirements section (fs is part of node.js, so you don’t have to install it through npm).

Step 5: nesting promises is cool

In the previous snippet I showed you this line of code:
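With the illustrative names used in this post, that line was:

```javascript
promises.push(scrapeUnits(raceName, unitNames));
```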

The function scrapeUnits is a function which returns a promise. Let’s have a look at how this works, shall we?
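A sketch of scrapeUnits. The real function would request and parse each unit’s page; to keep this example self-contained it fulfils immediately with the data it was given, but the promise mechanics are the same:

```javascript
function scrapeUnits(race, unitNames) {
    return new Promise(function(fulfil, reject) {
        if (!unitNames || unitNames.length === 0) {
            // reject the promise when there's nothing to scrape
            reject(new Error('no units found for ' + race));
            return;
        }
        // fulfil the promise with an object describing this race
        fulfil({ race: race, units: unitNames });
    });
}

scrapeUnits('Terran', ['Marine', 'Marauder']).then(function(result) {
    console.log(result.race + ': ' + result.units.join(', '));
});
```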

This function is pretty straightforward. It returns a new Promise object. A Promise takes one function as a parameter, and that function receives two arguments: fulfil and reject. These two arguments are functions themselves, and we call them to either fulfil the Promise when our operation was successful, or reject it if we encountered an error. When we call fulfil, the Promise is “done”. When we use Promise.all, the then handler will only get called once all promises passed to all have been fulfilled.

Step 6: Putting it all together

The above script is a stripped version of the code I wrote to scrape all of the unit information I needed. What you should take away from all this is that it’s not very complex to build a scraper in node.js, especially if you’re using promises. At first promises might seem a bit weird, but once you get used to them you’ll realise that they are a great way to write maintainable and understandable asynchronous code. Promise.all in particular is a very fitting tool for what we’re trying to do when we scrape multiple webpages that should be merged into a single JSON file. The nice thing about node.js is that it’s javascript, so we can use a lot of the technology we also use in a browser, such as the css / jQuery selectors that cheerio makes available to us.

Before you scrape a webpage, please remember that not every webpage owner appreciates it if you scrape their page to use their content, so make sure to only scrape what you need, when you need it. Especially if you start hitting somebody’s website with hundreds of requests, you should ask yourself whether scraping this site is the correct thing to do.

If you have questions about this article, or would like to learn more about how I used the above techniques, you can let me know on Twitter.

How I migrated from Apache to Nginx

It’s no secret that nginx has certain advantages over apache. One of them is that nginx is supposed to be better at forwarding requests to applications listening on ports other than port 80. My VPS has been using apache ever since I set it up, because at the time apache was the only server I knew how to install and set up. But as I learned more and wanted to start using different ports for node.js or python apps, I figured I needed to move over to nginx. And so I did. In this post I will describe how.

Preparing

When I started the process of migrating I made sure that I had a backup of my most important website files. Not that I expected my files to blow up somehow, I just wanted to make sure that I wouldn’t lose anything. Fortunately I have most of my projects in gitlab so that wasn’t really an issue. After that I downloaded a back-up of my blog’s database. Again, just to be sure.

After I reassured my paranoid mind that everything would be fine I went on to install the things I needed to start migrating. There were only two things I really needed, nginx and php5-fpm. The nginx package is needed to launch the nginx server and php5-fpm is what nginx will hand php files off to. Installing them took just two simple commands.
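On my Debian/Ubuntu based VPS (an assumption about your setup) those two commands would be:

```shell
sudo apt-get install nginx
sudo apt-get install php5-fpm
```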

Configuration

Before I was able to make nginx serve my websites I had to configure php5-fpm so it doesn’t serve files based on a best guess, but only if we have an explicit (valid) file path. Even though this sounds like something that would make a great default I had to set that myself. In order to make this happen I had to modify the /etc/php5/fpm/php.ini file. This is the line that I had to change:
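The setting in question is cgi.fix_pathinfo; setting it to 0 stops the best-guess behavior:

```ini
cgi.fix_pathinfo=0
```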

If the line is set like that, it’s good; that’s the secure config we’re looking for. Next I had to set up the socket that php-fpm and nginx will use to communicate. The configuration for that is in /etc/php5/fpm/pool.d/www.conf, and the line where the listening is configured should look like this.
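On my setup that line pointed php-fpm at a unix socket (the exact socket path may differ per distribution):

```ini
listen = /var/run/php5-fpm.sock
```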

Once this is set, php-fpm needs to be restarted for the changes to take effect.
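On my setup that restart was one command:

```shell
sudo service php5-fpm restart
```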

Setting up nginx

After setting up php-fpm it was time to start setting up my websites. Just like apache, nginx can have multiple websites configured. Translating a basic website is not very hard; it took me about 30 minutes to figure out how to move Arto from an apache to an nginx config. A virtual host in apache might look something like below.
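A minimal virtual host along those lines; the domain and paths are placeholders:

```apacheconf
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com

    <Directory /var/www/example.com>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```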

Translating this to an nginx configuration looks like the following snippet.
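A hedged reconstruction of that configuration, based on the parts described in the rest of this post; the domain, paths and socket are placeholders:

```nginx
server {
    listen 80;

    root /var/www/example.com;
    server_name example.com;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location /cms/ {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # deny access to dotfiles
    location ~ /\. {
        deny all;
    }

    # deny php execution in the uploads and files folders
    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }
}
```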

What you might notice here is that the nginx configuration is a little bit longer and more detailed than the one for apache. This is because apache uses .htaccess files to set up global or per-folder access and rewrite rules. Nginx doesn’t use these files; instead, those rules go into your server configuration.

The first part tells nginx which port it should listen on. This makes it easy to spawn servers on different ports, or to have multiple domains on a single port. The next three lines tell nginx what the full path to the website is, what server name it should use and which file should be used as the index. The first two properties can also be found in the site configuration for apache. The last one, index, is not in the apache config; it tells nginx which file is the index file for the website.

Next up:

This section is similar to what the .htaccess would do. It tells nginx how it should attempt to serve files for certain paths. So if we’re at /, it tries to serve the requested file. If that’s not possible, we get index.html. If we request something from /api/, it tries to serve the requested path as a file; if that fails we get sent through to the index.php file. The /cms/ section works just like the /api/ section, just with a different path.

The part after that is intended to send requests through to php-fpm if the requested file is a php file. Next there’s a special declaration for robots.txt. Then we deny access to all files that start with a dot, because those files are not supposed to be accessed, and the final block denies execution of php in the uploads and files folders.

Making the switch

After translating Arto I had to convert a bunch of other websites with very similar configurations. When I was done with that it was time to make the switch. All it took to migrate after setting up my configuration was to stop apache (sudo service apache2 stop) and start nginx (sudo service nginx start or sudo service nginx restart if nginx was already running for some reason).

I expected things to break, fall over and not work, because migrating just couldn’t be this easy. But, in fact, it was. I only had one issue, with a php extension I use that wasn’t enabled for php-fpm yet. Enabling it was all it took to get everything up and running. Next up: building awesome node.js and python websites instead of using php! If you’re looking for a more complete guide of what you can do with nginx, there are great docs over at nginx.org.

If you have questions about this article or if you’ve got feedback for me, you’re always welcome to shoot me a tweet.

Icon fonts vs. svg icons

We can all agree that using png sprites for icons is not the most modern (or best) way to present icons on the web. Png is a raster format, which means that if you try to make the image (or icon) larger, the quality gets worse. When browsers started properly supporting @font-face and svg, some people chose icon fonts to serve their icons, while others chose svg sprites. These methods share the big benefit of scalability. This matters because our websites get viewed on many devices and you want your icons to be crisp on every device, not just the ones you optimized for by hand. This post is intended to give an overview of these two methods and to explore the benefits and drawbacks of each. At the end of this post you will hopefully have an understanding of both svg icons and iconfonts, and you’ll be able to choose one of these icon delivery methods for your own projects.

TL;DR: The comparison is very close, both have big upsides and no real big downsides. I’d say iconfonts win because they’re a bit easier to use. Svg icons are easier to position and manipulate. The code for this blogpost is on Github.

Getting set up


The first thing I’m going to compare is the set up process for each method. The method you end up choosing should not only work well but it should also be easy to manage. The first method I will set up is the iconfont. I will be using Gulp to automate the asset creation process. I made this decision because I use Gulp on all my projects. Also, Gulp seems like the right tool for this type of job; I don’t want to create my assets by hand. The icons I’m going to use were created by Jamison Wieser for The Noun Project.

Icon font

Like I mentioned, I will be using Gulp to generate my assets. The gulp-iconfont plugin seems like a good plugin to generate a font with. I also used the gulp-iconfont-css plugin so I didn’t have to create my own css template. With a couple of lines in my gulpfile and two plugins I managed to convert my svg icons into a font. Not bad!
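My gulpfile looked roughly like this; the file paths and font name are placeholders:

```javascript
var gulp = require('gulp');
var iconfont = require('gulp-iconfont');
var iconfontCss = require('gulp-iconfont-css');

gulp.task('iconfont', function() {
    return gulp.src('assets/icons/*.svg')
        // generate the css that maps class names to glyphs
        .pipe(iconfontCss({
            fontName: 'icons',
            targetPath: '../css/icons.css',
            fontPath: '../fonts/'
        }))
        // generate the actual font files from the svg icons
        .pipe(iconfont({
            fontName: 'icons',
            formats: ['ttf', 'eot', 'woff', 'svg']
        }))
        .pipe(gulp.dest('assets/fonts/'));
});
```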

svg icons

To make using the icons easy I will create a spritesheet that contains all my svg icons. I’ll be using gulp for this as well, just like I did for the iconfont. I’ve set up my output in “defs” mode, which means that I can use the icons like Chris Coyier describes in this css-tricks.com post. This method only uses one gulp plugin, but because that plugin wraps svg-sprite, which has a ton of options, it does seem a little less straightforward to set up. The output that was produced looks decent on first viewing, so that’s good.

Easiest to set up

Both methods can be set up within about 5-10 minutes, so this section is a tie; there isn’t a clear winner.

Filesize

Something we should always take into consideration is the file size of the things we end up using. So, a simple comparison in filesize:

  * iconfont: 8kb (or 12kb if the svg font is used)
  * spritesheet: 25kb

Best filesize

The winner in this example is the iconfont; it is significantly smaller than the spritesheet.

Ease of use

Whenever I pick a technique I want to use, it has to be something that’s easy to use. Of course it’s important that something works well and is fast and lightweight, but I feel like ease of use should be mentioned right alongside those requirements, because in the end you might be working with the tool or technique you chose for quite some time. So for my own sanity, I like something that’s easy to use.

Implementing

To implement the iconfont, all you have to do is add the stylesheet to the head of your document. In order to use the icons you just create span elements and give them the icon class. The second class you give them is your icon’s name. An example:
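With a hypothetical icon named calendar, that markup could be:

```html
<span class="icon icon-calendar"></span>
```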

Easy enough, right? This results in the following rendered output:

(Screenshot: the icons as rendered by the icon font.)

To implement the svg spritesheet I needed a polyfill to make everything work. That’s because IE doesn’t support referencing external svgs through <use>, and I didn’t want to include the whole svg inside of my html body. For more info, refer to this css-tricks.com post. The html I ended up using looks like this:
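Roughly the markup I mean; the icon id, viewBox values and spritesheet path are illustrative:

```html
<svg class="icon" viewBox="0 0 100 100">
    <use xlink:href="spritesheet.svg#icon-calendar"></use>
</svg>
```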

This is quite a bit more complicated than the iconfont method. I had to figure out the proper viewBox settings for my icon, and I have to do a lot more typing.

Positioning

When you want to position your icon with the iconfont method you’ll run into some weird and complicated stuff eventually. That’s because the iconfont is rendered as text, so there’s line-height applied to it, for example. This can lead to unpredictable and strange behavior in some cases.

When you use the spritesheet approach you get to decide almost everything: the sizing, positioning and display style can all be manipulated directly, as if you’re manipulating an image. So for positioning, the spritesheet is definitely better.

Styling

When you want to style your icons, you’re going to love the spritesheet approach. Because you’re working with actual svg icons you can set strokes, fill colors and everything else, just like you might with any other svg! The iconfont, however, is flattened in a way. You can set a color for the whole icon but you can’t style individual sections, so the spritesheet is more customizable than the iconfont.

The easiest to use method

Even though it’s a little bit more work to implement, the spritesheet wins. It’s easier to position and the more powerful styling options are also a big advantage over an iconfont. So the winner for this section is spritesheet, hands down.

Render quality

Personally I haven’t seen the difference yet, but there are definitely some potential rendering differences between the spritesheet and an iconfont. Because an iconfont is rendered by the browser as a font, it is also anti-aliased. The result could be that your icons look less sharp when they’re used as a font. Like I said, I haven’t had any issues with this in the real world, but the potential is there.

The best rendering method

Even though the rendering seems to be nearly identical, the spritesheet wins here. That’s because an iconfont can potentially suffer from a lack of sharpness due to the browser’s anti-aliasing.

Browser support

The @font-face method of embedding custom fonts is supported by all major browsers, so it’s very safe to use. The spritesheet method is supported by all browsers except IE. However, a polyfill called svg4everybody is available, so at the end of the day both methods work in all major browsers.

The best browser support

Because the spritesheet method requires a polyfill and the iconfont doesn’t I declare the iconfont the winner of the browser support section.

And the winner is..

After exploring and comparing both the iconfont and spritesheet approach I can honestly say that the comparison is very close. The iconfont is better at the implementation, more lightweight and it has better browser support. The spritesheet is more flexible, easier to work with and has great browser support if you include a polyfill.

Earlier in the article I mentioned that one of the major factors for me when deciding on things like this is ease of use. And because of that, I would say that the iconfont wins. The decision is really tough, actually, because I’m not a fan of how you have to mess around to position an icon with this technique. Nor am I a fan of the anti-aliasing risks, because I like my icons to be sharp and crisp. But iconfonts are lightweight, easy to use and implement in general, and I’ve never come across a situation where I actually had to style parts of an icon rather than change the color of the entire icon. So, yeah, that concludes this post. Iconfonts win. If you beg to differ or have feedback for me, please send me a Tweet. I’d love to hear your opinions on this.

If you want to have a look at the source files I’ve used, the repo is located right here on Github.

How to choose between rem and em

A few days ago I found this article that argues for using rem units when defining font sizes. Generally speaking this is good advice. The rem comes with great, predictable behavior, just like pixels do. But the rem also comes with accessibility advantages: when a user changes the font size in their browser settings, the rem and em units will both respect that and resize accordingly, while the pixel unit doesn’t. That’s great news for the user. But how do you choose between rem and em? Time to go in depth on what these units do. First I’ll explain how each unit works; based on that, I’ll explain how you can decide on a sizing unit.

The rem unit

Since the rem unit is the easiest one to understand and use, it’s the sizing unit I’ll start off with. The rem is relatively new, but if you don’t have to support IE8 anymore you can safely use it. Rem is short for “root em”: it is a lot like the em unit, except it is relative to the root font size.

So, what does this all mean? The rem unit is a sizing unit that’s related to font size. With default browser settings 1 rem should be equal to 16px. That is because the default browser font size is, you may have guessed it, 16px. So using rems is almost as easy as using pixels. Want to make something 80px wide? That will be 5 rem please.
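For example (the class name is mine):

```css
.sidebar {
    /* 5 × 16px = 80px with the default root font size */
    width: 5rem;
}
```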

More complicated things like 100px will require you to do some math, but if you use something like Sass I recommend checking out Bourbon.io; it provides a rem-calc function to help you calculate rems.

A workaround many people use is to set the font size of the root (html) element to 10px, so 1 rem equals 10px on their website instead of 16px, which makes working with rems a lot easier. The cool thing about the rem unit is that 1 rem will always be the same size everywhere on the page, no matter what.

The em unit

The em unit is a lot like the rem unit. The difference is that the rem unit is always relative to the root font size, while the em unit is relative to its containing element. An h1 that is directly inside the body and has a font size of 2 em will have a font size of 32px, assuming the default browser font size is intact. If you wanted to add a link inside that h1 and make it 24px, your first instinct would probably be to use 1.5 em as the font size for that anchor tag. Let’s try this out.
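The styles for that experiment (assuming the default 16px root font size):

```css
h1 {
    font-size: 2em;   /* 2 × 16px = 32px */
}

h1 a {
    font-size: 1.5em; /* 1.5 × the h1's 32px = 48px, not 24px */
}
```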

So… what went wrong here? The header has a 2 em font size, the anchor is 1.5 em, so the anchor should be smaller than the rest of the text, right? Except the anchor is larger than the rest of the header text, which makes no sense. Remember that I stated earlier that the em unit is relative to its containing element? That’s why the anchor is larger than the header text. The anchor is a child of the header, so a 1.5 em font size means that the anchor’s font size is 1.5 times its parent’s font size: 1.5 × 32px = 48px.

This is something that makes the em a complicated unit to work with; you can imagine that deep nesting with multiple font sizes can get really ugly at some point. A simple demonstration:
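The css behind that demonstration:

```css
ul {
    /* every ul is 80% of its parent's font size, so a ul nested
       inside another ul ends up at 0.8 × 0.8 = 64% of the body */
    font-size: 0.8em;
}
```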

What you see here is a list with a nested list. The outer list has a larger font size than the inner list. This happened because I set a 0.8 em font size on the ul tag. When there’s a nested list, that 0.8 em is relative to the 0.8 em font size the outer list already has. So the outer list is 80% of the body’s font size, and the nested list is 80% of the outer list’s font size, or 64% of the body’s. Confused? I understand; the em isn’t a very straightforward unit.

Making the decision

Now that we know how both units work, we should be able to make an informed decision. So, should we use rem or em for our sizing? The answer, according to me, is both. Whenever you want absolute control over a size, you probably want to use rem. An example would be an element that you would normally make 100px wide. You want that element to have the same size no matter where in the document you use it; the size has to be 100px. That is a case where you should convert that 100px to rems.

However, there are cases, like the link inside of a header element, where you might not want to set an absolute size. You might want to say: this header element should have a font size that is two times larger than the body text it sits above. That would mean you want a 2 em font size, because then you know that your header is always twice as large as the body text of the element. Taking this one step further, you might want to say that the anchor tag’s font size should be 75% of the header’s font size. That’s 0.75 em.

What I would like to conclude here is that both of these sizing units are extremely powerful. One is very good for setting absolute sizes that are still accessible and adaptable. The other is good for setting relative sizes: whenever something should be x times the size of something else, regardless of how big that is, the em unit is your friend. I do think, however, that the rem should be the unit of choice in many situations. But especially with margins, paddings and certain spacing situations, I have found the em unit to be the best choice, because all those sizes are usually relative to another size, and that’s where the em shines bright.

So, next time you’re faced with the rem vs. em decision I hope you think about the way they each work and make an informed decision. My rule of thumb is: rem replaces absolute pixel sizes, em is for relative sizes. If you have questions for me, feedback or want to get in touch you can always contact me on Twitter.

Consistency and discipline over motivation

One of the beautiful things about being a developer is that many of us actually have the opportunity to take an activity we enjoy and make it our job. Many developers are happy to do some extra work or learn something at home or on the weekend just because they are so eager to learn and play. While this is pretty awesome, it won’t last forever. You won’t be motivated to learn every single day, especially once you start doing development as a full-time job. I experience this as well: sometimes I have a couple of days or even weeks where my motivation is through the roof. I’ll get tons of work done and the days just fly by. On other days I just can’t seem to get started; everything is distracting and the motivation just doesn’t seem to be there. When I look at some of the more senior developers I know, it seems that they have moved past this phase. They always seem to be motivated, and sometimes they seem extra motivated. They just seem to have no shortage of the good stuff! How do they do this?

Is it motivation you should look for?

If you think about it, motivation isn’t worth much. It’s just not there all the time and you can’t build a solid career based on it. When I was looking for ways to improve motivation I came across posts like this, telling me that I should get disciplined. Some went even further and said that motivation just isn’t worth your time.

Because of these posts I started to realize that motivation is a great driver of productivity, but only when it’s there. When motivation isn’t there, every job seems like a chore. Have to adjust a form on a website? That sounds terrible when you’re not motivated: you’d have to create a new input field, maybe change a database table, and more. You get tired just thinking about it. Now consider doing that same thing when you’re motivated. You’d probably get excited, because you get to improve the product and code base that you’re working on. This isn’t feasible in the long run, though. If you want a job in development you’ll need to train yourself to become more consistent, more disciplined. Motivation will be the bonus, not the requirement.

Changing motivation into discipline

If you want to be more disciplined you’ll sometimes have to be pretty tough on yourself. There’s rarely a valid excuse not to do what you’re supposed to do. So instead of postponing things until you feel motivated or obliged to do them, just get started. If you do this, and are consistent about it, you’ll see that it helps. I often find that it helps not to jump in headfirst like you would when you’re super motivated, but to just sit back first. Take 15-20 minutes to figure out what it is you’re going to build and what code you’re going to write. Figure out what subtasks there are and split them up into blocks that will take about 40 minutes to complete. If you do this, you will have a well-structured overview of what you’re going to do. You’ll know how busy you are for the day or week and you’ll be able to plan accordingly. During those 40-minute work cycles, try to turn off notifications that might distract you. Discipline yourself to only check notifications in between cycles.

After a 40-minute cycle it’s time to take a quick break. Try to make it an actual break: get up and grab a drink. If there’s email or anything similar that requires your attention, take a peek. Reply if needed, or add replying to your to-do list and make it part of a 40-minute cycle if the email requires you to figure something out in depth. Otherwise, use the break, or extend it a little (but not too much; 10 minutes should be the maximum). In the beginning you might feel like you’re restricting yourself because everything has to be thought about or planned; you can’t just start doing something and then do something else until you’re out of motivation. That’s fine, you are training yourself to have a consistent and disciplined workflow. If you find that 40 minutes is too long or too short for you, you can always change the cycles. You could even do that on a day-to-day basis if you feel it’s appropriate for the tasks you’re working on. I personally found after a few weeks that I prefer 50-minute cycles with 10-15 minute breaks.

The benefits are real

When I look at more senior developers I notice that many of them have a workflow similar to this. They take multiple short breaks throughout the day, and between those breaks they tend to be very focused on the tasks they have to complete. They don’t have their Slack open all the time and they work on a single thing at a time. And they are consistent about that. Every day they seem to be able to flick the switch and go into work mode. Of course they still have more and less motivated days, but a motivated day will just make them super productive instead of merely productive, because being productive is their default setting.

So let’s get out there and become consistently more disciplined!

Using Flexbox in the real world

The Flexbox module for CSS was built with the intent to provide a more robust, less hacky way to lay out elements on pages. When you’re building a webpage you often don’t know how high or wide every element could or should be. This can cause problems in certain layouts, which leads to ugly hacks. Flexbox solves this widespread layout issue. The module has been in development for quite some time and the W3C gave the spec a “last call working draft” status back in September of 2014. Browser support for this module is very good if you don’t have to support IE9, so you can safely use it. In this post I will provide a few code examples to show you how you can use Flexbox in some everyday situations.

Creating a filmstrip

Have you ever created a horizontally scrolling filmstrip kind of module? You know the size of each element in the strip and the size of the containing element, but the size of the inner element, the filmstrip itself, is unknown. This would normally result in a layout like this, and you would have to use JavaScript or hardcode the size of the inner element to make this work.

So what would happen if you used Flexbox for this? Well, Flexbox allows an element to grow, not only on the y axis like an element normally does, but also on the x axis. It’s one of the reasons Flexbox is really cool. Let’s try this.
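A sketch of what the Flexbox version could look like (the class names are my own, and vendor prefixes are left out):

```css
.filmstrip {
  overflow-x: scroll;  /* let the contents scroll horizontally */
}
.filmstrip-inner {
  display: flex;       /* allow the inner element to grow on the x axis */
}
.filmstrip-item {
  flex-shrink: 0;      /* keep each item at its own width */
  width: 200px;        /* example item width */
}
```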

It’s pretty neat, isn’t it? Between the first and second example there are only three lines of code that are different. Okay, actually there’s a little more, but I’m not counting the vendor prefixes because you don’t have to write those if you use an autoprefixer. The first difference is that we give overflow-x: scroll; to the filmstrip container; that’s just to make the contents scroll. The second step is to set display: flex; on the inner element. If you did only these two things, the items inside of the inner element would shrink to fit inside of their container. You don’t want this, so the last thing you do is add flex-shrink: 0; to the filmstrip items. A flex-shrink value of 0 means an item won’t shrink, while the default of 1 means it will. There’s also a flex-grow property, which works the same way but determines whether an element will grow or not.

Vertical centering

Ever since I started writing CSS this has been a problem. How do you center an element, with or without a known height, in a container that is or isn’t flexible? No matter how you look at it, vertical centering is annoying. I’ve used hacks that would absolutely position the centered element at 50% from the top and then use a negative top margin to push the element towards the center. Another method is to use a transform, which is slightly cleaner, but you still have to use absolute positioning and a top offset of 50%. You could display your stuff as if it’s a table and then vertically center content, which works well, but it just doesn’t feel right. It starts to feel plain wrong once you’ve tried to do this with Flexbox.
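Here’s a sketch of the Flexbox approach (the markup is assumed to be a container div with an img inside):

```css
.container {
  display: flex;
  align-items: center;     /* centers the image vertically */
  justify-content: center; /* centers the image horizontally */
  height: 300px;           /* example height for the demo */
}
```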

So in this example I’ve set up a container, and inside of that container is an image. Flexbox is used to center the image both vertically and horizontally inside of its containing element. The property used for vertical centering is align-items. The property that centers horizontally in this example is justify-content. In my opinion, this is the best way to vertically align items I’ve seen.

Fitting things into a container

The filmstrip example gave this one away already but Flexbox can be used to fit an unknown number of items into a container. This is really nice if you have a couple of images but you can’t really be sure of how many. You could optimize for a certain number, let’s say four in the case of this example. And then for the edge cases where you have five images or more, you have Flexbox to make the images smaller so everything will still fit nicely into the containing element. Let’s check it out.
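A sketch, assuming a row of images inside a container element:

```css
.image-row {
  display: flex; /* children shrink to fit, no matter how many there are */
}
```

One caveat from my side: if the images refuse to shrink in some browsers, their intrinsic minimum size may be the culprit; adding min-width: 0; to the items is a common fix.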

All we had to do to achieve this is add display: flex; to the containing element. In the first example we saw that, by default, children of an element with display: flex; will shrink to fit inside of that container.

Conclusions and further resources

In this post I showed you three examples of what you can achieve with Flexbox and how you can do that. Note that we just used Flexbox for three small things and that I didn’t mention Flexbox as a method of laying out your entire page. The reason for this is that Flexbox is not intended for that. There is a spec on the way for laying out your entire page and it’s called grid.

If you’re looking for a good overview of how Flexbox works I recommend that you visit this cheatsheet on css-tricks.com. This cheatsheet provides a lot of information on how you can use Flexbox and what properties it has. Lastly, if you’re looking for more examples of what problems you can solve with Flexbox check out this “solved by Flexbox” page.

Service workers are awesome

In the war between native and web apps there are a few aspects that make a native app superior to a web app. Among these are features like push notifications and offline caching. A native app, once installed, is capable of providing the user with a cache of older content (possibly updated in the background) while it’s fetching new, fresh content. This is a great way to avoid loading times for content, and it’s something that browser vendors tried to solve with cache manifests and AppCache. Everybody who has tried to implement offline caching for their webpages will know that AppCache manifest files are a nightmare to maintain and that they’re pretty mysterious about how and when they store things. And so the gap between native and web remained unchanged.

In comes the service worker

The service worker aims to solve all of our issues with this native vs. web gap. The service worker will allow us to have a very granular controlled cache, which is great. It will also allow us to send push notifications, receive background updates and at the end of this talk Jake Archibald mentions that the Chrome team is even working on providing stuff like geofencing through the service worker API. This leads me to think that the service worker just might become the glue between the browser and the native platform that we might need to close the gap once and for all.

If you watch the talk by Jake Archibald you’ll see that the service worker can help a great deal with speeding up page loads. You’ll be able to serve cached content to your users first and then add new content later on. You’re able to control the caching of images that aren’t even on your own servers. And more importantly, this method of caching is superior to browser caching because it will allow for true offline access and you can control the cache yourself. This means that the browser won’t just delete your cached data whenever it feels the need to do so.

How the service worker works

When you want to use a service worker you have to install it first. You do this by calling the register function on navigator.serviceWorker. This will attempt to install the service worker for your page, and it’s usually the moment where you’ll want to cache some static assets. If this succeeds, the service worker is installed. If it fails, the service worker will attempt another install the next time the page is loaded; your page won’t be messed up if the installation fails.

Once the service worker is installed you can tap into network requests and respond with cached resources before the browser goes to the network. For example, say the browser wants to request /static/style.css. The service worker will be notified through the fetch event, and you can either respond with a cached resource or allow the browser to go out and fetch the resource.

HTTPS only!!

Because the service worker is such a powerful API, it will only be available through HTTPS when you use it in production. When you’re on localhost, HTTP will do, but otherwise you are required to use HTTPS. This is to prevent man-in-the-middle attacks. Also, when you’re developing locally you can’t use the file:// protocol; you will have to set up a local webserver. If you’re struggling with that, I wrote this post that illustrates three ways to quickly set up an HTTP server on your local machine. When you want to publish a demo you can use GitHub Pages; these are served over HTTPS by default, so service workers will work there.

A basic service worker example

Browser support

Before I start with the example I want to mention that currently Chrome is the only browser that supports service workers. I believe Firefox is working hard on an implementation as well, and the other vendors are vague about supporting the service worker for now. This page has a good overview of how far along service worker implementations are.

The example

The best way to illustrate the power of the service worker is probably to set up a quick demo. We’re going to create a page that has ten pretty huge pictures on it. These pictures will be loaded from several sources because I just typed ‘space’ into Google and picked a bunch of images there that I wanted to include on a webpage.

When I load this page without a service worker, all the images will be fetched from the server, which can be pretty slow considering that we’re using giant space images. Let’s speed things up. First create an app.js file and include it in your page’s HTML right before the closing body tag. In that file you’ll need the following script:
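A minimal sketch of that script — I’ve pulled the registration logic into a small function here, and the /worker.js filename is an assumption:

```javascript
// app.js — registers the service worker for the page.
// The worker filename (/worker.js) is an assumption for this sketch.
function registerWorker(container, scriptUrl) {
  if (!container) {
    // No service worker support in this browser; fail gracefully.
    return Promise.resolve('unsupported');
  }
  return container.register(scriptUrl)
    .then(function () { console.log('success'); return 'success'; })
    .catch(function () { console.log('failure'); return 'failure'; });
}

if (typeof navigator !== 'undefined' && navigator.serviceWorker) {
  registerWorker(navigator.serviceWorker, '/worker.js');
}
```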

This code snippet registers a service worker for our website. The register function returns a promise; when it resolves we just log ‘success’ for now, and on error we log ‘failure’. Now let’s set up the service worker.
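A sketch of what the worker’s install step could look like — the exact file list is an assumption on my part:

```javascript
// worker.js — install step: pre-cache a list of files into the SPACE_CACHE.
var CACHE_NAME = 'SPACE_CACHE';
var urlsToCache = [
  '/index.html',
  '/app.js'
  // ...plus the space image URLs you want available offline
];

function installHandler(event) {
  // Installation only succeeds once every file has been cached.
  event.waitUntil(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.addAll(urlsToCache);
    })
  );
}

if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('install', installHandler);
}
```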

The code above creates a new service worker that adds a list of files to the "SPACE_CACHE". The install event handler will wait for this operation to complete before it reports a successful installation, so if caching fails the installation will fail as well.

Now let’s write the fetch handler so we can respond with our freshly cached resources.
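A sketch of such a handler (cache-first, falling back to the network):

```javascript
// worker.js — fetch step: answer from the cache when possible.
function fetchHandler(event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      // Serve the cached response if we have one, hit the network otherwise.
      return cached || fetch(event.request);
    })
  );
}

if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', fetchHandler);
}
```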

This handler takes a request and matches it against the SPACE_CACHE. When it finds a valid response, it responds with it. Otherwise we use the fetch API that is available in service workers to load the request and respond with that. This example is pretty straightforward and probably a lot simpler than what you might use in the real world.

Debugging

Debugging service workers is far from ideal, but it’s doable. In Chrome you can load chrome://serviceworker-internals/ or chrome://inspect/#service-workers to gain some insights on what is going on with your service workers. However, the Chrome team can still improve a lot when it comes to debugging service workers. When they fail to install properly because you’re not using the cache polyfill for instance, the worker will return a successful installation after which the worker will be terminated without any error messages. This is very confusing and caused me quite a headache when I was first trying service workers.

Moving further with service workers

If you think service workers are interesting I suggest that you check out some examples and posts online. Jake Archibald wrote the offline cookbook (jakearchibald.com/2014/offline-cookbook); there’s a lot of information on service workers in there. You can also check out his simple-serviceworker-tutorial on GitHub (github.com/jakearchibald/simple-serviceworker-tutorial); I learned a lot from that.

In the near future the Chrome team will be adding things like push notifications and geofencing to service workers, so I think it’s worth the effort to have a look at them right now, because that will really put you ahead of the curve when service workers hit the mainstream of developers and projects.

If you feel like I made some terrible mistakes in my overview of service workers or if you have something to tell me about them, please go ahead and hit me up on Twitter.

The source code for this blog post can be found on Github.

Some tips for new front-end developers

You’ve decided you want to get into front-end development and you’ve managed to learn a few things. The time has come for you to get some working experience and start growing your career in a beautiful field. I was in that position not so long ago and I noticed that actually having a job and working is a lot different than writing code from the safety of your own environment. So how do you present yourself in a professional way? How do you make sure that you learn as much as you can? Today I want to share some of the things I have learned about this.

Take yourself seriously

But don’t be cocky. It’s good to show other people that you’re serious about development. It will make sure that they don’t just steamroll right over you. It will show that you have passion and aren’t just there to put in a couple of hours and then go home afterwards. The things you say and do matter, and it’s okay to make sure that people know. However, at the same time you should be aware that you’re just starting out. You’re a junior developer and you’re in a position to learn. Which brings me to my next point.

Ask questions

When you’re trying to prove yourself it’s easy to isolate yourself and work really hard. When you get assigned a task you’re probably going to want to solve it on your own. It makes sense: you’re trying to make a name for yourself and you’re trying to be taken seriously. How will your colleagues be able to do that if you can’t even be trusted with doing a small, simple task on your own? So you try to keep everything to yourself. Even though this mindset makes a lot of sense, I want to advise you to ask questions. A lot of them. Don’t just ask about the tasks you’re trying to complete, but also ask why certain things work the way they do. Ask what motivated a certain technology or design choice in your code base. Even ask your colleagues for their opinions on work-related topics. You can learn a lot from them, and it will prove to them that you care enough about your job to ask somebody for help with finding the best solutions.

Take responsibility for the code you work with

Now that you’re part of a team of developers, you’re sharing in the responsibility over a code base. When I was listening to Ben Orenstein’s podcast a few days ago he mentioned something he noticed in interviews which really stuck with me. He said that when he asked people he interviewed why a certain piece of code worked the way it did many candidates would come up with a variety of excuses why they didn’t know or care about how the code worked. What these excuses usually came down to was that somebody else wrote that piece of code and it wasn’t 100% relevant to the task they were trying to do. So they would just assume that the person who wrote the code knew what they were doing and they didn’t feel responsible for the code, so they wouldn’t touch it.

When I thought about that, I figured that I take code written by my colleagues for granted a lot of the time. Even though they often double-check my code to see what it does and how it can be improved. They don’t do that because they don’t trust me; they do it because they feel responsible for the code I write, because we all share a code base. So when you touch a piece of code somebody else wrote and you’re not sure how it works, you should ask somebody. If you do this it will show that you actually care about the bigger picture and that you’re taking responsibility for the code that you’re working with. And that is a good thing.

Don’t pretend you know everything

When I was doing my first internship I thought I actually knew a lot. Whenever my boss would come up with a project I would have a solution immediately. I think I kept that up for a few months until I realized that in fact I didn’t really know anything. I knew the very basics of ActionScript and I knew how to create simple things with Adobe Flash, and that was it. I wasn’t a good programmer; I just didn’t know what I didn’t know. So I want to advise you to be humble. Be aware that you probably don’t know half of what you think you know. You don’t have the experience to know what works and what doesn’t. And nobody is going to blame you for that. It’s okay to say that you’re not sure about something, and it’s also okay to just say that you don’t have a clue about how you should approach something. And again, it’s okay to ask questions.

If anything, your colleagues and your boss will appreciate the fact that you ask them for help, it gives them a sense of comfort knowing that you’re not just doing whatever. Many good developers also seem to enjoy the act of teaching, sharing their knowledge with others. So actually by not pretending you know it all you’re learning more, making sure you don’t do anything weird and you’re providing others with the opportunity to share knowledge.

Take the time to read and learn

If you pick up a book on development every once in a while it will give you a much better understanding of the topic you’re reading about. Even though the modern world allows you to find almost anything online I have found that books are a great way to take a more casual approach at learning. I feel like reading a book speaks to a whole different mindset for me and I seem to be able to focus a lot better and longer when I’m reading a book. Also, some subjects require the repetition and explanation that a book can give you.

This also applies to learning a new tool or framework. It’s okay to sit down for an hour or two so you can read about it before you start working with it. I have found that doing this provides a sense of context and it can really help with exploring the features of the given framework. You’ll also be able to gain a much deeper understanding of what goes on, because sometimes the framework documentation will go into why certain choices were made. While these choices may seem insignificant at first sight, they might provide you with some context when you’re actually working with the framework, which can help you a lot in the long run.

Conclusions

When you’re just starting out as a developer it’s really easy to overlook all the things you don’t know. When I was just starting out I worked with some people who kept emphasizing that some things were easy but I never knew why. I never asked. And if there’s something I’ve noticed, it’s that asking is key to becoming a better developer. And it doesn’t stop there, you also have to listen. Listening is a great way to learn. Opinions of more experienced people aren’t based on the latest and greatest, they’re based on what works and what doesn’t. They tend to have experience with a lot of things and also, they tend to admit when they aren’t sure. So if somebody with tons of experience tells you something, assume that they know what they’re talking about. And if you have doubts, ask them. They will most likely be happy to explain to you what they think and why. They will probably even be excited about hearing what you have to say as well. So I guess this whole post comes down to a few things. Be confident, be eager, ask questions and don’t fake it.