What I talk about when I talk about DevOps

What does ‘DevOps’ mean, anyway?

Earlier in my career, I was a DevOps consultant – and we were trying to hire other DevOps consultants. But the software industry is actually quite confused about the term ‘DevOps’ and what it really means. I was starting to wonder whether putting ‘DevOps’ in our job ads might actually be counter-productive – and sitting in a seminar at YOW! one week, I finally understood why.

Elabor8 (where I worked at the time) had a booth at YOW!, and I gave a couple of lightning talks there. Probably the biggest crowd-pleaser was my talk on resiliency patterns in distributed systems. I covered some difficult topics like the importance of having idempotent rollback steps in compensating transactions and how lessons learned from the ship-building industry help us craft better distributed systems, all presented in 10 minutes in a crowded event space.

Hang on, you may well ask. If I’m a DevOps consultant, why am I talking about atomicity and consistency in distributed systems? Shouldn’t I be talking about cool PowerShell tips and how to set up Jenkins?

As is so common with rhetorical questions, the answer is a resolute ‘No’.

When I talk about DevOps, I talk about Software Engineering.

When I do DevOps work, I’m doing software engineering. When I hire for DevOps roles, I hire software engineers. But I don’t hire just any software engineers: I want the ones who care about the delivery process. They know how to build software, and also how to put it in front of the user – and how to keep on putting that software in front of users, sprint after sprint, story after story, rapidly, efficiently, and without breaking things.

‘DevOps Engineer’ is the next ‘Full-Stack Developer’ – and not because it’s the next hype-cycle in tech hiring. In the same way full-stack developers expanded their scope to cover both the UI and the back-end, DevOps engineers have expanded their scope beyond writing software – and into the realm of how we get that software in front of users, in the fastest and most reliable way possible.

No longer are we content to just build back-end systems and UI layers on top of them: as a profession, we’re coming to understand that software engineering is bigger than just churning out vertically-sliced user stories. Software engineering is about building the right thing and keeping it running – and a DevOps engineer is a software engineer who cares about both steps. You don’t want a team full of DevOps engineers – but you definitely need at least one.

When I talk about DevOps, I talk about Agile.

You start to see the real benefits of automation when you’re deploying to production regularly. If you’re only shipping a few times a year, the overhead of a manual release process doesn’t hurt you that much. If you have a 6-week QA pipeline and a 2-month UAT window, you don’t need DevOps (yet) (but you do need to change something! Ouch!). Once you start trying to deploy regularly – getting your cycle time down, keeping your WIP low, delivering value to the user faster and more frequently – that’s when the overhead starts to hurt.

Once you introduce agility to your process, that’s when you need to pay attention to the DevOps movement. That’s when you need some software engineers who care about automation. Please note though that I’m not saying you need “some DevOps” – there’s no such thing as “some DevOps”, and anyone who tries to sell you “some DevOps” is doing you no more favours than someone who tries to sell you “some Agile”. What you need is some smart software engineers who care about DevOps – and you need to give them the time and resources they need to do their job.

When I talk about DevOps, I talk about teams.

DevOps engineers are software engineers, but that doesn’t mean you should fill your software teams up with DevOps engineers. DevOps engineers tend to be passionate about a bunch of really interesting stuff: resilience patterns, testing, automation, source control, and release management. The great thing about multi-disciplinary, cross-functional teams, however, is that you get a bunch of people with different passions together, and that breadth gives you the ability to do great things. Don’t try to hire DevOps engineers (or worse, to build a DevOps team). Hire software engineers, and when you find ones who are great at DevOps, keep them.

Having cross-functional teams also gives your team members the chance to cross-skill while they work with other team members, which is why…

When I talk about DevOps, I talk about teaching.

Great DevOps engineers have a genuine enthusiasm for quality software engineering and release management, and their enthusiasm is infectious. Great DevOps engineers don’t hoard their knowledge, but help their fellow software engineers to learn more about the DevOps mindset by sharing what they know. They also learn from their colleagues who specialise in other software engineering fields, becoming more well-rounded themselves as they help others do the same.

Finally: When I talk about DevOps, I talk about people.

DevOps is a branch of software engineering – and whatever you might hear, software engineering is all about people. It’s about the people who use our software, and the people who build it. DevOps is the intersection of those two groups: users and developers. Our users are not just end-users, who enjoy higher-quality software, but also our fellow engineers, who rely on our automation to work more efficiently. They’re our managers, who rely on the insights we give them into the release process. Our users are the junior software engineers who may one day specialise in DevOps engineering – or who might use what they can learn from us to be better at some other branch of software engineering.

Update (2021): If you’re a software engineer who cares about the difference between GitFlow and GithubFlow; who has made calls to the Octopus API; or who loves showing other developers how to write a better test or add more context to a log message; please talk to me about joining Squiz. It’s a great team, and whether or not we’re actively hiring for a role, I can look to see if there’s a place for you here. If you want to take the engineering you care about and have a broader impact on more people, you want to join us. Get in touch.

Anatomy of a Job Ad

Wanted: Senior DBA!

  • Minimum 10 years experience managing SQL Server 2015 or newer

It’s a classic joke – the job ad which wants you to have experience in a specific technology for longer than it’s existed. Sadly, it’s funny because there’s a grain of truth in it: job ads are terrible.

Wanted: Junior Developer

  • 1-2 years experience writing software
  • Strong knowledge of SQL, git, Python, Haskell, and Perl
  • Good UI skills, excellent API design skills, knowledge of ESBs and other distributed software patterns
  • Excellent grounding in the principles of good software design and Agile project delivery
  • Experience with [you usually find a list of specific platforms and libraries here]
  • Go-getter, always-be-learning attitude
  • Highly regarded: strong maths and stats skills, C++ experience
  • Highly regarded: exposure to data science, experience with Big Data platforms such as Hadoop
  • Excellent communication skills are a must!

I hate seeing job ads like this one. You want someone with 1-2 years’ experience to know all of that – and be confident enough to tell a recruiter that, and back it up in an interview? You’re not selecting for capability here. You’re selecting for over-confidence, and the coincidence of having a first job which involved just the right mix of technologies.

Technology Leadership Position!

Are you a dynamic leader with top-notch management skills and brilliant technical ability? Do you have world-class knowledge of big data platforms and machine learning? Are you just as comfortable writing a PhD dissertation as you are selling a group of executives on a new company strategy? Do you have a passion for creating a dynamic company culture and mentoring a team of brilliant engineers, all while maintaining an unwavering focus on creating an unmatched user experience? Do we have a job for you!

If you’re posting a job ad like this one, you’d better have a remuneration package to match; but you probably don’t. The technology industry is full of people with imposter syndrome – and if the perfect applicant (who would have been great for your team) didn’t have it before they read this job ad, they will afterward. If the candidate who does meet that brief exists, they’re definitely not reading job ads on Seek, anyway.

These are, of course, caricatures, but they’re not so different from real job ads I’ve seen. How about a real example from my own life?

Checkbox Syndrome

Many years ago, I landed a job in the United States (I’m based in Australia, so this would have been a big move for me). The recruiter explained that the job had been open for nearly two years, and they were thrilled to have found me! I was their first suitable candidate, and they were keen to rush me through the hiring process.

This puzzled me, because I’m not that special. How was I the first suitable candidate in two years?

Well, it turns out that they’d written a very specific candidate description, and someone in HR had been handed the job of finding someone that matched. They were after someone who listed optimising high-throughput transactional systems on their LinkedIn profile, and who had completed at least two years of tertiary study in Latin or Ancient Greek.

They were building some language-processing software, and someone had decided that an engineer with a background in linguistics would be good to have on the team. That was somehow translated to requiring some tertiary education in a classical language – and bam! Two years of turning down candidates who probably would have been highly successful in the role. All they actually wanted me to do was to come up with a way to load-test the system, and then find hot-spots in the code and optimise them.

I never did end up moving to Houston: my visa application fell through (thanks, Global Financial Crisis). It was probably for the best: I wouldn’t have been a person to this company, or even an engineer; just a series of checkboxes.

What is a job ad for, anyway?

When you’re writing a job ad, please don’t forget what the ad is for. Your goal is to attract suitable applicants to apply, and discourage unsuitable applicants. Anything which doesn’t accomplish one of those two things is wasteful. Anything which discourages suitable applicants is a net loss.

As a specific example of this, please leave “Good Communication Skills” out of your ad. You might think that poor communication skills will disqualify a candidate – but putting that in the ad isn’t going to stop unsuitable candidates from applying, and it’s not going to encourage suitable candidates to apply. It’s noise. Leave it out.

Sell the job – genuinely

It all starts with empathy. Try to imagine yourself as your target engineer, tester, analyst, or whoever you’re trying to hire. Your goal is to sell them on the job, but also keep enough important criteria in there to discourage unsuitable applicants. The selling part is important: if there aren’t many great candidates out there, you need a job ad which will attract them. Candidates are about to invest a significant amount of their own, personal, outside-work time in applying for and interviewing at your company, and then you’re going to ask them to resign from their current position, where they have friends and valued colleagues and know the system, to join your team. Give them good reasons.

Please don’t give them a sales pitch. Don’t spend most of the ad spruiking the company. Talking about how great you are is for your marketing material, or your annual shareholders’ report.

Just talk honestly about your culture, values, and benefits.

Anatomy of a Good job ad

Here it is. This is what your job ad should look like.

  • Talk about the role.
    Tell prospective candidates a little bit about the company and the role. This should be short. What does your company do? Who are you looking for? How will this position further the company’s mission?
  • Say who you’re looking for.
    Really cut it down. Don’t have 10 bullet points. Keep it to 3 or 4. Try to focus on higher-order skillsets, rather than specific technologies, unless that technology really is core to the role.
    If “Good Communication Skills” is making it to your short-list of 3 or 4 key skillsets, I hope you’re hiring a radio operator or air traffic controller. Otherwise, please, just leave it off.
  • Tell them what’s in it for them.
    Describe your benefits. Tell them about the great company culture. Talk about training and conference budgets and career opportunities. Once again, keep it short and to the point.
    If you can’t think of anything to put here, you might have bigger problems.
  • Tell them how to apply.

That’s it. Really. No nice-to-haves – those just give suitable candidates a reason not to apply. Don’t do that. If you get two suitable candidates, and one of them happens to have one of your nice-to-haves, you might still offer the role to the other based on the bigger picture. If it’s not necessarily a differentiator even at the end of the recruitment process, it definitely has no place at the beginning.

But I want to list a bunch of stuff!

Resist the temptation. Are you hiring a team lead? Just say you’re hiring a team lead. Don’t list out all the stuff that team leads need to do.

  • Lead a team of software experts to deliver innovative products.
  • Help to work with product owners to deliver real business value!
  • Mentor senior engineers, and help them to mentor juniors.
  • Engage with the business about technical challenges.
  • Foster a strong team culture.

Don’t do this! Team leads already know the day-to-day detail of the role, and they won’t be going back to the job ad to work out how to spend their time. Just say that you’re looking for an experienced team lead or a senior engineer looking to step up to a team lead role – and get on to covering the more important points! What kind of team is it? What is the team mission? Are you looking for someone to drive a major change, to keep an already-high-performing team pointed in the right direction, or to build a whole new team? That stuff is much more useful than saying things like “Ensuring the team is aligned with business priorities”.

Where do I put things like “Strong work ethic” and “Ability to work in a collaborative team environment”?

In the same place you put “Good Communication Skills”. People who don’t have a strong work ethic or the ability to work in teams are going to apply anyway, so all you’re doing is making the job ad longer and more boring.

This sounds like you’re a really cool person and I’d like to work with you.

TheTradeDesk is looking at hiring a heap of software engineers in the next year!

Wait, was this secretly a job ad?

No. This doesn’t match my “Anatomy of a Good job ad” at all! But recruitment is bigger than just job ads, and this was, secretly, a bit of guerrilla recruitment. That’s another idea I’m hoping you’ll take away from this post: recruitment is about a lot more than just writing a good job ad. It’s about being a place people will want to work, and making sure the right people know it.

Measure What Matters

What’s your average API response time? Do you know? Is it important to your business? What about the 90th percentile? Do response times suffer during peak demand?

Do you think about those questions? How about these ones:

How long does it take to get a software change reviewed? Do you know? Is it important to your business? Is it a bottleneck? Do reviews get skipped during busy periods?

If you care about code reviews, you should measure them. Put them on your system dashboard. They’re as much an indicator of the health of your software environment as your API response times. Minimising Work In Progress and Mean-Time-To-Release is an important part of your delivery process, and making sure your pull requests are reviewed and merged in a timely fashion is a great way to improve those numbers.

What existing products are out there to do this? Depending on the tools you use, you can probably pull out a few relevant reports. Jira is popular, and I’ve seen PMs produce some great graphs to include in their monthly management update. The problem is, the numbers you get out of these tools don’t give you direct, real-time feedback. Their very nature as longer-term averages means they can’t represent a call to action.

Enter TeamLab

As a software shop, if the tools I’m using don’t do what I want, I have an option: build something. This is a dangerous option to have, and countless business hours have been wasted solving the wrong problems, but I really needed a nice visual prompt of how we’re doing at our code reviews in-the-moment. I also wanted a side-project for the team to tinker with new ideas for writing web applications – so even if the project didn’t turn out to be useful, the experiment would teach us something.

I had a specific technology I wanted to try out: React Storybook. This is a really nice way to visualise your React components in various different states, and I wanted something relevant to use as a demo for the team. It was very quick and easy to get up and running with a create-react-app project including Storybook, and I hacked together a quick picture of what my PR display should look like:
[Screenshot: the first Storybook mock-up, showing ten pull request cards]
On the right, you can see my quick mock-up of a board displaying ten pull requests, and on the left is the Storybook control panel.

I decided it would be useful to colour-code the pull requests, and display any reviewers and approvers on the PR cards. A new PR is yellow, and an approved one is green. A PR with reviewers turns blue, and most importantly, any PR which is older than 48 hours turns red.
[Screenshot: the colour-coded PR cards in Storybook]
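TeamLab itself is a React app, but the colour rules are easy to sketch – here in C# to match the rest of this blog, with a minimal PR shape I’ve made up for the purpose:

using System;

public enum CardColour { Yellow, Green, Blue, Red }

public static class PrColours
{
    // Minimal, made-up PR shape; the real component tracks more fields than this.
    public record Pr(DateTimeOffset Created, bool Approved, int ReviewerCount);

    public static CardColour For(Pr pr) =>
        DateTimeOffset.UtcNow - pr.Created > TimeSpan.FromHours(48)
            ? CardColour.Red                          // older than 48 hours: needs attention
            : pr.Approved ? CardColour.Green          // approved
            : pr.ReviewerCount > 0 ? CardColour.Blue  // under review
            : CardColour.Yellow;                      // brand new
}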

This was a nice little mock-up, but there was no real data behind it at this stage. Fortunately, the Git server we use has a fairly straightforward API, and so it didn’t take long to get some real data behind this component.
[Screenshot: the PR board backed by real data from our Git server]
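The polling code behind it can be as simple as the sketch below – the endpoint URL and payload shape are placeholders of my own, because every Git server’s API is different:

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Placeholder DTO - match this to whatever JSON your Git server actually returns.
public record PullRequestDto(string Title, DateTimeOffset Created, bool Approved, string[] Reviewers);

public class PullRequestClient
{
    private static readonly HttpClient Http = new HttpClient();

    // "git.example.com" is a stand-in for your real Git server.
    public Task<PullRequestDto[]> GetOpenPullRequests() =>
        Http.GetFromJsonAsync<PullRequestDto[]>("https://git.example.com/api/pull-requests?state=open");
}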

It’s really easy to see when we have PRs which are starting to get stale, and need attention. Quick – at a glance, how many PRs here have been hanging around too long and need attention?
[Screenshot: the board during a busy week – several cards have turned red]

This has become the go-to way of seeing our outstanding PRs at a glance, and has since gone up on a big screen on the wall in our dev team office. I soon got requests for a few other widgets to go on the same dashboard, and our little side project has become a key part of our DevOps toolkit.
[Photo: the TeamLab dashboard on the wall of the dev team office]

Has It Worked?

Having those cards up where we can see them during the day has been good – but the biggest signal is during stand-up each morning. A quick glance at the TeamLab PR board has become part of the ritual, and if those cards start to build up – especially if they start to turn red – the team has a really strong signal that we’re getting behind on our code reviews.

I don’t currently have a report which tells me the Mean-Time-To-Merge for our PRs – but I don’t think I need it. Mean-Time-To-Merge isn’t as strong or immediate a signal as a pile of glaring red PR cards looming over our morning stand-up, nor does it provide the immediate sense of relief when we clear the board.

What Next?

I’m not sure what will go on the dashboard next, but I have some idea what kinds of things I’m looking for.

I need things I can measure – things I can pull straight out of an API. Things which can directly influence numbers like Mean-Time-To-Release – but I don’t want to display averages like that. I’m going to give people a dial they can turn directly. I’ll pick an angry colour like red for things which are outside targets, and nice friendly colours like blue and green for things which are on track. Once something is off the list, I’ll make it go away.

In short, I want to find things which I can measure, which team members can directly influence, and which will improve our overall quality – and I want to put them up where everyone can see them.

Announcing NSchemer 1

If you already know all about NSchemer, you can jump straight to the Version 1 release notes.

What is NSchemer?

Database schema management has been an interest of mine for a very long time. I’ve seen all sorts of approaches tried: folders full of .sql files, schema version tracking in Excel, and of course the tried-and-true manual approach using schema diffing tools. I’m a keen proponent of automated schema management. Automated deployment is all the rage these days, and if you can’t automate your schema updates, you can’t automate your releases.

I prefer to go one step further: I like to aim towards single code path schema management. Any database, whether a brand-new one to support a new installation, or an ancient database restored from a backup for a returning client, should get to the current version using the same code path – or at least, as close as possible.

When I started pushing this idea – that developers should write their own SQL migrations as they went, rather than leaving it to the designated DBA to do in the lead-up to a release – I got some push-back. Some of my team didn’t want to write SQL. Thus, NSchemer was born. The framework languished in alpha status for many years, despite being actively used in a number of production systems. Recently, I finally decided to tidy up the API, add a few new features I’d been meaning to for a while, and bump it up to version 1.

Why Automated Schema Management?

The number one reason for automating your schema management is testing and reliability. Assuming, for a moment, that you have test environments, automated schema management means your test environments should go through the same migrations as your production environments will – automatically, with no opportunity for a manual step to get skipped or done incorrectly. This gives you a lot of confidence that when you hit the Big Red Button to go live with a new version, your schema migrations will work: the same automated set of steps which have run against all of your other environments will run against production.

You get a lot of other nice bonuses, as well. When you merge master into your own branch, not only do you get all of the new code; you also get the migrations that update your database schema to match. No more pulling in another branch, only to have to manually update your local database schema to match.

Digging up a backup from a couple of years ago? No worries, NSchemer will bring it up to current without any hassle at all.

Installation

While you can install NSchemer into an existing assembly, I typically create a new assembly just for managing schema transitions (I use a console app, so I can run the transitions from a script during deployment). Once you’ve created YourProjectName.Schema, just

install-package NSchemer

and you’re ready to go.

Show me the code!

NSchemer uses a single class which inherits from SqlClientDatabase to represent a versioned schema. Just create one, implement the Versions collection, and start writing transitions (beginning from 1 – NSchemer uses version 0 internally). If you’re starting with an existing schema, just use your favourite SQL tool to generate a full CREATE script, and drop it in as version 1 (use the embedded resource transition mentioned below).

public class TestSchema : SqlClientDatabase
{
    public TestSchema(string connectionString) : base(connectionString) {}
    public override List<ITransition> Versions
    {
        get
        {
            return new List<ITransition>
            {
                new CodeTransition(1, "Initial Schema", BuildTheWorld),
                new CodeTransition(2, "Add Widget Table", "This script adds a very important table", AddWidgets)
            };
        }
    }
    private bool BuildTheWorld()
    {
        CreateTable("Thing",
            new Column("ThingId", DataType.BIGINT).AsIdentity(1, 1).AsPrimaryKey(),
            new Column("ThingName", DataType.STRING, 50)                
        );
        CreateTable("ThingAnnotation",
            new Column("AnnotationId", DataType.BIGINT).AsIdentity(1, 1).AsPrimaryKey(),
            new Column("Text", DataType.STRING, 50),
            new Column("ThingId", DataType.BIGINT, false).AsForeignKey("Thing", "ThingId")
        );
        return true;
    }
    private bool AddWidgets()
    {
        RunSql(@"CREATE TABLE DBO.Widget (WidgetId [int],WidgetName [nvarchar](50)) ON [PRIMARY]");
        return true;
    }
}

The core of your schema class is the Versions collection, which contains a numbered list of transitions to be run in order. NSchemer will automatically create a table to track which versions have and haven’t been run, and whenever you call Update() on your class, it will work out which transitions haven’t run yet, and apply them. Assuming your transitions all ran without exceptions and returned true, your schema should now be up-to-date, and the version history table updated. If any of your transitions either returned false, or threw an exception, Update() will throw an exception.

Why not just SQL?

You’ll notice, in the sample above, that as well as being able to write SQL transitions, there are helper methods like CreateTable. These are here for four reasons:

  1. Some developers refuse to write SQL.
  2. Some developers write terrible SQL.
  3. They’re actually pretty convenient.
  4. If/when NSchemer officially supports non-MSSQL databases, your transitions should be cross-platform.

If you don’t want to use these convenience methods at all and you’re happy just using SQL, you could also look at some of the SQL-only frameworks which do the same thing as NSchemer, such as DbUp (which has explicit support for a range of other databases as well as Microsoft SQL Server).

If you have larger blocks of SQL to run, don’t put them all into a string like in the example above: NSchemer also supports resource files. You can create a transition like this:

new SqlScriptTransition(3, "Add another table", "NSchemer.SystemTests.EmbeddedFile.sql")

It will look in the same assembly for an embedded resource file with that name. There is also an overload which allows you to specify a different assembly for the resource file.

NSchemer uses a similar format to SQL Server Management Studio: it uses GO on a line by itself as a command separator, allowing you to submit multiple chunks of the file as separate commands.
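For example, an embedded file like this one (an illustrative script, not one from the NSchemer sources) is submitted as two separate commands:

CREATE TABLE dbo.AuditLog (
    AuditLogId INT IDENTITY(1,1) PRIMARY KEY,
    Message NVARCHAR(400) NOT NULL
)
GO
CREATE INDEX IX_AuditLog_Message ON dbo.AuditLog (Message)
GO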

Configuration

NSchemer has a couple of options you can use to control its behaviour. I’m afraid the API is inconsistent and the options are limited: I plan to address this when I do the overhaul in 2.0 (see below).

  • VersionTable
    Override this property to change the version-tracking table NSchemer uses. Default: NSCHEMER_VERSION
  • SchemaName
    Set this property to use a schema other than dbo (use with caution: this has limited test coverage). Both options appear in the sketch below.
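A minimal sketch of a schema class using both options – I’m assuming here that VersionTable is an overridable string property and that SchemaName can be set from the constructor, so check the source if your copy differs:

using System.Collections.Generic;
using NSchemer;
using NSchemer.Sql;

public class AuditSchema : SqlClientDatabase
{
    public AuditSchema(string connectionString) : base(connectionString)
    {
        SchemaName = "audit"; // everything goes into the audit schema instead of dbo
    }

    // Track applied versions in a custom table instead of NSCHEMER_VERSION.
    public override string VersionTable => "AUDIT_SCHEMA_VERSION";

    public override List<ITransition> Versions => new List<ITransition>();
}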

Can I rely on NSchemer?

You should be testing all of your migrations in testing and staging environments before they make it to production. These tests will also ensure NSchemer is behaving itself in your environment. Should you start using NSchemer in production without ensuring it goes through a testing pipeline? No, but you shouldn’t be running your own code that way either.

If you find any bugs or problems in NSchemer, please report them on the NSchemer GitHub repository. I use NSchemer myself, and I’m keen to fix any bugs you find as soon as possible. I’m also open to pull requests.

Implementation Advice

I like to run NSchemer from two different places: one in development environments, and another in deployed environments (testing, staging, UAT, production, whatever you prefer to call them).

To run in development environments, I throw some guards in (to make really sure it never runs in a deployed environment), and put it somewhere it will run on application start-up. It might look like this:

// Guard: only run when a debugger is attached AND we're pointed at local SQL Express
if (Debugger.IsAttached && _connectionString.Contains(@".\sqlexpress")) {
    new MySchemaClass(_connectionString).Update();
}

For deployed environments, I make sure the assembly containing my transitions is a console app, run the transitions from there, and return or output a success/failure message so my deployment tool knows whether to continue or alert me that something went wrong.
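A minimal shape for that console app might look like this – the exit codes are just my convention for signalling the deployment tool, not anything NSchemer mandates:

using System;

public static class Program
{
    public static int Main(string[] args)
    {
        try
        {
            // Connection string passed in by the deployment tool (e.g. an Octopus variable).
            new MySchemaClass(args[0]).Update();
            Console.WriteLine("Schema is up to date.");
            return 0; // success: the deployment continues
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine($"Schema update failed: {ex}");
            return 1; // failure: the deployment tool alerts us
        }
    }
}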

Upgrading from 0.x

The only changes you should need are to sprinkle a few using NSchemer.Sql statements at the top of your files.

You should notice some API improvements:

  • When you create columns, there are new options you can provide using a fluent syntax:
    • .AsPrimaryKey()
      Only allowed when creating a new table. Specifies that this column is part of the primary key. Supports composite keys.
    • .AsForeignKey(…)
      Allowed when either creating a table or adding columns to an existing table. Allows you to indicate the referenced table, column, and (optionally) cascade options.
    • .AsIdentity(…)
      Allows you to create the column as an IDENTITY (auto-increment) column. You can specify the initial seed and increment values.
  • Description is no longer required on transitions.
  • CreateTable(…) now has a params syntax, so you don’t need to create a List every time you use it.
  • You can now specify the nullable status of a column for data types which don’t require a length.

The Future: NSchemer 2.0

As I’ve used NSchemer, I’ve discovered a few short-comings of the existing API. I made some minor breaking changes when I bumped NSchemer to 1.0, mostly just relating to namespaces, but 2.0 is likely to be a more significant overhaul of the API.

I also want to split Microsoft SQL Server support out into a separate package, and provide official support for other database servers. If you have strong knowledge of MySQL, PostgreSQL, Oracle, or another database platform and would like to help maintain support for that platform, please get in touch.

License

I’ve chosen to release NSchemer under the LGPL because I want it to be generally usable, but I want to make sure any improvements are available to everyone who uses it. If you want to use NSchemer but the LGPL is a problem for you, please let me know. I’m sure we can work something out.

Helping Your Best Developers Leave

It’s my job to help my team members find better jobs. I know that sounds a little counter-intuitive, but stick with me for this one – I hope I can convince you.

There are two types of software developers: wanderers and lifers. Wanderers drift from job to job. They may stick around at a job for a year, or five, or even ten, but the one constant throughout their career is that they don’t intend to stay in their current job forever. The reasons change from person to person, and from job to job: sometimes they just hate their job; sometimes they’re in over their head; some people just love variety. Whatever the reason, they know that their next job won’t be the one they stay in any more than the last one was.

Then, there are the lifers. They may not stay in any one job for good either – people get made redundant, companies go broke, and changing circumstances force lifers to change jobs, but their goal is to find a team they like (or can tolerate) and stay in it as long as they can. The motivations for lifers vary, too, and it’s not just being scared of change: they may enjoy developing a deep knowledge of the business domain, the company culture, or the niche industry they’re in, and prefer leveraging that knowledge to starting fresh somewhere new.

Lifers can become a problem, though: if you’re a good environment for lifers, you will tend to collect them. Even if you’re a bad environment for lifers, you will tend to collect them. Every time you replace someone, they probably left because they were a wanderer – and some of the time, you’ll end up replacing them with a lifer. The reverse almost never happens: your lifers aren’t leaving (if they are, you might have bigger problems), so you can’t replace them with wanderers.

Before we get too far into this, I want to clarify something: I don’t buy into the stereotype that lifers are bad developers, and wanderers are good ones. There’s a bit of an attitude that anyone who stays in one place for too long “got stuck there”, while people who move from job to job are “in demand”. I’ve had plenty of recruiters talk to me in these terms. Too-frequent job hopping is bad (“they can’t hold down a job”), but staying in one place for too long is considered bad too. To speak frankly: this stereotype is rubbish. There are plenty of smart, productive developers who find themselves great jobs – which let them do interesting, intellectually-challenging computer science or engineering – and stay in them. Conversely, there are plenty of developers who drift from job to job, never really contributing much, but neither being quite bad enough to be worth firing.

But that’s an aside: I’m not writing this to tell you how to hire and retain good developers (I’ve written plenty of other articles about that). This time, I’m telling you how to get rid of your good developers! But first, more about why.

Remember how I told you that teams naturally accumulate lifers? Well, if you’re too complacent, you’ll also accumulate bad developers who have discovered your team is a safe place to hide. Wanderers who are just not much good (and don’t want to improve) will latch onto a team which tolerates them, and will milk it for all it’s worth.

There’s another big problem: market rates for developers have consistently risen faster than inflation, while salary increases almost never keep up. That means that the intermediate developer you hire today is probably getting paid more than the junior you hired five years ago – even though the ex-junior may well be worth more by now. Worse: if the junior hangs around to become a senior, they may well be one of your most valuable team members – and one of your lowest-paid ones. Why is this a problem? Well, aside from it being really unfair, it also sets them up to be poached. Even the staunchest lifer will eventually be tempted away by market rates, and if they’re even a little good at math, when they do the calculus, they’ll resent you for all the missed income over the years.

It gets worse: the longer they hang around before being poached, the more reliant on them you become. You may well be incurring some really significant business risks by hanging onto your developers for too long.

You could solve all of these problems by aggressively monitoring productivity and performance, firing the under-performers, and rewarding the achievers – and if you try this approach, you won’t be alone: software giants (and many other major employers) have dubbed this the “up or out” system (it was originally termed the “Cravath System”), and it kind of works OK – for them. Unless you’re Cisco or Google, I bet you can’t make it work at all: measuring developer effectiveness is famously difficult and error-prone. You might just find you end up promoting the networkers, self-promoters, and empire-builders, and firing your good engineers. In fact, even the software giants probably do this more than they’d like to admit.

What is a CTO, team lead, or development manager to do? Easy. Help prepare your developers to get better jobs elsewhere. One of my favourite software quotes came in response to a question about funding developer training: “What if we pay for all this training, and they leave?” The response: “What if we don’t, and they stay?” (I’m not certain of the origin of the quote, but Martin Risgaard tweeted something similar back in 2012). I think every good software team needs to double down on this idea. Don’t just pay for PluralSight accounts. Don’t just send your developers to a token conference every year. Really invest in turning your team members into people who are just too good to stay. If someone hangs around for six or eight years, you’re failing – or perhaps they will just never become the sort of developer teams want to hire. Perhaps you will eventually need to force them out – but now you will know it’s not just because of some silly “up-or-out” rule, but because you’ve done everything you can to help them thrive, and it hasn’t worked. This is good for you, and it’s good for them: you’re not dooming them to a career as one of the “got stuck there” brigade, and you’re also not letting them hang around in a job which just isn’t succeeding at building their career. Let’s face it: it’s entirely possible that their lack of success is as much your fault as it is theirs.

So, is there room in this philosophy for the genuine, high-achieving lifer? That rare individual who develops a deep understanding of your industry, continually improves themselves, boosts their team performance, and has a track record of innovation, year in and year out, for five, ten, or more years? Yes, absolutely. The best-laid plans rarely survive first contact, and you will absolutely run into people throughout your career who buck this trend, and should definitely be allowed to hang around for decades.

If this sounds like I’m back-tracking on everything I’ve said, you’re right – sometimes, there’s just no alternative but to have experienced and knowledgeable team leads, managers, and company officers who know when to exercise discretion and ignore all the rules. My central message here is not that you should fire anyone who makes it to their 10-year anniversary: rather, you should focus on doing your very best to turn your developers into the sorts of professionals who are in-demand and will definitely be hired away. On the way through, you’ll build a more effective team. You’ll create an environment which will encourage past employees to return – and they’ll be developers you’ll want back. Your team reputation will spread, as past employees go out into the general software industry and talk about everything they learned, and everything they accomplished. It will cost you more per developer, but you will reap the rewards many times over – and please, never underestimate the enormous benefit of having a team which can easily attract high-quality developers when you need them.

Most software teams have a long way to go – so you need some first steps. Here they are:
1. Invest in your employees. Don’t just allocate budget to buy them a hotel and a conference ticket every year – find real ways to help them learn and grow.
2. Support your employees in finding the next step in their career once you’ve finished learning from each other.
3. Expect great things of genuinely outstanding long-term employees – and find ways to reward them commensurately.

Above all, don’t make the mistake so many employers do – the mistake of encouraging employees to stay too long.

One final note: if you haven’t been embracing these ideas, don’t try to implement things too quickly. If you’ve spent the past ten years not investing in your employees, trying to move senior talent out could well be disastrous: it takes time to build the right sort of turnover, and to decide to hang onto the rare lifer who you really want to keep around. If you’re not sure, it’s safer to err on the side of spending more time investing in your existing employees, and giving them more time to find their next career move (or proving that they’re genuinely worth keeping around, at above-market rates).

On FizzBuzz and interviewing developers

Lots of people have heard of the FizzBuzz interview test (if you haven’t, Google it!), and Jeff Atwood once famously asked: “Why can’t programmers.. program?” But is it a useful test?

I’ve interviewed lots of developers, and hired quite a few of them. I’ve only regretted a handful of hires, and I’ve spent a lot of time trying to work out how to improve. I’ve made a career out of building teams, and hiring good people is a key part of that. Posing a simple programming challenge – often referred to as a FizzBuzz problem – is a common strategy, and it’s one that interviewers and job-seekers should both understand.

FizzBuzz is Pass/Fail.

One of the mistakes I think people make is in judging the code people write. I always put candidates through a FizzBuzz-type test, but I don’t really care about how good their implementation is. I have one very specific thing I want to know: can they write code, or can they not?

The pass/fail nature of FizzBuzz isn’t the sort of pass/fail you write a unit test for. I have no interest in whether the string they output has correct spacing, or whether they even get the calculation correct. I want an answer to this question:

Has this candidate spent even a little bit of their recent career writing code?

If I’m asking someone to solve FizzBuzz, I’m not hiring a program manager, a technical writer, or a business analyst. I’m hiring someone to write code. I’m hoping to hire someone who can write good code, which solves the correct problem, and produces a good user experience, and doesn’t introduce performance problems, but the core skill I’m looking for is the ability to write code. If they can’t write code at all, the quality or correctness of the code they write isn’t a concern.

FizzBuzz is trivial.

I’ve heard people lump FizzBuzz in with algorithmic problems, like asking a candidate to solve the traveling salesman problem. I’ll admit: if I was asking someone to solve FizzBuzz and send me their answer, it’s an algorithm problem. A very simple one, which I’d expect a high-school student doing a programming course to cope with, but an algorithm problem nonetheless. I don’t ask people to submit a solution, though: I ask them to do it in front of me, and what I’m really interested in is the first step.

Loops are one of the simplest programming concepts.

Fundamentally, programming is about loops and conditions. There are higher-level concepts that are really important, but you really don’t get any simpler than loops and conditions. FizzBuzz has a really simple beginning: “go through the numbers between one and twenty, and …”

The rest doesn’t really matter. I’ll pass someone who isn’t sure about how to work out if a number is a multiple of 3, or 4, or both. I want to know if the candidate can take a really simple problem statement, with an extremely obvious first step, and make a start writing a really simple solution.
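For reference, here’s one minimal pass at the classic threes-and-fives variant – anything in this neighbourhood, spacing quibbles and all, is a pass:

for (var i = 1; i <= 20; i++)
{
    if (i % 15 == 0) Console.WriteLine("FizzBuzz"); // multiple of both
    else if (i % 3 == 0) Console.WriteLine("Fizz");
    else if (i % 5 == 0) Console.WriteLine("Buzz");
    else Console.WriteLine(i);
}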

People Fail FizzBuzz.

Do 199 candidates out of 200 fail FizzBuzz? No way. If that many fail, you are interviewing people you shouldn’t. Most people I interview have no trouble at all passing FizzBuzz, because I don’t interview people unless I think I might want to hire them. I simply don’t have the time to interview 200 people to find one who can pass FizzBuzz. Nobody has that kind of time to waste.

FizzBuzz is pass/pass.

You shouldn’t be interviewing people who can’t pass FizzBuzz. FizzBuzz is trivial. It’s the sort of simple problem that professional developers shouldn’t have any trouble with. Asking a professional developer to write a solution to FizzBuzz is like asking a professional mathematician to solve 5+4.

If FizzBuzz is so simple, why even ask it?

I get candidates to solve FizzBuzz because I’m going to actually test their technical skills later in the interview, and I want them to be comfortable. Interviews are stressful, and the best way to take someone from stressed to comfortable is to let them succeed – and not just succeed, but easily succeed. FizzBuzz lets someone with even the most basic programming ability succeed, and that lets them relax – and that makes it easier for them to show me why they’re worth hiring.

“Interviews are stressful” is no excuse.

I’ve seen plenty of people complain that asking developers to write code during an interview is unfair, because interviews are stressful, and that makes it hard for candidates to perform.

Yes. That’s the point. Let me tell you a story.

My team was releasing a new feature to a business-critical site. I do similar things all the time – that’s my job – but this time, something went wrong. The moment someone hit the site, the server went to 100% processor utilization and stopped responding. Ten minutes later, we managed to kill the process and roll the update back. We postponed the update until tomorrow, and started trying to diagnose the problem.

We couldn’t.

Several person-days worth of testing and analysis later, we hadn’t been able to replicate the problem in any of our test environments, so we decided to deploy the new version again (with a few minor tweaks). Once again, the server went to 100% CPU usage, and after about 10 minutes we were able to roll back the update. We were behind schedule, and senior management started to get involved.

Evenings and weekends were cancelled, experts were consulted, and we put a number of measures in place to ensure the new features went out successfully. We rolled back a number of non-critical changes. We put additional testing in place. We put some data collection in place to collect memory dumps, and we deployed – and our production system came to a screaming halt. Everything froze, and we rolled back. Senior management were upset, and my team’s credibility was at stake. Consultants were being brought in. We collected dump files, fired up debuggers – and diagnosed a faulty third-party library which was misbehaving on some edge-case which only happened in production.

Excising the third-party library and getting a working version tested and released wasn’t an easy task, but it had to happen fast. With the problem identified, we wrote a pile of code at very short notice, got it tested, and pushed it into production – and everything worked. The whole situation lasted only a few days, but the pressure to identify and fix the problem was tremendous, and my team was suddenly under a spotlight.

I need to hire people who can write code under stress.

As a team lead, it’s my job to make sure our team doesn’t end up in high-stress, tight-deadline situations. As a manager, it’s my boss’s job to ensure that stress doesn’t get passed on to my team. But sometimes it goes that way, and when it comes right down to it, I want a team filled with people who can write good code in stressful situations.

Professional developers write code.

When you get right down to it, the job of a developer is to write working code. However you boil it down, someone with any kind of experience – even experience as a student – should have spent plenty of time writing software. Trivial problems should be trivial, even under stress (the kind of stress that happens in real life, whether it’s in exams, during assignment periods, or at work) – and in fact, even moderate or difficult problems should be manageable under stress.

Someone who can’t solve FizzBuzz under stress isn’t someone I want on my team.

This is what it gets down to. FizzBuzz is trivial. It’s not the problem: as I discussed earlier, FizzBuzz is the simple introduction, designed to help people relax. I’ve seen it work, over and over: stressed people, nervous in a job-interview situation, are distracted by their interest in writing code. Someone who came in to an interview nervous has an easy win, and goes on to tackle some of the harder technical problems I have for them with confidence.

At the end of it all, if you can’t solve FizzBuzz under interview-stress conditions, I can’t trust you to be on my team.

I’ve probably turned down one or two developers I shouldn’t have, over the years, because they froze and couldn’t solve FizzBuzz in the moment. I have successfully built teams full of successful people, though. It hasn’t been by making people solve a FizzBuzz-like problem before hiring them – but watching candidates try to solve such simple problems has been a key part of deciding whether to hire them or not.

In Summary…

FizzBuzz, on its own, is a terrible way to judge whether to hire someone or not – but it is a tremendously useful tool for a team lead who is trying to decide whether someone will be a great team member or not.

Serverless Deployment with Azure and Octopus

Azure’s Platform-as-a-Service offering promises to let us deploy applications, databases, file shares, buses, caches, and other infrastructure without ever needing to spin up a server. That means no operating system to manage, no IP addresses, no need to configure IIS or SQL Server or any of those other platforms. This hopefully lets us spend less time yak-shaving; but most of us are used to deploying things to servers, and so we’ll need to integrate this new serverless mindset with our existing deployment tool-chain.

I dove into this proof-of-concept expecting to write piles of PowerShell, but it turns out the teams at Microsoft and Octopus Deploy have already done most of the heavy lifting – as you’ll see.

My goal is to be able to build entirely new test environments from scratch in as few steps as possible. It turns out, with the right planning, fully-automated deployments of both application and infrastructure are possible.

The Azure Story

First of all, you’ll want a fully-updated Visual Studio installation with the Azure SDK enabled. We like to keep everything necessary to run an application or service together in one repository: code, schema, and now – infrastructure! The Azure Resource Manager (ARM) lets us define infrastructure using JSON files called templates, and Octopus lets us deploy them just as easily as we deploy applications.

I’m going to show a fairly limited example here – just a Nancy service and a database – but ARM templates are very powerful: you can build just about anything Azure provides with them, and Visual Studio has a number of templates to get you started. To start out, create a new Azure Resource Group project alongside your existing application projects.

You’ll have the opportunity straight away to select a template: all I need is a SQL database and a web application, so I’m choosing the “Web app + SQL” template. You’ll be able to add more resources later, so just pick whichever template gives you the best start towards what you need.

The first thing you’ll notice is that you have a .json file, a .parameters.json file, and a deployment script. We’re going to use Octopus to handle the deployment and variable replacement, so we’re mainly interested in the .json file.

Open up the JSON Outline window in Visual Studio. It will give you a great overview of the template you’re working on.

This template is ready to deploy to Azure now, but it needs a few changes to work nicely with Octopus. Octopus ties in really nicely with the parameters in the ARM template, but it doesn’t work so well with the variables you can see in the JSON outline – they tie back to the .template.json file, and we don’t want that.

It’s quite straight-forward to change those variables into parameters, and then you have something ready to start putting into Octopus.
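The mechanical change looks like this: move each entry out of the variables section, give it a type under parameters, and update the references. The name below comes from the stock “Web app + SQL” template, so yours may differ:

Before:

"variables": {
    "webSiteName": "[concat('webSite', uniqueString(resourceGroup().id))]"
}

After:

"parameters": {
    "webSiteName": {
        "type": "string"
    }
}

…and every "[variables('webSiteName')]" reference becomes "[parameters('webSiteName')]".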

There’s a lot more to getting your ARM template just right, and I highly recommend you spend some time taking a deep-dive into ARM and all the features it gives you. Get something running, and start trying things.

The Octopus Story

We have an existing on-premise Octopus server which is deploying well over a dozen different applications, and we want to keep that as a core part of our tool-chain. If you don’t have an existing Octopus server, it’s very easy to install an on-premise trial server, or if you’re going 100% Azure there’s an Octopus server template ready to spin up. If you don’t fit within the free tier, give it a shot using the trial license: you’ll love it.

Start with a Project

If you’ve never used Octopus before, there’s a lot to learn, but there’s one easy place to start: create a project. Projects are things you want to deploy: they can be small services, or they can be entire application environments with many steps. It turns out they don’t just deploy software; they also deploy infrastructure.

Azure templates are all about resource groups. A resource group is exactly what the name says: a grouping of resources which you can treat as a single unit. Unfortunately, Octopus doesn’t create our resource group for us. Fortunately, it’s very easy to create one using PowerShell. This is easier than it sounds: in your new project, click the “Add step” button, and select “Run an Azure PowerShell Script”.

I called my step “Create Resource Group”. This ends up being a single line:

New-AzureRmResourceGroup -Name Application-#{Octopus.Environment.Name} -Location australiaeast -Force

Notice that I’m using #{Octopus.Environment.Name} here: you’re going to see that a lot. I don’t want to waste time setting up variables for things like database connection strings for each environment, so I’m going to use the environment name as much as possible.

The next step you need to create will deploy your ARM template: again, Octopus is ready with a pre-made step to do exactly that.

I named this step “Provision Environment” – it’s going to pass your ARM template to Azure, and ask it to create all the infrastructure you need to deploy your environment.

It might look like you need to select a fixed resource group for this step, but if you choose the “Use a custom expression” option from the drop-down to the right of the Resource Group box, you can write an expression.

Make it match the resource group we created in the previous step:

Application-#{Octopus.Environment.Name}

You’ll need to understand the difference between Complete and Incremental modes: Complete essentially means that a deployment will delete any resources in the resource group which aren’t in the template. Incremental means it will only update existing resources and create new ones. There are arguments both ways, and I won’t go into that in this post.

The really important thing is the Template. For now, it’s easiest to paste your template from Visual Studio straight into Octopus.

Eventually, you’re going to want your build environment to publish your template project as a package, so Octopus will stay up-to-date with any template changes automatically.

Octopus auto-magically exposes all the parameters from your ARM template, and you can use all the usual Octopus variables to complete these. Once again, I highly recommend driving everything off the environment name: you don’t want lots of variables to configure every time you create a new environment.

Remember, the goal we’re working towards is being able to create a fresh environment and deploy straight to it with no additional steps.

Now that your infrastructure step is complete, you need to deploy your actual application. I’m going to skip all the detail of publishing NuGet packages to the Octopus feed: if you don’t have an existing CI/CD pipeline, you can upload NuGet packages containing your application straight into the Octopus UI.

Once Octopus knows about your NuGet package, you can create a “Deploy an Azure Web App” step to publish your application to the endpoint you created in step two.

You’ll need to build the web app name using the same expression you used to create it in step two.

Our project is a NancyFx project rather than Mvc or WebApi, but it all just worked. Our database schema is deployed using a Deploy.ps1 script (use a schema management system like NSchemer or DbUp to deploy and update your database schema), and that just works too.

You’ll need to set up any connection strings and other environment variables your application needs: again, focus on building these using #{Octopus.Environment.Name} so there’s no need to set up per-environment values.
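For example, if your ARM template names the database server after the environment, one connection-string variable can serve every environment – the server, database, and credential names here are purely illustrative:

Server=tcp:myapp-#{Octopus.Environment.Name}.database.windows.net,1433;Initial Catalog=MyApp;User ID=myapp;Password=#{DatabasePassword};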

Create an environment, hit deploy, and you should find your application is up and running – and any changes you make in your project, to either environment, schema, or application, get deployed to your new environment.

If you want to stop paying for all this infrastructure, just sign in to the Azure portal and delete the entire matching resource group. Boom! Everything is gone. (Don’t do this to production!)

Where Next?

This is really just a proof of concept. There’s no reason this couldn’t be extended to include VMs running services, if you really need that. You can add other resources to the base template you started from.

We have a number of applications with a microservices backend. I want to be able to deploy feature branches across services: an environment containing all of the feature branches for a particular ticket or story, along with the master branch for any other dependencies. This feature-branch-environment will become a target for automated integration tests, as well as end-user feedback.

I haven’t planned out the whole system yet, but the integration between Octopus and Azure has been so seamless that I expect to be able to build exactly the CI/CD pipeline I want.

Why We Dispose Things

Pop quiz: Why do we use IDisposable?

If you said something like “To allow us to clean up unmanaged resources”, I have good news: most other people make the same mistake.

The correct answer is “To make our code faster and more predictable.”

Don’t believe me? Let me try to convince you.

Consider the following:

void Main()
{
    var thing = new Thing();
    GC.Collect();                  // It doesn't matter what you do
    GC.WaitForFullGCComplete();    // Or how long you wait
    GC.WaitForPendingFinalizers(); // Thing will never release its resource
}

public class Thing : IDisposable {
    public object FakeResource = new object();
    public void Dispose() {
        // Do not implement the Disposable pattern this way!
        FakeResource = null;
        "Resource released!".Dump(); // This never happens
    }
}
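
For contrast, a well-behaved caller wraps the object in a using block, which guarantees Dispose() runs when the block exits – even if an exception is thrown:

void Main()
{
    using (var thing = new Thing())
    {
        // Work with thing here
    } // Dispose() is called here, no matter how the block exits
}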

It doesn’t matter how thoroughly you implement IDisposable. If somebody using your code fails to call the Dispose() method (or to wrap your object in a using block), your resources will never be released. If you’re looking to ensure your resources are released, you should implement a finalizer:

void Main()
{
    var thing = new Thing();
}

public class Thing {
    public object FakeResource = new object();
    ~Thing() {
        FakeResource = null;
        "Resource released!".Dump();
    }
}

This guarantees that our resource will be released – however, it doesn’t guarantee when it will be released. In fact, when I ran that code, the first two runs didn’t print anything at all, and the third and fourth runs each printed the message twice (LINQPad doesn’t unload the app domain between runs, so we see the finalizers from earlier runs completing during later runs).

What you should see from this is that IDisposable isn’t for disposing resources. One of the uses of IDisposable is, however, to provide some control over when those resources are released. A basic pattern you might use is this one:

public class Thing : IDisposable {
    public object FakeResource = new object();
    ~Thing() {
        releaseResources();
    }
    public void Dispose() {
        releaseResources(); // This still isn't the full pattern you should be using
    }
    private void releaseResources() {
        if (FakeResource != null) {
            FakeResource = null;
            "Resource released!".Dump();
        }
    }
}

Now, if a Thing is wrapped in a using block, or Dispose() is called, the resource will be released immediately. If the caller fails to ensure Dispose() is called, the resource will still be released by the finalizer.

Hopefully you can see that a finalizer is what we should be using to ensure resources are released, and IDisposable gives us a way to control when that happens. This is what I meant about predictability, and it also improves our stability: if resources are cleaned up in a timely fashion, our system is less likely to run out of limited resources under heavy load. If we rely on the finalizer, we guarantee that the resource will be released, but it’s possible for large numbers of objects to be waiting to be finalized, while hanging onto resources which won’t be used again.

Performance

I promised that IDisposable can also make code run faster. To see why, we need to understand a little bit about the garbage collector.

In the CLR, our heap has three different generations, numbered 0, 1, and 2. Objects are initially allocated on the gen 0 heap, and are moved up to the gen 1 and gen 2 heaps as they survive successive collections.

The garbage collector needs to make a fast decision about every object, and so every time it encounters an object during a collection, it does one of two things: collect the object, or promote it to the next generation. This means that if your object survives a single gen 0 garbage collection, it will be moved onto the gen 1 heap by copying the memory and updating all references to the object. If it survives a gen 1 garbage collection, it is again moved – it is copied to the gen 2 heap, and all references are updated again.
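
You can watch this happening. The following snippet (LINQPad-style, like the other samples here) asks the GC which generation an object is in after each forced collection – exact results can vary with how the JIT roots locals, but you should typically see 0, 1, and 2:

void Main()
{
    var o = new object();
    GC.GetGeneration(o).Dump(); // 0 - freshly allocated
    GC.Collect();
    GC.GetGeneration(o).Dump(); // 1 - survived a collection, so it was promoted
    GC.Collect();
    GC.GetGeneration(o).Dump(); // 2 - promoted again; gen 2 is the last stop
}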

The other thing you need to understand is how finalizers get called. When the garbage collector encounters an object which needs to be finalized, it has to put it on a queue and leave it uncollected until the finalizer has run – but remember that the garbage collector can only do two things: collect or promote. This means that the object has to be promoted to the next generation, just to give it time for the finalizer to be run.

Let’s look at some numbers again. The following code has a simple finalizer which just adds to some counts: the total number of objects finalized, and the number which reached the later generation heaps.

void Main()
{
    Thing.Gen1Count = 0;
    Thing.Gen2Count = 0;
    Thing.FinaliseCount = 0;
    for (int repeatCycles = 0; repeatCycles < 1000000; repeatCycles++) {
        var n = new Thing();
    }
    GC.Collect();
    GC.WaitForPendingFinalizers();
    ("Total finalizers run: " + Thing.FinaliseCount).Dump();
    ("Objects which were finalized in gen1: " + Thing.Gen1Count).Dump();
    ("Objects which were finalized in gen2: " + Thing.Gen2Count).Dump();
}

public class Thing {
    public static int FinaliseCount;
    public static int Gen1Count;
    public static int Gen2Count;
    ~Thing() {
        finalise();
    }
    private void finalise() {
        FinaliseCount += 1;
        var gen = GC.GetGeneration(this);
        if (gen == 1) Gen1Count++;
        if (gen == 2) Gen2Count++;
    }
}

After running this a few times, it’s quite clear that the performance is all over the place. I got run-times ranging from 0.5 seconds up to 1.1 seconds. A typical output looks like this:

Total finalizers run: 999999
Objects which were finalized in gen1: 118362
Objects which were finalized in gen2: 881637

As you can see, most objects go through two promotions before they are collected, incurring a significant overhead.

With a few changes, we can significantly improve this situation.

void Main()
{
    Thing.Gen1Count = 0;
    Thing.Gen2Count = 0;
    Thing.FinaliseCount = 0;
    for (int repeatCycles = 0; repeatCycles < 1000000; repeatCycles++) {
        var n = new Thing();
        n.Dispose(); // This is new - we could also have used a using block
    }
    GC.Collect();
    GC.WaitForPendingFinalizers();
    ("Total finalizers run: " + Thing.FinaliseCount).Dump();
    ("Objects which were finalized in gen1: " + Thing.Gen1Count).Dump();
    ("Objects which were finalized in gen2: " + Thing.Gen2Count).Dump();
}

public class Thing : IDisposable {
    public static int FinaliseCount;
    public static int Gen1Count;
    public static int Gen2Count;
    public void Dispose() {
        finalise();
        GC.SuppressFinalize(this); // If we can perform finalization now, we can tell the GC not to bother
    }
    ~Thing() {
        finalise();
    }
    private void finalise() {
        FinaliseCount += 1;
        var gen = GC.GetGeneration(this);
        if (gen == 1) Gen1Count++;
        if (gen == 2) Gen2Count++;
    }
}

The changes I’ve made are to make Thing implement IDisposable, to have the main loop call Dispose(), and to have Dispose() call GC.SuppressFinalize(this). That last call tells the garbage collector that the object has already finished disposing of any resources it uses, so it can be collected immediately instead of being promoted and placed on the finalizer queue.

The code now runs in a very consistent 0.2 seconds – less than half the original – and the output looks like this:

Total finalizers run: 1000000
Objects which were finalized in gen1: 0
Objects which were finalized in gen2: 0

As you can see, the cleanup code now runs while every object is still in gen 0 – invoked directly from Dispose() rather than via the finalizer queue. Measuring using the Windows Performance Monitor tells a similar story: in the version which relies only on the finalizer, the monitor records numerous promotions and an increase in both gen 1 and gen 2 heap sizes. We don’t see that happening when we use the Dispose() method to suppress the finalizer.

So there you have it. Finalizers are for guaranteeing your resources get released. IDisposable is for making your code faster and more predictable.
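
One last note: the comments in the samples above hint that calling a shared cleanup method from both Dispose() and the finalizer isn’t quite the full story. The canonical pattern – roughly as the .NET documentation describes it – separates managed from unmanaged cleanup and combines everything we’ve seen: a finalizer as the safety net, Dispose() for timely release, and GC.SuppressFinalize() for performance. A minimal sketch:

public class Thing : IDisposable {
    private bool disposed;

    public void Dispose() {
        Dispose(true);
        GC.SuppressFinalize(this); // Cleanup has already run: no need to finalize
    }

    ~Thing() {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing) {
        if (disposed) return;
        if (disposing) {
            // Called from Dispose(): safe to release other managed objects here
        }
        // Release unmanaged resources here - this runs on both paths
        disposed = true;
    }
}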