Blockchain – What is it?
Our C-Suite colleagues across Recommend’s broad range of Technology clients often have cause to present proofs of concept for blockchain projects to their Boards. Whether that’s transactional and trading solutions in Fintech, document transfer in the Insurance industry, or e-invoicing solutions in the Procure Tech domain, distributed ledger technology has a plethora of applications. For those still getting to grips with it, this is a great breakdown from Chris Ward over at Codeship on blockchain and its various considerations – Leo

What Is Blockchain?


The world and its dog has gone crazy for blockchain (and related technologies, which is a whole other post). The past two years have been a hype roller coaster for the technology, with stories of insane valuations, technical proposals, media exposure, regulatory nightmares, frauds, and unrealized dreams.

But what is the technology, and why could it be of interest to developers like yourselves?

Before I get started, speaking about “blockchain” is like speaking about “programming” in that there are multiple differences in paradigms, approaches, and patterns, most of which their communities are still defining. In some ways, this is why I find the technology so compelling, but it makes it hard to define the technology completely.

For simplicity, I use Ethereum as the reference technology in this post, because it is more interesting to developers, sparked the broader use of blockchain beyond Bitcoin, and other approaches use it, replicate it, or compete with it.


Blockchain’s Basic Definition

My absolute basic, abstract definition of blockchain is:

A cryptographically secure distributed ledger of transactions that is generally immutable and keeps all instances in the cluster informed of changes.

I know people will want to dispute this definition, change it, and add to it, but I’d like to stick with broad strokes right now.

Let’s take a quick tangent as to what “cryptographically secure” means before continuing. All blockchains use public-key cryptography to sign transactions, associate them with an origin, and achieve consensus on the state of the network.

Blockchains also often use a chain of hashes to bundle transactions for extra security and some efficiency improvements. The hash chains are then written to blocks that contain a reference to previous blocks to maintain a reliable sequence of events.
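To make the hash-chain idea concrete, here is a toy sketch in Python (illustrative only, not how any real protocol encodes its blocks): each block records the hash of its predecessor, so tampering with an earlier block invalidates every later link.

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents via a deterministic JSON encoding.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions, prev_hash):
    # Each block carries the hash of the previous block.
    return {"transactions": transactions, "prev_hash": prev_hash}

genesis = make_block(["coinbase"], prev_hash=None)
second = make_block(["alice->bob: 5"], prev_hash=block_hash(genesis))
third = make_block(["bob->carol: 2"], prev_hash=block_hash(second))

# Tampering with an early block breaks every later link in the chain:
genesis["transactions"] = ["coinbase", "forged tx"]
print(third["prev_hash"] == block_hash(second))   # unchanged link still holds
print(second["prev_hash"] == block_hash(genesis)) # tampered link now fails
```

Real protocols add far more (Merkle trees, signatures, consensus rules), but the chaining principle is the same.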

The exact implementation of this process varies wildly from protocol to protocol. For Ethereum, you can read the yellow paper for more details.

Comparing Blockchain to Distributed Systems

For those of you with experience in distributed systems, and especially distributed databases, this definition might sound somewhat familiar. I am often surprised how little the blockchain community is aware of fundamental distributed systems concepts, thinking that everything they are doing is new and original, but hey, that’s technologists for you!

Where things start to differ is in the definition of a cluster. In “traditional” distributed systems, the time-honored formula of 2n+1 applies when deciding how many nodes to add to a cluster, and getting the right number takes time. Too few and the cluster may not cope with demand; too many and replicating data across the cluster can be too slow.
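One common reading of that formula, sketched in Python: a majority-quorum cluster that must keep working with f failed nodes needs 2f + 1 members, so a majority is still reachable.

```python
def cluster_size(tolerated_failures):
    # Majority quorums need 2f + 1 nodes to survive f failures.
    return 2 * tolerated_failures + 1

def quorum(n):
    # The smallest majority of an n-node cluster.
    return n // 2 + 1

for f in (1, 2, 3):
    n = cluster_size(f)
    print(f"tolerate {f} failure(s): {n} nodes, quorum of {quorum(n)}")
```

With 3 nodes you can lose 1 and still assemble a quorum of 2; with 5 you can lose 2; and so on.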

With the Ethereum blockchain, there is only one cluster (well, actually four, but the others are test networks). When you write and read transactions to it, they are replicated across the entire network, typically around 20,000 nodes at the time of writing.

Herein lies most of the point of blockchain technologies: by making all nodes and data public, you create a decentralized network with mutual trust of that data, and a consensus algorithm (which varies from protocol to protocol) determines whether a submitted transaction is valid and is subsequently written.

This also leads to the biggest problem with many blockchain protocols (but not all), in that large-scale replication makes transactions slow. There are numerous supplements to existing blockchain protocols and entirely new protocols that look to solve this, but it’s not entirely solved and production-tested yet.

While there are dozens of super interesting projects built on blockchain (often Ethereum, or something like it) that look to challenge the entire contemporary computing stack (I wrote another post on this subject), until the community solves this speed and scalability issue, they are mostly experiments and proofs of concept.

Public Versus Private Blockchains

There is another option to consider, and that’s private blockchains, with Hyperledger being one of the most mature and established, though there are others. These protocols offer many of the features of a public blockchain (decentralized consensus, for example), but control who runs instances, where they run, and how many there are. Private blockchains are somewhat controversial in the community, as many feel they go against the “point” of blockchain, while others believe they are too much like a conventional distributed system to have one.

Still, the decentralized consensus nature versus the conventional account and access setup is potentially compelling.

Ethereum Components

With all this aside, what does the Ethereum blockchain consist of? Quite a lot actually, which makes it more complex/interesting, depending on your perspective.

A simple illustration is hard to come by, but I recommend you take a look at (and be overwhelmed by) this comprehensive image created by LeeJThomas. The fundamental components to gain an overview, in my mind, are:

  • Solidity smart contracts: The ‘magic sauce’ that Ethereum introduced to the world. These allow you to run simple applications attached to the blockchain. Other blockchain protocols have their own smart contract languages; this repository has a good summary.
  • The Ethereum Virtual Machine (EVM): The contracts are compiled into bytecode, which the EVM reads and executes. The EVM is sandboxed and isolated from the host machine.
  • Swarm and Whisper: Working with the EVM, Whisper provides communication channels between applications running on the network (called DApps), and Swarm provides storage for the application code and any data written by an application.
  • web3.js (and other SDKs): For a long time, I didn’t understand how you integrated Ethereum into a “real” application, and granted, some abstraction remains, but with JSON RPC endpoints for C++, Python, Go, and web3 for JavaScript, the options aren’t too bad.
  • Integration tools: There are a handful of widely adopted tools for handling migration and deployment of smart contracts. Due to the complete immutability of transactions and contracts, you can’t update a contract once you have deployed it. There are a variety of techniques to work around this limitation, but an interesting aspect of blockchain programming is that there’s much more impetus to get it right the first time around.
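To show what the SDK/JSON-RPC layer above actually looks like on the wire, here is a minimal Python sketch. eth_blockNumber is a standard method from the Ethereum JSON-RPC specification; the endpoint URL is a placeholder you would replace with your own node’s address.

```python
import json
import urllib.request

def rpc_payload(method, params=None, request_id=1):
    # Build a JSON-RPC 2.0 request body, the wire format Ethereum nodes accept.
    return {"jsonrpc": "2.0", "method": method, "params": params or [], "id": request_id}

# eth_blockNumber asks the node for the height of the latest block.
payload = rpc_payload("eth_blockNumber")
print(json.dumps(payload))

def call_node(url, payload):
    # POST the payload to a node's RPC endpoint, e.g. http://localhost:8545
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# result = call_node("http://localhost:8545", payload)  # requires a running node
```

Libraries like web3.js wrap exactly this kind of request behind friendlier functions.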


Another interesting aspect of Ethereum and other blockchains is how you pay for access. Every smart contract deployed to the main Ethereum network consumes “gas” when it runs, again showing that efficiency and “getting it right” are paramount.

You can think of this cost as somewhat equivalent to paying for services on your cloud host of choice. It’s hard to make a comparable cost comparison, but while cloud providers don’t always encourage efficient usage, the blockchain does; your code directly relates to cost.

To pay for gas, you need ether, the token of the Ethereum network, which you can buy or trade from others, or earn by providing computing resources (mining) to the network.
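The arithmetic behind a transaction fee can be sketched as follows (the gas price here is an illustrative assumption, not a live figure): the fee is the gas a transaction consumes multiplied by the gas price you offer, usually quoted in gwei (10^9 wei, where 1 ether = 10^18 wei).

```python
GWEI = 10**9
ETHER = 10**18

def tx_fee_ether(gas_used, gas_price_gwei):
    # fee (in wei) = gas consumed * price paid per unit of gas
    fee_wei = gas_used * gas_price_gwei * GWEI
    return fee_wei / ETHER

# A plain ether transfer costs 21,000 gas; assume a 20 gwei gas price.
print(tx_fee_ether(21_000, 20))  # 0.00042 ether
```

The more computation your contract performs, the more gas it burns, which is why your code directly relates to cost.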

What Does Blockchain Mean to Developers?

This post was a brief and broad introduction to the blockchain, and I promised to say why developers should care about it.

Some fans and critics say that the blockchain space right now is like the nascent internet days in the late ’90s. Everyone is competing for a half-baked, over-hyped idea that few understand, think is needed, or think is possible. Anyone who is old enough to remember the dot-com bubble burst of the late ’90s may remember the number of promising and dumb ideas that failed, leaving in their wake a mixture of much better ideas, or ideas that managed to survive and define the modern internet.

Time will tell if the same happens with blockchain, and thankfully we are slowly filtering the noise of the ICO madness of late 2017 in time to readdress some of the original ideals of blockchain. These ideals were that too much power and influence lay in the hands of too few, and by decentralizing as much as possible, we create the internet that everyone envisioned in the first place.

Again, some critics say that some blockchain protocols and networks are way more centralized than they allude to and that it’s already too late.

But hey, if some of the ideas in this article encouraged you to try working on something different, then find a project that interests you and get involved. These are early days, and everyone can still play a part.

Elixir – executing Concurrency

Concurrency in Elixir

An entry from Codeship’s knowledge vaults by Leigh Halliday on executing concurrency with Elixir on Erlang


Erlang has been around for over 30 years and was built well before multi-core CPUs existed. Yet it’s a language that couldn’t be more relevant today! The underlying architecture of the language lends itself perfectly to the modern CPUs that are on every computer and mobile device.

The computer I’m writing this article on has a 2.2 GHz Intel Core i7 CPU, but more importantly it comes with eight cores. Simply put, it can perform eight tasks at once.

The ability to take advantage of these cores exists in many languages but often feels out of place or fraught with traps and challenges. If you’ve ever had to worry about a mutex, shared mutable state, and whether your code is thread safe, you know that there are several pitfalls to be wary of.

In Erlang, and therefore Elixir, which leverages the Erlang VM (BEAM), writing and reasoning about concurrent code feels effortless. While Ruby has some great libraries for helping write concurrent code, in Elixir it’s built-in and a first-class citizen.

That isn’t to say that writing highly concurrent or distributed systems is easy. Far from it! But with Elixir, the language is on your side.

Processes, PIDs, and Mailboxes

Before we look at how to go about writing concurrent code in Elixir, it’s a good idea to understand the terms that we’ll be using and the model of concurrency that Elixir employs.

Actor Model

Concurrency in Elixir (and Erlang) is based upon the Actor Model. Actors are single threaded processes which can send and receive messages amongst themselves. The Erlang VM manages their creation, execution, and their communication. Their memory is completely isolated, which makes having to worry about “shared state” a non-issue.


  • Process: Similar to an OS level thread, but much more lightweight. This is essentially the unit of concurrency in Elixir. The processes are managed by BEAM (the Erlang runtime), which handles spreading the work out over all the cores of the CPU or even across other BEAM nodes on the network. A system can have millions of these processes at a time, and you shouldn’t be afraid to take liberal advantage of them.
  • Process ID (PID): This is a reference to a specific process. Much like an IP address on the internet, a PID is how you tell Elixir which process you want to send a message to.
  • Mailbox: For processes to communicate with each other, messages are sent back and forth. When a message is sent to a process, it arrives in that process’s mailbox. It is up to that process to receive the messages sitting in its mailbox.

So to bring it all together, a process in Elixir is the actor. It can communicate with another actor by sending a message to a specific PID. The recipient can receive a message by checking its mailbox for new messages.



Writing Concurrent Code

In this section, we’ll look at how the Actor Model for concurrency is actually used within Elixir.

Creating processes

Creating a new process is done with the spawn or spawn_link functions. These accept an anonymous function which will be invoked in a separate process. In return, we are given a process identifier, often referred to as a PID. This is important if we want to communicate with the process going forward or ask the runtime for information about it.

pid = spawn(fn -> :timer.sleep 15000 end)

Everything in Elixir runs within a process. You can find out the PID of your current process by calling the self() function. So even when you are in the iex shell, by calling self() you can see the PID for that iex session, something like #PID<0.80.0>.

We can use this PID to ask Elixir for information about the process. This is done using the Process.info/1 function.

Process.info(pid)

[current_function: {:timer, :sleep, 1}, initial_call: {:erlang, :apply, 2},
 status: :waiting, message_queue_len: 0, messages: [], links: [],
 dictionary: [], trap_exit: false, error_handler: :error_handler,
 priority: :normal, group_leader: #PID<0.50.0>, total_heap_size: 233,
 heap_size: 233, stack_size: 2, reductions: 43,
 garbage_collection: [max_heap_size: %{error_logger: true, kill: true, size: 0},
  min_bin_vheap_size: 46422, min_heap_size: 233, fullsweep_after: 65535,
  minor_gcs: 0], suspending: []]

It’s interesting what you can find here! For example, in iex, if you ask for info about the shell’s own process, you’ll see the history of the commands you’ve typed:

iex(1)> 5 + 5
iex(2)> IO.puts "Hello!"
iex(3)> pid = spawn(fn -> :timer.sleep 15000 end)

%IEx.History.State{queue: {[
  {3, 'pid = spawn(fn -> :timer.sleep 15000 end)\n', #PID<0.84.0>},
  {2, 'IO.puts "Hello!"\n', :ok}],
  [{1, '5 + 5\n', 10}]},
 size: 3, start: 1}

Sending messages

Messages can be sent to a process using the send function. You provide it with the PID of the process you wish to send a message to, along with the data being sent. The message goes to the receiving process’s mailbox.

Sending is only half the battle though. If the recipient isn’t prepared to receive the message, it will fall on deaf ears. A process can receive messages by using the receive construct, which pattern matches on the messages being received.

In the example below, we spawn a new process which waits to receive a message. Once it has received a message in its mailbox, we’ll simply output it to the screen.

pid = spawn(fn ->
  IO.puts "Waiting for messages"
  receive do
    msg -> IO.puts "Received #{inspect msg}"
  end
end)

send(pid, "Hello Process!")

Keeping our process alive

A process exits when it no longer has any code to execute. In the example above, the process will stay alive until it has received its first message, then exit. So the question then arises: How do we get a long running process?

We can do this by utilizing a loop function that calls itself recursively. This loop will simply receive a message and then call itself to wait for the next one.

defmodule MyLogger do
  def start do
    IO.puts "#{__MODULE__} at your service"
    loop()
  end

  def loop do
    receive do
      msg -> IO.puts msg
    end
    loop()
  end
end

# This time we spawn a new process based on the MyLogger module's `start` function.
pid = spawn(MyLogger, :start, [])

send(pid, "First message")
send(pid, "Another message")

Maintaining state

Our current process doesn’t track any state. It simply executes its code without maintaining any extra state or information.

What if we wanted our logger to keep track of some stats, like the number of messages it has logged? Notice the call spawn(MyLogger, :start, []); the last parameter, which is an empty list, is actually a list of args that can be passed to the process. This acts as the “initial state” or what is passed to the entry point function. Our state will simply be a number that tracks the number of messages we’ve logged.

Now, when the init function is called, it will be passed the number 0. It’s up to us to keep track of this number as we do our work, always passing the updated state to the next loop of our process.

Another thing we’ve done is add an additional action our logger can perform. It can now log messages and also print out the stats. To do this, we’ll send our messages as a tuple where the first value is an atom representing the command we want our process to perform. Pattern matching in the receive construct allows us to distinguish one message’s intent from another.

defmodule MyLogger do
  def start_link do
    # __MODULE__ refers to the current module
    spawn(__MODULE__, :init, [0])
  end

  def init(count) do
    # Here we could initialize other values if we wanted to
    loop(count)
  end

  def loop(count) do
    new_count = receive do
      {:log, msg} ->
        IO.puts msg
        count + 1
      {:stats} ->
        IO.puts "I've logged #{count} messages"
        count
    end
    loop(new_count)
  end
end

pid = MyLogger.start_link
send(pid, {:log, "First message"})
send(pid, {:log, "Another message"})
send(pid, {:stats})

Refactoring into a client and server

We can refactor our module a little bit to make it more user friendly. Instead of directly using the send function, we can hide the details behind a client module. Its job will be to send messages to the process running the server module and optionally wait for a response for synchronous calls.

defmodule MyLogger.Client do
  def start_link do
    spawn(MyLogger.Server, :init, [0])
  end

  def log(pid, msg) do
    send(pid, {:log, msg})
  end

  def print_stats(pid) do
    send(pid, {:print_stats})
  end

  def return_stats(pid) do
    send(pid, {:return_stats, self()})
    receive do
      {:stats, count} -> count
    end
  end
end
Our server module is quite simple. It consists of an init function which doesn’t do much in this case other than start the loop function looping. The loop function is in charge of receiving messages from the mailbox, performing the requested task and then looping again with the updated state.

defmodule MyLogger.Server do
  def init(count \\ 0) do
    loop(count)
  end

  def loop(count) do
    new_count = receive do
      {:log, msg} ->
        IO.puts msg
        count + 1
      {:print_stats} ->
        IO.puts "I've logged #{count} messages"
        count
      {:return_stats, caller} ->
        send(caller, {:stats, count})
        count
    end
    loop(new_count)
  end
end

If we are to use the code below, we don’t really need to know how the server is implemented. We interact directly with the client, and it in turn sends messages to the server. I’ve aliased the module just to avoid typing MyLogger.Client various times.

alias MyLogger.Client, as: Logger

pid = Logger.start_link
Logger.log(pid, "First message")
Logger.log(pid, "Another message")
stats = Logger.return_stats(pid)

Refactoring the server

Notice that all of the messages being received by the server are being pattern matched in order to determine how to handle them? We can do better than having a single large function by creating a series of “handler” functions that pattern match on the data being received.

Not only does this clean up our code, it also makes it much easier to test. We can simply call the individual handle_receive functions with the correct arguments to test that they are working correctly.

defmodule MyLogger.Server do
  def init(count \\ 0) do
    loop(count)
  end

  def loop(count) do
    new_count = receive do
      message -> handle_receive(message, count)
    end
    loop(new_count)
  end

  def handle_receive({:log, msg}, count) do
    IO.puts msg
    count + 1
  end

  def handle_receive({:print_stats}, count) do
    IO.puts "I've logged #{count} messages"
    count
  end

  def handle_receive({:return_stats, caller}, count) do
    send(caller, {:stats, count})
    count
  end

  def handle_receive(other, count) do
    IO.puts "Unhandled message of #{inspect other} received by logger"
    count
  end
end

Parallel map

For a final example, let’s take a look at performing a parallel map.

What we’ll be doing is mapping a list of URLs to their returned HTTP status codes. If we were to do this without any concurrency, our total time would be the sum of the time spent checking each URL. If we had five and each took one second, it would take approximately five seconds to finish checking all the URLs. If we check them in parallel though, the total would be about one second, the time of the slowest URL, since they all happen at once.

Our test implementation looks like this:

defmodule StatusesTest do
  use ExUnit.Case

  test "parallel status map" do
    urls = [
      url1 = "",
      url2 = "",
      url3 = "",
      url4 = "",
      url5 = ""
    ]

    assert Statuses.map(urls) == [
      {url1, 200},
      {url2, 200},
      {url3, 500},
      {url4, 200},
      {url5, 200}
    ]
  end
end

Now for the implementation of the actual code. I’ve added comments to make it clear what each step is doing.

defmodule Statuses do
  def map(urls) do
    # Put self into variable to send to spawned process
    caller = self()

    urls
    # Map the URLs to a spawned process. Remember a `pid` is returned.
    |> Enum.map(&(spawn(fn -> process(&1, caller) end)))
    # Map the returned pids
    |> Enum.map(fn pid ->
        # Receive the response from this pid
        receive do
          {^pid, url, status} -> {url, status}
        end
      end)
  end

  def process(url, caller) do
    status =
      case HTTPoison.get(url) do
        {:ok, %HTTPoison.Response{status_code: status_code}} ->
          status_code
        {:error, %HTTPoison.Error{reason: reason}} ->
          {:error, reason}
      end

    # Send message back to caller with result
    send(caller, {self(), url, status})
  end
end

When we ran the code, it took 2.2 seconds. This makes sense, because one of the URLs is a fake URL service that we told to delay the response by two seconds, so it took approximately the time of the slowest URL.

Where to go from here?

In this article, we covered the basics of spawning a new process, sending that process a message, maintaining state in the process via recursive looping, and receiving messages from other processes. This is a good start, but there is a lot more!

Elixir comes with some very cool modules to help us remove some of the boilerplate involved in what we did today. Agent is a module for maintaining state in a process. Task is a module for running code concurrently and optionally receiving its response. GenServer handles both state and concurrent tasks in a long standing process. I plan on covering these topics in a second article in this series.

Lastly, there is the whole topic of linking, monitoring, and responding to errors that may occur in a process. Elixir comes with a Supervisor module for this, which is all part of building a dependable, fault-tolerant system.

Highest Paying Contract IT Roles of April 2016

Here is a snapshot of the top ten best-paid contract IT roles as of April 2016, with information compiled by Sonovate and data from Innovantage.

Highest Paid Contract IT Roles, April 2016


These contract IT role statistics were sourced using Innovantage, which compiles data from IT job postings across over 180 global job boards and half a million employer websites.
Your Neighborhood Bank Is About to Have Its ‘Uber Moment’ – Fintech

Your Neighborhood Bank Is About to Have Its ‘Uber Moment’ by Ian Mount at Fortune


With bank staff losing their jobs, it’s really not a good time to work in a bank branch. Bank automation and competition from FinTech companies are set to land with a serious thud.

The new Global Perspectives & Solutions (GPS) report from Citigroup says that U.S. bank staffing will dive 30% between 2015 and 2025, from 2.6 million to 1.8 million. (It’s already down from a pre-crisis peak of 2.9 million.) And things are even tougher in Europe, where bank branch employment is expected to drop from 2.9 million to 1.8 million.

According to Citi, as more transactions are automated and done on mobile phones, bank staff will be shifted from performing transactions to advisory roles. But the question is whether banks will be able to do that fast enough, and whether that move will save them.

New firms like OnDeck, Coinbase, Lending Club, and Square have begun to gnaw away at many of the activities that might have brought consumers and small business owners into bank branches. American and European banks are now, like taxi drivers a few years ago, facing what the report calls their “Uber moment.”

“In the U.S. and Europe, only a very small fraction of the current consumer banking wallet has been disrupted by FinTech so far. However, this is likely to rise,” the report says. “An open question remains as to whether incumbent banks in the U.S. and Europe can embrace innovation, not just talk about Blockchain and hack-a-thons, before FinTech competitors gain scale and distribution.”

FinTech companies are going after banks’ most profitable services. Citi says that personal and small and medium enterprise (SME) banking accounts for about half of the banking industry’s profits, and over 70% of the FinTech investments have gone into those segments.


Other countries have passed their “Uber moment” tipping point. According to Citi, China is the world’s peer-to-peer lending leader, at $66.9 billion. FinTech companies there also have as many customers as do traditional banks, the report says.

Not everybody is confident that U.S. and European banks can evolve. “In my view only a few [incumbent banks] will have the courage and decisiveness to win in this new field,” Antony Jenkins, the former CEO of Barclays, said in a recent speech in London. “I predict that the number of branches and people employed in the financial services sector may decline by as much as 50% over the next 10 years, and even in a less harsh scenario I expect a decline of at least 20%.”

Github – Scaling on Ruby with a remote tech team

Sam Lambert, one of the rising stars at GitHub and the company’s first database administrator, talks scaling, remote work, and using Hubot. Courtesy of GitHub.


Sam Lambert joined GitHub in 2013 as the company’s first database administrator, and is now the company’s director of technology. In this interview, he discusses how the service — which now boasts more than 10 million users and 25 million projects — is able to keep on scaling with a relatively simple technology stack. He also talks about GitHub’s largely officeless workplace — about 60 percent of its employees work remotely, using a powerful homemade chatbot, called Hubot, to collaborate.

SCALE: I usually think of GitHub as more of a technology provider and less of a technology user, but that’s probably unfair. Can you walk me through the technology and philosophies that underpin GitHub?

SAM LAMBERT: We take a very Unix philosophy to how we develop software and services internally. We like to be continually proud of the simplicity of a lot of our infrastructure. We really do try and shy away from complexity and over-engineering. We like to make more pragmatic choices about how we work and what we work on.

For a long time, very key bits of our infrastructure were strung together with Shell scripts and simple scripting, and it’s surprisingly effective and still works really very well for us.

What does that result in, in terms of your technology stack?

The core of what you see and use as a GitHub user is a Ruby on Rails application. It’s a seven-year-old app now, created by the founders when they started the company. That’s the core of the application, but obviously there’s a ton of Git in the stack. We have custom C daemons that do things like proxying Git requests and data aggregation.

MySQL is our core data store, used for storing all the data that powers the site as well as the metadata around users. We also use Redis a little for some non-persistent caching, and things like memcached.

C, Shell, Ruby — quite a simple, monolithic stack. We’re really not an overcomplex shop; we don’t intend to adopt a new language for every small project.

We’ve got core Ruby committers that work for us, and that allows us to scale what we have and keep a pragmatic view on all our technology choices and try to keep our stack smaller. Really, there’s nothing much you can’t do with the stack that we’ve already chosen. To keep at this game and keep it moving, we just have to keep applying varied techniques to what we’ve got.

That’s somewhat ironic considering all the projects and experiments that are hosted on GitHub. Do you ever see new things and get tempted to change things up?

We certainly take a look at new technologies. Our employees have a large amount of freedom in what they do, and people will try all sorts of stuff and experiment. Often, it’s just to know why you’re not using a technology. You can look at something, understand why it’s interesting and what problems it’s trying to solve, then maybe take some of its approaches to extend what you’re already doing. Or maybe put it on the shelf for a little while, while it matures.


But there is an interesting irony in that half the new projects in the world happen on GitHub and we tend to stick with a fairly conservative stack. Our CTO often jokes about when I was interviewed by him to join the company as the first DBA at GitHub. I actually said in my interview, “I’m really surprised to be sitting here. I assumed GitHub was using some sort of new, hip datastore.” Then as the interview process went along, it was revealed to me that this is actually a really pragmatic set of hackers who just hack on Ruby, hack on C, and spend their time working on more interesting things with a stable stack, rather than chasing the latest shiny tech.

Keeping up with all that Git

What are the challenges that keep your team busy?

A lot of it is volume. Obviously, our user base is growing. We also have a very technical user base, and they seem to manage to find ways to use the API in obscure manners. Using a standard framework, there’s a lot of stuff you don’t get to see the extremes of until you’re a large use case. There are a lot of patterns that Rails uses that are less optimal at large scale. We might hit issues like that and have to rewrite certain bits of functionality.

Obviously, we also have a massive amount of Git. Scaling something like Git in a backend infrastructure is quite different. It’s not something that anyone else is trying to achieve. We’re actually on the frontier when it comes to scaling Git, the application itself, at our scale, which is fascinating.

We have an amazing team that works on that and works really hard to build in the extra functionality. We’re like a Git host in someone’s infrastructure, which means all sorts of work to balance public versus private repositories, and to make sure authentications and permissions are correct when users try to access code.

“For a Rails app, it’s a really, really quick site, and we have a motto that ‘It’s not shipped until it’s fast.’”

So Git is the most unique aspect of your technical operations?

Absolutely. We don’t want to be unique in any other sense than what we’re known for. That’s also something we’re really proud of. I often say to people, “Let’s only write bespoke elements of our architecture that make sense for a company that stores Git data.”

We don’t need to reinvent the wheel, we don’t need to write our own databases, we don’t need to start writing our own frameworks — because they’re all in domains that are usual. It’s a website, it’s web hosting. In the domains that are unusual, we fully embrace the need to write custom applications or build bespoke apps for that.

What’s the metric that drives engineering most in terms of what your team works on?

We’re quite obsessed with performance. We want to make sure the site is always performant and continually fast. For a Rails app, it’s a really, really quick site, and we have a motto that “It’s not shipped until it’s fast.”

In terms of a specific metric that we keep our eye on, it’s capacity for storing Git. That’s something that we have to continue to grow. As our usage spikes more and more, we’re starting to see more pressure on that kind of infrastructure. We have some really interesting projects being worked on at the moment that will let us keep scaling.

“Quite often with scaling problems, they just come around the corner. They don’t just slowly, gradually appear.”

No cloud computing here

What does the underlying infrastructure for GitHub look like? Are you in the cloud or on local servers?

We host in our own datacenters. We actually have an amazing provisioning story. We basically can provision hardware like it was the cloud. We have a really small, but amazingly dedicated, physical infrastructure team, and they do phenomenal work in providing us these amazing services that we can use.

If I need a new host, I can basically tell our chatbot, Hubot, that I need X amount of hosts of this class on these chassis, and it will just build them and deploy them back in minutes. We have this incredibly flat, flexible, but physical infrastructure. As someone who consumes that infrastructure, it’s phenomenal, and to watch it working is brilliant.

It sounds like you’re avoiding the laborious and time-consuming procurement step often associated with physical gear.

We have some slack capacity for hardware, essentially. The physical infrastructure team will provision machines that are empty and ready to be provisioned by the teams that are going to use them. For example, the database infrastructure team can look at how many machines are in the pool of the class they need for databases, and essentially they’ve written their own Puppet roles and classifications about how those nodes work. Then they can just provision them themselves or just tag them so that it’s capacity for that team.

Are you scaling the server footprint on a regular basis, or is it a pretty controlled growth pattern at this point?

We keep it controlled in terms of how we order and how we provision. But usage of the site is trending up at an increasing rate. More and more companies in the world are realizing that they’re tech companies, so the usage growth of GitHub is just going up and up and up.

We’re handling it well, though. Quite often with scaling problems, they just come around the corner. They don’t just slowly, gradually appear. They come quickly and we tackle them as they happen, and we have some interesting use cases at times. People bring strange things to the site and they’ll reveal slight scaling problems, but we have an application we can develop on quickly and people that understand the domain well. We get over those problems fairly fast and deploy fixes and continue going.

“I’ve got colleagues that don’t have a permanent location. They just fly from city to city and work from wherever. They’re just nomads and they’re all around the world.”

Building a global engineering network

Are there issues that keep you up at night, or long-term concerns always in the back of your mind?

Defending our infrastructure is certainly something that we always think about. I wouldn’t say it keeps you up at night, but it’s something we certainly think about.

And scaling our organization. The bigger we get, the more engineers we need, but we need to keep that growth in line with our culture. I think hiring is a challenge that every tech company has — continuing to get good, talented employees from different backgrounds from around the world. But you’ve got to find people that have your same engineering values and that like to work on things similar to what you have and what you can offer.

I’m also concerned with continuing to embrace the distributed nature of the company. The company is 60 percent remote currently; I’m in England at the moment. I’ve traveled around the world, working from different places. That’s something that is completely possible, based on our culture and distributed nature.

That seems pretty unique. Are those 60 percent of employees working from home or from branch offices?

They’re working from home. You can work from anywhere. Last year I worked probably in five or six different cities around the world. Just working from my laptop wherever we decided to go. A month ago I was working from a cabin in the woods in Wisconsin.

I’ve got colleagues that don’t have a permanent location. They just fly from city to city and work from wherever. They’re just nomads and they’re all around the world. That’s something we can offer to people that’s baked into our culture.

There’s no requirement to work in any office. Our office is actually more of a social space. We have areas for people to meet and enjoy being together, but there’s no necessity to be together.

Myself and a colleague shipped a gigantic refactor of our backend, essentially transforming it from a monolithic environment to a distributed one. We had a lot of decisions to make — a lot of re-factoring and new patterns. In that entire project we never had a face-to-face conversation. We just worked together through chat and issuing pull requests, and we then met each other at the end of that project — about 6 months into him joining GitHub.

Again, that’s just the way we work. That’s the way our culture works. There’s no requirement to be physically located in order to be productive and do what we do.

“For a long, long time your on-boarding was joining our chatroom watching what other people were doing. … I joined the company and I just idled in chat and just watched how people worked and what they did and I just learned that way.”

Hubot to the rescue

You mentioned Hubot earlier as the provisioning tool, but is there more to it? It sounds key to how the company is able to function with such a distributed workforce.

Hubot can do basically everything in GitHub. You can ask Hubot where a specific member of staff is and it will show you where they are in the world or what floor they’re on in one of our offices, for example.

There’s probably about 40 different provisioning commands. You can do a MySQL stack. You can do failovers, you can drop tables, you can backup tables, you can clone, you can run migrations, you can do everything. You can do mitigation of attacks.

Basically, everything you could ever possibly imagine to do in our infrastructure, you can do via Hubot. There are zero requirements to interface with any code. You can run it all through Hubot.
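Hubot scripts themselves are written in CoffeeScript/JavaScript, but the chatops pattern described here is easy to sketch. The following Ruby illustration (the `ChatBot` class and the provisioning command are invented for this example, not taken from GitHub’s actual tooling) shows the basic shape: a regular expression maps a chat message to an action, and the result is posted back into the shared room.

```ruby
# Illustrative chatops sketch -- not GitHub's actual Hubot code.
class ChatBot
  def initialize
    @handlers = []
  end

  # Register a chat command: a pattern plus the action to run on a match.
  def respond(pattern, &action)
    @handlers << [pattern, action]
  end

  # Dispatch an incoming chat message to the first matching handler.
  def receive(message)
    @handlers.each do |pattern, action|
      match = pattern.match(message)
      return action.call(match) if match
    end
    "Sorry, I don't know that command."
  end
end

bot = ChatBot.new

# A hypothetical provisioning command, in the spirit of the interview.
bot.respond(/provision (\d+) hosts? of class (\w+)/) do |m|
  "Building #{m[1]} #{m[2]} hosts... done. Deployed in minutes."
end

puts bot.receive("provision 4 hosts of class db")
```

In the real setup, the action would call out to provisioning APIs, and the reply would land in the room where everyone can see it — which is what gives chatops its shared-context benefit.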


A humorous schematic for Hubot. Source: GitHub

So it’s a lot more than an automation engine …

Yeah, it’s automation, but it’s a lot more. It’s the context as well.

For a long, long time your on-boarding was joining our chatroom watching what other people were doing. Because we’re not physically located anywhere, when an issue comes up, you see the alert coming to chat, you can start pulling up graphs that everyone in the room can see, and everyone can see what you’re looking at. I joined the company and I just idled in chat and just watched how people worked and what they did and I just learned that way.

I try and reflect back on how previous teams I’ve worked on would debug stuff. Everyone would be in their own terminal or on their own dashboard looking at graphs and then trying to awkwardly share them with each other, or paste terminal outputs for example.

With chat, you just dive in and the context is all there. You’ve got this, basically, gigantic shared console for our company. For example, if you get an alert about a database failure and a couple people jump in, you can see that it’s already been diagnosed and already been worked on. There’s no duplication of effort and the people that need to know start getting context directly. When we go into large incidents (touch wood they don’t happen often), we’re able to really collaboratively work together.

If you tweet out to our status page or the updates page via Hubot, other people can double-check what you’re going to write.

It’s just a fully collaborative experience, and it’s something that more and more companies are taking up. You hear of massive companies integrating Hubot to do these fantastic use cases. It’s just amazing to watch, really.

The old way of working, I just don’t think I could go back to anymore. I’m so used to being among all my colleagues and my team, collaborating on what we’re trying to do through chat. It’s a whole new way of working that adds so much and solves so many problems that I think a lot of traditional companies haven’t solved.

Technology Graduate – Market advice on making your profile fit-for-purpose

Making sense of the STEM skills “shortage”….

This is a guest blog from the esteemed Charlie Cunningham, Skills Advice Professional at the University of Warwick, on the emerging graduate job market in Technology and how to make your graduate profile fit-for-purpose. May 13th, 2013 – Leo

As a STEM graduate, you’ve probably been told that you’re in demand, something of “a highly desirable commodity”. We keep hearing that the UK is facing a serious lack of skilled STEM workers that could act as a “brake on the economic recovery” (or so says the CBI). The Royal Academy of Engineering suggested in 2012 that the UK needed 100,000 more STEM graduates by 2020 to maintain our economic competitiveness.

So, does that mean STEM jobs are there for the taking?

Stem Education

STEM grads – in demand?

As reported in a recent Times Higher Education article, Unilever had 130 applicants for every graduate STEM post in 2013 and 11,000 graduates applied for 138 STEM graduate and 92 STEM internship positions at BP last year.  Matt Hicks, Director of Linear Diagnostics Ltd (and former Warwick Research Fellow) has also seen this trickle down to the SME sector:

I recently advertised for a single graduate research technician post in biotechnology and received over 40 applications in the short period of time that it was advertised. Over 1/3 of these were, on paper, of sufficient quality to perform the role.

The subtext here is also a concern over the quality of the applicants. Yes, he received some good applications but over 60% had already screened themselves out of the process. I have also spoken to a number of recruiters I know in the automotive industry and they’re struggling to recruit engineering talent. What does that tell you?

Do we need more STEM grads….or better STEM grads?

Aaah… so is it that we need “fewer” but “better” graduates right now? Research from the employment consultancy Work Communications identified 65,000 STEM graduate scheme places available in 2012-2013, while 132,790 UK-domiciled students graduated with a first degree in STEM subjects in the previous year. So where are all these STEM graduates getting jobs?

Well, the answer is undoubtedly that some are going into non-STEM roles (and finding work outside formal graduate schemes), which has always happened. However, this tendency may have become more pronounced. Research led by Derek Bosworth, associate fellow at the University of Warwick’s Institute for Employment Research, found in 2013 that while 45% of STEM graduates entered core STEM jobs or sectors in 2001, the figure for 2011 was roughly one third. Could this be because graduates are widening their search field in response to a perceived shortage of STEM roles? Or is it because applicants lack the specialist – and employability – skills these employers require?


Bosworth and his team interviewed employers from a range of sectors.  Overall, “most of the people interviewed felt that the overall quality of UK graduates was as good as the rest of Europe” but in certain areas, employers had more concerns.  Biosciences, Engineering and IT were sectors reporting difficulties in finding the “quality of recruits they are seeking”.  Some employers felt universities have pulled back from offering certain specialist courses due to costs concerns, others that graduates lack practical STEM skills, for example in bioinformatics, health economics and statistics.

Put yourself in the driving seat

Your STEM skills and knowledge are in demand, no doubt about it. But what can you do to position yourself more strongly in the job market?

Listen to the employer message:

Whatever the debate between employers and HE about the “skills gap”, there is one clear message: “we the employers are looking for graduates with practical and commercial knowledge and skills”. The core knowledge from your degree is essential, but in a competitive job market it is other skills that tip the balance. The University of Reading’s Skills Transformer is a great way for STEM students to reflect on these skills and build for the future. So reflect on your skills, and seek out opportunities to build on the practical and commercial aspects of your course.

Keep an open mind

You could argue that it is up to STEM employers to make their job offering more attractive to graduates – and Marcus Body of Work Communications agrees with you. Body suggests that the way employers screen candidates is not helping them recruit the best talent. So what? Perhaps some STEM roles don’t sound as “flash” as a job in the City, but don’t get hung up on semantics. Every job is different, and just because the title is a little more mundane, it doesn’t mean the work will be. If you’re smart, you’ll scratch beneath the surface and put your research skills to the test.

Exploit the SME sector

Don’t let graduate schemes define the limits of your job search. Applicants with great commercial and practical skills can excel in the SME sector. Over 99% of businesses in the UK are SMEs, contributing 48% of private sector turnover. Typically, the recruitment process may be more personal than at larger firms. How do you find these organisations? Use your careers service, check industry publications, or even ask employers themselves. As Steve Billingham, Director of Geotech, told students recently at a STEM event at Warwick: “there’s no list of who the other 40 businesses like mine are in the UK… but I know who they are… come and ask me.”

Decide what suits YOU

Whatever the message about where the jobs of the future may be, don’t let salary and job growth dominate your thinking on future careers. Labour Market Information is valuable (and getting more sophisticated), but careers advisors, employers and hopefully most parents would say: “Do what you enjoy and are good at…”.

Be open to opportunities

Although the existence of a “skills shortage” is still hotly debated, there’s no question it’s a hot topic right now… so take advantage! The Government and industry are united in lobbying for further initiatives to “up-skill” the STEM workforce. EDT, for example, has a scheme for those considering a year in industry.

And finally….

As a STEM graduate, what should concern you is not the ongoing discussion about a perceived skills crisis, but what the current job market means for you and how you can stand out above the crowd. You’ve already got a pretty good toolbox – just make sure you show employers you can use the tools in it!

How to Improve Business Communication with Behavior Driven Development

How to Improve Business Communication with Behavior Driven Development

This is a guest blog post by Lance Ennen, Co-Founder and Chief Technology Officer of Big Astronaut, originally for Codeship. Lance is driving Big Astronaut’s core technology values and development around virtual software development, running virtual teams, and delivering the ‘wow factor’ to his products and clients. As a former Obtiva consultant and entrepreneur, and eternal Ruby on Rails devotee, Lance has designed and built web and mobile applications for everything from start-ups to Fortune 100 companies.

The original article can be found here. Get in touch with Lance on Twitter!


How to Improve Business Communication with Behavior Driven Development


Effective business communication heightens productivity. An important step in increasing communication and decreasing inefficiencies is to eliminate assumptions. A teacher of mine once told the class to never ASSUME. When you assume, you make an ass out of u and me.

Any organisation with a hierarchy of employees, managers, presidents, and so on is in danger of falling into this assumption trap if the rules of the game – and, more precisely, the details of tasks – are left open to interpretation.

This doesn’t mean people misunderstand or deliberately ignore directions. It means we are human beings using various ways to process information. The best way to improve business communication is to actually communicate and to follow-up frequently.

How do you improve business communication?

  1. Have an effective, clear cut plan to communicate
  2. Communicate
  3. Follow-up using questions to fact check

Common Communication Problems in the Software Business

It’s common practice for a software developer to get assigned a ticket (I explain this process in more detail here). The business doesn’t always do the best job of explaining the necessary steps and details associated with that ticket. They might just hand it to a developer and tell them to get it done. The developer will then act on his or her own interpretation of the ticket.

That’s not exactly the best way of developing something. As I said earlier, one person is going to interpret something differently than another. Here we have communication, but obviously it’s not effective communication. To make this process suitable for the software industry, we need to tweak it a bit.

Specific Communication Problem

On a site we are currently building, there is an area where you register for an event. Each event has options you can add. Once you have added/edited the event details, it totals up a price and allows you to check out for that event.

Originally, we put in a checkout button. Then we had to go back in and add a save and close button. After that, the business wanted to add a get invoice button, because some customers may want to get an invoice and then send in a check for that event.

The order button summed everything up. The save and close button saved where the user was in the process.

The developer’s version was to save the order “assuming” the person was actually completing the order.

The business didn’t want to actually complete the order until the check comes in, which adds more scenarios to the ticket.

These steps were all done on assumptions. Someone wrote a ticket. They did a mock-up and then assigned it to the developer. Based on what he saw, the developer did his version.

The Software Communication Solution

How to improve business communication with your software developers?

Using Behavior Driven Development tools like Cucumber, we can account for misinterpretations and make the inevitable clean-up process much easier, while increasing interaction and discovering additional details with the client.

  1. Start with a Feature (Story)
  2. Break it down into small steps (Scenarios)
  3. Follow up with questions
  4. Think of additional Scenarios
  5. Review Feature and approve for development

Cucumber is a behavior driven development framework that allows you to write features. Inside those features you have different scenarios.

On a current project I’ll write out these scenarios, then review them with the Project Manager and Business Analyst. They’ll read through them, and then approve them for development.

Once the Project Manager, Business Analyst, or Client has been through the process a couple of times, they should be able to write Cucumber features for the developers before an initial meeting, which improves workflow.

These Cucumber features then become working documentation that any developer or business owner can pick up and, in plain English, understand not only what the software does, but what exact code is executed during these scenarios.

Each line in a scenario gets broken down into what is called a Web Step. You’re able to write code within each Web Step that actually executes what you’re expecting and based on those expectations they either pass or fail.
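To illustrate the idea, here is a simplified sketch of the concept – not Cucumber’s real internals – showing how a plain-English step is matched against a pattern, with the captured values flowing into a block of Ruby code:

```ruby
# Simplified sketch of how a line in a scenario maps to executable code.
# Real Cucumber step definitions look very similar, but live in
# features/step_definitions and run inside the Cucumber framework.
STEPS = []

def When(pattern, &block)
  STEPS << [pattern, block]
end

# Run a single plain-English line against the registered steps.
def run_step(line)
  STEPS.each do |pattern, block|
    match = pattern.match(line)
    return block.call(*match.captures) if match
  end
  raise "Undefined step: #{line}"
end

# A step definition in the shape Cucumber uses: the regex captures "25"
# and passes it into the block.
When(/^I enter a coupon code for (\d+) percent off$/) do |percent|
  100.0 - percent.to_i  # e.g. the discounted total for a 100.0 order
end

puts run_step("I enter a coupon code for 25 percent off")  # 75.0
```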

A new feature would be, for example, building out coupon codes inside these orders. In turn, we’ll have a new feature called coupons.





Feature: Coupon Codes
  As a user,
  I should be able to apply a coupon code to an event

  Scenario: Coupon codes expire
    Given a logged in user is on the event page
    When I register for an event
    And I enter an expired coupon code
    Then show the user a message that the coupon code is expired

  Scenario: Coupon can be for an amount off
    Given a logged in user is on the event page
    When I register for an event
    And I enter a coupon code for an amount off
    Then the amount off should be reflected in the total price

  Scenario: Coupon can be for a percentage off
    Given a logged in user is on the event page
    When I register for an event
    And I enter a coupon code for a percentage off
    Then the percentage off should be reflected in the total price

  Scenario: Coupon can be applied to emails
    Given a logged in user is on the event page
    When I register Mike from Mike & Mike for an event
    Then I should see Mike is coming for free

  Scenario: Super Admin user can create a coupon with an amount off
    Given a logged in super admin user
    When I create an event
    And I add a coupon code for an amount off
    Then I should see a message that the coupon code has been saved

  Scenario: Super Admin user can create a coupon with a percentage off
    Given a logged in super admin user
    When I create an event
    And I add a coupon code for a percentage off
    Then I should see a message that the coupon code has been saved

  Scenario: Super Admin user can create a coupon that expires
    Given a logged in super admin user
    When I create an event
    And I add a coupon code with an expiration date
    Then I should see a message that the coupon code has been saved

  Scenario: Super Admin user can create a coupon and assign it to an email address
    Given a logged in super admin user
    When I create an event
    And I add a coupon code to an existing user's email address
    Then I should see a message that the coupon code has been saved

Here are all the different scenarios. Once we run this feature with Cucumber, it will give us Web Steps for each line. Cucumber can be used with Capybara, which drives a real browser through Selenium WebDriver (Firefox or Chrome). Capybara allows developers to simulate real user interactions with the application inside these Web Steps. Using Capybara, the developer can fill out a form and submit coupon codes through the WebDriver – either headless or, if there is JavaScript on the page, by popping up the browser window during the Web Step and simulating that interaction. This removes tedious QA (quality assurance) of these features in the long term, and allows the developer to iterate with the business and show them, in plain English, where they are in the development process.

This is a great process for collaboration that makes it easy to change anything, quickly.
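To make the scenarios above concrete, here is a rough sketch in plain Ruby of the coupon behaviour they describe – expiry, amount off, and percentage off. The `Coupon` class and its method names are invented for illustration; they are not from the actual project:

```ruby
require "date"

# Hypothetical domain logic behind the coupon scenarios above.
class Coupon
  def initialize(amount_off: 0, percent_off: 0, expires_on: nil)
    @amount_off  = amount_off
    @percent_off = percent_off
    @expires_on  = expires_on
  end

  # "Coupon codes expire": a coupon past its expiration date is rejected.
  def expired?(today = Date.today)
    !@expires_on.nil? && today > @expires_on
  end

  # "Amount off" and "percentage off" are both reflected in the total price.
  def apply(total)
    discounted = total - @amount_off - (total * @percent_off / 100.0)
    [discounted, 0].max  # never discount below zero
  end
end

ten_percent = Coupon.new(percent_off: 10)
puts ten_percent.apply(200.0)  # 180.0
```

Each Web Step in the feature would then exercise this logic through the UI, so a change in a business rule shows up immediately as a failing, plain-English scenario.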

Business Communication 2.0

Using a written strategy is helpful. Writing software that encourages communication, interaction and spurs further creative solutions is going a step above and beyond. By iterating with the business using Behavior Driven Development tools like Cucumber, we’re able to account for discrepancies in interpretations and speed up the development process over the long term while allowing changing requirements from the business.

Introducing the Digital Butler App – “Cyman”

Introducing the Digital Butler App – “Cyman”

This is a fascinating “Digital Butler” called Cyman, from the talented technophile Ugo Anomelechi, reviewed by Azzief Khaliq – Leo, 25th January 2014

(Note: Cyman Mark 2 Android App is rated 4.6 on the free and paid versions!)


Cyman Mark 3 Assistant Dashboard – Your Personal Virtual Assistant / Digital Butler For Chrome

If you’ve seen the Iron Man movies, you’re probably familiar with Tony Stark’s J.A.R.V.I.S. system, the digital home assistant he built for himself. In the films, J.A.R.V.I.S. wakes Christine Everhart up, helps Tony Stark develop his Iron Man suits, engages him in conversations and generally manages almost everything in his life.

Cyman Mark 3 Assistant Dashboard

If you’ve always fancied having something like J.A.R.V.I.S. in your life, then the Cyman Mark 3 Assistant Dashboard might be right up your alley. It is a virtual assistant that will help you organise things, automate tasks, set reminders, find information and do almost everything else you’d expect from your own digital personal assistant.

Getting Started

First, download both the Assistant Dashboard itself and its Cyman Mark 3x Chrome helper extension from the Chrome Web Store. After you’ve installed both the Dashboard and the helper extension, you can launch the Dashboard from Chrome’s App Launcher.

When you first launch it, you should be prompted to log in to Cyman. You don’t have to register an account, as it will log in to Cyman using the Google account you’re signed in with on Chrome. It will also remind you to install the Cyman Mark 3x extension.

Login To Cyman

Once you’ve logged in, you’re good to go. Before you begin using the Assistant Dashboard though, you have to tell it your name and optionally, your gender. You can also change the name it responds to, if you feel like it.

Cyman Mark 3 Assistant Dashboard Features

Note that the Cyman Mark 3 Assistant Dashboard responds to both voice and text commands. Of course, for the full J.A.R.V.I.S. experience, you’ll want to use voice commands, but rest assured that you’ll be able to use the Assistant Dashboard even if you don’t have a microphone – it just won’t be as cool, that’s all.

Now that that’s out of the way, let’s look at some of the things the Cyman Mark 3 Assistant Dashboard will do. As with any good real-life butler, the Mark 3 Assistant Dashboard can be asked to create lists and reminders. To set up a reminder, for instance, simply use the “remind me to” command.

Setting A Reminder

On a related note, you can also schedule actions for the Assistant Dashboard to do using the “remember to” command. You can, for instance, tell it to automatically open your favorite tech news site in 10 minutes or tell you a joke at 10 a.m. every day.

You can also ask Cyman to look for factual information. Just ask it something factual using commands such as “who is”, “how old is” or “what is” (amongst others) and Cyman will use sites such as Wikipedia, Google and Wolfram Alpha to get this information and then present it to you. It will also open a new Chrome window with the Wikipedia page of whatever you searched for. Cyman can also do a Google Images search for you if you ask it what a particular thing “looks like”.

Retrieving Factual Information

Of course, the J.A.R.V.I.S. comparison wouldn’t be complete without alarm functionality. It won’t automate your house like in the first Iron Man film, but Cyman will indeed wake you up almost exactly like J.A.R.V.I.S. did. Just tell it to “set my alarm” to a particular time. By default, the Assistant Dashboard will read out the latest general news headlines as part of the alarm, but you can change the news feed that it will read out.

Setting An Alarm

This isn’t all that Cyman Mark 3 Assistant Dashboard can do, of course. It can also retrieve and read out news headlines, translate text, open and close tabs in Chrome, tell you jokes, convert between different units of measurement as well as find nearby places of interest. There are a number of example queries in the Dashboard itself to help you get started, but the best way to find out what it can really do is to just give it a go yourself.

Upgrade Options

If you’re thinking that all of this convenience comes with a catch, you’re right. The free account (or “user profile”), called Prototype, limits you to 25 commands per day. That’s probably enough for general light usage, but if you want more then you’ll have to upgrade to either the Shell or Armour user profiles.

Shell costs $2.12/month and increases this command limit up to 75 per day. The Shell account also allows you to send texts and make calls through the Assistant Dashboard, as long as you have the Cyman Mark 2 Assistant app installed on your Android smartphone.

Armour costs $3.78/month and removes the query limit entirely. It also adds the ability to receive mobile notifications on your computer as well as use your smartphone to control the desktop app, again in combination with the Cyman Mark 2 Assistant app.

Candidates who write SCRUM instead of Scrum?

Candidates who write SCRUM instead of Scrum?

Why do so many Agile advocates write SCRUM on their CV, instead of Scrum? – Leo 9th Jan 2014


I was chatting with a Lead ScrumMaster the other day, within an organisation that’s going through a transition to Agile techniques for its software delivery. We were going through the machinations of a ScrumMaster role, and I asked him if he had any pet hates on CVs that he was wary of. “SCRUM,” he replied. Naturally, I frowned, but on expanding, he explained that a pet hate of his is anyone who writes SCRUM instead of Scrum when talking about this mainstream Agile tool.

His argument was that no-one with a true understanding of Agile practices within a project delivery role would overlook the way it’s presented and turn it into an acronym when, as we all know, the word originates from rugby. This got me thinking, and as I looked through CVs of various candidates I’d worked with over the years, both previously and at Recommend Recruitment, I started to realise this was a common theme.

I went to the Kanban Exchange meet-up in London in December, hosted by the fantastic Dan Brown, and once we’d finished the various workflow games, asked the same question. Why do people seem to acronymise Scrum for no good reason? Nobody knew, and it’s something I’m still struggling to find an answer to even now. It’s not exclusive to Project Managers and ScrumMasters, either – I’ve seen it from Developers, Quality Assurance Specialists, Business Analysts and Product Managers.

I am keen to hear anyone’s thoughts on the subject and on where the phenomenon comes from. Maybe you’ve written SCRUM on your CV before because of something you’ve read or heard from others, or maybe you’re in the “pet-hate” category, or maybe there’s a genuine reason!

I realise this may be a little “storm in a teacup”, but as I’ve seen it affect a person’s chances of landing a role, I would be keen to understand and assess the root cause.




For this, more tech news and the latest UK and EU Technology jobs, visit us at



Dear Starbucks, I Like Clojure….

Dear Starbucks, I Like Clojure….

Why Kris Jenkins, creator of the West London Hack Night and key speaker at the London Clojure User Group, loves Clojure with his latte. From Kris’ blog - Leo, Jan 9th 2014


Dear Starbucks, I Like Clojure

Dear Starbucks,

I like Clojure. It’s a relatively new JVM-based language in the family of Lisp languages, which puts it in a heritage that’s practically as old as programming itself.

It favours modelling problems using simple, immutable data structures – and let me tell you, once you’ve got used to immutable data structures you’ll never want to go back to a language without them.

It uses a syntax that’s akin to writing your code directly as an AST, which enables some very handy language features and editing tricks and again, will be hard to give up once you’ve gotten used to it.

And it has a REPL-driven development model that gives you such a tight feedback loop on your code that it’s more like you’re having a conversation with your computer – one that results in working code. It’s so much fun.

And that reaction you have right now, the one that says, “I’m glad you’re happy, but I don’t know what you’re talking about and I don’t really care so please just quietly get on with whatever you do best without telling me the details ever again.”

That’s how I feel when you ask me which kind of coffee bean I want.