
Restate Your UI: Using State Machines to Simplify User Interface Development

OK, I'll admit it: Every time someone asks me to work on a user interface I cringe. And I'm not talking about a little elevation of the shoulders, I mean the kind of full eyes closed, head around the bellybutton motion that makes the bones in my spine pop. The programs behind user interfaces are usually hard to test, riddled with bugs and -- ironically -- boring. Most user interfaces I've encountered have evolved one haphazard feature after another. The result is code that's fragile, hard to understand, and even harder to maintain.

It doesn't have to be this way. A little up-front, top-down design, combined with one of the fundamental models of computation, the state machine, is all you need to create maintainable -- and even fun to work on -- user interfaces. I know, you've heard this before: From Swing to XUL to (lately) ReactJS, we are always on the verge of a UI framework that is going to make it all go away. This is not that. What I'm talking about is a technique that has helped make user interface programming fun and effective, in a way that transcends the framework du jour.

Yet Another UI

Let's start by focusing on the basic elements of user interfaces:

  • Event handling functions
  • Rendering functions
  • Application State ("app state")

Here, for example, is the new account registration form that we have all built at one time or another:

Since this is Clojure, our rendering function returns Hiccup data.

(defn render-form
  [app-state]
  [:div
   [:p "Email:"]
   [:input {:value (:email app-state) :on-change ...}]
   [:p "Password:"]
   [:input {:value (:password app-state) :on-change ...}]
   [:button {:on-click handle-submit
             :disabled (:disable-submit app-state)}]])

We'll also need a function to call the Register Service; the submit button's click handler will disable the button while that request is in flight.

(defn do-register-service-call
  [data callback-fn]
  ...)

We are going to have some event handlers that take an app state and return a new app state. Keep in mind that the event handlers can have side effects:

(defn handle-submit
  [app-state]
  (do-register-service-call (select-keys app-state [:email :password])
                            handle-submit-success)
  (assoc app-state :disable-submit true))

We also need a second event handling function to deal with the callback from our register service call above, to turn the button back on.

(defn handle-submit-success
  [app-state response]
  (dissoc app-state :disable-submit))

At this point, our app state looks like this:

{:email <string>
 :password <string>
 :disable-submit <boolean>}

More UI More Problems

So far, so good. Our next step is to add some error handling. The rule is that if either field is empty when the submit button is clicked, we show the user an error message. We clear that message when the field changes. We also disable the submit button when an error is visible:

(defn render-form
  [app-state]
  [:div
   (when (:error app-state) [:div (:error app-state)])
   [:p "Email:"]
   [:input {:value (:email app-state) :on-change ...}]
   [:p "Password:"]
   [:input {:value (:password app-state) :on-change ...}]
   [:button {:on-click handle-submit
             :disabled (:disable-submit app-state)}]])

And we add an :error attribute to the app state:

{:email <string>
 :password <string>
 :disable-submit <boolean>
 :error <string>}

We also need a helper function to compute the error message, along with an updated submit handler:

(defn error
  [{:keys [email password]}]
  (cond (empty? email) "email must not be empty"
        (empty? password) "password must not be empty"))

(defn handle-submit
  [app-state]
  (if-let [error (error app-state)]
    ;; If there's an error, don't do the rest of the stuff in `handle-submit`.
    (assoc app-state :error error
                     :disable-submit true)
    ;; Otherwise, call the register service as before.
    (do (do-register-service-call ...)
        (assoc app-state :disable-submit true))))

(defn handle-password-change
  [app-state password]
  (if (starts-with? (or (:error app-state) "") "password")
    (-> app-state
        (assoc :password password)
        (dissoc :disable-submit)
        (dissoc :error))
    (assoc app-state :password password)))

;; Similarly, `handle-email-change` performs the same sort of
;; conditional logic.

We've gone far enough with this example to start seeing problems. Think about trying to add a progress spinner that will do its thing during submission, and you can see that this will get very ugly very quickly.

The problem is that every event handler function which deals with changing a form input's value has to clear both :disable-submit and :error. Not only is there an obvious coupling of these state variables, but worse, they don't even model the actual behavior we want. The submit button should never be enabled when there is an error present; in other words, :disable-submit should be true whenever :error is not nil. Since the app state allows these attributes to vary independently, we're forced to duplicate the code that keeps them consistent, thereby complicating our event handling functions.

Control flow in an event-driven system is based on the sequence of events generated by a user. One of the jobs performed by the event handling functions is to ensure that the program only accepts valid sequences of events and rejects the rest. For example, we disable the submit button to prevent duplicate calls to the register service. While it's common to just sprinkle these kinds of guards wherever we think they are needed, it's a bad idea for two reasons. First, with the key logic scattered here and there, the flow of control is difficult to understand. Second, we end up with complicated -- and tangled -- event-handling functions.

Step back and the larger problem is clear: We have our individual event handlers making decisions about control flow based on context. The decision, for example, to clear an error message when the input is changed depends on whether the error -- if there is one -- is related to the input. The app state, the event itself, and the conditional logic used to make the decision all form an implicit context. Our application is aware of several distinct contexts which it uses for making a variety of decisions. Without some abstraction to name these contexts explicitly, we're forced to build a lot of code in order to make decisions which should be trivial. Returning to our earlier example of a progress spinner, we'd need something like this.

(defn show-spinner?
  [app-state]
  (and (:disable-submit app-state)
       (not (:error app-state))))

If you have done much of this kind of programming, you know that this kind of patchwork is likely to come apart on the very next feature request, or certainly the one after that.

Restate Your UI

There is a better way. The overall goal -- one that is common to all user interfaces and independent of whatever library or framework you are employing -- is that as a user navigates a UI, we allow only the actions that make sense in light of the current application state. In other words, as the user does stuff the application should move from one state to another while the set of available actions should change based on the state.

The key word here is state. In computer science, state machines are abstract entities that have a finite number of states. Associated with each state is a set of possible transitions, each of which allows the machine to move to a new state. Sound familiar?

Even better, the idea of a state machine fits well with the Clojure prime directive of focusing on the data -- to describe your problem declaratively -- in order to leverage Clojure's powers of data transformation. Our original UI is made up of mostly code. Let's see if we can't turn some of it into a data-based state machine:

 State          | Submit Button | Error Label | Success Label
 ---------------+---------------+-------------+--------------
 Ready          | Enabled       | nil         | nil
 Submitting     | Disabled      | nil         | nil
 Password-Error | Disabled      | not nil     | nil
 Email-Error    | Disabled      | not nil     | nil
 Success        | Disabled      | nil         | not nil

The UI's events (some subset of them) become the state machine's transitions. For example, clicking the submit button is a transition that takes the UI from the Ready state to the Submitting state. The following table describes our UI's possible state transitions.

 From           | Via              | To
 ---------------+------------------+---------------
 Ready          | missing-password | Password-Error
 Ready          | missing-email    | Email-Error
 Ready          | submit           | Submitting
 Password-Error | change-password  | Ready
 Email-Error    | change-email     | Ready
 Submitting     | receive-success  | Success

I've built state machines for this purpose using a variety of languages and frameworks. Object Oriented representations are straightforward, but bloated. Here is where Clojure's data literals really shine. A compact representation of our UI's state machine might be as simple as the following EDN.

(def fsm {'Start          {:init             'Ready}
          'Ready          {:missing-password 'Password-Error
                           :missing-email    'Email-Error
                           :submit           'Submitting}
          'Password-Error {:change-password  'Ready}
          'Email-Error    {:change-email     'Ready}
          'Submitting     {:receive-success  'Success}})

Data certainly beats code in this instance, for several reasons, not least of which is our ability to visualize it. Rather than creating a diagram of our state machine by hand, we can easily turn the above Clojure map into a directed graph.

user=> (require 'fsmviz.core)
user=> (fsmviz.core/generate-image fsm "fsmui.png")

Both views -- the Clojure map literal, and the diagram -- help us to understand our design, and how we might improve it. In other words, they are tools to help us reason about our design. Beyond giving us a centralized abstraction which describes our control flow, this design adds flexibility, making it easier for us to modify and extend the UI later. In addition to the fsm defined above, we'll need a helper function.

(defn next-state
  "Updates app-state to contain the state reached by transitioning from the
 current state."
  [app-state transition]
  (let [new-state (get-in fsm [(:state app-state) transition])]
    (assoc app-state :state new-state)))
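To see how the pieces fit, here's a quick sketch (restating the fsm and next-state from above so it runs on its own) that drives the machine through its happy path and shows what happens on an illegal event:

```clojure
(def fsm {'Start          {:init             'Ready}
          'Ready          {:missing-password 'Password-Error
                           :missing-email    'Email-Error
                           :submit           'Submitting}
          'Password-Error {:change-password  'Ready}
          'Email-Error    {:change-email     'Ready}
          'Submitting     {:receive-success  'Success}})

(defn next-state
  [app-state transition]
  (assoc app-state :state (get-in fsm [(:state app-state) transition])))

;; The happy path: Start -> Ready -> Submitting -> Success.
(reduce next-state {:state 'Start} [:init :submit :receive-success])
;; => {:state Success}

;; An event with no transition from the current state yields a nil
;; state -- a natural place to reject (or log) the event.
(next-state {:state 'Submitting} :submit)
;; => {:state nil}
```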

Our app state is simplified to contain only a single state flag.

{:email <string>
 :password <string>
 :state <symbol>}

With our new design, we can lift most of the control flow out of our event handlers. In fact, we can even move the call to next-state into some sort of middleware, interceptor, or other hook, fired on every event. This would allow us to completely distill our event handlers down to their essence.

(defn handle-password-change
  [app-state password]
  (-> app-state
      (assoc :password password)
      (next-state :change-password))) ;; <- This probably moves to a hook.

;; Other event handlers become similarly anemic.
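Here's one way such a hook might look: a hypothetical wrap-transition helper (an illustration, not part of the original design) that pairs each pure handler with its transition, so no handler ever touches next-state directly.

```clojure
;; Minimal recap of the machine and helper so this sketch stands alone.
(def fsm {'Email-Error {:change-email 'Ready}})

(defn next-state
  [app-state transition]
  (assoc app-state :state (get-in fsm [(:state app-state) transition])))

;; Hypothetical middleware: run the pure handler, then apply the
;; corresponding fsm transition to the state it returns.
(defn wrap-transition
  [handler transition]
  (fn [app-state & args]
    (next-state (apply handler app-state args) transition)))

(def handle-email-change
  (wrap-transition
    (fn [app-state email] (assoc app-state :email email))
    :change-email))

(handle-email-change {:state 'Email-Error} "bill@example.com")
;; :state is now Ready, and the handler itself stayed transition-free.
```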

Best of all, our context is now explicit. Adding a spinner is now easy: We only need to look at the current state. The coupling between state attributes disappears. We've only got one attribute related to the state of the UI, :state.

(defn render-form
  [app-state]
  (let [state (:state app-state)]
    [:div
     [:p "Email:"]
     [:input {:value (:email app-state)}]
     [:p "Password:"]
     [:input {:value (:password app-state)}]
     [:button {:on-click handle-submit
               :disabled (not= 'Ready state)
               :image (when (= 'Submitting state) "spinner.png")}]]))

Final Thoughts

The state-machine-based approach to building UIs is both simple and extensible. There's less code. We've not only reduced the size of the app state, we've also removed all of the complex logic that we needed to keep it consistent. All of this comes from a simple abstraction -- the state machine -- that is widely understood by developers. So next time you have a user interface that just won't settle down, think about making your life easier with a simple state machine.

Separation of Concerns in Datomic Query: Datalog Query and Pull Expressions

One concept that newcomers to Clojure and Datomic hear an awful lot about is homoiconicity: the notion that code is data and data is code. This is one of several simple yet powerful concepts whose applications are so prevalent that it's easy to forget just how powerful they are.

One example of this is the choice of Datalog as Datomic's query language. Datalog queries are expressed as data, not strings, which means we can compose them, validate them, and pass them around much more simply than with strings.

When I first started working with Datomic, I found myself writing queries like:

(d/q '[:find [?lname ?fname]
       :in $ ?ssn
       :where
       [?e :person/ssn ?ssn]
       [?e :person/first-name ?fname]
       [?e :person/last-name ?lname]]
     (d/db conn)
     "123-45-6789")

This returns a result like this:

["Murray" "William"]

Without seeing the query that generated this result, you might think it's a collection of first names. Even if you understand it to be the first and last name of one person, you might not know that the person's last name is "Murray" and the person's first name is "William," better known as "Bill."

We can clarify the intent by putting the query results in a map:

(->> (d/q '[:find [?lname ?fname]
            :in $ ?ssn
            :where
            [?e :person/ssn ?ssn]
            [?e :person/first-name ?fname]
            [?e :person/last-name ?lname]]
          (d/db conn)
          "123-45-6789")
     (zipmap [:last-name :first-name]))
;; => {:last-name "Murray" :first-name "William"}

That's a nicer outcome, but we'd have some work to do if we decided to fetch :person/middle-name and add it to the map. Not too much work for that one attribute, but eventually we'd find out that we also need to include :person/ssn as well. And then the :address/zipcode of the :person/address referenced by this person entity, adding several :where clauses and ever-growing lists of logic variables and input bindings.
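To make the bloat concrete, here is roughly what the query grows into (a sketch; the exact attribute set is an assumption for illustration). Since the query is just data, we can even measure the growth:

```clojure
;; The query after adding middle name, ssn, and the address's zipcode.
(def grown-query
  '[:find [?lname ?fname ?mname ?ssn ?zip]
    :in $ ?ssn
    :where
    [?e :person/ssn ?ssn]
    [?e :person/first-name ?fname]
    [?e :person/middle-name ?mname]
    [?e :person/last-name ?lname]
    [?e :person/address ?a]
    [?a :address/zipcode ?zip]])

;; Six :where clauses and a growing list of logic variables,
;; just to fetch one person.
(count (rest (drop-while #(not= :where %) grown-query)))
;; => 6
```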

And then, when we want to search for all the person entities that have the last name "Murray", we have quite a bit of code to either duplicate or extract from the function definition.

Enter pull

The pull API can help here because we can search for an entity using a lookup ref, which is a data structure, and declare a hierarchical selection of attributes we want in a data structure as well:

(d/pull (d/db conn)
        [:person/first-name
         :person/last-name
         {:person/address [:address/zipcode]}]
        [:person/ssn "123-45-6789"])

The result is a Clojure map that looks a lot like the pattern we submitted to pull:

{:person/first-name "William"
 :person/last-name "Murray"
 :person/address {:address/zipcode "02134"}}

See how nicely this separates how we search for the person from the details of the person we want to present? Also, who knew that Bill Murray lived where all the Zoom kids live? (Hint: he probably doesn't.)

But what if we want to find all of the persons that live in "02134"? pull requires an entity id or a lookup reference, so we'd have to search for those separately, and then invoke pull-many, resulting in two separate queries.

Pull expressions in queries

Luckily, Datomic supports pull expressions in queries, so we can find all of the persons that live in the "02134" zip code like this:

(d/q '[:find [(pull ?e [:person/first-name
                        :person/last-name
                        {:person/address [:address/zipcode]}]) ...]
       :in $ ?zip
       :where
       [?a :address/zipcode ?zip]
       [?e :person/address ?a]]
     (d/db conn)
     "02134")

The :where clauses in this example are all about search, and the presentation details are represented in the pull expression. This provides the same clean separation of concerns we get from the pull function, and does it in a single query. Nice!

Now, when the requirement comes in to add the :person/middle-name to results of this query, we can just add it to the pull expression:

(d/q '[:find [(pull ?e [:person/first-name
                        :person/middle-name
                        :person/last-name
                        {:person/address [:address/zipcode]}]) ...]
       :in $ ?zip
       :where
       [?a :address/zipcode ?zip]
       [?e :person/address ?a]]
     (d/db conn)
     "02134")

And, because the pull expression is just data, we can pass it in:

(defn find-by-zip [db zip pull-exp]
  (d/q '[:find [(pull ?e pull-exp) ...]
         :in $ ?zip pull-exp
         :where
         [?a :address/zipcode ?zip]
         [?e :person/address ?a]]
       db
       zip
       pull-exp))

(find-by-zip (d/db conn)
             "02134"
             [:person/first-name
              :person/middle-name
              :person/last-name
              {:person/address [:address/zipcode]}])

And compose it:

(def address-pattern [:address/street
                      :address/city
                      :address/state
                      :address/zipcode])

(find-by-zip (d/db conn)
             "02134"
             [:person/first-name
              :person/middle-name
              :person/last-name
              {:person/address address-pattern}])

Or support a default:

(defn find-by-zip
  ([db zip] (find-by-zip db zip '[*]))
  ([db zip pull-exp]
   (d/q '[:find [(pull ?e pull-exp) ...]
          :in $ ?zip pull-exp
          :where
          [?a :address/zipcode ?zip]
          [?e :person/address ?a]]
        db
        zip
        pull-exp)))

(find-by-zip (d/db conn) "02134")

Now clients can tailor the presentation details based on their specific needs in a declarative way without having any knowledge of the query language itself, but they're not forced to.

Summary

Separation of concerns makes code easier to reason about and refactor. The pull API separates search from attribute selection, but limits search to a known entity identifier. Despite that constraint, it's still a very good fit when you already know the entity id or the value of a unique attribute to use in a lookup ref.

Query supports this same separation of concerns, and it's up to you to write your queries this way, but doing so gets you the same benefits: simpler code that is easier to reason about and refactor. Plus you get the full power of Datalog query!

Clojure for Neovim for Clojure

I started programming in Clojure in 2011, more or less the early medieval period of the language. Back then I did what many other Vim-based programmers did: I wrote code in a Vim buffer while running a REPL in a separate window. If I was feeling ambitious I might use Vimshell. This setup did work, but as a longtime Java and Python programmer I missed the smoothly integrated tooling of those languages.

By 2012 things had gotten better. Leiningen now had support for nREPL, which enabled Tim Pope to create vim-fireplace. Vim-fireplace didn't quite have feature parity with Emacs + Cider, but it was capable. For example, at the 2013 Lambda Jam Chris Ford gave an excellent, highly interactive talk, where he composed music using vim-fireplace.

An End to VimScript Hell

But like rust, programmers never sleep and the day soon came when I was ready to start tinkering with my Clojure tooling. And that's when I came face to face with the 2,000 lines of VimScript that is vim-fireplace. While I'm grateful to Tim Pope for the time and energy he spent creating vim-fireplace, all of that stuff just overwhelmed me. If you have never experienced it, let me put it this way: People have described VimScript as a "baroque, deranged and annoyingly useful mess" and the agonies of the vim-fireplace code seemed to bear this out. So as excited as I was about Lisp and Clojure, taking the time to learn VimScript was not part of the plan.

Everything changed mid-2014 when I discovered Neovim. Neovim is a drop-in replacement for Vim, built for users who want the good parts of Vim -- and more -- without the cruft. Neovim is not a Vim clone so much as its sibling: Neovim rolls in all of the Vim patches and adds some nifty new features. Over time, some of Neovim's features even found their way into Vim.

Neovim also adds some long awaited goodies to Vim:

  • Built in terminal emulator
  • First class embedding
  • Multithreading

But the key feature for me was:

  • RPC API / Remote Plugins

Remote Plugins

Alongside Vim's original in-process plugin model, Neovim also offers a new remote plugin model. Now, plugins can be implemented as arbitrary programs which Neovim runs as co-processes. On top of being able to work asynchronously, remote plugins are also safer: Isolated in a separate process, it's much harder for a remote plugin to block or crash the whole editor. Best of all, since remote plugins are arbitrary programs, you can write them in any language.

Remote plugins establish a direct communication channel to the Neovim process using Neovim's new RPC API, allowing them to:

  • call API functions
  • listen for events from Neovim
  • receive remote calls from Neovim

This new remote API is accessible via a TCP or Unix Domain socket, as well as Standard IO, and uses MessagePack-RPC, an asynchronous protocol. This means other types of "clients" can access the RPC API as well: GUIs, scripts, even another Neovim process!

This new remote API offers functions to do what you'd expect: Read and update buffer contents, set cursor location, and execute any arbitrary Vim commands.

A Clojure API

Immediately after discovering this, I created a Clojure client library for the Neovim API, making it possible to author plugins in Clojure. Finally, I had what I'd wanted: I could write entire plugins in Clojure, or simply fire up a REPL and interact with the running Neovim process. For example, to manipulate the current buffer's text:

$> NVIM_LISTEN_ADDRESS=127.0.0.1:7777 nvim
user=> (require '[neovim-client.nvim :as nvim])
user=> (require '[neovim-client.1.api :as api])
user=> (require '[neovim-client.1.api.buffer-ext :as buffer-ext])

user=> (def conn (nvim/new 1 "localhost" 7777))
user=> (def b (api/get-current-buf conn))

user=> (api/command conn "e README.md") ;; open file
user=> (def lines (buffer-ext/get-lines conn b 0 -1))
user=> (buffer-ext/set-lines conn b 0 (count lines) (map clojure.string/reverse lines))

The example shows the synchronous functions, but the library also provides non-blocking semantics.

Even better, since Neovim can be executed headlessly via nvim --embed, I was able to write tests which exercise the API against an actual Neovim process! with-neovim is a macro which creates & tears down the nvim process (for each test).

(deftest change-buffer-text
  (with-neovim
    (let [{:keys [in out]} *neovim*
          conn (client.nvim/new* 1 in out false)]
      (let [b1 (api/get-current-buf conn)
            _ (api.buffer/set-lines conn b1 0 1 false ["foo"])
            _ (api/command conn "new")
            b2 (api/get-current-buf conn)
            _ (api.buffer/set-lines conn b2 0 1 false ["bar"])]
        (is (= ["foo"] (api.buffer/get-lines conn b1 0 1 false)))
        (is (= ["bar"] (api.buffer/get-lines conn b2 0 1 false)))))))

A Simple Tool

While many of the earlier Clojure integrations like vim-fireplace were based on nREPL, which was fairly complex, life got a lot easier with the Socket server REPL. Introduced with Clojure 1.8, the Socket server REPL meant that you could expose a REPL via a TCP socket from any existing Clojure application, with no additional dependencies, or even code! All you had to do was set a single JVM system property on startup.

$> java ... -Dclojure.server.repl="{:port 5555 :accept clojure.core.server/repl}"

Now, all the necessary pieces were there. With this new, dependency-free REPL, and my Neovim RPC API client library, I was able to create a Socket REPL plugin for Neovim. The plugin simply takes some code from a Neovim buffer, and writes it to the TCP socket connected to the REPL. Whatever it reads from the socket ends up in a "results" buffer.
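The round trip at the heart of the plugin is small enough to sketch with nothing but the Clojure runtime. This toy version (an illustration, not the plugin's actual code) starts a socket REPL in-process -- the programmatic twin of the JVM property shown earlier -- then connects, writes a form, and reads back the reply:

```clojure
(require '[clojure.core.server :as server]
         '[clojure.java.io :as io])

;; Start a socket REPL on an ephemeral port, just as the
;; clojure.server.repl system property would at JVM startup.
(def repl-server
  (server/start-server {:port   0
                        :name   "demo"
                        :accept 'clojure.core.server/repl}))

(defn eval-remotely
  "Write one form to the socket REPL; return the first line it prints."
  [form-str]
  (with-open [sock (java.net.Socket. "127.0.0.1" (.getLocalPort repl-server))
              out  (io/writer sock)
              in   (io/reader sock)]
    (.write out (str form-str "\n"))
    (.flush out)
    (.readLine in)))

(eval-remotely "(+ 1 2)")
;; The reply line carries the prompt and the printed result, e.g. "user=> 3"
```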

As I say, there are no dependencies beyond the Clojure runtime. And it's written in Clojure, not VimScript -- a drastic reduction in complexity.

To be fair, my Neovim integration provides just the basics:

  • Eval the form under the cursor
  • Eval / load-file the current buffer
  • doc the form under the cursor

In addition to having fewer features than vim-fireplace, clojure-socketrepl.nvim does less automatically.

For example, you always have to explicitly :Connect host:port, rather than automatically detecting the port of the nREPL instance presumably started by Leiningen. There is also only ever one REPL Connection, in contrast to vim-fireplace's ability to simultaneously connect to multiple REPLs and determine which one to use based on a buffer's file path.

On the other hand, when you evaluate code using clojure-socketrepl.nvim, Neovim does not block waiting for a response. All plugin interactions are completely asynchronous. Results accumulate in a buffer as they are read from the Socket REPL connection.

There is also no special treatment for standard IO. When you print something in a go-loop, it shows up in your results buffer exactly as you'd expect when interacting with a REPL. Contrast this with vim-fireplace which returns a string representation of the channel (the result of evaluating the go-loop), and swallows the resulting print output.

As you'd expect, you can interactively develop the plugin at the REPL (using the plugin), but you do need to take care that you don't fall all the way down the rabbit hole.

Final Thoughts

This was a fun experiment, and a great way to squash some bugs in the Neovim API client library. I've been using this Socket REPL plugin for my daily development workflow successfully for a few months now. This may not be the tool for everyone, but I find it useful. It is an order of magnitude less complex than the vim-fireplace tool stack, while retaining much of the core functionality.

Far more importantly though, building clojure-socketrepl.nvim has helped strengthen the Neovim API client library for Clojure. My hope is that the client library, and possibly the plugin, will help add momentum to an effort to improve the state of Vim tooling for Clojure.

Developing the Language of the Domain

2U, a company that provides the technology, services and data architecture to transform its clients into digital universities, asked for Cognitect’s help making their operations more efficient. They had a deep client-onboarding pipeline, constrained by the speed of content-creation, and they wanted to eliminate unnecessary bottlenecks in their software systems.

Iteration Zero

During our Iteration Zero, we worked with 2U engineers to sketch out a high-level view of their software architecture, with a particular eye toward information flows. As this diagram grew to cover most of a conference-room wall, it became clear that the architecture had a lot of moving parts and information silos, not surprising for a successful startup after several years of rapid growth.

The key insight of the architecture review was that 2U relied on people to move data between software systems — mostly content, but also configuration, access credentials, and design artifacts. Streamlining these processes meant not only building software systems that could communicate directly with one another, but also incorporating human work-products into an automated workflow.

The Language of the Domain

Working with a mixed team of developers and operations specialists at 2U, we set out to develop a model of the business domain. It started with a simple question: What types of entities — what kinds of stuff — do you work with? Programmers may recognize this as a classic object-oriented design exercise, but we weren’t limited to object-oriented nomenclature or a complex modeling language such as UML. Instead, we took advantage of our primary tools, Clojure and Datomic, to let the customer lead us toward a model that was meaningful to their business.

As we collected answers to these questions, we could quickly encode them in Clojure’s Lisp-like syntax:

(entity University
  "An institution of higher learning"
  (attr :university/shortname string identity #"[a-z]{2,4}"
    "Unique, human-readable abbreviation for this university")
  (attr :university/full-name string non-blank unique
    "Fully-expanded name of the university for public display"))

(entity Program
  "Something offered by a school or college culminating in
  a degree or certificate."
  (attr :program/full-name string non-blank
    "Fully-expanded name of the program for public display"))

(relationship offers
  "Every Program is owned by a single University"
  [Program :program/university University]
  [University :university/programs Program many])

Although this looks like Clojure code, it’s just data. Clojure’s edn reader and clojure.spec made it easy to parse and load this data into a Datomic database. Then it was a simple matter of programming to transform it into a variety of representations.
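As a sketch of that "simple matter of programming" (the function and its behavior here are assumptions for illustration, not 2U's actual code), a single attr form can be transformed into a Datomic schema map along these lines:

```clojure
;; Hypothetical transformation of one (attr ...) form into a Datomic
;; schema map. The real pipeline leaned on edn and clojure.spec; this
;; handles just enough to show the shape of the idea.
(defn attr->schema
  [[_ ident value-type & more]]
  (let [flags (set (filter symbol? more))  ;; e.g. identity, unique
        doc   (last more)]                 ;; docstring comes last
    (cond-> {:db/ident       ident
             :db/doc         doc
             :db/valueType   (keyword "db.type" (name value-type))
             :db/cardinality :db.cardinality/one}
      (flags 'identity) (assoc :db/unique :db.unique/identity
                               :db/index  true)
      (flags 'unique)   (assoc :db/unique :db.unique/value))))

(attr->schema '(attr :university/shortname string identity
                     "Unique abbreviation for this university"))
;; yields a map like the generated schema shown below
```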

The very first artifact we deployed at 2U was an internal web app to explore the domain model, including a graph visualization of entities and relationships.

To gather feedback, we printed a poster-sized version of the same diagram, hung it in a corridor next to a stack of post-its, and asked everyone in the company to suggest improvements.

As we collected feedback, we learned that almost every team had a unique perspective on the business. Some of the most commonly-used terms had widely-divergent definitions. It was not exactly a blind-men-and-elephant scenario: None of the definitions was actually incorrect, but neither were any of them complete.

The fable of the blind men and the elephant: Each perceives only one part, on which he bases his (incorrect) conception of the whole animal. By contrast, one team's understanding of a domain concept may be entirely correct for that team's work, yet still inadequate to represent that concept for the business as a whole.

Many of our collaborators told us this was the most valuable part of the process for them, as it expanded their understanding of the business and their role in it. We were applying agile software development practices to a fundamentally cognitive task. The goal was not to produce software but to better understand the domain, helping 2U to learn about itself. The software behind the model and visualizations was merely a means to that end.

A Foundation in Data

Even while the model was still evolving, Datomic’s flexible approach to schema enabled us to start work on a database of domain knowledge. Since Datomic schemas are themselves expressed as data, we could automate the process of keeping the database in sync with the domain model.

;; Sample Datomic schema elements generated from domain model
[{:db/ident       :university/shortname
  :db/doc         "Unique abbreviation for this university"
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity
  :db/index       true}

 {:db/ident       :program/university
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/one}]

When data is the foundation, repetitive programming tasks can often be replaced by higher-leverage meta-programming. With Datomic, Clojure, and clojure.spec, we took an abstract model and generated a database. Adding Pedestal into the mix, we generated web-service interfaces for managing information about entities in the database. Using ClojureScript, Om, and React, we deployed a series of “mini apps,” rich interactive forms tailored to particular business roles, all sharing the common back-end.

Since 2U was already using Slack for inter-team communication, we integrated a Slack “bot” into various team channels, posting notifications and reminders with direct links into the associated web apps. All of this information flowed back into a dashboard view to help project managers track work-in-progress across the organization.

Lessons Learned

When we started, our stakeholders at 2U had hoped to arrive at a “canonical” data model for a new system which would be the “source of truth” for the rest of the organization. We set out to iterate towards a canonical model on which everyone could agree.

Of course, building consensus takes time. We were fortunate to be paired with an energetic 2U project manager willing to spend long hours asking questions and collecting feedback on our evolving data model. Even so, it took months to arrive at something we felt confident in. Fortunately, the flexibility of Clojure and Datomic allowed us to keep our models up-to-date as we learned about the business domain, even when that required revisiting some of our earliest decisions.

In the end, we realized there wasn’t any one “canonical” model. The various definitions of core business concepts were all equally valid and equally necessary. This led us to our final data model, in which the core entities were represented not as single objects but as faceted collections of views. Each team, each software system could shine a spotlight on one face of the entity, and the sum of those views made up the complete picture. The new services we developed with 2U allowed each team to continue working with the model that best served them while also bridging the communications gap between formerly-siloed systems.

Ultimately, we learned that “canonical” and “truth” are asymptotic targets. In any organization, teams tend to view the rest of the business through the lens of the system they interact with. A diversity of models is inevitable as each team adapts their understanding of the domain to fit the task at hand. For human beings, it’s a necessary cognitive shortcut. For software (and the people who write it) it’s a call to action: to adapt to reality rather than trying to reshape it.

Living the Lean Startup with clojure.spec

I remember when I first read The Lean Startup. Afterwards, it seemed so obvious (as is common with very good ideas). It was all about applying the scientific method to startups: coming up with hypotheses about what you think your product should be, then building tests to validate them and, in the process, avoiding wasted time, effort, and resources. In software terms:

Build the minimum functioning features that can get the needed feedback.

One of the most interesting things that I've seen over the last year is the Lean Startup methodology enhanced by the use of clojure.spec. During an engagement with Reify Health, an innovative healthcare startup, we used it to great effect to get faster feedback and build a better product.

Building Lean

A common modern web application is a single-page application (SPA) paired with a server API. Building out the entire API and front-end application is a good chunk of work, and it requires not only a good understanding of the domain but also knowing what data is going to be important. To really know that takes feedback from users, which requires putting a working system in their hands. From a Lean point of view, this should be done in the smartest way possible with the least waste.

This is where one of the unique aspects of clojure.spec comes in. By creating specs for our data using custom generators, we can make the fake data come to life. For example, consider the spec for a name. We can define a list of sample names:

(def sample-names #{"Arlena" "Ilona" "Randi" "Doreatha" "Shayne" ...})

Then create a spec for the name with a custom generator using it.

(s/def ::name (s/with-gen
                (s/and string? not-empty)
                #(s/gen sample-names)))

Finally, we can generate sample data for it. In this case we take just one, but we could ask for as many as we want.

(ffirst (s/exercise ::name))
;=> "Ilona"

These specs can be composed together in nested data structures that describe the application. Best of all, with the generators, the data is produced in a random but consistent way. This allows us to create a fake local database that can back the SPA and have it function in a realistic way. More importantly, the product owners can get feedback on the application without having to build out the back end yet.
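For illustration, here is a sketch of how such nested specs might compose into a fake database. The entity names are ours, not Reify Health's, and we restate a small ::name spec so the snippet stands alone:

```clojure
(require '[clojure.spec :as s])  ; clojure.spec.alpha under Clojure 1.9 final

(s/def ::name (s/with-gen (s/and string? not-empty)
                #(s/gen #{"Arlena" "Ilona" "Randi"})))
(s/def ::street (s/with-gen (s/and string? not-empty)
                  #(s/gen #{"12 Oak St" "3 Elm Ave" "78 Pine Rd"})))
(s/def ::city (s/with-gen (s/and string? not-empty)
                #(s/gen #{"Durham" "Boston" "Denver"})))
(s/def ::address (s/keys :req-un [::street ::city]))
(s/def ::person (s/keys :req-un [::name ::address]))

;; Seed a fake local "database" with ten realistic-looking people;
;; the SPA reads from this instead of a real back end.
(def fake-db (mapv first (s/exercise ::person 10)))
```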

Building an application this way is smart. The front-end features can evolve rapidly in response to actual user feedback without incurring any unneeded effort on the server side. Eventually, when the feature/hypothesis is proved out, the back end can be constructed and the specs can be leveraged even further by sharing them.

Sharing specs across the front and back end

One of the killer features of Clojure is that you can share code between Clojure and ClojureScript with .cljc files. This means you can also share specs. Sharing specs in this way allows for the validation of the data flowing across the boundaries of the system. For example, the data for a customer endpoint can be validated before it goes across the wire on the server side and again once it hits the front end application. Specs can be validated with the s/valid? function:

(s/valid? spec value)

If a spec fails this validation, an error can be raised that includes a full explanation of what went wrong, using the s/explain-data function:

(s/explain-data spec value)

For a quick example, let's take the case of an endpoint returning a list of customers. Each customer is a map consisting of an id, a name, and a state. Using our ::name from above, we'll create a couple more specs for the customer id and state. Finally, we'll create a spec for the ::customer map and the collection of ::customers.

(s/def ::id int?)

(defn state? [s]
  (-> (filter #(Character/isUpperCase %) s)
      count
      (= 2)))

(s/def ::state (s/and string? state?))
(s/def ::customer (s/keys :req-un [::id ::name ::state]))
(s/def ::customers (s/coll-of ::customer))
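One portability wrinkle worth flagging if these specs are headed for a shared .cljc file: Character/isUpperCase is JVM interop and won't compile in ClojureScript. A reader conditional keeps state? portable (the helper name here is ours):

```clojure
;; Portable uppercase check for a .cljc file: JVM interop on the
;; Clojure side, a regex on the ClojureScript side.
(defn upper-char? [c]
  #?(:clj  (Character/isUpperCase c)
     :cljs (boolean (re-matches #"[A-Z]" (str c)))))

(defn state? [s]
  (-> (filter upper-char? s)
      count
      (= 2)))
```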

At this point, it would be useful to have a function to validate a value against a spec and if it isn't valid, throw an error with the full spec problem.

(defn validate
  [spec value message]
  (when-not (s/valid? spec value)
    (throw (ex-info message (s/explain-data spec value)))))

Now we can validate a list of customers on the API server to make sure they are valid before sending them to the client.

(validate ::customers [{:id 1 :name "Susan" :state "OH"} {:id 2 :name "Brian" :state "CA"}] "Bad customers")
;=> nil

(validate ::customers [{:id 1 :name "Susan" :state "OH"} {:id 2 :name "Brian"}] "Bad customers")
;=> clojure.lang.ExceptionInfo: Bad customers
;   clojure.spec/problems: ({:path [],
;                            :pred (contains? % :state),
;                            :val {:id 2, :name "Brian"},
;                            :via [:mynamespace/customers :mynamespace/customer],
;                            :in [1]})

Likewise, the client can validate the customers with the same spec when it receives them, before doing any processing. Let's take a step back and look at what this lean, spec-driven development process gives us.

Summary

  • Using clojure.spec to generate realistic-looking data allows product owners to get fast feedback from users without incurring the development expense of building out the back end on unproved schema assumptions.
  • Specs can be shared across the front and back end in .cljc files, enabling code reuse and eliminating more wasted effort.
  • Using spec to validate data across system boundaries at both the server and the client ensures application stability and integrity, building a quality product.

After seeing clojure.spec used this way to build an application, it seems obvious (again, like most good ideas). It's a powerful tool for startups, or any business, to build applications in a smart and lean way.

How We Work: Iteration Zero

As a Project Coach at Cognitect, I get to work with awesome people. My colleagues are talented, passionate, and -- occasionally -- opinionated. Along with software and other sorts of engineers, we have people with backgrounds in music, physics, radio announcing, and mathematics. But we have all found a common purpose here at Cognitect: Our customers. I also get to work with the even more diverse crew we are lucky enough to call our customers. So when I start a project, the only thing I can be sure of is that everyone, everyone wants it to succeed.

What goes into a successful software project? We’ve all seen teams of great people tackle hard technical problems and triumph. Sadly, if you’ve been in the software business any length of time, you have also seen teams of great people stumble. At Cognitect we believe that software projects are mainly about people. Look around at the industry today and you can find hard technical problems, problems that can defeat even the brightest. But much more often, project failures are people failures. Far more systems have been brought down by mismatched expectations and personality conflicts than by off-by-one errors or dueling library versions.

If pulling off a successful software project is all about people, then you need to pay attention to the people issues all through the project. Sadly, that’s not how it usually goes: Projects get much more attention in the middle and at the end than they do in the beginning. It’s only human nature: At the beginning of a project, the deadline is as far away as it ever will be. At the start, people are generally relaxed, looking forward to a new challenge. It’s usually in the middle, when commitments and decisions have been made and the pages are flying off the calendar, that we tend to sit up and take notice. But after 23 years of running or coaching projects, I can tell you that the beginning of a project is when you have your biggest opportunity to put your project on the right path.

Before the Project Kick-Off

Have you ever been to a project kick-off meeting that was just a formality to introduce the team, and announce that the project started? Maybe we will vote on a cool code name and if we’re lucky there will be pizza. I’ve been to a lot of those project kick-off meetings and THEY DO NOT WORK.

As I say, the secret to software project success is the people. It’s getting a room full of individuals to work together as a team. And the secret to getting a team to work is to start before the project kick-off meeting. Step one is to figure out who should participate in the project kick-off meeting.

Who should participate in the project kick-off? Obviously, the people who are going to do the work, the developers, designers, architects and anyone else who is going to pitch in should all be there. But you have to cast your net wider than just the people doing the work. You need to ask the key question: Cui bono? Who benefits? Who are you doing this project for? And you have to follow the money. Who is paying? A project kick-off meeting cannot be productive without all the stakeholders and the project sponsor’s participation. Getting all the key players in a room together can be a pain, but if you can’t pull everyone together you may as well vote on the code name while you wait for the pizza.

Notice that I keep saying participate, not attend. Have you ever been in a meeting where a key stakeholder is “attending” the meeting but isn’t really fully participating or even paying attention? I have, and it’s frustrating and disrespectful. Worst of all it’s distracting, distracting to the folks who are really involved.

Eye Zero

At Cognitect, we start our projects with Iteration Zero, or I-0 for short. We call it Iteration Zero because it’s the iteration before the first “real” iteration. The first day of I-0 is the project kick-off meeting. After that, the I-0 activity focuses on learning about the project and business needs, technical details, and the beginnings of a project plan. A typical I-0 lasts two to four days, though we have done them in as little as one day for a small, straightforward project and had one stretch over two weeks for a massive effort.

The critical thing on that first day is to make sure the project goals, values, and motivations are clear to everyone. The whole point is to facilitate good conversations and get everyone participating. I use the classic talking stick technique to make sure that everyone has a say and that no one person (and I’m looking at you, Senior Executive Manager and Chief Architect) dominates the conversation. To get to this shared understanding, I use three project framework exercises that I borrowed from Doug DeCarlo -- the author of eXtreme Project Management -- whom I was lucky enough to work with some years ago. Doug was a mentor to me, and in the years since I’ve tweaked the exercises to fit my needs.

Exercise 1: Who is doing what for whom?

The first exercise seems very simple: It starts with this incomplete sentence:

<Who> will <Do> <What> for <Whom>.

The goal is to fill in the blanks and come up with a one-line description of the project. Kick things off by asking everyone to take a few minutes to come up with their own replacements for the <Who>, <Do>, <What> and the <Whom>.

For example, one version of the finished sentence might be:

The Genesis Application Team (Who) will design & build (Do) a new sales lead management software application with existing clients migrated (What) for the product sales team at the Orange company (Whom)

Or it might be:

The Genesis Application Team (Who) will code (Do) a new sales lead management module (What) for the Orange company operations team to deploy (Whom)

Or:

The Genesis Application Team (Who) will fix (Do) the issues around sales lead management (What) for the Orange company senior management (Whom)


Don’t be surprised at how many versions of the sentence your team produces -- it’s just a starting point. The next step is to get the team to agree on a single version of the sentence. Teams often have the most trouble filling in the <Whom>, but I have seen people struggle over every single word of this simple sentence. But if the project is to succeed we need to know who is doing what for whom. Again, don’t let any one person dictate: This should be a team discussion.

Exercise 2: The project will be complete when…

The goal of the second exercise is to complete the following sentence:

The project will be complete when…

The idea here is not to list all of the features, tasks, and requirements. Instead, focus on the top-level goals, summed up in two to five bullet points. Something like this:

  • The project will be complete when each account executive can log into the application and review the latest invoice status for their clients.
  • The project will be complete when most User Acceptance Testing feedback is implemented and the application is ready to be deployed to Production.
  • The project will be complete when there are no known severity-1 or severity-2 defects.

Again, work on the sentences as a team, and drive towards team consensus. Emphasize that these criteria are important: When there is a check mark next to all the items, you are done. Or to put it another way, the project isn’t done until that last item is checked off.

Exercise 3: Win Conditions

We’ve done fill-in-the-blank and complete-the-sentence, so it must be time for a multiple-choice question. Here’s a list of things we would like to get from our project:

  • Schedule
  • Scope
  • Quality
  • Budget
  • Customer Satisfaction
  • Teamwork / Learning

Ask everyone to take a few minutes to think about what the top three win conditions are for them. Then ask them to share their picks, along with what they think each condition means. Typically the top three win conditions will vary from person to person.

For example, one answer might be:

  1. Budget is the most important thing since we only have $50,000 to spend.
  2. Customer Satisfaction is the next most important thing -- we need to make our users happy!
  3. And then quality: No critical defects.

The point of the exercise is to focus everyone’s mind on what will make the project a success.

The universal first reaction to this exercise is “I choose all of the above.” Sadly, there are times when we need to choose -- say, budget over scope or scope over schedule. Nobody wants to build a system that is behind schedule or lacks important features. As we work on the project we will do our absolute best to ensure it is as good as it can be in every possible way. But in every project I’ve ever been involved in, there has been at least one moment where we have had to choose. And at those moments you need to know what is important. By getting the What’s More Important? discussion out on the table right at the beginning, we won’t find ourselves improvising an answer two days before the deadline.

An equally important goal is to make sure that everyone understands what each of the win conditions means. Sue’s definition of budget might mean within 10% of our estimate, while Carol’s might mean absolutely no more than $50K. Bob might think that hitting the scope goal means getting the minimum viable product up and running in the lab, while his boss might think scope means all of the features deployed on the corporate cloud. Before you walk out of the room on that first day, make sure the team agrees on what the three most important win conditions are and what they mean.

At one of my previous jobs, I joined a team that was already three months into their project. One of the first things I did was run the win condition exercise. The team chose the following:

  1. Schedule
  2. Scope
  3. Quality.

That afternoon I asked the project sponsor, who hadn’t attended the first meeting, to pick her top three win conditions. Her list was this:

  1. Scope
  2. Quality
  3. Teamwork.

Notice that while the rest of the team thought that schedule was the most important thing, it didn’t even make the sponsor’s top three. Good thing you worked that weekend…

But the differences didn’t stop there: To the team, Scope meant this:

We must stick to the feature & functional requirements as defined.

To the project sponsor, Scope -- her top priority-- meant this:

The requirements are changing all the time, and so we need to keep track of them accurately and adjust accordingly.

Once this all came to light, the team realized for the first time that the scope could -- and was expected to -- change throughout the project. Something you might want to know.

This example also shows how important it is to check in on the results of the exercises while the project is going on. If the landscape has changed, it’s important to note that, adjust the project framework, and communicate the changes to ensure everyone involved remains on the same page.

A Framework for Success

As you can see, the focus of the project framework exercises is on the people first, and the other aspects of the project second. The exercises trigger important conversations and realizations, and lead to a shared understanding. They get people asking questions and talking through issues that might otherwise not come up until it is too late, if at all.

Time and time again I’ve found this set of exercises to be the key to a successful project. Time and time again my clients and colleagues have found them to be very useful. Project kick-off is -- emphatically -- not just a formality. A great kick-off is the first step to making your project a success.

State of Clojure 2016 Results and Analysis

Welcome back to the annual State of Clojure survey results. This year we held steady in our response rate, as 2,420 of you took the time and effort to weigh in on your experience with Clojure - as always, we appreciate it very much. And, as always, thanks to Chas Emerick for starting this survey 7 years ago.

Clojure (and ClojureScript) were envisioned as tools that could make programming simple, productive, and fun. They were always aimed squarely at the working developer - someone being paid to solve complicated problems who needed to focus more on the solution and less on the unnecessary complexity surrounding it. While we love the academics, open source developers, and hobbyists who have flocked to Clojure, we are always happy to see signs of commercial adoption.

Last year, we had an outright majority of users (57%) using Clojure at work. This year, that number jumped to 67%.

Commercial Clojure use is for products, not just internal tools

A whopping 60% of respondents who use Clojure at work are building applications for people "outside my organization". We changed the wording of the answers to this question from the 2015 survey, so a direct head-to-head comparison isn't possible. However, in 2015, fully 70% of respondents said their use was for "personal" projects, while 42% said "company-wide/enterprise". This year, only 5% answered "just me". Even without the direct results comparison, the data shows a dramatic shift towards building products.

This year we also introduced a new question, asking what industry or industries people develop for. For commercial users, "Enterprise Software" was the leader (at 22%), followed by "Financial services/fintech", "Retail/ecommerce", "Consumer software", "Media/advertising", and "Healthcare". Everything else was at under 5% reporting. When we dig deeper and look at each of those industries in turn, we find that within each one, "outside my organization" is still the most common answer. In fact, only in "Financial services/fintech" do internal tools come within 15% of "outside my organization".

Clojure users are adopting the public cloud

Last year, 51% of respondents said they were deploying into the public cloud. This year, that number is up to 57%, coming almost entirely at the expense of "traditional infrastructure" (private/hybrid cloud was essentially unmoved). Recently, Rescale released a report estimating that "we are in fact only at about 6% enterprise cloud penetration today" (https://blog.rescale.com/cloud-3-0-the-rise-of-big-compute/). If true, Clojurists in the workforce are considerably ahead of this curve.

There is, unsurprisingly, a heavy correlation between use of the public cloud and developing applications for use "outside my organization". The use of the public cloud also skews heavily towards smaller organizations (companies of fewer than 100 people make up 70% of the public cloud group, while only 55% of the "traditional infrastructure" fell into that category).

There were only two industries where traditional infrastructure dramatically beat public cloud: Government/Military (which seems obvious) and Academia (which seems sad, although it could be a reflection of universities' sunk investment in infrastructure).  And only Telecom had a majority of respondents indicating "private/hybrid", which is almost certainly a reflection of the fact that hybrid cloud offerings are, by and large, products from the Telecom sector.

Clojure has penetrated all kinds of companies, not just startups

If you look at the spread of responses for size of organization, while there is a clear winner (11-100), the split is fairly even otherwise. A full 17% of responses were from companies of 1000+ people.

Web development and open source development are the dominant two domains regardless of company size, but coming in at a strong #3 is "building and delivering commercial services", except when you look at responses from 1000+ companies, in which case "enterprise apps" unsurprisingly moves ahead.

"Enterprise software" is the #1 industry regardless of company size. However, #2 is quite distinctly different across sizes -- in smaller companies (< 100 employees), "consumer software" is the strong #2, whereas for companies > 100 employees, financial services is the dominant #2.

(An interesting aside: most industries show a normal bell curve, with most respondents coming from the middle two categories, 11-100 and 101-1000.

Only two industries show the inverted bell curve, with the most respondents at the edges -- Academia, and Government/Military.

You will note that these are the two industries where "traditional infrastructure" also dominates, so the distribution of respondents either being from the largest [most conservative] and smallest [most disruptive] paints an interesting picture of how industries change.)

One of the biggest barriers to adoption is corporate aversion to new technologies

As was true the last two years, error messages and "hiring and staffing" are the top 2 reasons given for "What has been most frustrating or has prevented you from using Clojure more than you do now?", though both have fallen several percentage points since then. Interestingly, "Need docs/tutorials" has jumped from #5 in 2015 to #3 now, which corresponds well with the continuing growth of new entrants into the community.

When you break down respondents by size, each category is relatively uniform with one glaring exception: for some reason, companies of 100-1000+ people have a problem with the lack of static typing (it is a strong #3 in that cohort). Everyone else has a carbon copy distribution of the overall answers. When you look by industry, the "enterprise software" crowd would clearly benefit from more tools and a better IDE experience.

What we found fascinating was drilling through the free-answer portion of the responses to this question. Next year, we'll be adding a new possible answer: "corporate aversion to new technologies". If it had been captured as one of the main responses, it would have come in #2 or #3 overall. We clearly have work to do as a community to arm the technologists who wish to adopt Clojure with the materials and support they need to overcome internal inertia or resistance. That's an area where we'd love to see more people contributing -- and we'd also love to hear what else Cognitect could provide that would be useful.

Summary

When you dig into these numbers, you see a technology that has been accepted as a viable tool for crafting solutions across industries, company types and sizes, and target domains. As you might expect, adoption of Clojure seems closely correlated with the adoption of other new technologies, like the public cloud, and Clojure is beset with some of the same headwinds, like corporate aversion to new things. We are encouraged by the maturation of the community and of the ability of the technology and its adherents to tackle the hard problems of commercial software development.

Detailed Results

In addition to the big themes above, this section highlights a few of the more interesting results for specific questions in the survey. For details on all questions, see the full results.

Which dialects of Clojure do you use?

The interesting detail here was that the percentage of respondents using ClojureScript rose yet again, such that 2/3 of users are now using both Clojure and ClojureScript together (this has risen continually from about 1/2 three years ago).

Clojure increasingly delivers on the promise of a single unified language stack that can be used to cover an entire application.

Prior to using Clojure, ClojureScript, or ClojureCLR, what was your primary development language?

We've changed the way this question is asked and the options provided several times so it's difficult to assess trends. However, it's clear that developers come to Clojure either from imperative/OO languages (Java, C#, C/C++) or from dynamic languages (Ruby, Python, JavaScript, etc) with only small numbers coming from functional programming languages like Scala, Common Lisp, Haskell, Erlang, etc.

What is your *primary* Clojure, ClojureScript, or ClojureCLR development environment?

Due to the general volatility of tools, it's interesting to see how this changes year to year. However, this year things were mostly pretty static with the three most common choices again Emacs/CIDER, Cursive/IntelliJ, and Vim with no major changes in percent use. Sublime, Light Table, and Eclipse/Counterclockwise all became a bit less common. The most interesting development was the rise in the use of Atom which was a new choice and selected by 6% of respondents.

What Clojure, ClojureScript, or ClojureCLR community forums have you used or attended in the last year?

This was a new question this year, trying to get a sense of how people are interacting with other members of the community. The Clojurians slack channel was the most frequently used - this is a great place to connect with others and has taken the place of IRC for many. About half of respondents are using the original language mailing lists, and almost that many have looked at the Clojure subreddit.

Interestingly, most respondents have not attended either local Clojure meetups or Clojure conferences either in-person or remotely. There are many active Clojure meetups and conferences in the world - if you'd like to talk to other Clojurists, take a look and see if one is near you!

Which versions of Clojure do you currently use in development or production?

Library maintainers are often interested in how quickly users are migrating to newer versions of Clojure as they decide whether they can use new features. We can see in this year's survey that most users (83%) are on the latest stable version (1.8.0), with a third of respondents already using the 1.9 prereleases prior to final release. Less than 5% are using a Clojure version older than Clojure 1.7, which is good news for those who wish to rely on 1.7 features like cljc files or transducers.

What versions of the JDK do you target?

Similar to the prior question, it's useful to track what versions of the JDK are in use in the community. We saw significant consolidation to Java 1.8 over the past year (with Java 1.9 on the horizon) - 95% of users are using it with only about 2% using a version older than Java 1.7. For the moment, Clojure is still supported on Java 1.6 but eventually that support will be dropped.

What tools do you use to compile/package/deploy/release your Clojure projects?

While Leiningen continues to be ubiquitous, boot made significant advances this year, moving from 13% usage to 22% usage.

What has been most frustrating or has prevented you from using Clojure more than you do now?

Error messages continued to be the top frustration for people and we will continue to improve those with the integration of spec in Clojure 1.9. Interestingly, the majority of the other frustrations went down this year compared to last year:

  • Hiring/staffing - from 33% to 30%
  • Scripting - from 33% to 18% (maybe due to the rise of Planck and Lumo)
  • Docs - from 25% to 22% (hopefully the new Clojure and ClojureScript web sites have helped)
  • Static typing - from 23% to 16% (maybe due to the release of spec)
  • Long-term viability - from 20% to 10%
  • Finding libraries - from 16% to 11%
  • Portability - from 10% to 5% (continued uptake of cljc / reader conditionals)

Which JavaScript environments do you target?

The most interesting story here is the rise in three areas:

  • React Native - 18% (new choice this year)
  • Electron - 11% (new choice this year)
  • AWS Lambda - 9% (vs 5% last year)

As JavaScript continues to seep into every area of computing, ClojureScript is following along with it and seeing new and interesting uses. 

Which tools do you use to compile/package/deploy/release your ClojureScript projects?

We saw a small increase in Figwheel this year (after a huge jump following its release), with about 2/3 of ClojureScript users now using it. And as we saw in the prior tools question, there is a big jump in the number of ClojureScript developers using boot (from 15% to 23%).

Which ClojureScript REPL do you use most often?

Again, even more usage of Figwheel here (76%, up from 71% last year). We added Planck this year and it registered at 9%. The Lumo repl was not listed as a choice but did make a showing in the comments.

How are you running your ClojureScript tests?

We added this question to gather some information on what seems like an underserved area of the ecosystem. Of those who responded, we saw:

However, there was a lot of information in the "Other" responses as well. At least 60 people (more than chose the Nashorn option above) responded that they were either not testing at all or were relying on testing their ClojureScript via cljc tests that ran in Clojure. This is a great area for future improvement: there is no real consensus, and many developers are not testing at all. Some other choices seen in the comments were Devcards, Karma, Phantom, and doo.

What has been most frustrating or has prevented you from using ClojureScript more than you do now?

The top answer here was "Using JavaScript libs with ClojureScript / Google Closure", which was a new choice we added this year. David Nolen and the ClojureScript community have been working hard on some of the biggest pain points in this area, which culminated in the recent release of a new ClojureScript version with better support for externs and modules.

Some of the other choices fell in importance this year (similar to Clojure):

  • "Using ClojureScript REPLs" went from 45% to 34% (rise of Figwheel, Planck, Lumo)
  • "Availability of docs" went from 39% to 31% (new ClojureScript web site)
  • "Long-term viability" went from 15% to 10%

Here you can add any final comments or opinions...

The majority of responses (~62%) here either expressed sentiments of happiness or gratitude (always good to see). Other categories centered around expected themes (many are areas of current or future work): docs/tutorials, error messages, tooling, startup time, etc. One relatively stronger theme this year was the need for better marketing for the purposes of expanding or introducing Clojure within organizations, which is a great area for contribution from the entire community.

The data

If you'd like to dig into the results more deeply, you can find the complete set of data from this and former years here:

Thanks again for providing your responses to help form this picture of our growing community!

Unlocking hidden value in your data

"Sustainable competitive advantage has to be won by creating the internal capacity to improve and innovate - fast and without letup." -- Spear, The High-Velocity Edge

Today, we are making available Vase, a tool we use to unleash our team’s data-driven superpowers.

The constant evolution of technology has a direct impact on business: innovate and deliver value or be left behind.  There's a lot of business value buried in your data.  The quicker and easier it is to unlock that data, the faster you can get at that value and use it to do great things for your company and your customers.

At Cognitect, we live and breathe data-driven innovation. Every day we help organizations:

  • Unlock the potential of their data
  • Move into new markets quickly
  • Deliver measurable value

Vase is an example of the ways in which our teams find solutions to these challenges.  Microservices that used to take weeks or months to create take only minutes with Vase. While Vase has proven to be a valuable tool over the two years we have been developing it, it is still beta software and will continue to evolve.

Vase: Data-driven microservices

Vase is a library for writing declarative, data-driven microservices.  A single HTTP service, complete with database integration and data validation, can be created within minutes.

We achieve this acceleration through Vase’s declarative nature: Vase does all of the mundane data-plumbing of a service, so you can focus on delivering value to your customers.  The microservices we build with Vase easily evolve and grow to meet new business demands.  Individual teams can each evolve their Vase services independently, ensuring that no team is blocked from delivering value.

A Vase Service describes three core parts: your data model, data validation, and HTTP API endpoints.  In upcoming blog posts we’ll walk through how to write each of these sections. To get you going in the meantime, we’ve got a Vase “Todo” sample as a guide and other basic documentation.

Getting started with Vase

Details for getting started with Vase can be found on the project’s GitHub page.  The Getting Started guide will take you through project creation (with the provided Leiningen/Boot template) and general development.

We’ll happily answer questions on the Pedestal mailing list, or on the #pedestal Clojurians slack.

Contact us to find out more about how Cognitect's teams of architects and developers can help your organization unlock the potential of your data. 

Creating a spec for destructuring


A while back David Nolen had a thoughtful post about using spec as a tool for thought, which included an exploration of creating a spec for clojure.core/let.

The latest Clojure alpha actually includes a spec for let that covers destructuring and I thought it might be interesting to walk through the details of how it is implemented.

I'll pick up approximately where David left off. A typical let looks like this:

(let [a 1
      b 2]
  (+ a b))

We can define an initial spec for clojure.core/let by splitting it into bindings and body:

(require '[clojure.spec :as s]
         '[clojure.spec.gen :as gen])

(s/fdef let
  :args (s/cat :bindings ::bindings
               :body (s/* any?)))

We then need to more fully define bindings as a vector of individual bindings. Each binding is made of a binding-form and an init-expr that computes the value of the local binding:

(s/def ::bindings (s/and vector? (s/* ::binding)))
(s/def ::binding (s/cat :binding ::binding-form 
                        :init-expr any?))

The expressions can be anything so we leave those as any?. The binding-form is where things get interesting. Let's first allow for binding-form to be just simple (no namespace) symbols. That's enough to create something to work with.

;; WORK IN PROGRESS
(s/def ::binding-form simple-symbol?)

Now that we have a full spec, we can actually try a few things. Let's try an example of conforming our bindings.

(s/conform ::bindings '[a 1, b 2])
;;=> [{:binding a, :init-expr 1} {:binding b, :init-expr 2}]

Looks good! We get back a vector of binding maps broken into the binding and the initial expression.

Now we need to expand our spec to include sequential destructuring and map destructuring.

Sequential destructuring

Sequential destructuring binds a series of symbols to the corresponding elements in the bound value. Optionally, the symbols may be followed by a variadic argument (using &) and/or an alias for the overall sequence (using :as).

Some examples:

;; Sequential destructuring examples:
[a b]
[a b :as s]
[a b & r]

To describe a sequential spec we use the spec regex operators:

;; WORK IN PROGRESS
(s/def ::seq-binding-form
  (s/cat :elems (s/* simple-symbol?)
         :rest  (s/? (s/cat :amp #{'&} :form simple-symbol?))
         :as    (s/? (s/cat :as #{:as} :sym simple-symbol?))))

Let's try it out:

(s/conform ::seq-binding-form '[a b])
;;=> {:elems [a b]}
(s/conform ::seq-binding-form '[a b :as s])
;;=> {:elems [a b], :as {:as :as, :sym s}}
(s/conform ::seq-binding-form '[a b & r])
;;=> {:elems [a b & r]}

Hang on a sec, what happened in the last example? The elems snagged & r as well because & is a symbol. We need to redefine our notion of what a binding symbol is to exclude the symbol &, which is special in the language of destructuring:

;; WORK IN PROGRESS
(s/def ::local-name (s/and simple-symbol? #(not= '& %)))
(s/def ::seq-binding-form
  (s/cat :elems (s/* ::local-name)
         :rest  (s/? (s/cat :amp #{'&} :form ::local-name))
         :as    (s/? (s/cat :as #{:as} :sym ::local-name))))

(s/conform ::seq-binding-form '[a b & r :as s])
;;=> {:elems [a b], :rest {:amp &, :form r}, :as {:as :as, :sym s}}

That's better. But it turns out I've not really been spec'ing the full truth of sequential destructuring. Each of the ::elems can itself be sequentially destructured, and even the rest arg can be destructured.

We need to back up to the beginning and reconsider the definition of ::binding-form to add the possibility of either a ::local-name (our improved simple symbol) or a sequential destructuring form. (We'll add map later.)

(s/def ::local-name (s/and simple-symbol? #(not= '& %)))

;; WORK IN PROGRESS (still missing ::map-binding-form)
(s/def ::binding-form
  (s/or :sym ::local-name
        :seq ::seq-binding-form))

(s/def ::seq-binding-form
  (s/cat :elems (s/* ::binding-form)
         :rest  (s/? (s/cat :amp #{'&} :form ::binding-form))
         :as    (s/? (s/cat :as #{:as} :sym ::local-name))))

Now ::binding-form is a recursive specification. Binding-forms are either symbols or sequential forms, which may themselves contain binding-forms. The registry provides naming indirection which makes this possible.

Let's try our prior example again and see how things have changed.

(s/conform ::seq-binding-form '[a b & r :as s])
;;=> {:elems [[:sym a] [:sym b]], :rest {:amp &, :form [:sym r]}, :as {:as :as, :sym s}}

Our conformed result is a bit more verbose as it now indicates for each binding form what kind of binding it is. While this is more verbose to read, it's also easier to process. Here's how a recursive binding form example looks:

(s/conform ::seq-binding-form '[a [b & c] [d :as e]])
;;=> {:elems [[:sym a]
;;            [:seq {:elems [[:sym b]], :rest {:amp &, :form [:sym c]}}]
;;            [:seq {:elems [[:sym d]], :as {:as :as, :sym e}}]]}

Finally we are ready to look at map destructuring.

Map destructuring

Map destructuring has a number of entry forms that can be used interchangeably:

  • <binding-form> key - for binding either a local name to the value of (get m key) or recursively destructuring that value
  • :keys [key ...] - for binding locals with the same name as each key to the value retrieved from the map using the key as a keyword. In addition the specified keys can be either symbols or keywords and simple or qualified. In all cases, the local that gets bound is a short symbol and the value is looked up as a keyword.
  • :<ns>/keys [key ...] - same as :keys, but where ns is used as the namespace for every key
  • :syms [sym ...] - for binding locals with the same name as each sym to the value retrieved from the map using sym, which may be either simple or qualified.
  • :<ns>/syms [sym ...] - same as :syms, but where ns is used as the namespace for every symbol.
  • :strs [str ...] - for binding locals with the same name as each str to the value retrieved from the map using str as a string, which must be simple.
  • :or {sym expr} - for providing default values for any missing local that would have been bound based on other entries. The keys should always be simple symbols (the same as the bound locals) and the exprs are any expression.
  • :as sym - binds the entire map to a local named sym.
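A few of these entry forms in action (a quick illustrative example of ours, not from the original post):

```clojure
;; n binds (get m :name); :keys binds age (falling back to the :or
;; default, since :age is missing); :as binds the entire map
(let [{n :name, :keys [age], :or {age 0}, :as person} {:name "Ada"}]
  [n age person])
;;=> ["Ada" 0 {:name "Ada"}]
```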

There is a lot of functionality packed into map binding forms, and in fact there are really three different map specs combined into this single spec. We call this a "hybrid" map spec.

The first part describes just the fixed well-known attributes in a typical s/keys spec:

(s/def ::keys (s/coll-of ident? :kind vector?))
(s/def ::syms (s/coll-of symbol? :kind vector?))
(s/def ::strs (s/coll-of simple-symbol? :kind vector?))
(s/def ::or (s/map-of simple-symbol? any?))
(s/def ::as ::local-name)

(s/def ::map-special-binding
  (s/keys :opt-un [::as ::or ::keys ::syms ::strs]))

The second part describes the basic binding form specs (examples like {n :name}), although the left-hand side here can itself be further destructured.

(s/def ::map-binding (s/tuple ::binding-form any?))

And finally we need to handle the new functionality for namespaced key or symbol sets (like :<ns>/keys or :<ns>/syms) which we'll describe here as a map entry tuple:

(s/def ::ns-keys
  (s/tuple
    (s/and qualified-keyword? #(-> % name #{"keys" "syms"}))
    (s/coll-of simple-symbol? :kind vector?)))

Then we can put all of these together into the ::map-binding-form by combining them with an s/merge of the well-known attributes and a description of the possible tuple forms:

;; collection of tuple forms
(s/def ::map-bindings
  (s/every (s/or :mb ::map-binding
                 :nsk ::ns-keys
                 :msb (s/tuple #{:as :or :keys :syms :strs} any?)) :into {}))

(s/def ::map-binding-form (s/merge ::map-bindings ::map-special-binding))

And finally we need to go back and define our parent spec to include map bindings:

(s/def ::binding-form
  (s/or :sym ::local-name
        :seq ::seq-binding-form
        :map ::map-binding-form))

And that's it! Here's an example binding form that shows several features of destructuring:

(s/conform ::binding-form
  '[[x1 y1 :as p1]
    [x2 y2 :as p2]
    {:keys [color weight]
     :or {color :black weight :bold}
     :as opts}])
;;=> [:seq {:elems [[:seq {:elems [[:sym x1] [:sym y1]], :as {:as :as, :sym p1}}]
;;                  [:seq {:elems [[:sym x2] [:sym y2]], :as {:as :as, :sym p2}}]
;;                  [:map {:keys [color weight]
;;                         :or {color :black, weight :bold}
;;                         :as opts}]]}]

Now that we have a spec for destructuring, we can reuse it anywhere destructuring is allowed - in fn, defn, for, etc. We could even leverage it to implement destructuring itself. Rather than recursively parsing the binding form, we could simply conform it to receive a more regular structure described in terms of the parts we've defined in the spec.
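As a small, self-contained taste of that idea (using just the simple-symbol subset of the specs, so the snippet stands alone), conformed bindings become plain data we can process:

```clojure
(require '[clojure.spec :as s])   ;; clojure.spec.alpha in later releases

;; the simple-symbol subset of the specs from this post
(s/def ::binding-form simple-symbol?)
(s/def ::binding (s/cat :binding ::binding-form
                        :init-expr any?))
(s/def ::bindings (s/and vector? (s/* ::binding)))

;; conforming turns the raw binding vector into regular data;
;; for example, pulling out all of the bound local names
(map :binding (s/conform ::bindings '[a 1 b (+ a 1)]))
;;=> (a b)
```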

The Next Five Years of ClojureScript

I delivered a talk at the well-attended ClojuTRE conference in Tampere, Finland this past September titled "The Next Five Years of ClojureScript". The ClojureScript community is growing at a healthy clip and many recent adopters are unaware that the ClojureScript development effort is so mature. I decided it was time to highlight how far the project has come and celebrate the incredible work of the community. While it's certainly true that Cognitect has led and continues to lead core development, it's the community that has collectively delivered on the promise of the project by filling in so many important details. And of course, outside of core development there's been an unbelievable amount of broader open source activity to ensure ClojureScript is able to achieve the reach Rich Hickey talked about five years ago. Whether web browser, iOS, or Android - the ClojureScript community is bringing the simplicity of Clojure where we need it most.

The talk ends with some thoughts about ClojureScript looking ahead into the next five years. In many ways ClojureScript has been and continues to be ahead of the JavaScript mainstream with respect to best practices. Concepts which are only starting to break into the mainstream, such as immutability, single-atom application state, and agile UI development via robust hot-code reloading, are old news to ClojureScript users. And thanks to the underappreciated Google Closure compiler, ClojureScript offers features like dead code elimination and precise code splitting that popular JavaScript tooling is unlikely to achieve anytime in the near future.

Still, despite some of these continuing issues, the JavaScript ecosystem offers many riches, and looking at 2017 we'll be focusing on deeper integration with the various JavaScript module formats. As with Clojure and Java, a core ClojureScript value proposition is a simpler programming model that allows users to frictionlessly integrate solutions from a vast ecosystem that precisely fits their needs.

Works on My Machine: Self Healing Code with clojure.spec

Works On My Machine is the place where Cognitects reflect on tech, culture, and the work we do. The views expressed on Works On My Machine are those of the author and don’t necessarily reflect the official position of Cognitect.

How can we make code smarter? One way is to make it more resilient to errors. Wouldn't it be great if a program could recover from an error and heal itself? Such code would be able to rise above the mistakes of its humble programmer and make itself better.

The prospect of self-healing code has been heavily researched and long sought after. In this post, we will take a look at some of the key ingredients from research papers. Then, drawing inspiration from one of them, we will attempt an experiment in Clojure using clojure.spec.

Self Healing Code Ingredients

The paper Towards Design for Self-healing outlines a few main ingredients that we will need.

  • Failure Detection - This one is pretty straightforward. We need to detect the problem in order to fix it.
  • Fault Diagnosis - Once the failure has been detected, we need to be able to figure out exactly what the problem was so that we can find a solution.
  • Fault Healing - This involves the act of finding a solution and fixing the problem.
  • Validation - Some sort of testing that the solution does indeed solve the problem.

With our general requirements in hand, let's turn to another paper for some inspiration for a process to actually achieve our self healing goal.

Self Healing with Horizontal Donor Code Transfer

MIT developed a system called CodePhage, inspired by horizontal gene transfer, the biological process by which genetic material moves between different organisms. CodePhage uses a "horizontal code transfer system" that fixes software errors by transferring correct code from a set of donor applications.

This is super cool. Could we do something like this in Clojure?

Clojure itself has the fundamental ability, through macros, to let code modify itself. Programs can make programs! That is a key building block, but clojure.spec is something new and brings several other advantages that we can use:

  • clojure.spec gives code the ability to describe itself. With it we can describe the data the functions take as input and output in a concise and composable way.
  • clojure.spec gives us the ability to share these specifications with other code in the global registry.
  • clojure.spec gives us the ability to generate data from the specifications, so we can make example data that fits the function's description.
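That "programs can make programs" ability is easy to demonstrate, and it is exactly the mechanism the healing code will lean on later (a minimal sketch of ours):

```clojure
;; a function with a "broken" definition...
(defn answer [] :wrong)

;; ...replaced at runtime by evaluating freshly constructed code,
;; the same eval-plus-def trick the self-healing code uses
(eval '(defn answer [] :right))

(answer)
;;=> :right
```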

With the help of clojure.spec, we have all that we need to design and implement a self-healing code experiment.

Self Healing Clojure Experiment

We'll start with a simple problem.

Imagine a programmer has to write a small report program. It will be a function called report that is made up of three helper functions. It takes in a list of earnings and outputs a string summary of the average.

(defn report [earnings]
  (-> earnings
      (clean-bad-data)
      (calc-average)
      (display-report)))

The problem is that our programmer has made an error in the calc-average function. A divide by zero error will be triggered on a specific input.

Our goal will be to use clojure.spec to find a matching replacement function from a set of donor candidates.


Then replace the bad calc-average function with a better one, and heal the report function for future calls.


The Setup

Let's start with the report code. Throughout the code examples I will be using clojure.spec to describe the function and its data. If you haven't yet looked at it, I encourage you to check out the spec Guide.

The first helper function is called clean-bad-data. It takes in a vector of anything and filters out only those elements that are numbers.

(defn clean-bad-data [earnings]
  (filter number? earnings))

Let's create a couple of specs to help us describe it. The first, earnings, will be a sequence (for the params) containing a collection of anything.

(s/def ::earnings (s/cat :elements (s/coll-of any?)))

The next spec, for the output of the function, we will call cleaned-earnings. It is going to have a custom generator for the purposes of this experiment, which will constrain the generator to returning only the value [[1 2 3 4 5]] as its example data[^1].

(s/def ::cleaned-earnings (s/with-gen
                            (s/cat :clean-elements (s/coll-of number?))
                            #(gen/return [[1 2 3 4 5]])))

An example of running the function is:

(clean-bad-data [1 2 "cat" 3])
;=> (1 2 3)

If we call spec's exercise on it, it will return the custom sample data from the generator.

(s/exercise ::cleaned-earnings 1)
;=> ([[[1 2 3 4 5]] {:clean-elements [1 2 3 4 5]}])

Now we can spec the function itself with s/fdef. It takes the earnings spec for the args and the cleaned-earnings spec for the return value.

(s/fdef clean-bad-data
        :args ::earnings
        :ret ::cleaned-earnings)

We will do the same for the calc-average function, which has the flaw vital to our experiment: if we pass it an empty vector of earnings, the count will be zero, resulting in a runtime divide-by-zero error.

(defn calc-average [earnings]
  (/ (apply + earnings) (count earnings)))

(s/def ::average number?)

(s/fdef calc-average
    :args ::cleaned-earnings
    :ret ::average)

Finally, we will create the remaining display-report function and finish spec'ing the report function.

(s/def ::report-format string?)

(defn display-report [avg]
  (str "The average is " avg))

(s/fdef display-report
        :args (s/cat :elements ::average)
        :ret ::report-format)

(defn report [earnings]
  (-> earnings
      (clean-bad-data)
      (calc-average)
      (display-report)))

(s/fdef report
        :args ::earnings
        :ret ::report-format)

Giving it a test drive:

(report [1 2 3 4 5])
;=> "The average is 3"

And the fatal flaw:

(report [])
;=>  EXCEPTION! Divide by zero

Now we have our problem set up. Next we need our donor candidates.

The Donor Candidates

We are going to keep them in a separate namespace. There will be a number of them, all spec'ed out with s/fdef. Some of them will not match our spec at all. Those bad ones include:

  • bad-calc-average It returns the first number in the list and doesn't calculate the average at all.
  • bad-calc-average2 It calculates a correct average, but returns the result as a string. It won't match the spec of our calc-average function.
  • adder It takes a number and adds 5 to it. It also won't match the spec of calc-average.

There is a matching function called better-calc-average that matches the spec of our calc-average function and has the additional check for divide by zero.

(s/def ::numbers (s/cat :elements (s/coll-of number?)))
(s/def ::result number?)

(defn better-calc-average [earnings]
  (if (empty? earnings)
    0
    (/ (apply + earnings) (count earnings))))

This is the one that we will want to use to replace our broken one.
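For the experiment to find it, better-calc-average must be spec'ed and registered like the original. A sketch of the presumed registration (with the specs and defn repeated so this snippet is self-contained):

```clojure
(require '[clojure.spec :as s])   ;; clojure.spec.alpha in later releases

;; the candidate specs and donor from the post, repeated here
(s/def ::numbers (s/cat :elements (s/coll-of number?)))
(s/def ::result number?)

(defn better-calc-average [earnings]
  (if (empty? earnings)
    0
    (/ (apply + earnings) (count earnings))))

;; the presumed registration that puts the donor's spec in the registry
(s/fdef better-calc-average
        :args ::numbers
        :ret ::result)
```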

We have the problem. We have the donor candidates. All we need is the self-healing code to detect the problem, select and validate the right replacement function, and replace it.

The Self Healing Process

Our process is going to go like this:

  • Try the report function and catch any exceptions.
  • If we get an exception, look through the stack trace and find the failing function name.
  • Retrieve the failing function's spec from the spec registry
  • Look for potential replacement matches in the donor candidates
    • Check the failing input against both the original function's and the candidate's :args specs and make sure it is valid for both
    • Call the candidate with the failing input and check that its result is valid against both the original function's and the candidate's :ret specs
    • Call spec's exercise for the original function to get a seed value, and check that the candidate function returns the same result as the original function when called with that seed value
  • If a donor match is found, then redefine the failing function as the new function. Then call the top-level report form again, this time using the healed function.
  • Return the result!

(ns self-healing.healing
  (:require [clojure.spec :as s]
            [clojure.string :as string]))

(defn get-spec-data [spec-symb]
  (let [[_ _ args _ ret _ fn] (s/form spec-symb)]
       {:args args
        :ret ret
        :fn fn}))

(defn failing-function-name [e]
  (as-> (.getStackTrace e) ?
    (map #(.getClassName %) ?)
    (filter #(string/starts-with? % "self_healing.core") ?)
    (first ?)
    (string/split ? #"\$")
    (last ?)
    (string/replace ? #"_" "-")
    (str *ns* "/" ?)))

(defn spec-inputs-match? [args1 args2 input]
  (println "****Comparing args" args1 args2 "with input" input)
  (and (s/valid? args1 input)
       (s/valid? args2 input)))

(defn- try-fn [f input]
  (try (apply f input) (catch Exception e :failed)))

(defn spec-return-match? [fname c-fspec orig-fspec failing-input candidate]
  (let [rcandidate (resolve candidate)
        orig-fn (resolve (symbol fname))
        result-new (try-fn rcandidate failing-input)
        [[seed]] (s/exercise (:args orig-fspec) 1)
        result-old-seed (try-fn orig-fn seed)
        result-new-seed (try-fn rcandidate seed)
    (println "****Comparing seed " seed "with new function")
    (println "****Result: old" result-old-seed "new" result-new-seed)
    (and (not= :failed result-new)
         (s/valid? (:ret c-fspec) result-new)
         (s/valid? (:ret orig-fspec) result-new)
         (= result-old-seed result-new-seed))))

(defn spec-matching? [fname orig-fspec failing-input candidate]
  (println "----------")
  (println "**Looking at candidate " candidate)
  (let [c-fspec (get-spec-data candidate)]
    (and (spec-inputs-match? (:args c-fspec) (:args orig-fspec) failing-input)
         (spec-return-match? fname c-fspec orig-fspec failing-input candidate))))

(defn find-spec-candidate-match [fname fspec-data failing-input]
  (let [candidates (->> (s/registry)
                        keys
                        (filter #(string/starts-with? (namespace %) "self-healing.candidates"))
                        (filter symbol?))]
    (println "Checking candidates " candidates)
    (some #(if (spec-matching? fname fspec-data failing-input %) %) (shuffle candidates))))


(defn self-heal [e input orig-form]
  (let [fname (failing-function-name e)
        _ (println "ERROR in function" fname (.getMessage e) "-- looking for replacement")
        fspec-data (get-spec-data (symbol fname))
        _ (println "Retrieving spec information for function " fspec-data)
        match (find-spec-candidate-match fname fspec-data [input])]
    (if match
      (do
        (println "Found a matching candidate replacement for failing function" fname " for input" input)
        (println "Replacing with candidate match" match)
        (println "----------")
        (eval `(def ~(symbol fname) ~match))
        (println "Calling function again")
        (let [new-result (eval orig-form)]
          (println "Healed function result is:" new-result)
          new-result))
      (println "No suitable replacement for failing function " fname " with input " input ":("))))

(defmacro with-healing [body]
  (let [params (second body)]
    `(try ~body
          (catch Exception e# (self-heal e# ~params '~body)))))

What are we waiting for? Let's try it out.

Running the Experiment

First we call the report function with a non-empty vector.

(healing/with-healing (report [1 2 3 4 5 "a" "b"]))
;=>"The average is 3"

Now, the big test.

(healing/with-healing (report []))
; ERROR in function self-healing.core/calc-average Divide by zero -- looking for replacement
; Retrieving spec information for function  {:args :self-healing.core/cleaned-earnings, :ret :self-healing.core/average, :fn nil}
; Checking candidates  (self-healing.candidates/better-calc-average self-healing.candidates/adder self-healing.candidates/bad-calc-average self-healing.candidates/bad-calc-average2)
; ----------
; **Looking at candidate  self-healing.candidates/better-calc-average
; ****Comparing args :self-healing.candidates/numbers :self-healing.core/cleaned-earnings with input [[]]
; ****Comparing seed  [[1 2 3 4 5]] with new function
; ****Result: old 3 new 3
; Found a matching candidate replacement for failing function self-healing.core/calc-average  for input []
; Replacing with candidate match self-healing.candidates/better-calc-average
; ----------
; Calling function again
; Healed function result is: The average is 0
;=>"The average is 0"

Since the function is now healed we can call it again and it won't have the same issue.

(healing/with-healing (report []))
;=>"The average is 0"

It worked!

Taking a step back, let's take a look at the bigger picture.

Summary

The self-healing experiment we did was intentionally very simple. We didn't include any validation on the :fn component of the spec, which would give us yet another layer of compatibility checking. We also only checked one seed value from the spec's exercise generator. If we wanted to, we could have checked 10 or 100 values to ensure the replacement function's compatibility. Finally, as mentioned in the footnote, we neglected to use any of spec's built-in testing check functionality, which would have identified the divide-by-zero error before it happened.

Despite being just a simple experiment, I think it shows that clojure.spec adds another dimension to how we can solve problems in self-healing and other AI areas. In fact, I think we have just scratched the surface of all sorts of new and exciting ways of looking at the world.

For further exploration, there is a talk from EuroClojure about this, as well as about using clojure.spec with Genetic Programming.

[^1]: The reason for this is that if the programmer in our made-up example didn't have the custom generator and ran spec's check function, it would have reported the divide-by-zero error and we would have found the problem. Just like in the movies, where if the protagonist had just done x there would be no crisis that would require them to do something heroic.

2016 State of Clojure Community Survey Now Open


It's time for the annual State of Clojure Community survey!

If you are a user of Clojure, ClojureScript, or ClojureCLR, we are greatly interested in your responses to the following survey:

State of Clojure 2016

The survey contains four pages:

  1. General questions applicable to any user of Clojure, ClojureScript, or ClojureCLR
  2. Questions specific to the JVM Clojure (skip if not applicable)
  3. Questions specific to ClojureScript (skip if not applicable)
  4. Final comments

The survey will close December 23rd. We will release all of the data and our analysis in January. We are greatly appreciative of your input!


A Major Datomic Update

The latest release of Datomic includes some additive new features to enable more architectural flexibility for our customers, especially those building microservices platforms and projects.  With the advent of the new Client API, users have much more choice when it comes to their deployment topology.  I am also very pleased to announce the new simplified pricing model: Starter for explorers, Pro for production use, and Enterprise for customized licensing/support.  Customers at each level will now have access to identical features, including unrestricted Peer counts per Transactor.  For more, see the official announcement.

Works on My Machine: How We Work: Distributed

Working for a distributed company -- Cognitect is scattered across much of the United States and Europe -- does have its ups and downs. I love not having to commute. But I miss hanging out with my coworkers, live and in person. I love that my office is just upstairs, in that spare bedroom. But sometimes I wish I could put more distance between my job and the rest of my life.  I love that the Internet lets me talk to just about anyone, anywhere. And sometimes I wish I could throw my computer, complete with its bogged-down network connection, out the window.

Working for a distributed company also means that I get asked "the question" a fair bit. Actually the question is really a family of questions: "What's it like?" is a common variation. So is "Isn't it hard to get things done?" Then there is "What skills do I need to work remotely?" and, of course, "How do I talk my boss -- or potential boss -- into this?"

Interactive Development with Clojure.spec

clojure.spec provides seamless integration with clojure.test.check's generators. Write a spec, get a functioning generator, and you can use that generator in a REPL as you're developing, or in a generative test.

To explore this, we'll use clojure.spec to specify a scoring function for Codebreaker, a game based on an old game called Bulls and Cows, a predecessor to the board game Mastermind. You might recognize this exercise if you've read The RSpec Book; however, this version will be a bit different.
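As a taste of that workflow, here is a minimal sketch of the spec-to-generator path (the color palette and spec names are illustrative, not the article's actual Codebreaker code; this uses the released `clojure.spec.alpha` namespace and requires test.check on the classpath):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; Illustrative specs for a Codebreaker-style secret code:
;; four pegs, each drawn from a small set of colors.
(s/def ::color #{:red :green :blue :yellow :orange :purple})
(s/def ::code (s/coll-of ::color :count 4))

;; The spec validates data...
(s/valid? ::code [:red :green :blue :yellow])   ;; => true
(s/valid? ::code [:red :green])                 ;; => false (wrong count)

;; ...and, with no extra work, doubles as a generator you can
;; sample at the REPL or plug into a generative test.
(gen/sample (s/gen ::code) 3)
```

Every sampled value is guaranteed to satisfy the spec that produced it, which is what makes the REPL exploration loop so tight.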

Agility & Robustness: Clojure spec

You can program with high agility and end up with a robust, maintainable program. This talk will show you how to use Clojure and the new spec library to write programs that behave as expected, meet operational requirements, and have the flexibility to accommodate change.

Works On My Machine: Understanding Var Bindings and Roots

When designing applications and systems, it can be important to understand the inner workings of certain aspects of the language you are using. One area of Clojure that is traditionally opaque and poorly understood is the inner workings of Vars and how they interact with the rest of the language. I recently encountered some behavior that seemed puzzling:
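For background, a minimal sketch of the distinction between a Var's root binding and a thread-local binding (this is context, not necessarily the puzzling behavior the article goes on to examine):

```clojure
;; A dynamic Var starts with a root binding.
(def ^:dynamic *level* :root)

(defn report
  "Reads *level* at call time, so it sees whichever binding is in effect."
  []
  *level*)

(report)              ;; => :root
(binding [*level* :bound]
  (report))           ;; => :bound (thread-local binding shadows the root)
(report)              ;; => :root  (the root binding is untouched)
```

The subtlety is that `binding` is per-thread and dynamically scoped, so code running on other threads (or deferred past the `binding` form) sees the root value instead.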

Focus on spec: Combining specs with s/or

In our last post, we looked at `s/and`, a way to combine multiple specs into a compound spec. It should come as no surprise that spec also provides `s/or` to represent a spec made up of two or more alternatives.
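A small example of the shape `s/or` takes (the spec names here are illustrative, in the style of the spec guide): each alternative gets a keyword tag, and `s/conform` reports which branch matched.

```clojure
(require '[clojure.spec.alpha :as s])

;; Two tagged alternatives: a string name or an integer id.
(s/def ::name-or-id (s/or :name string?
                          :id   int?))

(s/valid? ::name-or-id "abc")   ;; => true
(s/valid? ::name-or-id 42)      ;; => true
(s/conform ::name-or-id 42)     ;; => [:id 42]
```

The tags are what make `s/or` more than a plain `or` of predicates: conformed values carry the tag, so downstream code can dispatch on which alternative was taken.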

Focus on spec: Combining Specs with `and`

Clojure's new spec library provides the means to specify the structure of data and functions that take and return data. In this series, we'll take one Clojure spec feature at a time and examine it in more detail than you can find in the spec guide.

In our last post, we explored the simplest specs: predicate functions and sets. In this post we'll look at how you can start to combine specs using the and spec.
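A one-line taste of `s/and` (this particular spec appears in the official spec guide): the value must satisfy every predicate, checked left to right.

```clojure
(require '[clojure.spec.alpha :as s])

;; All three predicates must hold for the value to conform.
(s/def ::big-even (s/and int? even? #(> % 1000)))

(s/valid? ::big-even :foo)    ;; => false (not an int)
(s/valid? ::big-even 10)      ;; => false (even, but not > 1000)
(s/valid? ::big-even 100000)  ;; => true
```

Ordering matters: because `int?` is checked first, the later predicates can safely assume they receive an integer.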

The New Normal: Tempo, Flow, and Maneuverability

Tempo. Most people are familiar with it in the musical sense: the speed, cadence, and rhythm at which the music is played. It drives the music forward -- and pulls it back.

But there’s more to tempo than a musical beat. In life, as author Venkatesh Rao describes in his book “Tempo,” shifts in tempo -- faster or slower -- make for some of the most memorable moments. In war, as in business, tempo -- the speed at which you can transition from one task to the next -- is a critical component of victory.