Transcript 117: Rebecca Parsons and Neal Ford

THE COGNICAST TRANSCRIPTS

EPISODE 117

In this episode, we talk to Rebecca Parsons and Neal Ford about evolutionary architecture, their upcoming book, and the Anita Borg Award. 

The complete transcript of this episode is below.

The audio of this episode of The Cognicast is available here.

Transcript

RUSS:    Hello, and welcome to Episode 117 of The Cognicast, the podcast by Cognitect, Inc. about software and the people who create it.  

This week we have a very special treat.  Before he went off to take on some new challenges, our original host, Craig Andera, was kind enough to pre-record a few episodes to help us through with the transition.  It’s with just a bit of sadness that I introduce this, the last Craig Andera episode of The Cognicast.

This week Craig is going to be talking to Rebecca Parsons and Neal Ford.  But before we get started, I do have a few events to mention.  First, there’s Clojure Bridge happening in Helsinki on January 28th.  In case you don’t know, Clojure Bridge is dedicated to increasing diversity within the programming community by offering free, beginner-friendly Clojure programming workshops to people from underrepresented groups.  I can tell you from personal experience that Clojure Bridge workshops are a lot of fun as well.

There’s also a Clojure Bridge happening in Amsterdam on February 11th, and there’s a Clojure Bridge happening in Buenos Aires, Argentina on March 10th and 11th.  You can find out more about these Clojure Bridge events and about Clojure Bridge in general by pointing your browser at ClojureBridge.org.

Finally, the Dutch Clojure Day will be held in Amsterdam on March 25th.  This free, one-day event describes itself as the annual gathering of Clojure enthusiasts and practitioners in the Netherlands.  Go to clojure.org/community/events for more information.

If you have a Clojure related event you would like us to mention, please drop us a line at podcast@cognitect.com.  

Well, that about wraps it up, so on to Episode 117 of The Cognicast. 

[Music: "Thumbs Up (for Rock N' Roll)" by Kill the Noise and Feed Me]

NEAL:    Yeah.

REBECCA:    Yep.

CRAIG:    Well, great.  Let’s go, then.  All right, everybody.  Welcome.  Today is Monday, November 21st in 2016, and this is The Cognicast. 

Today we are extremely pleased, extremely pleased to welcome to the show two guests, two ThoughtWorkers.  I’m talking of course about Dr. Rebecca Parsons, the CTO of ThoughtWorks, and Neal Ford, a returning guest, another ThoughtWorker.  In fact, he has the wonderful, wonderful title of Meme Wrangler at ThoughtWorks, but I’ll let him maybe speak to that a little bit more.  But before we go any further, let me repeat, welcome, Neal, and welcome, Rebecca, to the show. 

NEAL:    Thanks for having us.

REBECCA:    Thanks, Craig.

CRAIG:    Yeah, so we actually have a number of really interesting topics, I think, today.  We’ve been talking about doing the show for quite some time, and I’m glad the timing finally made sense and that you were able to take some time out to meet with us today.

We start every show with a question about art.  Specifically, we ask one of our guests to communicate some experience of art, whatever that might be.  It could be a painting, a book, a piece of music, really anything, a sunset, anything at all.  We flipped a coin or arm wrestled or--I don’t know--something before the show and we said, well, Neal, you’re going to take our opening question here.  So share with us, please, some experience of art.

NEAL:    Well, I volunteered for this one because I have a distinctive story about this.  It started with a delayed flight in the Frankfurt airport.  I was in Frankfurt trying to fly home and the flight was delayed, so I was killing time.  They had an exhibit upstairs, a temporary exhibit from Guggenheim Berlin.  Okay, that’s cool, you know.  

It’s like three rooms full of art.  I went up there, and they had a painting by Mark Rothko.  I don’t know if you’re familiar with Mark Rothko.  He’s a 1950s abstract expressionist.  Mostly he painted fuzzy rectangles.  

I looked at that, and I said, “Okay.  I don’t get it.  I don’t understand why this is art.  I don’t understand why this is hanging in a museum somewhere.”  It annoyed me enough that I took a deep dive and actually learned a whole bunch of stuff about modern art.  

Now I understand why that’s art, why you have to understand the context of where it comes from, and what Rothko was trying to do.  That kind of happenstance of being delayed on my flight led me down this really interesting path.  And so I’m quite the enthusiast of modern art now and go to a lot of museums because of that deep dive that I did.  

CRAIG:    Well, that is awesome.  In fact, I’m well familiar with your interest in Mark Rothko because--even though it was many, many years ago at this point, maybe four years ago when you were last on the show, as our listeners might be aware, we do a custom cover for every show--you chose as the inspiration for your cover, Mark Rothko.  Our artist, Michael Parenteau, did what I thought was a really nice interpretation of Rothko’s style into the cover for your show.  I remember that one.  

NEAL:    I remember that.  Yeah, I appreciated that.  That was a nice touch.

CRAIG:    Yeah.  Well, that’s cool stuff.  Yeah, so it’s always tempting to just talk on and on about art, but I think the two of you have been working on a number of interesting things together and separately.  Let’s maybe turn to those.  Specifically, maybe we could start with the thing that kind of got us talking about doing a show in the first place, not that there weren't like 20 things that we could talk about.  

It turns out the two of you are writing a book together, which is why we said, oh, we should have Neal and Rebecca on the show to talk a bit about the book.  Rather than me trying to describe it any further, I will throw it to whichever of you would like to speak up first about this book that to me sounds very, very interesting indeed.

REBECCA:    Okay.   Well, I’ll start with that.  I’ve been talking with Neal for several years about this idea of evolutionary architecture.  In fact, I listened to a talk that Neal was giving.  I don’t even remember what conference it was now.  He was actually talking about emergent architecture.  

In true ThoughtWorks style, Neal and I had to have a discussion about that, and I felt quite strongly that although I think the concept of emergent design is the right one, talking about emergent architecture made it just seem too ad hoc.  I was always talking about it from the concept of evolutionary architecture.  The distinction that I draw there is that when you think about evolutionary computation, things like genetic algorithms, genetic programming, et cetera, you have an objective fitness function, something that constitutes your goal, and you’re attempting, through the various operations and such, to move closer and closer to that goal.  

That’s how we’re starting to think about evolutionary architecture: to say that, for a particular system and a particular organization at a particular time, we can actually define a fitness function for what we’re trying to achieve in the architecture.  That is the notion of evolutionary architecture.  And so with another colleague of ours, Pat Kua, from the U.K., we are putting together a book on evolutionary architecture to describe the techniques that support an evolutionary architecture, what it takes to create and maintain one, what these fitness functions look like and what role they play in an architecture, as well as some of the team considerations and organizational structure and such.  Actually, and I’ll turn it over to Neal for this, one of the important parts was being able to come up with a concise definition of what evolutionary architecture is.  

NEAL:    Mm-hmm.  That was actually when we -- like she said, the three of us have been talking about different aspects of this for a while.  When we finally got together, pooled all of our stuff and put it together, and then started thinking hard about a definition, that really made us understand the implications of this a lot more deeply.  I can give you our working definition, which all the rest of our stuff is based on: an evolutionary architecture supports incremental guided change as a first principle across multiple dimensions.  Let me pull apart different pieces of that definition.  

The first is the guided change part of this definition.  That’s what Rebecca was talking about when she was talking about fitness functions.  When you think about the act of software architecture, one of the things that distinguishes architecture from design work is these nonfunctional requirements or quality of service attributes or “ilities” that you have to define.  

When you choose a particular architectural pattern like a layered monolith or microkernel or microservices or service oriented architecture, you’re choosing that architectural pattern at least in part, and in many cases in large part, because of the ilities that you need to support.  You need to support scalability, security, performance, or some sort of characteristic like that.

What we’re trying to do is make evolvability one of those ilities, and that’s what you can use fitness functions for in an architecture: once you’ve chosen what those ilities are, those are the dimensions you’re trying to protect.  As your architecture evolves in unexpected ways, you can write fitness functions in the form of tests or metrics or other sorts of verification mechanisms to make sure it doesn’t break the core reasons you chose this architecture to begin with.  That’s the guided change and the multiple dimensions part.
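[As an aside for readers: the atomic fitness functions Neal describes are often written as ordinary automated tests.  Here is a minimal, hypothetical sketch in Python; the layer names and dependency rules are invented for illustration, not taken from the conversation.]

```python
# A hypothetical atomic fitness function guarding one "ility":
# here, maintainability via layering rules.  Layer names are made up.

ALLOWED_DEPENDENCIES = {
    "web": {"service"},     # the web layer may only call services
    "service": {"domain"},  # services may only call the domain layer
    "domain": set(),        # the domain layer depends on nothing
}

def layering_fitness(observed_imports):
    """Return the set of imports that violate the layering rules.

    observed_imports: iterable of (importing_layer, imported_layer)
    pairs, e.g. gathered by scanning the codebase in a CI step.
    """
    return {
        (src, dst)
        for src, dst in observed_imports
        if dst not in ALLOWED_DEPENDENCIES.get(src, set())
    }

# Run as a test on every build: any violation fails the build.
violations = layering_fitness([("web", "service"), ("web", "domain")])
assert violations == {("web", "domain")}
```

[A check like this runs continuously, so a change that quietly couples the web layer to the domain layer fails fast instead of silently eroding the reasons the architecture was chosen.]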

The incremental part of this definition--again, the definition is that it supports incremental guided change as a first principle across multiple dimensions--is really about operational agility and the ability to get new versions of your software into your architecture as rapidly as possible.  This is what we talk about in the continuous delivery world as cycle time.  Cycle time is one of the key things we measure in continuous delivery: the clock starts when someone starts working on some new feature and stops when that feature shows up in production.  One of the overarching goals is to make cycle times shorter and shorter.  If you have a really short cycle time, you can evolve your architecture much more quickly, because your cycle time is how fast you can get new generations into your architecture. 
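[For readers: cycle time as defined here is simply the elapsed time between those two clock events.  A trivial sketch, with invented timestamps standing in for data that would really come from an issue tracker and a deployment log:]

```python
from datetime import datetime

def cycle_time(work_started, deployed_to_production):
    """Cycle time: the clock starts when work on a feature begins
    and stops when that feature shows up in production."""
    return deployed_to_production - work_started

# Illustrative timestamps only.
started = datetime(2016, 11, 1, 9, 0)
deployed = datetime(2016, 11, 4, 17, 30)
print(cycle_time(started, deployed))  # 3 days, 8:30:00
```

[Shortening that number is the overarching goal Neal describes: each deployment is a new "generation" of the architecture.]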

CRAIG:    Hmm.

REBECCA:    There’s also an important distinction.  Neal mentioned that to the ilities we want to add evolvability, and that is distinct from adaptability.  The reason that distinction is important is, when you talk about an adaptive system, people very often think, “Okay.  I’m going to build in knobs and levers and dials of things that I might want to configure.”  But that presupposes you know where that variability is going to come from.

By focusing on evolvability, what we’re saying is we don’t necessarily know what avenues of change we’re going to have to support because we don’t know what changes are going to come in the business environment or from the business model, regulatory changes, customer expectations, or technology changes.  And so rather than trying to predict it so that we can build in a switch to throw, a dial to turn, a parameter, or anything like that, we want the system as a whole to have the characteristic of evolvability, so that, regardless of the change, we’re in a position to make that change with as little pain as possible. 

We obviously can’t do away completely with the pain.  This isn’t any more of a silver bullet than anything else is, but we want to focus on the ability to make fundamental changes to the architecture, whether it’s to the technology stack, whether it’s to the integration boundaries or the integration technique.  All of those things are up for grabs.

NEAL:    Another way of saying that is that we want to be able to make changes to the architecture without significantly increasing technical debt, because that’s really what adaptability does: it gives you change at the cost of some sort of technical debt.  For example, you can change all sorts of behaviors in an architecture with things like feature toggles, and you could make pretty broad changes, but that’s really a form of technical debt because you’re adding all of this extra logic within your system, and it’s kind of a temporary patch.  Whereas, as Rebecca said, we’re talking about fundamental change here.  Microservice architecture is a good example of this: because all these services are so extremely decoupled from one another, you can fundamentally change one of them without having a negative impact on any of the others and truly evolve that system, rather than just adapting something by putting, you know, duct tape and baling wire on it.  

CRAIG:    This is really interesting stuff, I think.  I’ve been a consultant for a long time.  I know that obviously you both have extensive consulting experience as well, so you know that this is the type of thing that a lot of organizations desperately want, right?  They feel saddled.  They feel constrained.  They feel weighed down by a lot of their existing systems and decisions, past decisions that have been made.

I think, if I’m understanding you correctly, this is about moving towards a place where your past decisions don’t weigh you down, where your infrastructure, your assets, they feel lighter, right?  They’re not holding you back, so you can move more quickly.  Does that capture the high-level feeling of the advantages you’re hoping to achieve?

NEAL:    Well, absolutely.  One of the things that we’re attacking, so kind of another way of saying that is we think predictability is shot in software architecture, and it’s because of -- I think Rebecca originally kind of identified the verbiage around this.  It’s because of the dynamic equilibrium of the software development ecosystem. 

When you think about the software development ecosystem, and that encompasses all the tools, frameworks, libraries, best practices, and all those things that we use to build software, it’s certainly at an equilibrium state because we can build stuff against it.  But it’s also extremely dynamic in that sudden changes can occur in it that fundamentally change the way you think about it.

A great example of a recent occurrence is when Docker showed up in our ecosystem, because once Docker is there, you can’t not think about it and its implications as you start building stuff.  If you’re an enterprise architect and two years ago you wrote a five-year plan and you didn’t include Docker, as soon as Docker hits the ecosystem the equilibrium changes.  Now you can take that five-year plan and throw it away.  

Exactly to your point, we believe that companies are very interested in this because you can no longer predict against this dynamic ecosystem.  If you can’t predict any more, but you can evolve with a lot less pain, then no matter how the ecosystem evolves and changes, you’ve got a much better fighting chance of being able to accommodate that change.

CRAIG:    Go ahead.

REBECCA:    Yep, and the interesting thing is, I agree that enterprises and organizations have wanted to be free of their legacy.  But it’s actually only recently that organizations stopped seeing evolutionary architecture as somehow irresponsible.  The architects and the organizations wanted to cling to those five-year roadmaps because they thought that was what it meant to do good architecture.  You had this path that you were marching down, and you had these architectural initiatives that you were undertaking.  

It is only recently that that has started to change, and I think some of this has to do with the advent of continuous delivery, which allows people to feel a bit better about their ability to make fundamental changes and to put things into production more safely.  But we’ve been fighting a battle, really, with people who call themselves architects about whether this is even a good idea.

CRAIG:    What’s the fear there?  I mean it seems to me that if you have an idea and you know you’re going to do it, if you can say -- and I know I’m simplifying the concept you presented, but if you can say, “You can have it in production in N weeks as opposed to 2N weeks, reducing cycle time to half what it was,” that seems like a good thing.  But there must be some objection that is seemingly reasonable to the people that are making it.

REBECCA:    Well, I think it gets back to the predictability argument.  The role of an architect in an enterprise is to protect the asset value of the technology stack, of the technology landscape.  Conventional wisdom was that you wanted to be able to predict how much that was going to cost, where it was going to go, what resources the organization would have to make available for this, and for a while that actually worked.  Convincing an organization that you can’t necessarily tell them where they’re going to be in two years when they’re used to being able to do that, well, that is sometimes hard for them to get over.  

NEAL:    The other thing, too, is that if you want to really start implementing this, you have to start changing a lot of behaviors: the way you develop software, the way operations works, the way your DBAs work.  You’ve been a consultant.  I tell you what.  We’ll give you the job of going and telling all the DBAs that they have to fundamentally change the way they’ve been working for the past few decades because it’s going to be better for us.  Then you can let us know how that goes.

CRAIG:    We may want to have a discussion about rates before I undertake that particular engagement.  But yeah, no, I hear you completely.  Yeah, so I wonder if you could -- let’s maybe go one more step deeper down the rabbit hole, right?

We’ve decided this is a good idea.  Now we’re starting to look at our processes, because this seems like a very -- it involves technology, but it seems like more than something like, say, what language I’m choosing.  It has to do with process.  What sort of -- as I move in this direction in this world, what do my processes start to look like?  How is my organization different from what it was before I made the decision to adopt this approach?

NEAL:    Well, in many ways, from a process standpoint, this is one of the payoffs of continuous delivery and all these engineering practices around modern devops: getting a lot better at being more agile with provisioning machines, getting things into deployment pipelines, and getting better at testing, so you have better automated verifiability.  That incremental change part of our definition actually encompasses all those operational concerns.  

A lot of people, when they started doing continuous delivery, they did it just for the benefit of doing continuous delivery stuff because it increases their engineering efficiency.  But also, once you have all those things in place, you start getting ancillary benefits.  Building an evolutionary architecture is one of the ancillary benefits that you can reap from that because it gives you the mechanism by which you can start making a lot more aggressive change at the architectural level.  

Let me give you an example of that, one of these examples of what we call incremental change.  Let’s say that you are Widget Co.  Actually, I came up with new names for this today at the gym.  There are going to be two companies, Acme Co and Penultimate Co.  Penultimate, of course, is always going to be in second place.  

Acme Co has a website where they show all of their widgets.  One of the things that their customers can do is star rate the widgets that Acme Co is selling on their website.  Within their architecture--it’s a microservice architecture with 21st Century devops practices--they have a star rating service that handles all those chores.  

Then one day they release a better star rating service that lets you do half star ratings.  Then they don’t force any of the other services to start using it.  They just make it available in their architecture.  Then the other services, as they want to, will start migrating over to that better rating service until no one is using the old one any more.  Then you kind of get rid of that one because you don’t need it anymore.  

That’s an example of the operational side of an evolutionary architecture because you have to have an architecture that has the right sized granularity so that you can make changes on an ongoing basis.  Really how aggressive you want to be with change drives how granular you want the pieces of your architecture to be.  If you want to change just a chunk of your things, you can do that.  But if you want to get down to really fine things, you can change those.  Of course, the smaller things are typically the faster you can make changes to them.  

CRAIG:    Yeah.  There were a bunch of interesting things in there.  On the evolutionary architecture aspect, the way that you know you’ve arrived at an evolutionary architecture was the fact that you were able to make that change.  Is that a fair assessment?

NEAL:    That’s certainly part of it: the ability to make change in your architecture in a foundational way without driving your technical debt way up.  In other words, being able to change the structure of your architecture without having a negative impact on any of the characteristics you chose that architecture for.

CRAIG:    Okay.  You mentioned continuous delivery, which includes, among other things, the ability to test your system in an automated way and other forms of automation.

NEAL:    Mm-hmm.

CRAIG:    That’s one enabling key to it, but are there other decisions that the organization would have had to make that would have let them get there, or are continuous delivery practices sufficient?

REBECCA:    From a technical perspective, a lot of what you need is enabled by continuous delivery.  But I think organizationally there has to be the commitment to time to market being a critical factor and that it is actually okay to have the changes in the underlying architecture.  Very often that is in part an acknowledgement that some of these things might, from a monetary perspective or a headcount perspective, cost more.  

If you’re going to -- we all know that something that is locked down and standardized is less expensive than something that allows for more variability and flexibility.  If you have changes in the technology stack, if you have a document database as well as a relational database, you need someone who knows about document database performance.  You need someone who knows about relational database performance.  The organization has to value the ability to make these changes more rapidly enough to accept the cost that comes along with this.  

NEAL:    You asked whether it’s anything beyond continuous delivery, but it’s continuous delivery beyond just what developers experience, because you have to get all of your data stuff in line with this as well.  If you’ve got a nontrivial system, it’s probably going to have data dependencies, and so you have to have an evolutionary data approach to that as well.  

One of our colleagues, Pramod, and Scott Ambler wrote a book called Refactoring Databases, and the subtitle of that book is Evolutionary Database Design.  It’s just had its ten-year anniversary.  We’re now saying that it was a book just about exactly ten years ahead of its time, because hand-in-hand with all this evolutionary architecture stuff goes the ability to evolve your data as well.

REBECCA:    Yeah, and you do need to take quite an expansive definition of continuous delivery.  Some people think of continuous delivery as only, okay, well, I’ve got an automated script to production, and there is so much more to it than that.  You really do have to think of the various aspects of provisioning of machines, the automated testing, the automated deployment, and the change management that goes into managing the data pipelines, the data dependencies, and the organizational structures that you put around that continuous delivery process.  It is a very broad definition of continuous delivery that is really the technical enabler of this.  

NEAL:    I was talking about continuous delivery as a building block to an evolutionary architecture.  Let’s talk about what evolutionary architecture is a building block to because once you have that kind of architecture in place, it makes it much easier to do things like multivariate testing against your users.  And so now you can do a lot more data driven development because now you’ve made it trivial.  

In fact, you’ve made it part of your architecture to be able to put two minor variations of things out and have some people see one and some see the other, and gather data about them.  One of the building blocks to that kind of capability is having an architecture that’s flexible enough to allow you to make incremental change and to be able to operationalize different, slightly different versions of things without any trouble.

CRAIG:    Mm-hmm.

REBECCA:    And to reduce the risk of that experimentation because one of the things that we’re learning as well is it’s becoming -- the pace of change is such that we need to be able to respond quickly, grab new ideas from wherever we can find them, and experiment with them.  Any time you try to do innovation, any time you run an experiment, that experiment may fail.  If you’re unsure about your ability to safely put something into production, running an experiment is even scarier because not only may the experiment fail, but we might actually break something when we’re running the experiment because we don’t have confidence in that overall process.  

Part of what drives such a long cycle time in many organizations is that they aren’t confident in their ability to safely deliver into production.  Getting that rigor around how you put things into production and then how you roll them back is part of what is enabled by evolutionary architecture and by continuous delivery.  It then, as Neal points out, allows you to become a much more data-driven organization than you could have been, and it allows you to be much more innovative.  

CRAIG:    Yeah, that’s interesting.  I like the example.  I was nodding.  It’s a podcast, and I’m nodding, but I was listening.  

The example of being able to do those types of experiments, the A/B testing.  I’ve been at organizations where they were able to do that, and it was very valuable.  I think you make excellent points about that, and that helps.  

I wonder if we could take it down.  Actually, let me ask this question.  Where do you see the industry at?  Are we at the point where this is an obvious next step, but maybe nobody has really put this into play yet, or it’s something that you’re starting to see out there in the world that certain forward-thinking players have begun to see the benefits of or it’s common and we just don’t hear about it?  Where are we at with this thinking?

NEAL:    I think what we’re doing is putting a name and unifying some ideas that are kind of floating around out there right now that don’t have kind of a unifying framework to stitch them together.  Let me give you a good example of that.  

We were talking about fitness functions earlier.  One of the things you do in an evolutionary architecture is define fitness functions around the parts of your architecture that you don’t want to break.  We’ve further identified two kinds of fitness functions, atomic ones, which can be run against kind of a singular context, but also ones that we call holistic fitness functions that need to be run within a shared context.  

Let me give you a great example of a holistic fitness function that you probably didn’t think of this way before now, but that’s a perfect description of what it is: Netflix’s Chaos Monkey.  You know Netflix runs entirely on AWS, and they got worried about what happens if AWS starts misbehaving, and so they created this utility, this tool, the Chaos Monkey, that they set loose in their ecosystem.  It drives latency up, and it makes things misbehave within their ecosystem.  They’re basically testing their resiliency, because one of the important ilities Netflix identified was that they want their system to be really resilient, because it’s so widely distributed.  

But the Chaos Monkey is not something that they run at 3:00 p.m. on Tuesday--“oh, yeah, we’re going to run the Chaos Monkey.”  It actually runs all the time in their ecosystem.  Every service that they build, they have to build with the understanding that it has to withstand the Chaos Monkey, which means it also incidentally withstands all the other service disruptions that the Chaos Monkey is representative of.  We’re calling that a holistic fitness function.  We’re actually giving it a name.

A lot of architects think about ilities and being able to test ilities like scalability, et cetera, but they don’t have a good, unified way of thinking about how often and how regularly to test these things.  Do we do them on a cadence?  A lot of them are done on an ad hoc basis or, because they’re kind of cumbersome, barely done at all.  Security is one of those things.  We have an anti-pattern called a Security Sandwich, where you just do some at the beginning and some at the end and hope nothing bad happens in the middle. 

But if you think about all those things as fitness functions at the architectural level, now you have sort of a unifying way of thinking about them, because another part of this incremental change idea, also related to continuous delivery, is the deployment pipeline mechanism that’s quite common on those projects.  One of the things the deployment pipeline does is run a series of tests, including fitness function tests.  That’s the mechanics of how you make sure these fitness functions are applied on a regular basis: you use things like the deployment pipeline to run them.  Of course, you may also have some ongoing things like the Chaos Monkey running.  
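[For readers: the mechanics Neal describes might be sketched like this--the pipeline applies every fitness function to each candidate build, and any failure blocks promotion.  Everything here (stage names, the latency threshold, the stubbed measurement) is hypothetical:]

```python
def measure_p99_latency():
    # Stand-in: in reality this would query a load test or a
    # monitoring system.
    return 180  # milliseconds

def unit_tests_pass():
    return True  # stand-in for running the actual test suite

def p99_latency_ok():
    # An atomic fitness function guarding a performance ility;
    # the 250ms threshold is invented for illustration.
    return measure_p99_latency() < 250

PIPELINE = [
    ("unit tests", unit_tests_pass),
    ("latency fitness function", p99_latency_ok),
]

def run_pipeline():
    """Apply every check in order; any failure blocks promotion."""
    for name, check in PIPELINE:
        if not check():
            return "pipeline failed at: " + name
    return "promoted to production"

print(run_pipeline())  # promoted to production
```

[Ongoing checks like the Chaos Monkey sit outside this per-build loop, running continuously against production, which is what makes them holistic rather than atomic.]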

We’re basically just taking these disparate kinds of things and saying, well, these are really just that: a fitness function for verifying how resilient, or how whatever-ility, your architecture is.

CRAIG:    I have a question about this.  I’m trying to understand.  Fitness function is a great concept, but when you’re using it in a strictly quantitative environment, it yields a scalar number, or at least some quantity that can be compared, right?  You can hill climb.  You can say this solution was strictly better than the one that came before it; therefore it will replace the prior one.  

But I think about something like Chaos Monkey or the other things that you mentioned.  It’s hard for me to think about how you arrive at a scalar measure.  I mean, you could say, well, my throughput is higher, but my latency is also a little higher.  Is that better or worse?  It’s not really a scalar quantity that you can say is strictly better or worse necessarily.  So is there some art in arriving at that, or is this an analogy rather than an exact comparison?  What am I missing?

REBECCA:    Well, you need to think about it not as a simple, basically one-dimensional fitness function.  This is where the “across multiple dimensions” part comes in.  The conversation you have to have is to decide: What is the tradeoff?  What is more important to me?  Is latency more important than throughput, yes or no?

    If latency is more important, then if latency goes up, but throughput is better, you’re probably going to back the change out because that doesn’t respect your overall balancing.  And over time, the balancing equation might change, and so we can think of the atomic fitness functions as measuring things like throughput or latency, et cetera.  But then you also have to say, from a collective perspective, am I moving in the right direction?  Am I making the right decisions?  What are the things that are most important to me in my architecture?  Those are the things I’m going to prioritize.  Those are the things that we’re going to make decisions around.

Definitely it’s not a single dimensional fitness function.  It’s actually a multidimensional fitness function that has to balance across all of those different ilities.  And sometimes you’re going to trade them off because you can’t maximize all of them.  Too many of those things are incompatible with each other. 
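One way to sketch that balancing act, purely as an illustration (the dimension names, priorities, and numbers here are all invented): rank the dimensions, and reject any change that regresses a higher-priority dimension to buy a lower-priority one.

```python
# Sketch of a multidimensional fitness function with explicitly ranked
# dimensions.  All dimension names, priorities, and numbers are invented.

PRIORITY = ["latency_ms", "throughput_rps"]          # earlier = more important
LOWER_IS_BETTER = {"latency_ms": True, "throughput_rps": False}

def improved(dim: str, new: float, old: float) -> bool:
    """Did this dimension get better?"""
    return new < old if LOWER_IS_BETTER[dim] else new > old

def accept_change(old: dict, new: dict) -> bool:
    """Lexicographic comparison: never trade a higher-priority dimension away."""
    for dim in PRIORITY:
        if improved(dim, new[dim], old[dim]):
            return True
        if improved(dim, old[dim], new[dim]):
            return False   # regressed a higher-priority dimension
    return False           # nothing changed, so nothing gained

# Throughput improved, but latency (higher priority) got worse: back it out.
old = {"latency_ms": 100, "throughput_rps": 900}
new = {"latency_ms": 120, "throughput_rps": 1100}
print(accept_change(old, new))  # False
```

Real tradeoffs are rarely this clean -- a weighted score or a human decision is often more realistic -- but the point stands: the comparison is multidimensional, and the priorities are an explicit team decision.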

NEAL:    Well, in some ways you can think of something like Chaos Monkey as being a whole collection of Boolean scalars.  Does the system keep working when I turn a server off, true or false?  True.  Okay.  Well, that means it was successful, so we are testing something there.

A lot of the time, when people hear “function,” especially all you functional programming folks, they expect a very specific mathematical connotation, but we’re using the looser definition of fitness function from evolutionary computing, which is basically evaluating whether this one is better than the last one.
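As a toy illustration of that looser definition, a Chaos Monkey-style check reduces to a Boolean fitness function.  The simulated cluster below stands in for real infrastructure; an actual Chaos Monkey terminates live instances.

```python
# Toy illustration of Neal's point: a Chaos Monkey-style check is really a
# collection of Boolean fitness functions.  The "cluster" here is simulated;
# a real setup would terminate actual instances and probe the live service.

import random

def service_still_responds(cluster: list) -> bool:
    """After a node dies, does at least one replica still serve traffic?"""
    return any(node["alive"] for node in cluster)

def chaos_fitness(cluster: list) -> bool:
    victim = random.choice(cluster)   # the monkey kills one node at random
    victim["alive"] = False
    return service_still_responds(cluster)

cluster = [{"name": f"node-{i}", "alive": True} for i in range(3)]
print(chaos_fitness(cluster))  # True: with three replicas, two survive
```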

CRAIG:    Right.  Right.  Yeah.  No, that makes total sense, actually.  I just wanted to make sure I understood the analogy.

NEAL:    Yep.

CRAIG:    I think, Neal, you and Rebecca helped me understand that there is in fact, as you say, a fitness value that comes out, but that a human or a team of humans are involved in making that judgment based on the quantitative data, whether it’s Boolean or some sort of continuous value like a throughput that you collect.  Of course we do this sort of thing all the time.  I really still like analogies like that, personally, because it gives you--and I think you’ve used this word a couple times--sort of a mental or phrased a mental framework for doing it rigorously as opposed to something that you kind of knew you were doing already, but you never really thought about it.  Yeah, that actually helps me understand that a lot.  

NEAL:    In fact, we identify some fitness functions as being manual--

CRAIG:    Sure.

NEAL:    --on purpose, like legal requirements.  You may have a medical system where, as your architecture evolves, there are certain legal things you can’t violate.  Deployment pipelines have the mechanics, the ability to have manual stages.  Again, that’s one of those things most places aren’t unifying: thinking about scalability and verifying legal requirements as the same mechanism.

CRAIG:    Mm-hmm.

NEAL:    But they really are the same mechanism.  You’re protecting some aspect of your architecture or your system.  By unifying them, that gives you a regular way to think about them and a regular cadence to apply them with.
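A rough sketch of that unified pipeline, with both automated and manual fitness functions.  All of the stage names here are invented for illustration.

```python
# Sketch of a deployment pipeline mixing automated and manual fitness
# functions.  A manual stage (e.g., a legal sign-off on a medical system)
# blocks promotion until a human records approval.  All names are invented.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Stage:
    name: str
    check: Optional[Callable[[], bool]] = None   # automated fitness function
    manual: bool = False
    approved: bool = False                       # set by a human reviewer

def pipeline_passes(stages) -> tuple:
    """Return (ok, name_of_first_failing_stage)."""
    for stage in stages:
        ok = stage.approved if stage.manual else stage.check()
        if not ok:
            return (False, stage.name)
    return (True, None)

stages = [
    Stage("latency-budget", check=lambda: True),   # automated, passes
    Stage("legal-signoff", manual=True),           # waits on a human
]
print(pipeline_passes(stages))       # (False, 'legal-signoff')
stages[1].approved = True
print(pipeline_passes(stages))       # (True, None)
```

Treating both kinds of stage as the same mechanism is what gives you the regular cadence Neal describes.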

CRAIG:    Mm-hmm.

REBECCA:    Well, and the other thing, going back to that mental framework: the discussion the organization has about how we balance these ilities, about which ones are most important to us for this particular system at this particular time.  That discussion is critical to understanding what’s important, what’s going to drive our architectural decisions, and it’s a great way to get alignment.  It’s also a great way to empower teams to make more local decisions.

If you have a global standard that says, “This is what we’re trying to achieve.  These are our various architectural fitness functions,” then, within that structure, respecting those fitness functions, teams can do the right thing for their individual projects.  It provides a communication mechanism and something objectively testable that teams can use to make more local architectural decisions.

CRAIG:    This is good stuff.  When’s the book out?  

NEAL:    My guesstimate is that we’re probably 80% to 85% done with it now, so we’re hoping for sometime early next year, since we’ll have some time to do some integration and other work over the holidays.

CRAIG:    Awesome.  I’m really looking forward to reading it.

NEAL:    We are too.

CRAIG:    Yeah.  Yeah, right.  Of course.  No, that’s good.  I’m really glad that you were able to come and talk about it because I think it’s definitely one to watch out for.  

Gosh, so I wonder if we could take maybe a right turn and come back to the book, perhaps, but there’s something I definitely want to make sure we talk about.  I’m sorry I don’t have the reference in front of me, but I believe that ThoughtWorks was recently the recipient of an award related to diversity in technology.  Am I correct about that?

REBECCA:    Yes.  There’s an organization called the Anita Borg Institute for Women and Technology.  One of their programs is the Top Company Award, which they give annually.  It is very numerically driven: they ask organizations a series of questions about the representation of women in technical roles across organizational levels.

This year 60 companies submitted their data, and that data is very rigorously vetted.  Then they select a winner from that group.  We were the winner this year.

CRAIG:    Congratulations.

REBECCA:    We’re very excited about it.  Yes.

CRAIG:    I wonder if you could talk a little bit more about the things that the award considers, and then I’d love to hear your perspective on the broader issue as well.

REBECCA:    The award looks at the representation of women at the entry level, the mid level, the senior level, and the executive level.  Currently the award is North America based; even though many global organizations participate, they look at your North America numbers.  They look at the overall percentage of women at these various levels.

They’ve done a lot of work to standardize the definitions, so we know how our definition of midlevel relates to, say, BNY Mellon’s, who was last year’s winner, and how those things compare to each other.  Then it’s a straightforward comparison.  In this case we were above everybody at all four levels, which was really quite exciting.

CRAIG:    Mm-hmm.

REBECCA:    In terms of what it takes, I think there are several factors.  One of the things they put out in their report is an analysis of what’s common amongst companies that do better than average versus worse than average within that sample group.  Explicit training around the importance of diversity, things like unconscious bias, is one of the factors prevalent among the organizations that came out better than average.  A focus on advancement and addressing issues of unconscious bias are some of the other things you’ll see.

I think one of the things that really distinguishes us is the extent to which we try to move outside the normal hiring pools to look for talent.  We invest a lot in our graduate program: we have something called ThoughtWorks University that runs in India, and all of our graduate hires go through it.  It’s a five-week program.

In addition, particularly when we’re bringing in people from non-computer-science backgrounds, we invest a lot in training them to be good software developers.  I think that’s one of the ways we’ve been able to attract and retain good people, both men and women: the level of investment that we make in training, career development, and leadership development opportunities.

CRAIG:    The training you’re talking about is training related to the direct job skills like creating software, or it’s training related to understanding diversity in the workplace, or it’s both?  I’m not sure I sorted that out.

REBECCA:    Yes, it’s both.

CRAIG:    Gotcha.  Okay.

REBECCA:    It’s both.  Yeah.

CRAIG:    Interesting.  Rebecca, I’m wondering.  Of all the questions I could ask, maybe I’ll ask what -- I mean, it seems like ThoughtWorks is doing pretty well on this count.  I’m sure, like all organizations that are focused on things like continuous improvement and software, you no doubt would say that there’s farther to go, but I wonder whether you have any advice for other organizations as to how best to get started down the same road, if that makes sense.

REBECCA:    Yeah.  There are a couple of things.  The first obvious point is, if you don’t measure it, you’re not going to move the needle.  But there are a lot of companies that measure it and say, “Oh, my.  Isn’t that terrible?  It hasn’t gotten any better?  Next slide.”  That’s not helpful either.

It actually takes a lot of focus over an extended period of time to move the needle.  In particular, it requires focus when there’s a short-term need and your short-term business case says we need to fill these positions right away.  I need to staff up 20 new developers.  It’s easy to find 19 men and one woman, and then all of a sudden you’re back to where you were before.  

It takes sustained focus and making some of those hard decisions: no, you have to ensure you have a diverse pool.  When we do external searches, we require our external search agencies to bring us a diverse pool, and they haven’t fulfilled their contract unless they bring us a pool that is more representative of the population.  We do that internally when we staff positions or when we form management groups and such.

We look at the composition.  What is the gender balance?  What is the representation from our global south countries?  Are we pulling from those diverse perspectives?  

I think part of what really has made a difference for us -- there’s all sorts of literature on the business case for diversity, that companies with more diverse leadership teams do better.  Particularly when the economy is going south or when a company is at risk, they do better with diverse leadership.  The business case is clear, but we also feel quite strongly that it’s the right thing to do.

When you look at the statistics of people leaving technology, women leave technology at a significantly greater rate.  One study I saw put it at 41% of the women who start in technology dropping out of high-tech, versus about 17% for men.  Obviously there’s something in our industry that is driving women away.  Conventional wisdom says it has to do with additional family responsibilities, but many surveys have shown that only about 25% of the women who leave do so because of their family circumstances.

More of it is about the culture of our industry.  There are feelings of career stagnation, that they don’t have opportunities for advancement.  It’s the culture that’s driving them away.  I don’t agree with people who assert that all of the women who want to be in technology are already in technology, because I’ve seen too many of them leave because it’s just not worth putting up with some of the behavior, some of the toxic environment that does exist in places in our industry.

CRAIG:    I wonder.  I hate to put you on the spot, but I wonder if I could get you to give advice to someone like me who is interested in diversity in technology.  I have two daughters.  The older one says she wants to be a programmer.  She’s 12, so who knows, but I would like her to go into an environment where she can be successful and feel comfortable and happy.  Our industry is filled with a lot of people who look like me, and maybe a lot of them even feel like I do, which is to say, I agree with you.  I want those things.

My fear is that I have unconscious behaviors, or that I’m uneducated in certain ways, and that I’m contributing to that culture in ways I’m just not aware of.  What would you say to someone like me about how to assess your own behavior, or maybe even specific behaviors that you’ve seen that you believe to be unconscious?  Do you know what I mean?  What I’m trying to get at is, I’m sure there’s a ton of our audience out there who are male, who are nodding and saying, yes, this is good stuff.  But I’m sure that, collectively, we are still somehow part of the problem.  I’m wondering if you can help us out in any way.

NEAL:    I can start with one, which is that any behavior that would feel right at home in a fraternity is probably not a good thing on a software project.

CRAIG:    Sure, and I think that class of things is fairly obvious.

NEAL:    Mm-hmm.

CRAIG:    But I feel like there’s got to be more to it than that.  

NEAL:    Sure.

CRAIG:    Maybe I’ll just put -- well, Rebecca, I think you understand the question perfectly well.

NEAL:    Yeah.

CRAIG:    I’ll throw it to you.

REBECCA:    Yeah, and I think one of the first things is to acknowledge the existence of bias.  We’re all human beings, and human beings get defensive when we feel like we’re being attacked or judged.  People hear that word “bias,” and they immediately say, “Oh, I’m not biased.”  Consciously you might be right, but we all exhibit unconscious bias.  Women are guilty of this too, though not quite to the same degree.  In all of the studies you’ll see where they evaluate résumés for a job and change an obviously male name to an obviously female name, both men and women are less likely to hire the woman.  And if they do recommend hiring the woman, they will recommend a lower starting salary for her.  Both men and women do this.

We have to first get past this idea that we’re bad people because we’re biased.  We’re not bad people because we’re biased.  We’re bad if we don’t acknowledge the bias and try to overcome it.

When you’re faced with a situation, look at how you are evaluating people.  One of the things many studies show, for example, is that men are promoted on the basis of potential and women are promoted on the basis of achievements.  When you’re looking to fill a position or to promote somebody and you say, “Oh, well, I’m not sure she’s ready for it, but I know he can do it,” stop yourself and ask, “Okay, am I actually applying the same criteria to evaluate the readiness of these two people?”

Stop yourself when you say, “Oh, she’s just not a cultural fit.”  Well, what’s the basis for that?  Maybe it’s true.  Maybe there is something about that individual and the culture, but very often this lack of culture fit is shorthand for she just doesn’t look like me.  We’re all more comfortable hiring people who look like us.  

NEAL:    There’s a great non-tech example of that that just came up.  The show is Full Frontal with Samantha Bee.  When she hired her writing staff, she purposefully did blind auditions, and she ended up with the most diverse writing staff in all of TV.  There have been some write-ups about how good her writing staff is because she has so much diversity.

REBECCA:    Yep.  Doing things like that: blind reviews where possible, having a diverse pool of people interview, forcing yourself to look at a diverse pool, and then also thinking about some of the minor things.  Very often when I’m in a meeting, I’m the only woman.  Men will come up and ask me for a cup of coffee because they assume I must be an admin.  You probably don’t want to drink my coffee.

[Laughter]

REBECCA:    I don’t make coffee.  I don’t drink it myself, so it’s not going to be very good.  So it’s things like that.  When you meet a woman at a conference, don’t assume she’s a plus one for a man who is there.  Assume she’s there because she’s interested.  Have the same kind of conversation about the content of the conference as you would if you were talking to a man.  

CRAIG:    Mm-hmm.  That’s fantastic advice.  I really appreciate that.  I wonder if I could ask a meta-question, if you will, which is: how do I, as -- is there a good word for it?  I mean, I’m the majority, right?  I’m a white, middle-aged male in our industry.  Whatever the word for that is, is there a right way -- or even a good way, if there’s not a right one -- to talk about this?  I think we’re having a great conversation, but, and I’m not apologizing, I think it’s a fact that a lot of people in my demographic are uncomfortable talking about this because, as you said, we have to acknowledge that we all have bias.

In this case, I’m on the benefiting side of that bias, right?  It can be uncomfortable to talk about, so do you have any advice for people who might be in that situation on how to bring this topic up?  And I’d be curious if you agree: it feels to me like conversations like this one are one of the things that can maybe make it better.

REBECCA:    Yes, I think I can.  I think one approach is to think about what really constitutes fair.  One of the discussions we very often get into is: well, if you’re faced with a hiring choice and you have an equally qualified man and woman, and you pick the woman because she’s a woman, you’re being unfair to the man.  One of the ways I reply to that is: well, what if they were both men and you only had one position, so you can only choose one?  You’re going to pick something.  Maybe it’s the basketball team they root for, or maybe it’s the school they went to, or maybe they play the same videogame you do, or maybe they don’t play the same videogame, so they won’t challenge your supremacy at the videogame.  You’re going to pick something.

In that context, people never say it’s not fair.  But as soon as it’s a man and a woman, it’s not fair to pick her because she’s a woman.  I think we need to rethink that notion of fairness.  Is it fair if people always recruit at only a certain set of schools?  Women may self-select into Tier II schools because they don’t think they could ever get into a Tier I school.  Is it fair that Google and Amazon will never hire them because they’ll never see them?  How is it fair to the people that you can’t see?  I think that’s one way.

The other way and the other thing I think that’s important about these conversations is that we try to keep them respectful.  You can’t help the fact that you’re a white, middle-aged male.  I can’t help the fact that I’m a woman.  We have to be able to have a respectful conversation that isn’t accusatory, but that is also not dismissive because you are the person in the privileged position right now, as is Neal.  And by accepting that fact, then you can start to see, okay, well, what are the things that I take for granted because I am in this privileged position, and what can I do to extend protection or extend opportunity to someone who does not have that privilege?  

NEAL:    And the realization that contraction of privilege is not oppression.  

CRAIG:    Mm-hmm.  That’s great.  I especially love the example you gave at the beginning of that answer, where you said, well, what if it was two men and you’d have to choose.  I love it when somebody says something -- I run into it most often in software -- and, oh, now my perspective shifts, and that’s really helpful.  That was one of those for me, so thanks a ton for that.  Man, lots of good stuff.

I see that we’re coming up on close to an hour of talking together, and we don’t have to rush off right away.  I’m certainly happy to continue talking about this, but I also like to make sure that I give the guests an opportunity to talk about anything else or to spend more time on things that we’ve already talked about.  Neal, Rebecca, we have some time left.  Is there anything else we should talk about that we haven’t covered yet or aspects of things we have talked about that we should continue onto?

[Slight pause]

CRAIG:    No?

REBECCA:    Actually, Neal, I would be interested in sort of your perspective on how do you feel with ThoughtWorks winning this award?  What does it mean to you?

NEAL:    I think it’s great.  I think it shows the end result of acting with intention within ThoughtWorks because I know it’s been our intention for a long time to do that.  I think it’s a reflection of -- you know, it’s kind of silly to create hostile workplaces for half the population.  It just doesn’t make sense.  If you could create a company where half the population feels better working there than they do at other companies, then I think that there are real obvious benefits to that.  

That’s from a ThoughtWorks standpoint, but I’ve also seen it on projects.  There’s an often-quoted Harvard study that having women on a project raises the collective intelligence of the people on the project, and I’ve seen that firsthand.  I’ve been on all-male projects, and I’ve been on projects that have a mix of all sorts of things, not just gender, and you always get a broader perspective when you have a better mix of people.  Yeah, I think it’s good for ThoughtWorks as a whole that we have a more diverse population, because I think we bring better capabilities to projects when we have a diverse group of people doing them.

CRAIG:    Cool.

REBECCA:    Great.  

CRAIG:    Yeah.  Awesome.  Well, I think we should have you both back on.  It would be great to hear how your efforts progress, because this sort of thing is always a journey, not a destination.  And of course, there are a million technical things that we could have talked about as well and still could.

This, of course, is our question about advice.  We always ask our guests to provide our audience with a piece of advice.  That could be anything at all that they think is relevant.  Of course, the fun part about this is that, as in most shows, we’ve already gotten a boatload of really good advice from our guests, but we still ask you to do it one more time anyway.  Rebecca, what piece of advice would you like to share with our listeners?

REBECCA:    I think it’s important for people to continue learning.  Not just within their chosen discipline, but to expand their horizons.  Remain curious.  Remain confident in your ability to learn new things.  And explore things that you’re passionate about, because it’s really quite rewarding in that context.

CRAIG:    Fantastic.  I love how well that ties in with Neal’s story at the beginning, of how he wandered into a gallery and stumbled onto the whole field of modern art by taking himself outside his area of familiarity.  That’s awesome.  You guys have both reinforced each other’s first and last questions.  That’s great.

I will say thank you so much for taking the time to come on today.  I know you’re both super busy, but we love the fact that you were able to stop by.  Great conversations.  Really looking forward to reading your book when it comes out next year and, of course, the discussion about diversity was absolutely fantastic, so thanks for both those things.  

I will thank you individually.  Thank you, Neal, for coming on the show today.

NEAL:    Always a pleasure.

CRAIG:    And thank you, Rebecca, for coming on as well.  It was great to have you.

REBECCA:    Thank you so much, Craig.  Appreciate it.

CRAIG:    All right.  Great.  Well, we will close it down there.  This has been The Cognicast.

[Music: "Thumbs Up (for Rock N' Roll)" by Kill the Noise and Feed Me]

RUSS:    You have been listening to The Cognicast.  The Cognicast is a production of Cognitect, Inc.  Cognitect are the makers of Datomic, and we provide consulting services around it, Clojure, and a host of other technologies to businesses ranging from the smallest startups to the Fortune 50.  You can find us on the Web at cognitect.com and on Twitter at @Cognitect.  You can subscribe to The Cognicast, listen to past episodes, and view cover art, show notes, and episode transcripts at our home on the Web, cognitect.com/cognicast.  You can contact the show by tweeting @Cognicast or by emailing us at podcast@cognitect.com.  

Our guests today were Rebecca Parsons and Neal Ford.  Our host today, and sadly for the last time, was the magnificent Craig Andera.  Episode cover art is by Michael Parenteau.  Audio production is by Russ Olsen and Daemian Mack.  The Cognicast is produced by Kim Foster.  Our theme music is Thumbs Up (for Rock N' Roll) by Kill the Noise with Feed Me.  I'm Russ Olsen.  Thanks for listening.