The world is an oyster. It’s also a system. A complex system! Companies are components in that system, and they’re systems unto themselves! And marketing departments, and digital marketing, and the data therein, are systems, too. As analysts, we’re looking for pearls in these systems (and you were wondering where we were going with this)! Join Michael and Tim as they chat with Christopher Berry of the Canadian Broadcasting Corporation (CBC) about “systems thinking.” You’ll be smarter for it! As a special “feature” (not a bug!) for this episode, we’ve done a bit of a throwback to the earliest days of this podcast, in that Michael’s audio sounds a little bit like he was chatting through a tin can with a string tied to it. We apologize for that!
We also realized we bounced back and forth between a couple of “right vs. left” discussions in a way that may be confusing:
- When writing an equation, the dependent variable (the “y”) typically goes on the left of the equation, and the independent variables go on the right.
- When mapping out a system or a causal model, the dependent variable (the “outcome”) typically goes on the right of the diagram, and the components that drive the outcome go on the left.
Let’s just pretend we were intentionally forcing you to concentrate while listening, as we intermingled both of the above and referred to “on the right” and “on the left” several times in both cases.
People, Places, and Sites Mentioned in This Episode
- Systems Thinking and Marketing (2011 blog post by Christopher)
- Why Do We Frequently Question Data but Not Assumptions? (assumption governance post by Brent Dykes)
- CPG / FMCG
- Tesla’s vs. Google’s differing approaches to developing self-driving cars
- This Japanese Company Is Replacing Its Staff with Artificial Intelligence
- Machine Intelligence Toronto (meetup)
- Jennifer Bryan’s Github repository
- Happy Git and GitHub for the useR
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
- Slate Money podcast
- The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t
- Moneyball: The Art of Winning an Unfair Game
00:04 Announcer: Welcome to the Digital Analytics Power Hour. Tim, Michael, and the occasional guest discussing digital analytics issues of the day. Find them on Facebook at facebook.com/analyticshour. And their website, analyticshour.io. And now the Digital Analytics Power Hour.
00:28 Michael Helbling: Hi everyone, welcome to the Digital Analytics Power Hour. This is episode 55. What is the definition of insanity? Albert Einstein said it was doing the same thing over and over again and expecting a different result. As analysts, we often want more, but we don’t take our game to the next level, we just keep doing the same things. Well, how do we go one up? One way to do that is with systems thinking. What is that? How do you do it? We don’t really know, so we reached out to the person that we think can help shine a light on it, and also hopefully be entertaining all at the same time. Watch out, he’s Canadian. Our guest is none other than Christopher Berry. Today, he is the director of Product Intelligence at the CBC. That basically means, and I’m simplifying drastically, he owns the computers that have the data. But he’s held numerous digital analytics data science leadership roles over the years, including Authentic, Syncapse and Critical Mass. He’s a great friend and now he is our guest. Welcome to the podcast, Christopher.
01:39 Christopher Berry: Thank you for having me.
01:40 MH: It’s a pleasure. And as always, I am joined by my co-host and sort of mother hen, Tim Wilson, senior partner at Analytics Demystified.
01:53 Tim Wilson: Too much hen-pecking leading up to… made it into this episode apparently. I guess I deserve that.
01:57 MH: No, no, no.
02:00 TW: Fuck you Helbling.
02:01 MH: Thank you. I deserve that. And I’m Michael Helbling, I lead the Analytics Practice at Search Discovery. Well, gentlemen, it is great to get another show on the road. Christopher, it’d be great to hear from you kind of what you do today, what got you interested in systems thinking, and maybe start to talk a little bit about what it is. And then we can go from there.
02:26 CB: Sure. So these days I’m at the Canadian Broadcasting Corporation. Like you said, computers have data on them. Many of them are mine. And we’re really about using that information to serve Canadians way better. I’m of the opinion that a lot of American civilization is really, really good at understanding Canadian civilization and Canadians seem to get equally, if not better, at understanding their own civilization. So to the extent that we can make that happen, we make that happen.
03:00 CB: My fascination with systems thinking goes back to grad school. It goes back to trying to understand public policy and the interaction between the public, the public service, political parties, policies that get enacted, and then what actually happens in the public. And if you try to eat all of that all at once, you will go completely mad. What’s necessary is you have to break it up into all of its components and understand the relationship between all of those components. That skill set is applicable to business. If it’s applicable to society, it’s applicable to business, and it’s applicable to management, and it’s applicable to marketing. And it’s especially powerful when you mix it with analytics. Facts, imagine that.
03:49 TW: So just defining systems thinking, how would you define… Part of me wants to make a crack that you started out trying to figure out society and politics, and that drove you mad so you jumped into analytics where it was nice and simple and clean.
04:06 TW: But what, when you say systems thinking, outside of it, it could be heard as buzzword, it could be something that people stand up on stage and talk about, but how do you really define thinking system-ly? Systematically. Nah, that’s not right.
04:23 CB: Well, sure. I think most people are comfortable with very simple if/then statements, with direct linkages between cause and effect. And we’re very comfortable and empowered in that world. The moment that you get three things, or four things, or five things interacting with one another, things get very complex. In general, most of the public, most people can follow you from A to B. Most people can’t follow you from A to B to C to D to E to F. It’s, in part, why the most persuasive arguments are made in a format of “If A, then B.” So it’s really a discipline of breaking something apart into its component parts, and then really examining the linkages in between those parts. And sometimes you discover that it’s not just a matter of going from A to B to C to D, linearly. Sometimes things go from A to B, and reinforce on B, and it’s helped along by C, and ultimately causes D, which feeds back into A. And when you break it out in that way, you’re in a way better position to understand how a system is working. And if you can understand how a system is working, you can make predictions about it. In general, those that make better predictions about the future have a sustainable, competitive advantage. They tend to be far more effective in what they’re trying to accomplish within a system.
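As an aside from the editing desk: the feedback structure Christopher describes here (A causing B, B helped along by C, ultimately causing D, which feeds back into A) can be sketched as a small directed graph. The following is a minimal, purely illustrative Python sketch; the node names and the loop-detection helper are ours, not anything from the episode.

```python
# Illustrative only: the A/B/C/D system Christopher describes,
# drawn as a directed graph (each node maps to the nodes it influences).
system = {
    "A": ["B"],
    "B": ["C"],
    "C": ["B", "D"],  # C helps B along: a reinforcing link
    "D": ["A"],       # D feeds back into A, closing the loop
}

def has_feedback_loop(graph):
    """Return True if any node can eventually influence itself."""
    def reaches_itself(start):
        seen, stack = set(), [start]
        while stack:
            for nxt in graph.get(stack.pop(), []):
                if nxt == start:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False
    return any(reaches_itself(node) for node in graph)

print(has_feedback_loop(system))  # True: A -> B -> C -> D -> A
```

Once a system is drawn this way, spotting the reinforcing loops, rather than assuming a straight A-to-B-to-C chain, is exactly the examination of linkages described above.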
05:57 MH: So has this historically been accomplished by people using things like intuition and experience?
06:03 CB: I would argue that the status quo of the way that most people come into an institution, it’s that they are taught by their Yoda that this is your checklist. This is the way that you build a brief. This is the way that you build a website. This is the way that you build a piece of direct mail. This is the way that you build a brand. And then they faithfully execute that list. A lot of people can actually execute those orders without understanding the underlying ‘why,’ and that simplifying heuristic is really important. It enables so many people to be effective. It enables a lot of things to go forward. Understanding how a system breaks down requires quite a bit more proactivity than what you’d commonly experience in baseline management or even baseline grinding out the work.
07:01 TW: So is trying to make it all tangible from a… Is looking at A/B testing almost… Let’s put multivariate aside for a minute. If I’m doing an A/B test where I’m like, “You know what? I wanna change x. I wanna look at my test and my control and look at the outcome.” That seems tactical, formulaic, not systems thinking. Whereas as I’ve been trained to get a little smarter about statistics and, I guess, interaction effects, and about machine learning, you sort of start and say, “This is kind of our model, and then we’ve gotta have a feedback loop and we’re trying to learn.” Are those sort of the two ends of it, or no?
07:45 CB: One modification to it, right? A system all by itself might not have a point. It just might exist. If you’re executing an A/B test…
07:55 TW: Kinda like a podcast. [chuckle]
07:57 CB: Almost like it.
07:58 MH: Boom!
08:00 CB: It exists to perpetuate the myth that it exists. If you’re…
08:05 TW: Oh, wait, are we talking about the podcast or are we talking about… Sorry.
08:08 MH: Yeah, exactly.
08:10 CB: This is not really happening right now.
08:12 MH: Tim and I ventured into this without any knowledge of this system.
08:16 CB: Oh yeah, there it is. You said, “Hey, let’s have a podcast.” You gradually learned how to build a checklist to make it work, right?
08:24 TW: So what’s an example of a system that exists just because it is…
08:30 TW: That’s not…
08:31 CB: You’re gonna get me into trouble by way of listing any number of institutions or even…
08:37 MH: Oh, I like where this is goin’.
08:39 CB: Or even companies that… Like how many companies that you know of really shouldn’t exist?
08:45 MH: Well, most of our listeners are international, so feel free to name any US companies.
08:53 CB: What would you say you do here? There are, I think, one of the great things and they… Sorry, I’ll go back up onto the tree of reasoning here. But so often a rule-set or a culture or a set of decisions that were made in the 1930s, or ’20s, or in the 1840s will persist in an institution for decades and decades and decades. They actually outlive the people that originally made them up. It’s incredible how insidious lock-in is. And this just happens repeatedly in our institutions, and in our companies: the way that these checklists get put together, the way that these management processes or these cultures come into being, purely because that’s the way that it was before.
09:44 CB: Now, not every single system has a point. Some systems do have a point. The point of a start-up is to become a business. It’s really a massive hypothesis that’s really desperately looking for validation. The point of an A/B test, though… In general, for an A/B test to be successful, there’s a single optimization objective, a single y variable. In the equation, y = mx + b, or y = m₁x₁ + m₂x₂ + b and so on, there’s only one single y, right? There’s an actual point, there’s an actual objective that the test is looking to maximize, and I’d argue that that is closer to being a causal model.
10:33 CB: If you begin with a point, a y variable, it does require you to make a judgement about what is actually important. It requires you to make a judgement against in excess of a trillion potential KPIs or metrics or outcomes that you could select. You need to select one and say, “This is the one that’s most important. This is the one that we’re gonna test about,” and to be able to form a coherent judgement about why you’re picking that one thing. So that, I think, as elegantly and briefly as I can put it, is the difference between what I’d say is a causal model and what you’re trying to accomplish with an A/B test versus pure, orthodox, academic systems thinking.
11:21 TW: So, it’s funny ’cause I feel like… Not to belabor the A/B test. I feel like the A/B testing platforms, and I’m specifically thinking of, I think, Monetate and Optimizely, where they have gone down the path of saying, “Hey, by default, we’re gonna give you these…” You can just check that you can look for significance of engagement or conversion rate or these two or three things, which to me is inherently not picking a single y. And whenever I see a, “Oh, does changing this drive more engagement?”, that has this presumption, I think, that, oh, we understand the system, that more engagement will ultimately be a good thing, although I feel like often nobody has really thought it through. Does it, or have we validated whether it does? So I guess maybe the question is, if you’re doing systems thinking and you’ve outlined… Maybe this goes back to where Michael was. Do you start with an intuition? We think we’ve mapped out what this is, and now we need to go and start with the things that are the most critical assumptions. It’s like assumption governance. Who was it? It was Brent Dykes who wrote a thing about that. Start saying, “Let’s test if the system works as we expect.” Is that the first priority?
12:44 CB: You don’t start off without any sort of idea about why the world works the way that it works. Typically, you start off with a theory and certain… Most organizations, including the start-up. Even a start-up has a CEO that has a fundamental theory about why valuation happens to them. So in the scientific method, you start off with a theory, the theory has a whole bunch of axioms that may or may not go challenged and you have a set of hypotheses. And typically, in orthodox lean methods, you’re gonna wanna go after the riskiest assumptions. Those are the assumptions that you wanna tackle first, because those are the ones that can ultimately sink you. If you’re at a not for profit or if you’re at a very major corporation that is, its optimization objective globally is to be immortal, that set of assumptions is going to be completely different.
13:48 CB: So in science, I think that we have a governance model for being able to enumerate the assumptions and to have probably some fairly vicious discussions with data civilians. But if you’re just trying to execute a whole bunch of A/B tests for the sake of A/B tests, that’s certainly not a very strategic objective. It has to be for some sort of a point. And more broadly, I think that maybe one of the reasons why there is a large enumeration of the top seven things that you’ll want to optimize against in the industry, it may very well be that people want to have all of those things optimized simultaneously. I think that there’s a very good reason why in data science and in science, we focus on a single y variable. All of our equations are y = mx + b for a reason. So, independent of this, I view it as a needless complication to try to stuff in a large number of y’s because the business or the environment is unable to make a judgement call about what is the optimization objective of a given test. Those might be fighting words though.
15:09 MH: That would mean we understood them in the first place, so…
15:18 TW: I guess we’re… Now I am struggling. I am 100%… It makes me cock my head and say, “What are we doing?”, when you say we’re gonna lay out four y’s, basically four y variables, and we’re basically gonna wait until one of them spikes and we can claim significance, and there’s all sorts… A whole other issue of test design and just waiting for significance and all the other issues. And I think the platforms are getting better about that, but if you pick one outcome… And I guess, as you’ve described… I don’t think we actually acknowledged, when you said you’ve been thinking about this for a while, that you wrote a blog post in 2011. I encourage our listeners to go Google “Christopher Berry systems thinking,” pull it up, and just blow up his analytics with, “Oh my god, this is getting traction.” So you really have been thinking about it for a while. But is that kind of the way you outline it: you do need to have your y variable, and you write that on the right? Is that a… Are there degrees of that? There is, why do I as the direct marketing department exist? What’s my y? What’s on the right side of that? And you can then go up to why does the company exist, or why does marketing exist. Does it work that way, that you can have systems of varying sizes that scale up?
16:44 CB: 100% yeah, yeah.
16:46 TW: Okay.
16:47 CB: You can keep on going to the right as much as you want. And the ultimate…
16:52 TW: As it happens the US election has kind of done that…
16:58 MH: Easy [17:00] ____.
17:00 CB: I’m not suggesting that anybody goes alt right that far.
17:05 MH: Oh man! Is that that thing in Excel where you accidentally hold the key down and it hits your arrow and it jumps all the way down?
17:14 TW: All they way to the right.
17:15 MH: All the way to the right?
17:16 TW: That’s what happens.
17:17 CB: If you keep on going, you might…
17:18 TW: Got stuck with the shift key.
17:19 CB: If you keep on going, you end up being on the left again. I think that you could… In terms of degrees, it depends on which scale you’re talking about. So if the optimization objective is to increase sign-ups, then that’s a perfectly valid test. Then there’s loads and loads and loads of variables that can go in. There are loads of reasons why conversions can go up or email sign-ups can go up, whatever you define as being the conversion event, or whatever you define as being the y variable. There’s plenty of value in doing that. And again, there is a very large, rich body of commercial literature about why conversion happens and a large body of academic literature about why it happens.
18:07 CB: Now, if you wanna go further to the right and focus on something bigger, like how do I maximize shareholder value? How do I maximize my valuation at the end? How do I maximize profit? Those become quite a bit harder to test with standard software. Typically you kind of have to calibrate your focus further off to the left. So it does work. Sorry, this was a very long way of saying, yes, you’re right, it does work on different scales. But the key, I think, is to do one thing at a time and to do it really, really well in that context. It is already brutally hard to explain the 40 things that could go into causing an event to occur, or more of an event that you want to occur. Let alone trying to disentangle a large, large number of outcomes that you want and a very large number of levers that you wanna pull, and put them into a test.
19:11 TW: Essentially, ’cause I guess I was thinking that systems thinking was introducing, kind of, casting a broader view, maybe asking some larger questions and accepting some complexity. But it sounds like the complexity is all on the independent variable side of the equation, and that part of what a system is, is recognizing that at the end of the day you wanna be spitting out one thing. One dependent variable, independent variables, plus the interaction effects, which is sounding very, very mechanical. But my thinking was that it’s not necessarily to think about it that mechanically, but to actually think through… Even things as simple as, what’s our… The classic, what does TV advertising do to our organic search? The classic silo thinking, and that’s the almost cliché example pointed to, but does that have… That happens in a hundred different ways, and because people have been given, “Here’s your script. Live in this little silo,” there’s no organizational encouragement to say, “Step back. Think about the other factors. Hell, think about what your competitors are doing.” That drives me nuts, that when… What are the environmental factors beyond your control? What are the competitors doing? What’s the economy doing? Those are parts of the system. Now, unfortunately, you can’t control them. But if you just ignore them, then you’re also not really getting an understanding of the system in which you’re working. Am I off the…
20:47 CB: No, you’re bang on. I think that systems thinking enables a web analyst to see beyond just the most direct factors. Right? And I think that if we go through life as a group of people, if we go through life believing that everybody responds rationally and empirically and objectively to the information that they’re receiving, we’re going to have a very miserable life. We really are. Preferences and expectations, the way competitive forces work, the way people feel in aggregate, and the way your customers feel and perceive are all important factors within the system. Now, some of those do not have obvious levers to them, but I think that real leaders try to understand the overall system. I think that they really try to find the reinforcing variables, because those tend to be the ways that you can get the biggest bang for your buck. They enable you to scale or to level up that much faster. So what you get with systems thinking is a way bigger box. What you get with a causal model is the ability to validate some of your riskiest assumptions and to be able to optimize a heuristic that you’re fairly convinced already works, and even go so far as to generate the evidence and the confidence that something is truly working.
22:28 MH: Is it too dumb to say a systems thinker is more likely to innovate as a result of that?
22:37 CB: It’s… Oh, God, you’re gonna hate this because I’m gonna say it depends.
22:40 MH: No, that’s okay.
22:42 CB: I think that a tactical causal modeler has tremendous opportunity to be able to discover factors that were unknown. And if we define innovation as making something better, I think that if they can make the institution better or their machine intelligence better by way of discovery of a brand new factor that was previously unknown, then that’s a genuine novel insight and that’s fantastic. And I’m so excited for the current generation that they have so much opportunity to discover those things that haven’t been known. There’s so much commercial science that’s ready to be discovered. A systems thinker also has an opportunity to consider other factors that might not be evident, and in so doing they might also be able to generate quite a bit of incremental value for the organization or for customers or for users or for societies. So it… So sorry to do it to you, but it depends on the stance.
23:43 MH: No. I like that answer. And it’s interesting. So as you were talking before, I was kind of thinking about some of the work that I’m doing right now just in terms of personality and emotion and how that interplays…
23:57 TW: I’ll be back in a bit. How long are you gonna go on this? I’ll be back in about 10-15 minutes.
24:02 MH: The other thing Tim and I have been looking through is conflict. Yeah, conflict.
24:11 MH: Why are you so afraid to talk about this?
24:13 TW: Carry on.
24:16 MH: I love you too.
24:22 TW: [24:22] ____ response next week.
24:23 MH: But in essence, it’s my attempt to add this into my system of how do I make a business more effective at using data by understanding the impact of people on decision making and use of data and those kinds of things. And I don’t know, because I don’t know that I can claim to be a systems thinker. It sounds really cool and I’d like to be, but I think most of my… I was that guy who stumbled across something as I was going from A to B and that’s where I’ve made a great deal of good times. But in a certain sense, as I get older, I just… You wanna do more with all the pieces and you wanna keep learning about all the different pieces that are affecting you.
25:11 CB: No, I think it’s totally valid. And I think that most people come into the industry with their checklist. With their Yoda that teaches them how to do their checklist. And in recent years, our universities and certification courses have gotten a lot better. A lot more people have a common basis of what forms a checklist. And I think what’s very exciting, and probably why we’re very scary… If you’re from digital, you’re already scary to non-digital people. If you’re from digital data, you’re absolutely terrifying to those people. It’s that we understand how to make things better, just as a core part of what we do. And because we’re able to see additional systems, we deal in interactive. Many of us deal in interactive and with several pieces. It makes us that much better positioned to be able to handle those leadership opportunities as they begin to emerge here in the next 10-20 years. We’re probably going to be in a way better spot to be able to lead the kind of change that’s gonna be coming at us.
26:23 TW: That’s encouraging.
26:25 MH: Yeah, but now you got me nervous about this change that’s coming at us.
28:01 TW: So maybe at the risk of taking this off course, I feel like there are times when… I feel like I try to see the bigger picture. And part of me thinks that systems thinking is taking that step back and saying, what are the pieces and parts that fit together? Even as we’re talking about digital, and digital coming along and being disruptive, in some ways it is digital being applied to a business where the business model is either not conducive to digital or the business model maybe is fundamentally struggling. Take social media coming in, and everybody is chasing social, everybody’s chasing social, but your business is selling adult diapers.
28:48 TW: A little tough to make… To say, where does social fit on that? And I wonder how often there are businesses that, while digital was coming in, and there are certainly technology innovations operationally that they could do all sorts of things with. But when it comes to digital, from a marketing and a brand awareness perspective, if you’re truly stepping back and saying, this brand’s been around for 100 years, and people have perceptions about this household brand name, then clever commercials and a witty Twitter feed and an interactive website and a great mobile experience are largely gonna be a drop in the bucket. But that’s what I’m assigned to do. I’m working on the website. Is there a part for that to say, “When I step back and look at the whole system, there is no amount of impact I can have in this universe, because I’m dealing with 100 years of brand history, I’m dealing with who owns the company, the way that it’s run.” Part of me thinks that is where you say, “Wow, I should not be doing digital at this organization. I’m not gonna have an impact.” Is that a legitimate outcome of that?
29:57 CB: I mean, it could. If a brand’s been around for 100 years, it means that it has successfully replaced its customer base a few times over. And when it comes to digital and social, we know that it’s where the millennials are at. So if the brand is going to curl up and die, and the board has decided that immortality for the company just isn’t for them, that they are going to ride this carcass all the way down to the bottom of the hill, then that’s a strategy. And in that context, maybe the website should just be brochureware or a bunch of PDFs that you can download, and you probably are not adding value in that environment, and it’s probably valid to go. But it’s so rare that I’ll hear from any leader that immortality for their business is not something that they want, and that they don’t really want to reach out to the youngins with their Twitter and their InstaSnaps and their Youfaces. And it might be a very interesting brand key challenge in order to discover brand new customers to replace the ones that are going to the grave.
31:16 CB: So it can totally depend, and it can totally depend on your stance as well. If you do fight the good fight that digital is a valid channel and that it deserves further investment, and you’re in there in the trenches to get that incremental funding, that is absolutely fantastic and you should do it. If you get shut down, and you reckon that the odds of success are below a threshold that you’re happy with, you know what? There is no shortage of opportunity for talent and for people that have initiative, so you should take the initiative and get the hell out.
31:49 TW: Okay that’s… Yeah, I’m largely… And this is maybe because I’ve spent enough time in the CPG or FMCG world, where it feels like there are definitely brands that you look at and think, if this brand really said, we’re trying to think 20 years down the road or 10 years down the road, we’re thinking about the next generation. A little light bulb went off when you said, “Who’s your next customer?” Who’s… You know. And even knowing that, I think I still have a bias towards Colgate toothpaste ’cause that’s what I had growing up. Although I don’t do the shopping, so I don’t think we actually have Colgate. But, so, overcoming that… But if I’m selling toothpaste or toilet paper, or name a commodity, name a packaged good, that gets to where an organization that’s got the system-level thinkers has the ability to say, yeah, we can’t just kind of say, “Oh, we’re shifting this radio spend into social media or this print spend into digital.” But what are we really doing? Who’s our customer? What’s really driving these decisions if some of it’s legacy brand knowledge?
33:00 TW: Maybe, to Michael’s point earlier, it’s asking the question of whether this would drive more innovation. I feel like there are a ton of marketers out there who are kind of living in the “this is our channel; oh, this is the new advertising channel” mindset. If, as we’re talking, there is a stepping back: what are we really doing here, guys? Are we trying to have immortality? Well, what does that mean for the millennials, for today’s kids, without getting creepy in marketing to kids? How are those generations, those demographics that we wanna market to next, evolving, and what should we fundamentally be doing? And it may not be, oh, make this one-page web experience mobile responsive. You’d actually have a little bit of a bigger idea that may take a longer investment to actually get to, because you’re thinking about it more broadly.
33:51 CB: Yeah, or even in terms of thinking beyond the campaign and thinking more systematically about how the brand key is used to create a large number of one-to-one relationships with customers, where customers actually de-anonymize themselves willingly, and how do you manage that relationship over a very long period of time? I think that there is a lot of opportunity that’s yet to be fully explored when you take a product approach to the brand key and try to do it. Now, that said, if leadership at a given brand isn’t there, I think it can be extraordinarily hard to get somebody out of the rut. There’s inertia, they’re in the rut, they’re a part of a culture. It can be very, very, very tough to switch them out of that culture. Very tough.
34:42 MH: So what are some… Like do you have top of mind some examples of companies that have used systems thinking both to see a disruption and take advantage of it or see a coming disruption to them and go around it? I’m just curious if you have any examples that come to mind.
35:05 CB: Yeah, I gotta say Tesla. Their approach to narrow artificial intelligence, or narrow machine intelligence, with respect to self-driving cars. They were able to close an incredible gap versus Google just by way of having the drivers drive Tesla cars and have the computers inside those cars report back to Tesla what was going on. And they’ve been able to actually generate some alright narrow artificial intelligence that’s able to drive a car. And they were able to bridge this gap against Google, where Google has had these self-driving cars for a very long time, because Google was teaching a robot how to drive as opposed to using a large body of humans to actually do it. Complete shift of how we traditionally look at how we’re gonna train.
35:57 CB: And in the end, another aspect to the Tesla example is, a lot of data scientists will stand back and say, “Yeah, but then you’re just teaching a machine to drive as badly as a human. Why the hell would we ever want to replicate these absolutely terrible, asinine drivers on the road? We really ought to be looking for perfection.” Take a step back: in many cases, in many markets, being as bad as a human is sometimes enough, especially when it comes to disruption. You just have to be as bad as a person, and then you can probably replace a median person. So I think that’s one aspect: where you look at an overall system and think of ways to really get around the constraint, and in Tesla’s case it was how to get around the constraint of a 10-year R&D gap.
36:53 MH: So what you’re saying is for the computers to take over, they don’t even have to be particularly smart, they just have to be slightly above average.
37:00 CB: They can be. In many sectors, they can be as bad as a human. There was a story about 37 claims adjusters in Japan being axed. White-collar jobs in Japan, axed. When you train a weak machine intelligence to do a job as well as the worst one of those 37, every single transaction that machine does from there on out is almost free. Compared to paying a human being who goes home at night and requires benefits and a little thing like a paycheck, it is far more cost effective to just do it that way. And I think this is the real disruption: routinized tasks can be really effectively automated just by way of having some pretty lousy human trainers, quite frankly.
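The economics Christopher is describing come down to marginal cost per transaction. A minimal sketch of that arithmetic, with entirely made-up figures (the salary, compute cost, and claim volume below are illustrative assumptions, not numbers from the episode):

```python
# Illustrative only: comparing the average per-transaction cost of a human
# claims adjuster against an already-trained "weak" model whose only
# ongoing cost is compute. All numbers are hypothetical.

def cost_per_transaction(annual_cost, transactions_per_year):
    """Average cost of handling a single transaction over a year."""
    return annual_cost / transactions_per_year

# Hypothetical: one adjuster (salary + benefits) handling 5,000 claims/year,
# versus a model serving the same volume on modest compute spend.
human = cost_per_transaction(annual_cost=80_000, transactions_per_year=5_000)
machine = cost_per_transaction(annual_cost=2_000, transactions_per_year=5_000)

print(f"human:   ${human:.2f} per claim")    # $16.00
print(f"machine: ${machine:.2f} per claim")  # $0.40
```

The point isn't the specific numbers; it's that the human cost is roughly fixed per claim while the machine's denominator can grow almost without bound, driving its per-transaction cost toward zero.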
37:57 MH: Wow, that’s a good tip for McDonald’s there, I guess.
38:03 MH: So I think one of the last things would be good to ask, what are some ways that people can take introductory steps in just starting to build sort of a systems mentality or systematic way of thinking? I’m asking for a friend.
38:19 CB: Oh sure, sure.
38:24 CB: I think the best advice is to start off with a problem set that is very interesting to you and a system that you understand intimately. And you're really not effectively thinking unless you're drawing it out, either on a piece of paper or on a whiteboard. If it's relevant to you and you care, you'll be able to draw it out. Begin drawing arrows, stand back, and then ask yourself: what's missing? Then on the next iteration add some of those boxes and see how it complicates your system overall; then stand back, take a look, see what's missing, add another component, and keep on adding components until it hurts.
39:07 CB: And then at that point ask yourself, in your judgement, what's the most important thing. Then you grab that most important thing, you pull it as far to the right as you can, redraw the arrows, and you'll have just transformed a system into a partial causal model out of which hopefully a bunch of very, very risky hypotheses fall. Then you can generate a listicle for yourself: "You won't believe the top seven riskiest assumptions I never thought about questioning myself; number four shocked me." And that becomes something that's actionable on your burn list.
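The whiteboard exercise Christopher describes amounts to building a directed graph: components on the left, arrows pointing toward the outcome you pulled to the right. A minimal sketch of that idea in Python, where every component and edge name below is a hypothetical example (not from the episode), and each arrow you draw is itself one of the assumptions to question:

```python
# A rough sketch of the exercise: model the system as a directed graph
# (edges point from driver toward outcome), then ask which components
# actually have a path to the outcome you pulled "to the right".
# Component names are hypothetical examples.
system = {
    "ad_spend":  ["traffic"],
    "traffic":   ["signups"],
    "content":   ["traffic", "retention"],
    "signups":   ["revenue"],
    "retention": ["revenue"],
    "pricing":   ["revenue"],
}

def drivers_of(outcome, edges):
    """Walk the graph backwards: every node with some path to `outcome`."""
    found = set()
    changed = True
    while changed:
        changed = False
        for node, targets in edges.items():
            # Add a node if it points at the outcome, or at a known driver.
            if node not in found and (outcome in targets or found & set(targets)):
                found.add(node)
                changed = True
    return found

print(sorted(drivers_of("revenue", system)))
# ['ad_spend', 'content', 'pricing', 'retention', 'signups', 'traffic']
```

Each edge in `system` encodes an assumption about causation; the edges you would bet on least confidently are the "riskiest hypotheses" that fall out of the partial causal model.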
39:46 MH: Now that's awesome. That's very practical. So great. Well, this is a conversation that I feel like needs to go longer just so Tim and I can really get it. That being said, we don't have more time, but Christopher, this is awesome. And I know our listeners are very intelligent, so they're gonna understand and be able to really [40:08] ____ sort this, so that's good.
40:10 CB: They are. They’re brilliant listeners.
40:12 MH: That is our hypothesis in our system.
40:17 CB: Not very risky though.
40:18 MH: Not, no.
40:19 CB: That’s not a risky bet.
40:21 MH: No, I guess not. So one of the things we do in this system is a last call, where we go around the horn and talk about anything interesting that's going on that we've seen or heard. So I don't know if you wanna start, Christopher, but I will hand the floor back to you.
40:36 CB: Yeah, absolutely. If you are in Toronto, once a month we have Machine Intelligence Toronto. We always have very, very interesting speakers out. It typically happens towards the early part of the month. So please come on out. We're at the MaRS Discovery District, right in downtown Toronto. You are more than welcome to come out and meet our community.
41:00 TW: How do they find information as to…
41:03 CB: It would be at meetup.com; search for Machine Intelligence Toronto. Or even off of Google: typically, if you Google "Machine Intelligence Toronto," it will bring you straight to the meetup group.
41:14 MH: Very nice.
41:16 TW: Smart people in Canada.
41:17 MH: Yep.
41:18 TW: You wanna go next Michael?
41:20 MH: Sure, I'd love to, only because it will make you be like, "What?!" So mine is, I ran across somebody who, of course, Tim already knew about but…
41:31 TW: Oh. Yeah, that’s a good one.
41:32 MH: And it's not the person themselves but their GitHub. So there's a professor of statistics at UBC (also, looks like they're all Canadian), Jennifer Bryan, who has an amazing R repository of tools and things like that. Anyways, I was just reading through a bunch of the stuff last night and was very impressed, and I was like, "I don't know why I've never run across this person before, but I think it's worth a lot more digging in 2017." So I was excited to learn about it. So: Jennifer Bryan, GitHub. That's my last call.
42:11 TW: I'll say she also has a whole site she's built about using R with GitHub, which we'll put in the show notes. I tracked it down. But Jenny Bryan shows up in many searches. She wrote the preeminent package for getting data out of Google Sheets. She just seems like somebody I'd like to have a beer with and sit back and feel like an idiot. But enjoy feeling like an idiot.
42:34 MH: Christopher’s probably chuckling because he’s probably met her and knows who she is. No?
42:38 CB: No, I have not. No, I have not met her. No.
42:40 MH: Oh, okay. All right, well good.
42:42 TW: That’s the other side of Canada.
42:44 MH: Yeah, way on the other side of Canada.
42:48 MH: All right, Tim. What’s your last call?
42:51 TW: So this is a little weird, because this is far from a hearty endorsement. I was ready for it to be a hearty endorsement, and instead it's going to be a lukewarm one… And I would love to hear if there are listeners who think this is the greatest book of 2016, but… You guys may have heard of it: "Weapons of Math Destruction," by Cathy O'Neil. The idea of the book being where Big Data, and where predictive modeling, and where the machines can go horribly awry. I like the idea of there being a little bit of a corrective. The actual execution is pretty disappointing. I have yet to stumble across an example where I think, "Oh, that's awesome." Like, "Oh, you're going to tell me about how US News & World Report's college rankings wrecked the US university system?" I think I've heard that anecdote a hundred times. So I am still struggling my way through it.
43:51 TW: It's one of those where I'm finding myself saying, "I'm gonna read this fuckin' chapter, because it's gonna be good for me and I'm gonna discover something great." I've started getting annoyed by the fact that she refers to 'em as WMDs, which, obviously, is a play on weapons of mass destruction, but I realized how horribly lame that is. I'm like, "You swapped out one word, and now you're referring to them as WMDs." So it is kind of a non-endorsement when I look at "The Signal and the Noise," or I look at "Moneyball," or I look at some of those other geek-oriented books, where I'm getting anecdotes of things that I've never heard of, where it's making points and I don't understand parts of it. For me, right now, this book is not it. I'm about a third of the way through it. I would love to have somebody just smack me down and tell me why that book is awesome. I'm sure there are some listeners who have actually been reading it.
44:42 MH: We might need to make a book review a normal part of the podcast. That was awesome.
44:53 MH: That's so good. "I'm reading this book. It's not that good."
45:00 TW: I actually requested the book for Christmas. She is on the Slate Money podcast, which I enjoy as a podcast, but she's kind of not the person I like on the podcast. It's similar to this: it's three people discussing stuff. I was like, well, maybe… She's got a data science background, she's a Python lover, she was in academia, she worked at D. E. Shaw, so she was in the hedge fund world for a while. So she should be bringing… She's an occupier, basically. On paper, there are many things that should make me think this person is awesome and has great things to say, and so far I have not found it.
45:40 MH: Well that is one of the more interesting last calls that we’ve ever had.
45:48 MH: No, we should wrap up before [45:50] ____…
45:50 TW: Before you ask me any more questions where I get myself in worse trouble.
45:54 MH: No, it's detracting at this point from what has been, I think, a really profound topic that I really liked. Christopher, thank you so much for coming on the show. Obviously, as you've been listening, I guarantee you've got questions or thoughts about this, and we would love to hear from you on our Facebook page and also on our website, analyticshour.io. Christopher is active both on Twitter and on the Measure Slack, and if you can stand up to the harsh glare of his very, very sharp intelligence, you can converse directly with him.
46:35 MH: I don’t know, Christopher. Every time I hang out with you, I always leave thinking, “Man, that guy’s so smart.”
46:45 MH: And then I’m just like, “Why am I so dumb?” It’s okay, I’m gonna be all right. I keep coming back, so you’re not doing anything wrong. But I wonder if other people feel that way too, or if I’m the only one.
47:03 TW: Oh, no. It’s me, too, I’m right there with you. We’ve had that discussion, I think, behind Christopher’s back.
47:08 MH: What a privilege to have somebody like you, Christopher, on the show. Thank you so much for coming on.
47:13 CB: Thank you very much for having me. It’s always a lot of fun with both of you.
47:18 MH: Well, we do our best.
47:23 MH: Anyways, so for my co-host, Tim Wilson, and myself, remember everybody, keep analyzing.
47:33 S?: Thanks for listening, and don't forget to join the conversation on Facebook, Twitter, or the Measure Slack group. We welcome your comments and questions. Visit us on the web at analyticshour.io, facebook.com/analyticshour, or @analyticshour on Twitter.
47:53 Speaker 4: So smart guys want to fit in, so they made up a term called “analytics.” Analytics don’t work.
48:16 TW: I was thinking, with you moving around at the CBC, you were gonna take a meteorology position there.
48:24 CB: No, this is just standard Canadian banter about weather.
48:29 CB: Why? Why wouldn’t you… If you like a cartoon, if you like cartoons, you must like models. Everybody’s convinced that it’s always a problem for future selves, right? You know, we’ll just let future CEO worry about that.
48:45 MH: Oh man, you just basically worked out my whole hiring strategy for 2017.
48:55 MH: I’d like to think so, it’s discovering what even social media pressure, but who am I kidding?
49:00 CB: Who are you to resist the mob? I mean, pitchforks. Pitchforks.
49:06 TW: We’re ready, Michael, if you’ve got something.
49:10 MH: [49:11] ____… You’re doing a really great job Christopher and you’re the only one.
49:24 TW: I mean, I would love for somebody who actually has enjoyed our podcast, who I just royally pissed them off because they think either the book or Cathy O’Neil is amazing, to like, to tell me that they now hate me.
49:38 MH: Where could they tell us that, Tim? No, I’m just kidding. [laughter]
49:42 TW: Ooh, am I gonna have to record a new last call?
49:52 CB: It was fantastic.
49:54 MH: It was amazing. It was so good. I feel like, in a certain sense, Tim, it's the kind of thing you might wanna edit a bit around the edges but actually keep the meat of the content there. It is risky; it goes out on a ledge, but I will go there with you and I will back you 100%. You have my commitment right now.
50:15 MH: The only part of the show I listen to regularly are the out-takes at the end.
50:22 TW: Rock flag and systems thinking…