My guest today works at the intersection of artificial intelligence, neurochemistry, and human behavior. Ramsay Brown is Co-Founder and COO of Boundless Mind (formerly Dopamine Labs), where he and his team work on persuasive AI and behavioral design. A neurotechnologist trained at USC’s Brain Architecture Center, Ramsay worked on brain mapping and pioneered an app that was something like Google Maps for the brain.
Learn how persuasive technology is shaping human behavior with neurotechnologist @RAB1138. #neurotechnology #neuroscience
Ramsay is an emerging leader in Persuasive Technology & Neuroinformatics and has been featured on 60 Minutes, The Today Show, TechCrunch, Big Think, The Guardian, and GQ. In this episode, he shares how persuasive technology is shaping human behavior, what his company is doing to ensure they’re promoting change for the better, and how society as a whole can become more neuro-literate.
If you enjoy the show, please drop by iTunes and leave a review while you are feeling the love! Reviews help others discover this podcast and I greatly appreciate them!
On Today’s Episode We’ll Learn:
- Two great things that building persuasive technology (and being open about it) does for society.
- A common pitfall of blind optimization and machine learning techniques.
- Why it’s important to build a neuro-literate society.
- What initiatives Boundless Mind has taken to combat technology addiction.
- How they determine good engagement vs. too much engagement.
- What dark patterns are and the problem with these techniques.
- The unethical practice that made Ramsay want to walk away from technology as a whole.
- Examples of positive habit and behavior changes that can come from neurotechnology.
Share the Love:
If you like The Brainfluence Podcast…
- Never miss an episode by subscribing via iTunes, Stitcher, or RSS
- Help improve the show by leaving a Rating & Review in iTunes (Here’s How)
- Join the discussion for this episode in the comments section below
Full Episode Transcript:
Welcome to The Brainfluence Podcast, with Roger Dooley, author, speaker, and educator on neuromarketing and the psychology of persuasion. Every week, we talk with thought leaders that will help you improve your influence with factual evidence and concrete research. Introducing your host, Roger Dooley.
Roger Dooley: Welcome to The Brainfluence Podcast. I’m Roger Dooley. This week’s guest’s short bio claims he’s a neurotechnologist, futurist philosopher, and escaped circus bear. Ramsay Brown trained at USC’s Brain Architecture Center, worked on brain mapping, and pioneered an app that was something like Google Maps for the brain. Now he’s co-founder and chief operations officer at Boundless Mind, formerly Dopamine Labs, where he and his team work on persuasive AI and behavioral design. Welcome to the show, Ramsay.
Ramsay Brown: Thank you so much for having me, Roger. It’s really a pleasure.
Roger Dooley: Great, so are the circus owners still looking for you?
Ramsay Brown: They have, unsuccessfully. I’ve shaved down about 90 percent of it. Throws them off the trail, because they’re still looking for somebody who’s pretty furry.
Roger Dooley: Sounds really wise. Ramsay, you’re working at the intersection of artificial intelligence, neurochemistry, and human behavior. Can you explain what kind of projects you’re working on? And tell us if we should be really scared.
Ramsay Brown: I don’t think anyone should be scared. In fact, I think it’s a great cause for celebration that we now get to live in a world full of all this technology of behavior. The big project that my team and I have been working on for the past few years, and have released as our core service, is an artificial intelligence system that any app and any technology can plug into to find those perfect moments of surprise and delight. Those little places where an app could shoot confetti, or give someone 10 extra bonus points, or a one-time discount. Something small. Something that just puts a smile on someone’s face. You don’t have to bribe them with a Porsche. You don’t have to give them a promotion. Just these little hits of dopamine. Our AI figures out, for each person uniquely, in any app, what rhythm and pattern to deliver them on over time to keep that person coming back longer and using the app more.
Ramsay Brown: The reason I say no one should be freaking out is that building this technology of behavior, and being very open about it, does two really great things for society. It de-weaponizes these tools, because now anyone can use them, which means they’re more democratic and more inclusive in terms of who gets to build on top of them, instead of being owned only by Facebook, or Instagram, or these giant tech companies. And it means we get to use them on real human problems. We get to use them to fight type 2 diabetes, or anxiety and depression, or the opioid epidemic. We really get to take a pro-social stance about what this technology should be used for. To my team and me, that is unambiguously good.
Roger Dooley: I know my friends Nir Eyal and Robert Cialdini always use the words “ethical” and “ethics” about 20 times in any of their speeches.
Ramsay Brown: You have to. Yeah.
Roger Dooley: For better or for worse, behavioral science is sort of pointing the way to hacking the human brain and how people behave, and it’s difficult to keep that from being applied in ways that benefit only, say, the business using it. We live in an age now where people actually fear phone addiction, device addiction. They talk about spending too much time on Facebook or Instagram, or many of the other somewhat addictive apps out there. How do you balance that, given that this is a real fear that people have these days?
Ramsay Brown: It’s a fantastic question, and we’ve taken two big initiatives to help people become more comfortable with that. The first is something that I think you and your audience are gonna be really familiar with. About 20 or 30 years ago, there was a change in how we taught critical thinking in schools that started including media literacy. People became aware that other people were trying to sell them things, in advertisements, in product placements. The wool was pulled from people’s eyes, so to speak; they became really aware that other people were trying to persuade them to do something.
Roger Dooley: It took that long to figure it out?
Ramsay Brown: Well, no, but we started teaching it. We started teaching it in schools.
Roger Dooley: I see. Okay. Mm-hmm (affirmative).
Ramsay Brown: So that even children could identify, “Oh, that’s an advertisement. They just want my money.” What happened? Now we have things like experiential marketing and brand marketing, where companies have to actually better align what they have with what people actually want, as opposed to just beating them over the head with advertisements. That’s made for a better world. In that same way, we think that raising public awareness about behavioral design and persuasive AI is the first step to building what my co-founder, Dr. Combs, calls a neuro-literate society: a society full of people who, in the same way that they can look at an ad and say, “Yeah, but that’s an ad. I don’t need to pay attention,” can understand how technology is being explicitly built to shape them, to turn them into someone new.
Ramsay Brown: When we have that neuro-literate society, it means that people feel empowered to not just identify where tools are trying to persuade them to become someone else, but they actually then can possess their own tools to shape their own minds. From that perspective, we view it as a, “Get the word out there. Build this kind of media literacy equivalent for behavior change, so they can understand that their products are trying to shape them.”
Ramsay Brown: Then on the other side, we are directly building technologies that we give directly to people, to consumers, to help them take back charge from their phone. Our core business is built around behavior change, and that doesn’t always just need to be put in the hands of brands or companies. We’re also putting it in the hands of everyday people to take charge of that relationship, too. There’s no reason that someone just walking down the street should not possess really strong, neurologically capable technologies for defending their own attention span. And by giving really sharp sticks to both sides, to brands and to people, we can help brands and companies and apps achieve their goals, and we can make sure people technologically are equipped to maintain the kinds of relationships with their technology that they want.
Roger Dooley: Mm-hmm (affirmative). Yeah. Ramsay, how do you define the level of engagement that’s optimal? Years ago, mostly before the mobile revolution, although the site still exists, both as a website and an app, I co-founded a business called College Confidential, a place where students and parents went to learn about the college process. At one point, a writer for the Chicago Tribune described it as a cross between porn, heroin, and crack, which was her way of saying that she was hooked on the site. We didn’t use any explicit dopamine strategies, but people would stick around to read other people’s content. They would post something, and then, much as we see with Facebook today, get responses to their own activity, and so on.
Roger Dooley: It was, for at least some people, quite addictive. But the difficult case, I guess, is somebody who arrived at the site, might have benefited from it, but took one look, said, “Nah, I don’t get it,” and bounced. That was clearly an inadequate level of engagement. Or somebody who tried it once and it didn’t really work, when, again, there could have been a benefit, because the site was really set up to benefit people in that particular phase of life. But at the same time, there were certainly a few people who would spend hours and hours per day on it, and it probably was not the best use of their time. In some cases, these people were actually being extremely helpful to others, so you could argue that it was a good thing, but in other cases, they probably spent that time just arguing with people, or passively reading content posted by others, and doing the things that people do online.
Roger Dooley: In any given situation, how do you determine at what point it’s good to increase engagement, and then, “Okay, well maybe we’ve become a little bit too engaging, because now we’ve taken over this person’s life”?
Ramsay Brown: Absolutely. That’s a fantastic question. For each person, and for each behavior they’re trying to do that your technology can enable, there is a sweet spot for engagement. It changes within a person, between the different behaviors they do. For example, I have an amount that I would like to use Instagram, and then it has its own goals, its own wants and needs, about my engagement. I have an amount that I would like to use Strava, the app that I use to track my runs and my bike rides, and they have the amount that they’d like me to use it. Aligning business goals and business performance metrics with what users actually want out of an app, where it brings them value versus where returns diminish and overuse actually starts hurting them, is, I think, really important for teams to be able to suss out, but it’s often hard.
Ramsay Brown: My team and I are releasing a book shortly, Digital Behavior Design, that is going to walk through, for any team who’s interested in getting started with behavioral design, what tools are available out there and what different frameworks they can use, with special focus on how to build habit-forming technologies. One of the things we discuss in this book is how easy it is to overshoot as a team. When you were working on College Confidential, I’m sure at some point there was, on a whiteboard somewhere, the goal of “maximize user engagement.” That is good for the platform, and for some people that is what they wanted and needed, but for any app that’s had that goal, you’ll notice that at a certain point, there are diminishing returns. The users start quitting. They start burning out, so to speak.
Ramsay Brown: An interesting question lies there, and it’s an interesting science and engineering question. How do you detect burnout? How do you predict when someone’s about to be like, “That’s too much. That’s enough”? If you’re actively using behavioral design techniques, how do you adapt those techniques on a person to person basis to keep them from burning out, but to get them close to that sweet spot that they want? That’s a lot of what my team and I outlined in this book. It’s a challenge, but it’s also a lot of what our software solves on its own, and why people come to us for those services.
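The book’s actual detection methods aren’t described in this conversation, so purely as an illustration of the shape of the problem, here is a toy heuristic in Python for flagging the spike-then-crash usage pattern Ramsay describes. Every name and threshold in it is invented:

```python
# Toy burnout heuristic (an illustration only, not Boundless Mind's or the
# book's actual method): flag a user whose usage spiked far above their
# norm and then collapsed, the "that's too much, that's enough" pattern.
def burnout_risk(daily_minutes: list[float]) -> bool:
    if len(daily_minutes) < 21:
        return False                          # need three weeks of history
    baseline = sum(daily_minutes[:7]) / 7     # week 1: the user's norm
    peak = sum(daily_minutes[7:14]) / 7       # week 2: possible overshoot
    recent = sum(daily_minutes[14:]) / 7      # week 3: possible crash
    # Invented thresholds: a 2x spike followed by a drop below half of normal.
    return peak > 2 * baseline and recent < 0.5 * baseline

# A week of ~20 minutes a day, then a binge week, then near-abandonment:
usage = [20] * 7 + [55, 60, 70, 65, 50, 60, 70] + [8, 5, 3, 2, 1, 0, 0]
print(burnout_risk(usage))  # True: this user is burning out
```

A real system would presumably predict this per user before the crash, rather than detect it after the fact, which is the harder problem Ramsay is pointing at.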
Roger Dooley: Right. You know, I think that that reflects a difference between the situation we have today, where this data exists, and both the data and the techniques exist so that you can detect perhaps when a person is using too much, and then dial back the behavioral nudges, or nudge their behavior in a different direction, as opposed to my situation years ago, which was more or less happenstance. The behavioral design was what it was, and we didn’t really have the ability to either change that per user basis or anything else. That’s interesting stuff.
Ramsay Brown: Thank you. That’s what makes me so excited about it, too: with the advent of cheaper and cheaper AI solutions and really cheap cloud computing, we can build platforms that do that. It doesn’t have to be the case that every user is in a one-size-fits-all experience. If you have systems like ours that know how to adapt and change for each user, to keep them where that user wants to be while staying aligned with your business objectives, that’s a very different world than what it was 10 years ago.
Roger Dooley: Mm-hmm (affirmative). Although it’s hard to visualize many businesses saying, “Well, yeah. You know, I think people are using our stuff a little too much. We need to dial it down.” Well, I guess Facebook has just done that a little bit. Not so much, I think, to reduce use, but when they switched some of the stuff out of the newsfeed to emphasize more friends and family, that produced a decline in usage, but I don’t think that was necessarily their primary objective. They just wanted to get a little bit of … Put a little more pressure on the tendency to read a whole bunch of news, potentially fake news or incendiary news.
Ramsay Brown: I think you’re absolutely right, which is part of their responsibility, then, as one of the largest conduits of information people are consuming in their daily lives. But to your point about, what does it look like for a company to say, “You know what? We can walk away from a little engagement.” When done on a person to person basis, the way to think about it is that even if you are decreasing engagement today, you’re lengthening the lifetime value of any one user, any one customer, because you’re predictively intervening in a moment where they might leave. They might burn out and churn.
Ramsay Brown: You ask any chief revenue officer or chief marketing officer, “What’s gonna hurt more here? These customers that you’ve already paid for, losing 5% engagement per day? Or them quitting six months early because they hated your product, because you hit them too hard with it?”
Roger Dooley: Right. That’s assuming that you can individually alter the parameters, because I think that traditionally, if you dialed back engagement in any way, it was pretty much for the entire user base, not just individuals.
Ramsay Brown: Oh, you’re absolutely right, and that’s where we step in. That’s the problem we solve: everyone’s brain is different, so a system that could respond to that and predictively know how to get in the way or get out of the way would be a godsend to these types of teams. You’re absolutely right that if you have to do it across the whole board, you’re going to end up leaving a lot of use on the table, under-addressing some people and over-addressing others, and those one-size-fits-all approaches just don’t work.
Roger Dooley: Mm-hmm (affirmative). Yeah. You know, I just saw an article showing that it’s not just Facebook that has issues with people engaging with extreme content; YouTube’s algorithm actually steered people toward more extreme content. It was not that the writers of that algorithm or the bosses at YouTube thought it was a good idea to show more extreme content, but it turns out that people engage with content based on what they like now, and if it’s just a little bit more extreme, they’re likely to engage with that, and that ends up setting up a loop. A rather moderate conservative cycles through this a few times and starts seeing suggestions for white supremacy content or something like that, and liberals will see crazy conspiracy theories. Again, it’s not that anybody is trying to make this happen. It’s just our tendency, I guess, to engage with extreme content.
Roger Dooley: Have you seen any evidence of that at all in your work?
Ramsay Brown: We haven’t seen any evidence of it in our work, fortunately, but it’s an effect that I’m intimately familiar with from other work that I’ve done academically in artificial intelligence. This is one of the common pitfalls of what they call “blind optimization,” which is a consequence of the type of machine learning techniques that teams like YouTube’s are using, where these machines are learning from our behavior what we’re going to continue to want, and they make some small extrapolations. They haven’t been instructed to do anything but suggest the next video that we would be statistically most likely to keep watching. The problem is, exactly as you said, that kind of spins out of control.
Ramsay Brown: Because the machine isn’t optimizing to show you anything in particular, just something that’s like what you already enjoyed, it can get you to some weird places. This is, I think, why a lot of people are beginning to come out with ethical frameworks. I saw yesterday a Hippocratic oath for AI practitioners that talks about the importance of not just letting these blind processes run, but setting human imperatives: if someone likes these videos and the next suggestion might be going off the edge, maybe we should have some interventions that don’t just exacerbate existing tendencies or existing biases.
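To make that feedback loop concrete, here is a toy simulation in Python. It is not YouTube’s system; the engagement model, the numbers, and the taste-update rule are all invented to show how optimizing only for the next most engaging item can drift a moderate user toward extremes:

```python
# Toy "blind optimization" recommender. Everything here is invented for
# illustration; no real platform's model is being reproduced.

def predicted_engagement(user_taste: float, item_extremity: float) -> float:
    """Invented model: users engage most with items slightly MORE extreme
    than their current taste ("just a little bit more extreme")."""
    sweet_spot = user_taste + 0.05
    return max(0.0, 1.0 - abs(item_extremity - sweet_spot))

def recommend(user_taste: float, catalog: list[float]) -> float:
    # Blind optimization: pick whatever maximizes predicted engagement,
    # with no constraint on where that leads.
    return max(catalog, key=lambda item: predicted_engagement(user_taste, item))

catalog = [i / 100 for i in range(101)]  # content extremity from 0.00 to 1.00
taste = 0.20                             # a fairly moderate user
for step in range(10):
    item = recommend(taste, catalog)
    taste = 0.5 * taste + 0.5 * item     # watching pulls taste toward the item
    print(f"step {step}: watched extremity {item:.2f}, taste now {taste:.2f}")
# Each pass recommends something a bit past the user's taste, the taste
# follows, and the loop ratchets upward: the drift Roger described.
```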
Roger Dooley: Yeah. There are some other common AI fails. There was just one I read about where, in an effort to identify … I think it was terrorists or criminals or something, they had some AI system examine thousands or millions of photographs, including a lot of surveillance photographs, and it turned out that the AI ended up determining that somebody in a blurry photograph was probably a criminal, because the actual criminals were in the surveillance photos, so it-
Ramsay Brown: Oh, no. Then anyone who’s blurry-
Roger Dooley: Yeah. It was actually more of a facial recognition thing, I think, but it turned out that the biggest determinant was blurriness. It just shows that it’s really important to not let AI run amok, and understanding how it’s making some of these decisions is important, even if these decisions are things that … You know, perhaps, I mean, the promise of AI is that it can do things better than humans can, and find those relationships that humans aren’t going to spot, but at the same time, it’s important to understand those, I guess.
Ramsay Brown: Absolutely, and that’s largely been the goal of artificial intelligence from its outset: to be able to take any sort of cognitive process, any sort of decision-making intelligence task that humans do, and find an equivalent automated solution for it. It’s largely doing very well, and when the computational neuroscientists with whom I studied and I discuss this, we don’t see an end to it, per se. It’s not like we’re gonna run up against a hard human task, because in the end, if the brain is this very, very intricate but still completely mechanical biological computing infrastructure, and with adequate resources we can model what’s going on inside the brain, then it stands to reason that there isn’t anything humans do that we can’t get a machine to do as well, given sufficient sophistication.
Ramsay Brown: That’s a little bit of a philosophical stance today, but it’s also practically what a lot of teams in neuroscience are starting to shoot for right now. The Blue Brain initiative in Europe raised a billion euros a few years back to literally build a whole-brain simulation in a box that would be able to talk, and weep, and pray, and write; to do all the things that we do that we thought might be the more human aspects of ourselves. This isn’t sci-fi anymore. This is a line item on the EU’s science budget, so we’re definitely entering that age, and it will take a lot of really great ethics gut checks and really important conversations, very, very soon, starting today, to see us through that transition well.
Roger Dooley: Yeah. Have you heard about progress on that Blue Brain Project?
Ramsay Brown: Oh. Yeah. The last time I looked in, it had kind of started stalling, because … And this is now me, the scientist. Myself and others in the neuro-anatomy community had qualms about how they were collecting their data, and how they were using that to build the models, dot dot dot, fill in the blanks. It’s kind of stalling, but it goes to show, at least, that this type of goal is something that people are willing to put money behind right now, people are willing to divert a huge amount of resources globally to working on, and I am certain that even if they don’t hit their stated target goal, they’re going to get us drastically closer to it than when they started.
Roger Dooley: They’ll probably find some interesting stuff along the way. I think the ultimate goal sounds rather distant and probably unlikely to be hit even with that many euros, but just put that much money into a scientific project and you’re bound to find some really cool stuff along the way.
Ramsay Brown: Absolutely, and of course their goal is very humanitarian. It was, “How could we use this to accelerate learning about things ranging from Parkinson’s and Alzheimer’s to schizophrenia and depression?” It was a very pro-social, very social good goal that they were trying to accomplish with this technology.
Roger Dooley: Yeah. I was thinking more along the lines of my desktop computer reacting to me emotionally and starting to cry if I didn’t humor its whim.
Ramsay Brown: Well, did you hear about that Alexa bug, where she started-
Roger Dooley: Yeah. Your idea’s so much better than that.
Ramsay Brown: Did you hear about that Alexa bug where she just started randomly laughing at people?
Roger Dooley: Yeah. Yeah. I did.
Ramsay Brown: I love that.
Roger Dooley: Yeah. That had to be rather disconcerting, to have your Echo suddenly chuckle at you.
Ramsay Brown: Well, I’ve got a buddy who works at Google, and he has a young son, and they have a Google Home in their home, Google’s competitor for Alexa.
Roger Dooley: Right.
Ramsay Brown: They’re training their son, who loves to talk to the Google Home, to say “please” and “thank you” to the Google Home.
Roger Dooley: Mm-hmm (affirmative).
Ramsay Brown: To treat it like anyone. Treat it like a person.
Roger Dooley: Right. Well, you know, I always find it a little bit weird not to say “thank you” after I yell at Alexa to do something. I mean, I’m kind of bossy with her, and somehow it seems just a little bit wrong after she does something not to say “thank you,” but I tend not to.
Ramsay Brown: I think it’s a good habit to get into, because as these systems become more and more like us, and we start being able to model what in the brain underlies emotion and persuasion, and then translate that into these types of systems, into these machines, I can envision a world pretty soon where someone’s gonna bark at Alexa one day and she’s gonna bark back, “Excuse me, can you at least say ‘please’?”
Roger Dooley: Great. Well, we have that to look forward to, I guess.
Ramsay Brown: That’s … Yeah.
Roger Dooley: Yeah. Ramsay, one topic that comes up occasionally at persuasive design conferences are dark patterns, using web design behavioral nudges and other little things to trick users into doing things that they really didn’t intend to do, but somehow benefit the company, or maybe prevent them from doing things that the company would prefer they not do, like complain or make a return or something. I’m curious whether you’re seeing much of this today.
Ramsay Brown: I am. I’m actually going and grabbing my iPad right now to see if I can’t find the list I’ve been generating. I’ve gone out recently and I have personally made it a little bit of a quest to go see how many of these dark patterns I’d be able to analyze from the net, because they’re pervasive. They’re everywhere, and they’re exactly the kinds of things that, as a behavioral design community, we need to start having really good conversations about whether or not we’re comfortable with these techniques actually being used. I found the list. What did we find? We found like 30 of these guys.
Roger Dooley: LinkedIn has been one of my favorites. I think it’s improved a little bit now, but they were totally set up to trick you into sharing your mailing list, like your Gmail contacts, with them, by having a very prominent “continue” button. If you were distracted, you didn’t realize that by clicking “continue” after, say, adding a contact, you were actually continuing on to share your email list. I think they’ve dialed that down a little bit, but I still find them very oriented to building connections. For instance, if I look at somebody who has sent me an invitation, so I look at their profile, there is a big “accept” button, but there is no “ignore” button. To actually do the “ignore” function, as near as I can tell, you have to go to the screen where you’re viewing your invitations and use it there. There’s a little three-dot menu, but “ignore” is not one of the options there either, so I don’t know. Maybe there’s something I’m missing, but to me, it seems like they’re still trying to do that.
Ramsay Brown: No. No. That’s a design decision. That’s a design decision. We call that one “burying.” For those in your audience who aren’t familiar, these dark patterns are really predictable ways that you can modify an interface to more or less bait-and-switch a user into doing things. They range from inducing artificial competition, to camouflaging things, relabeling things, forcing conscription, burying the feature you want, and really aggressive doublespeak: “Do you not wish to almost not sign up for this mailing list?” They can work in as many negations as it takes to make it totally unclear what’s really going on. Fourth wall destruction. Framing, where, when they want you to agree to be on their mailing list, they’ll ask, “Do you love the environment?” Clicking “yes” carries the implicature “then sign up for our mailing list,” while the other button says, “No, I hate the environment,” and just lets you go on to the website.
Roger Dooley: Right. Yeah. The classic two-button opt-in, where the “no” button says something ridiculous. Although I think people have seen so many of those that they now see through them, and they probably find them annoying, because I’ve actually seen a reduction in those, where the “no” has gone from something really ridiculous, like, “No. I don’t like money.”
Ramsay Brown: Exactly. Exactly.
Roger Dooley: Something like that, to “Not right now,” or something a little bit less aggressive. When somebody presents you with that choice, it’s threatening. I think they’re getting in your space, saying, “Okay. To read our wonderful article, you’ve got to either sign up for our mailing list, or click on this thing that you don’t really believe to be true.” That’s not a good way to start with a potential customer.
Ramsay Brown: No. It’s a poor look, and when I was going through this analysis, finding all these patterns and identifying them, there was one that leapt out at me, that made me almost want to just put my phone down and walk away from technology as a whole. We called it “fourth wall destruction.” Inside the app Snapchat, as part of the new Stories feature, if you see an advertisement, you can swipe up on it to learn more or go to the advertiser’s website. Someone had created an advertisement that looked like a stray hair had fallen on your phone.
Roger Dooley: I love it.
Ramsay Brown: You’d go to swipe to get rid of the hair, but it would open up the ad then, and that destruction of the firm boundary between the real world and the digital world is nauseating, and those people who did that should be taken out back and scolded thoroughly, because it’s completely disrespectful to people’s understandings and norms of the structure of reality. That was an egregious misstep, and I hope this one does not catch on, because it was really … That’s unethical.
Roger Dooley: Right. It’s like the simulated fruit fly on your screen.
Ramsay Brown: Yeah, and then you swat at him to get him to move, and, “Oh, now I’ve opened up an ad. Great.” This is pushing it, and I don’t feel like that team should be proud of themselves. That was an egregious misstep.
Roger Dooley: Right.
Ramsay Brown: Because it’s breaking a lot of the agreements, a lot of the social contract between technology and its users, and I think that’s what this constitutes-
Roger Dooley: Yeah. Let’s switch to the ethical side of things, Ramsay. I agree totally, and I’m hoping that in general, businesses abandon some of these practices. Like I say, although LinkedIn still has some of them, I think they’ve reduced their efforts to get people to do what they want without really knowing what they’re doing. But switching to the more ethical side of things-
Roger Dooley: … how do you really encourage pro-social behavior, or even just behavior that benefits the individual? I’ve been to Nir Eyal’s Habit Summit, and everybody there is all about trying to change habits or change behavior for the better, but in my experience, it’s a lot easier to get people to, say, keep watching cat videos than it is to get them to exercise more or eat more broccoli. Do you have any examples of very positive behavior change that’s been achieved using these kinds of techniques?
Ramsay Brown: Absolutely, and those are actually the only companies that my team and I work with. Within the behavioral design toolkit, we have maybe 10 or 15 techniques that we can use to increase or decrease a behavior. The one that my team and I focus on, habit formation, closely resembles Nir Eyal’s Hook Model. Our TAFR model, our trigger, action, feedback, reward system, associates a trigger with an action someone wants to take, inside a positive reinforcement program.
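As a rough sketch of how such a loop might be represented in code (these class and field names are our illustration, not Boundless Mind’s actual API):

```python
# Minimal sketch of a TAFR (trigger, action, feedback, reward) loop.
# All names here are illustrative, not Boundless Mind's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TafrLoop:
    trigger: str                      # the cue that prompts the behavior
    action: str                       # the hard thing we want the user to do
    feedback: Callable[[], None]      # immediate acknowledgment, every time
    reward: Callable[[], None]        # surprise-and-delight, only sometimes

    def on_action_completed(self, should_reward: bool) -> None:
        self.feedback()               # always confirm the action landed
        if should_reward:             # an optimizer decides when (see the
            self.reward()             # scheduler sketch further below)

loop = TafrLoop(
    trigger="post-surgery walk reminder",
    action="complete a ten-minute walk",
    feedback=lambda: print("Walk logged."),
    reward=lambda: print("Confetti! 10 bonus points!"),
)
loop.on_action_completed(should_reward=True)   # feedback plus a reward
loop.on_action_completed(should_reward=False)  # feedback only
```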
Ramsay Brown: When we’ve taken that out the door, beyond the workshop, beyond the summit, beyond the consulting talk, and into a customer’s code base, installing it with them and optimizing it with our AI, we’ve seen what happens when users do that hard thing: eating the broccoli, going for the run, paying down the debt. If you make them feel surprised and delighted immediately after doing so, the reward part of the system, and you do that right, which is what our AI optimizes for our customers, the results follow. We’ve seen a 60% improvement in how often people go for a walk after surgery. We’ve seen a 14% lift in how often people pay down their debt early or on time. We’ve seen a 167% increase in how often people opened an app that helps them fight cyberbullying. We’ve seen a 24% increase in how often people adhere to their paleo diet. These are people doing real-life hard things, and the application of this optimized reinforcement, that burst of dopamine, is what translates a hard thing I don’t really like doing into a habit, into just a part of who I am and how I operate.
Roger Dooley: Yeah. Ramsay, one of your examples is microloan repayment. Explain in a little more detail how that might work for a user of the app.
Ramsay Brown: Sure. This was an experiment that we ran with a company here in Southern California called Tala. Now known as Tala, at the time known as InVenture. They have a microloan service for people in the emerging world, centered largely around Kenya, sub-Saharan Africa, and south-central Asia, largely over the SMS system, so over text messages. When you and I maybe go out and get dinner and it costs us $20, $30, $40, that might be a meal. When you’re in rural Kenya, $20 or $30 gets you a week of food and fuel, so for a lot of these people, these microloans that they take on, if done in economically responsible ways, which Tala does, and they should be very proud of themselves for, can help them build up the kind of financial success that they and their family need.
Ramsay Brown: But getting people to pay those back on time was a challenge, so they brought us in and we hooked our AI up to their system. When someone would send in the text message confirmation that they were making their payment, most of the time they would receive a very flat, neutral “payment acknowledged” kind of response. But occasionally, according to our optimization system, they would receive a much more positive and excited message from the Tala system: “Hey, thank you. We really appreciate you sending that through. This is exactly what you need to do to help you and your family build the kind of lives and futures you want. We really appreciate this.” Even just that little bit of extra positivity in the messaging, that little bit of delight that we could induce for these people, even over a text message, was enough to increase how often people were doing this hard, real-life behavior.
Roger Dooley: That was only some of the messages that had the sort of reward built in?
Ramsay Brown: Exactly, because that’s the key. If you give it to people every time, they’re going to come to expect it. If you give it to them none of the time, then you’ve left some behavior change on the table. If you give it at random, the brain picks up the pattern, then filters it, and it just becomes background noise and doesn’t work. What you want is a system that understands, for each person, “Are they expecting it now?” “Okay, what about now?” “No, they’re not expecting it? Good. Hit them with the positivity.” That’s what delight is. That’s what surprise is. That is what activates the brain stem to release dopamine and strengthen the wiring between these triggers and these actions in our basal ganglia, the habit center of our brain, making us more likely to do these actions again in the future.
Roger Dooley: Mm-hmm (affirmative).
Ramsay Brown: That’s the key: getting it surprising. Not random, not every time, but surprising. That’s what you’ve got to optimize for.
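Here is a minimal sketch of that idea in Python. Everything Ramsay leaves unspecified is invented here: the decay rate, the delivery probability, and the expectation model are placeholders, and Boundless Mind’s real optimizer is surely more sophisticated:

```python
# Toy surprise-timed reward scheduler (our illustration, not Boundless
# Mind's algorithm): deliver the reward only when the user's estimated
# expectation of getting one is low, so it lands as a surprise.
import random

class SurpriseScheduler:
    def __init__(self, decay: float = 0.6):
        self.expectation = 0.0   # running answer to "are they expecting it now?"
        self.decay = decay       # invented: how fast expectation fades

    def should_reward(self) -> bool:
        self.expectation *= self.decay               # expectation fades each action
        surprise = 1.0 - self.expectation            # low expectation = high surprise
        deliver = random.random() < 0.5 * surprise   # invented delivery rate
        if deliver:
            self.expectation = 1.0                   # a reward makes the next one expected
        return deliver

scheduler = SurpriseScheduler()
for action in range(12):   # twelve completions of the target behavior
    if scheduler.should_reward():
        print(f"action {action}: confetti!")
    else:
        print(f"action {action}: plain acknowledgment")
```

The point of the sketch is only the structure: not every time, not never, not uniformly random, but timed against the user’s expectation.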
Roger Dooley: Right. Really what you’re saying, I mean, research shows that variable rewards are more powerful than consistent rewards. We think of training dogs that way as well: you give them a treat five times in a row, and they learn the behavior. But for humans, our behavior is driven more by variable rewards. If every Facebook post were liked by the same 10 people, you’d stop logging in, because that would be boring, but when one gets no likes and the next one gets 30, that’s kind of fun, you know? It’s exciting, and it gives you that dopamine hit.
Roger Dooley: What you’re saying is, it’s possible to optimize just how variable those rewards are, find those points when they’re likely to have an impact, and deliver them then.
Ramsay Brown: Absolutely. For every person uniquely, and adapting over time. Imagine you have a personal trainer at the gym. Personal trainers are great at this. They know when you need that little bit of positivity, they know when you need some encouragement, and they know when to back off. They also know that your first week at the gym is going to be really different from your 25th week, and they’re going to treat you differently in each. What we built is a way to not just figure that out for each person uniquely, but also to grow with them as they change over time, adapting how that variability works. That’s been our core innovation, and it’s what gets our customers and partners the kinds of wins we’re able to deliver.
Roger Dooley: Great. Well, hey, I want to be respectful of your time, Ramsay, so I will remind our listeners that we’re speaking with Ramsay Brown, co-founder and chief operations officer at Boundless Mind. Ramsay, how can our listeners find you?
Ramsay Brown: Awesome. They can reach out to my team and me at www.Boundless.ai, where we’d love to connect with them about what they’re working on and how our persuasive AI system could help. They can follow us on Twitter and on Instagram @BoundlessAI. They can follow me on Twitter @RAB1138 (RAB, like my initials, Ramsay Alexander Brown, plus 1138) and on Instagram @RamsayBrown. One word.
Roger Dooley: Great. Okay, well we will link to those places, and also I think that this will air after your book is available, so if you will send us a link to that, we’ll put that on the show notes page as well. The show notes page will be at RogerDooley.com/Podcast, and there will be a text version of our conversation there, too.
Roger Dooley: Ramsay, thanks for being on the show.
Ramsay Brown: Hey, Roger, thanks so much for having me. It was a blast, and wishing you and yours a wonderful rest of your week.
Roger Dooley: Thank you for joining me for this episode of The Brainfluence Podcast. To continue the discussion, and to find your own path to brainy success, please visit us at RogerDooley.com.