Mark Walsh: AI and Individual Behavior in the Media

On this edition of The Inc. Tank, Christina Elson talks with Mark Walsh about the extent to which artificial intelligence can predict an individual’s behavior. Mark Walsh is an activist, a businessman and a founder of FactSquared, a real time transparency engine. 


Full Transcript:

Announcer: The following program is brought to you by the Ed Snider Center for Enterprise and Markets, at University of Maryland’s Robert H. Smith School of Business. Welcome to The Inc. Tank. Stay with us to get the inside scoop about technologies that could disrupt and challenge the way you do business.

Christina: Hello, I’m Christina Elson, and on this edition of The Inc. Tank, we’ll be discussing the extent to which artificial intelligence can create transparency and influence opinions. AI is a powerful provider of information, but should we also use it to predict an individual’s behavior? Our guest today, Mark Walsh, is well-positioned to talk about these issues. Mark is an activist, a businessman, and a founder of FactSquared, a real time transparency engine. Hi, Mark. Thanks so much for being with me today on The Inc. Tank.

Mark: Glad to be here.

Christina: Let’s start off with just sort of grounding you and your perspective on AI. So, there’s a lot of talk about where AI is going, how quickly we’re heading towards general intelligence AI, what the ramifications of that could be. So, where are you in this sort of AI discussion? Are you an AI optimist? What would you love to see happen?

Mark: Well, I tell folks, the terms AI and machine learning are often interchangeable and I tell folks are the two most amazing oxymorons of all time because intelligence is not artificial and machines don’t learn. However, that being said, I’m somewhere in the middle of the road on both scared out of my mind and like, stupendously enthusiastic because it’ll change the world. And I think most people that are paying attention or somewhere around where I am, to be blunt, it is easy to become stupendously scared because as my team and I often would argue, the acceleration of AI and its capacity and even the way it’s being interwoven into our lives is itself accelerating. So, you almost can’t keep up with what’s going on. That being said, when it’s used in interesting, sometimes mundane ways, it makes an amazing difference in how we lead our lives from, you know, the traffic here in the Beltway in Washington, DC and managing infrastructural things like that. At the same time, what can be scary is that it’s very predictive. And most people don’t like to be thought of as being predictive. And when you see AI actually predict what you’re gonna do and then you’re like, “Damn, I was gonna do that.” It’s disconcerting. So, whenever a machine figures out what you were gonna do, that’s very disconcerting. From cookies back in 2002, when they knew what we were surfing, to today, when a machine can predict which product we’re gonna take off the shelf at Starbucks, or what kind of car we’re gonna buy, those types of trends can be, as I say, disconcerting. And I think the less you know about it, the more upset it makes you.

Christina: A lot of the talk around where AI is going, there are certain metrics that everyone’s looking at. So, AI now, can be predictive in terms of behavior. AI can’t necessarily write a great poem. It can’t necessarily do a good story or put together a beautiful movie. When we’re talking about where you are, you’re looking at things that are inherently tied to journalism, to transparency of information. So, perhaps you can dig a little deeper into that.

Mark: The two marketplaces that I would argue are most sensitive and most desirous of predictive behavior are financial services and politics. If the folks who own a stock of a company could tell how nervous the company was about its future performance, they would sell the stock or if the company was hiding good news, they would buy the stock. And in reverse, if the company knew what kind of questions the analyst was gonna ask at the quarterly earnings calls, they could prepare for those questions. So, this sort of interactive fog of not knowing, when AI can clarify that and make more transparent, stupendously huge impact on the financial services industry, the same with politics, both in the election and in legislation. So, if I knew what the hot buttons for voters in Southern Virginia and I was Tim Kaine running for senate again, I would hit those hot buttons. In elections, knowing the issues that matter to you, or to me, or to groups of people like me, look back to 2016 the use of Facebook by at least one of the candidates, those two marketplaces, politics and financial services, are two natural candidates for this idea of transparency that AI can create more and more of.

And then I’ll finish with this kicker, which is what AI or machine learning are really just sort of algos or… When you say algos, it shows you’re cool. If you say algorithms it shows you’re uncool. So, there’s your tip of the day. So, the algos that are coming out really are becoming almost like lie detectors and that’s a horrible phrase because a lie detector is a very scary phrase. People think of, you know, being caught. If I know a lot about you, if I hear 5,000 or more words of you speaking and I can analyze those 5,000 plus words be it on a YouTube video or here in a conversation like this, or even the way you write, the adjectives you choose, the adverbs you choose and extemporaneous stuff, I can start to build a profile of you that is remarkably correct. Now, that’s not a lie detector because a lie detector is a one time, where a detective is saying, “Where were you on the night of Tuesday?” But if I can build a profile on you, the more you say, the more you do, the more you interact with the profile the smarter it gets. So, the beauty of AI is it’s self-correcting to some extent that if I think I know something about you, and then you do something different, well, then I put that into the algo and it gets a little smarter. So, self-healing software is really what ends up looking like AI, and applied to people, I tell you, it can be really, really scary.

Christina: So, the point that you’re making in terms of how AI can compile information about someone, but then we want to talk about who’s gonna take that information, make decisions about it, right? It’s one thing to think about how we might wanna target someone to encourage them to buy something or even vote a certain way. It can become a little different when we’re thinking about making decisions about what that person should or perhaps should not be doing. So, should a person who has a particular kind of profile be allowed, for example, to hold a particular job, or to run for office, or things of that nature? That’s where I think what you’re saying it can get a little creepy because there is a decision-making process in there somewhere that has to happen outside of the AI and based on the context.

Mark: Algos are written by people, and people cannot remove the implicit bias of their psyche in how they write the code. There’s an old joke that happens to be true that given a sample size large enough, there’s a direct correlation between shoe size and IQ. Because a three-year-old has a lower IQ, right? So, outcomes are not necessarily pinned to inputs. That makes sense. But I think to the point of your question, you know, Joseph Stalin once famously said, “It’s not about who votes, it’s about who counts the votes,” right?

Christina: That’s right.

Mark: And it’s not about, you know, who builds the algorithm, it’s not about how the choices are made about whether someone should hold public office, or whether someone should be a school teacher, whether someone should be a Boy Scout leader, or an auto mechanic based upon something we figure out about them. It’s who decides what the figuring out is. The old phrase, “Who will watch the watchers?”” this is where my fear comes in. You could make an algo pretty much say anything you want.

Christina: Sure.

Mark: And it can look incredibly logical hence my IQ shoe size. It makes sense. A computer was once asked, “Which would you rather have, a watch that loses one second every week, or a broken watch?” And it said, “A broken watch, because it’s correct twice a day. The other watch is not.” Now, of course, these are the absurd examples in the bell curve of what we’re talking about. But they’re not that far out in the bell curve, where you’re gonna start to see them creep into the center of the bell curve and see these types of AI-based or machine-learning-based decisions start to intrude upon our day-to-day and I’ll finish with this, because I think you sort of brought it up implicitly, if I wanted to coach you soccer, what kind of test will AI soon ask of me so that I can pass and be a youth soccer coach? If it’s for a coed team, is it for an all-boy team, or is it for an all-girl team? Has there been anything in my past, is there anything about the way I answer a question with microtremor analysis for tension in my voice, the way my speech increases to a higher level showing nervousness when I’m asked about something that’s personal, I think we’re seeing this in the daily conversation, both political and business, and educational and athletics. We’re seeing these outcomes where a machine is gonna be used to test in many ways, whether you or I can do a job, and that gets to be a bummer.

Christina: Let’s talk a little bit about FactSquared. One of the things that really intrigued me is that you had mentioned that this is a startup that you’re doing and you were able, in some ways, to use what you refer to as off-the-shelf AI as part of this. And as a business person and someone who, you know, is working with small businesses, has a lot of experience, maybe you can tell us a little bit about what is this off-the-shelf AI? How are you using it? And how do you think other business people can start to think about the opportunities in the space as well?

Mark: So, there are three rules in technology that I would argue are always true, open beats closed, simple beats hard, and cheap beats expensive. Open, easy, and cheap. Always be close, hard, and expensive. And I don’t care what software program, what technology, right? Those always end up being true. They may be different timelines and other true, but they’re always true. So, the reason I say off-the-shelf is a company like mine or any company that’s trying to build some new ways of addressing a marketplace is never gonna compete with Google, or Microsoft, or with Oracle, or any of the big the big dogs, what they call fang now. You know, stay away from the fang, that’s basically what most people have concluded. So, when I say off-the-shelf, again, a goofy analogy, but it’s like a restaurant. Jose Andres takes carrots and other ingredients and makes an incredible dish. But it’s the same carrot that I may bring home to my lovely bride and she and I make a crappy stew, or maybe I make the crappy stew is a better way to phrase it. My point is, is that the ingredients can be the same, the outcomes can be dramatically different and that’s usually with the way the ingredients are mixed in a variety of, I’ll stay with this analogy, spices, and heat, and all that.

So, the idea of off-the-shelf AI implies code that’s written by way better financed, and way smarter people who tend to live way away from me or the East Coast. And that code is a set of building blocks that when you combined them and you have them bounce off each other and sometimes compete with each other, so, we have in our company we will compete with different transcription engines from various sources and which one is best. We call it “Thunderdome,” you know, the “Mad” Max movie where two men enter and one man leaves. So, sort of “Thunderdome” for technology. That way you’re using the best of incredibly well-financed tech giants and you’re applying it in a minimalist, hopefully competitive way to smaller applications like our company.

Christina: So, what are some of the things that business people should be asking those kinds of vendors when they’re looking at acquiring these tools? Because, you know, not really knowing, okay, what are the algs behind products that have been developed, are they continuously improving and learning on them, am I getting something that doesn’t necessarily have a lot of bias inherent in it? What should you know, as an executive, when you’re trying to really evaluate how to pick and choose these things?

Mark: I guess the bad answer is nothing. If I called up, I guess Eric Schmidt is gone, I used to know Eric Schmidt back in the day at Google, now Alphabet, but if I called up, Satya Nadella, who actually was on the board of a company that I ran, but way back in the day at Microsoft, or I called up the heads of Alphabet, Larry Page or so, you know, whatever name you would toss, whatever, you know, gargantuan king of technology or any of their direct reports, or any of their divisional managers or whatever, and tried to say, “Gee, could you twist this algo so it does that?” I mean, they’re gonna laugh, right? So, we’re just the poor minions out here in the trenches. So, to the point of your original question, I guess the original point of your question, there’s a huge gap between what they do and what folks like my team was gonna make. The difference is in the closer into us, and that is, as I mentioned, how do you build engines that compare these softwares to each other? Because if you’re a total Google shop, or you’re a total Oracle shop, or you’re a total Microsoft shop, which is fine, which is what they want by the way, you are locked into the way they see the world. And they do see the world a certain way.

What I think is important for smaller companies that are using technologies off-the-shelf is to constantly test them against each other. And you can do it in very simple ways, you can do it in more complex ways. And there are off-the-shelf pieces of software that can help you compare, don’t stay inside a silo. Shop around and the more you shop, the more sometimes you’ll find edge stuff that the companies are doing, trying new stuff in beta. So, for instance, we have a great relationship with many of the silos that I mentioned before, and sometimes they’re calling us and saying, “We’re thinking of this, would you try this out?” Now, we’re a little company, but it is kind of neat to try beta, sometimes alpha stuff because you become a feedback loop for them.

Christina: Yeah, so that’s really great because that sort of gets at this idea that a lot of these companies, they are very cognizant of disruption, of how they themselves could be disrupted, and trying to figure out how to be innovative and also look for what’s small, more agile and maybe more nimble that people are doing is perhaps a win-win, at least, hopefully, it has been for you and for what you’re doing.

Mark: So, Google didn’t get to where it was by deciding it knew everything. In fact, it got where it was today by deciding that they knew nothing. And they hire brilliant, twisted people. And I say twisted, because that’s probably true. And they make up stuff all day, 24/7, they’re making stuff up. And it’s pasta against the wall. Sometimes it works, sometimes it doesn’t. Now, just as an aside, they happen to be the luckiest corporation on the planet because 96% of their revenue comes from one brilliant thing called Google AdWords, which is when you and I search on Google, you know, for hunting boots, I don’t know if you’re buying hunting boots, but, you know, they have the option of who’s up front. Fabulous business, makes a gazillion dollars a day, and affords them the ability to try all sorts of other stuff and they’re slowly creeping into the other silos because if you use Gmail or Google Enterprise, I mean, they’re slowly attacking all the other enterprise silos, but it’s a wonderful war going out there in the West Coast, and I think it profits all of us as far as access, as far as capital, as far as capacity, as far as innovation, we’re all benefiting from that. I think we’re living through a period of time where internet use was considered a common carrier in the ’96 telecom law, and then all the players on it… Nice to work at AOL. We were, you know, we were just a place where good things could happen.

Christina: Sure.

Mark: You know, is Facebook responsible for what happens on Facebook? That’s the big argument. Maybe they are. Or more responsible than they would argue they had been in the past.

Christina: Yes. And also clearly the issue of transparency around, what are their intentions around using all this data that they’re collecting? So, that’s a long conversation.

Mark: It’s not about who votes, it’s about who counts the votes.

Christina: It’s about who counts the votes, exactly.

Mark: Right. That’s exactly right.

Christina: When I visited Facebook in March, it’s true there is a really clear culture there. We also were at Google, this was a trip that I took with the Quest group at University of Maryland. It is fascinating. And I do, like you, encourage people, to be able to go and learn a little more about what happens in these places. It’s very informative. Anyway…

Mark: Can I tell you a short story?

Christina: Yes, yes. Tell me a story.

Mark: In 1989, I did a deal with Microsoft, I went into Microsoft. We cut a deal and there was, you know, guys with ponytails, surfboards on tops of Jeeps, the stock going north, everybody is getting rich. In 1999, I’m at AOL, you know, Jeeps with surfboards on top, people with ponytails, stocks where everybody getting rich. 2009, I’m at Google walking around with my host, and they’re sort of the Googleplex, they’re in the middle. And they have one of those standalone pools with the giant fans where the water’s going one way and you can swim in a small pool, not much bigger than this table. So, they have this little, tiny, mini pool with the fan, and they have a lifeguard sitting up there. And she’s in this bathing suit with this stuff, white stuff in her nose, and the hat, and the giant and the whistle, but nobody can drown in this pool. So, I laughed and my host said, “What’s that?” He goes, “Well, it’s one of our little Google quirks.” And I was sitting there thinking, you know, surfboards and Jeeps, ponytails and [inaudible 00:15:46] I said, okay, so when the Roman generals came back victorious, they used to have people whisper in their ear, “Sic transit gloria mundi,” thus passes the glory of the earth. I said, “I hope you guys are enjoying it now because it’s not gonna last forever.” And I told him about AOL and Microsoft, he said, “Yeah, but we’re different.” And I said, “That’s what they said.”

Christina: Yeah, that’s true.

Mark: You know, there’s always an arc. And, you know, Facebook, they have the apartments there now, you know, it’s a complete city. It’s completely hermetically sealed. It’s amazing. I don’t… You know, look, my worry is the bubble. They live in a bubble. I mean, we all do to some extent. Here in Washington, DC, people say we’re in a bubble. But the technology bubble can be more impactful in many ways than the Capitol Hill bubble here.

Christina: It’s true, yes, because it’s a sort of direct reach globally. But let’s talk a little bit about disruption. You are the chairman of a company called Rocket.

Mark: Genius Rocket.

Christina: Yes. I thought, oh, that’s a really cool company because it crowdsources information to help people be able to produce great videos, great messaging…

Mark: Advertising too.

Christina: …advertising. And then I thought, wow, this is really interesting. This is a company that potentially could be disrupted by AI or maybe it’s as AI gets better and better at what it’s doing, it’s going to be more competitive with people in doing these kinds of things. So, I was curious what your thoughts are about that.

Mark: I spilled a lot of blood in the mud with Genius Rocket and tilting my lance at windmills called the advertising agency industry. And my father was in the advertising agency. He was executive at an ad agency, and I worked in an ad agency in between years in business school, a school I wish I’d gone to called Smith, by the way, just for the record. But anyway, so I thought the advertising agency business was bloated, and cost too much, and delivered crappy products. And I was correct and I am correct. Now, the agency business has migrated to some extent, but the idea of Genius Rocket was there is 25,000 talented videographers around the world, and if you have a cash prize, and Procter & Gamble, or IBM, or Mountain Dew puts up a cash prize, you’ll get lots of TV ads and they’ll be different. They’ll be funky, they’ll be…you know, some of them are usable, many of them are not and that’s what we do and that’s what we did. And your question is can AI start to generate TV ads? The answer is we’re this far away. Because what AI does is make up wacky ideas that it thinks makes sense because it doesn’t know what makes sense and what doesn’t. And sometimes the most memorable ads are wacky. I mean, for those of you who haven’t seen the Old Spice ads with the guy riding a unicorn backwards?

Christina: Yeah, that was so cool.

Mark: You remember. And the importance of advertising, it’s not just that you remember, but that you talk about it and say it to somebody else. And people do talk about those ads. So, AI I think is about one click away from actually generating television ads that will look stupendously odd, but you’ll go, “What the hell is that?” Then you remember the brand. And that’s where, I mean, I’m not saying ad agencies are gonna go out of business, because product managers need ad agencies.

Christina: Sure.

Mark: But it is gonna be a big deal.

Christina: Yeah, but, you know, just because one technology or something becomes obsolete, it opens usually opportunities for people to do other things or move into other areas. So, perhaps in these areas that you’re looking at, you’ve already seen that there are new opportunities for people who have certain skills to do other things than they’re doing right now. So, instead of sitting and being a video editor they may emerge as some other…something else.

Mark: I love that also that whenever an industry dies, it opens up a new industry. I’m a little worried that maybe we’ve reached the end of that equation as well. Again, this is one of the scary things about AI, or machine learning, or…when it’s used, it can often replace the types of jobs that used to take people… A goofy example and there are many of them is parking lot attendants. I came here from the dentist and it used to be a live person in a booth taking my dough, that person is gone. I challenge you just to start paying attention to how fewer live people you see in parking lots. Now, there’s a lot of parking lots in America. And where do they all go? I have no idea. Now, that’s lower level employment, but there are lots of skilled jobs and you just really touched on one, video editing, it’s really just… You know, it used to be, back in the day, expensive machines, people do it with slicing and all these stuff. And now, it’s code. So, how many jobs are being opened up that don’t revolve around code that disruptive technologies have replaced?

I think the equation is starting to get dangerous. Look, I’m tossing out some incredible firebombs, I know, but I’m very worried about education. The University of Maryland is a fabulous institution, land grant university, huge budget, massive outcomes that are fabulous for our nation and our world, incredible technologies, incredible innovation, leadership, everything, right? All good. Are they perfectly tuned for what needs to be done in the next 30 years in higher ed? I have no idea, but I can tell you I don’t think so. And I don’t think anyone is sitting there saying, “Hey, we’re good.” You know, I don’t think Wallace Loh is sitting at night going, “No problem. We’re all set.” So, when we talk about disruptive, what does higher education mean? Is a BA from a great university that teaches critical thinking worth the money? Unclear.

Christina: Yeah, it’s unclear and it’s a huge struggle and I think that it’s very important to be very involved and engaged with business people, you know, like you that can help bring a perspective of like, okay, what do I really need? I mean, like you’re growing a company, you’re looking for talent. So, how can we help understand what should we be teaching students? You know, and what level of knowledge do they need about AI? I mean, you know, clearly someone in computer science might go in very deep, but pretty much everybody needs to know something about, particularly AI, because it’s such a foundational technology for so many other things, and particularly in the business school.

Mark: So, my brother’s a doctor, and he focuses on palliative care and end of life issues, and when Watson came up from my IBM… As it turns out, the head of Watson’s research, their CTO is a classmate of mine from college. And he was like, “Mark, what do you see what it does for healthcare?”

Christina: Yeah.

Mark: It’s gonna be able to diagnose people better than a human. Of course I called my brother and the first thing he says is, you know, after a string of expletives saying how dare I say that, this is the next collision. Because, in fact, the IBM guy’s correct. What it asks a patient has better diagnostic outcomes than a 15 minute visit with a doctor who’s on the clock and has to churn through patients because of the payment system. Doctors like this? I can tell you the answer is no. Look, I know we’re bouncing around, but you’re right. You almost walk through every single major economic, vertical marketplace that matters to our day-to-day lives, be it B2B or business to consumer, and AI writ large, is creeping in at edges that is starting to crowd out some of the knowledge bases that professionals in that market are proud about, like doctors.

Christina: Yeah, it’s a huge, huge…

Mark: I feel like I’m bumming everybody out here. I should probably get a little bit…

Christina: No, no, no. No, no, not at all. I mean, it’s something that higher education really has to come to grips with. And, you know, I think, from Maryland’s perspective, I do see a lot of interest from the students, they really wanna understand where they can come in to this and what they need to do to be prepared for that world that they’re gonna be walking into. But how about you? How did your career path sort of take you into this AI space to really wanting to learn more about this and what are some of the skills and things that you’ve developed along the way?

Mark: So, I’m a little bit of an adrenaline junkie, and a disruption junkie, so I came out of college and got in the TV business, worked in broadcast television, and cable television with HBO. And HBO was changing TV back in the day, we were the first non-commercial television you pay for. And that was a big deal back in the ’80s. Then I got in the internet in ’86, before it was even called the internet, and worked for a whole bunch of interesting companies, including AOL, which in the early ’90s, AOL was the it. But I remember the first time I logged on in 1984 to Dow Jones News/Retrieval and saw information on a screen that is coming over a phone line. And I remember thinking, okay, I moved from a TV screen to a computer screen. But this is it. This is a big deal. Information on a screen that’s entertaining, that’s interactive, that I can control, yada, yada, yada. So, I’ve just played that string all the way out.

Now, the internet obviously became a really, really big deal over like a decade and a half. And not that I’m prescient, far from it. But I was sort of like, “Hey, this is gonna be a big deal maybe not tomorrow, but at some point.” And the first time I heard the term AI, and the first time I saw some low-level predictive stuff, I was like, it’s gonna be a big deal. The first time I met the founder of FactSquared, where now, I’m with, he started talking about how he was pulling in speech from people on YouTube and creating profiles of them. And he started to show how he could predict stuff. And I was like, this is one guy grabbing YouTube videos and analyzing the tension in their voice and rate of speech. I mean, all those stuff that to, you know, lie detector test used to measure and I thought this is like the first time you saw a cheap, crappy digital camera versus the $2,000 SLR that Minolta was trying to jam down your throat. And you’re like the digital does a pretty good job, right? And that one slice right there, the camera business completely decimated all that used to be with a Kodachrome moments and the SLR, so I thought if crappy, cheap technology using crappy, cheap videos from a crappy, cheap channel called YouTube by one guy can do that, where does this go? Where does this go?

Christina: [inaudible 00:24:59] cheaper, right?

Mark: It’s gonna go north. And this is, you know, open, and easy, and cheap. My three rules of technology. So, AI is just coming up in the bottom. All great technology that has a huge impact on our lives, I would argue, comes from the bottom up. I mean, think of eCommerce, eCommerce wasn’t driven by major corporations, you know, transferring funds back and forth in some secure digital way. It was you and I going home and logging on and paying bills online with our little bank, and paying our babysitter, and paying the house painter, and paying our mortgage, and all going, “Why can’t I do this at work?” All that stuff comes up in the bottom. So, I think AI is coming up in some ways from the bottom as much as it is with a giant, you know Watson and all these big AI things from the big companies. But the bottom is where the vitality is gonna be.

Christina: So, sort of last question, tell me in 20 years from now, how you think your day-to-day will be very different or different because of AI?

Mark: In 20 years, when I go in and buy a car, I’m gonna put a little app live in this device, whatever it looks like. I’ll put it in my pocket and I’ll walk into the car salesman on the floor and I’ll ask that person a bunch of baseline questions like, “Where’s your desk? How long have you worked here? What kind of car do you drive?” Stuff that they would not lie about. And I’ll get a baseline on their voice, on the timbre of their voice, on the rate of speech, on the types of words they use. Maybe after five or six baselines, I’ll get a little buzz in this phone meaning, I got it. Then I’ll start to ask the person, the salesperson about a car I’m interested in. And it’ll tell me when that person is nervous. Now, does nervousness mean lying? I don’t know. He might be having a bad morning, he might have had a fight with their spouse, whatever, they might have had a fender bender on the way to work, so there’s all sorts of reasons they could be nervous or upset. But if the baseline showed them to not be nervous and upset, or the baseline was X and they went to 3X and I know that, I can start to drill down on that. Moreover, I can excuse myself and go to the restroom and I can start looking at some of the keywords that I used that made them nervous. So, for instance, when I said is the mileage number correct? Or how long does it take to recharge? And their answer had some of those keywords, you know, made them nervous, I can keep asking those words and I can screw around with their head frankly, and I can really get into their head. And I think you’ll see those tools on devices like this start to be used by consumers back and forth in normal conversations. Look at the dating business.

Christina: Yeah, that’s a great one.

Mark: I mean, come on. How many people have gone to a lunch and wished they’d known beforehand through analysis of the text the person answered, and emails and all that some of the things that they should be wary of. Or when the person started to lie about their background, they had some sort of indicator that the conversation had some nervousness that they should be wary of. So, we can’t hide. We’re all open books. We can’t change our voice. I mean, the number of people that can actually lie effectively is really like one in a million.

Christina: Yes, they’re psychopaths basically.

Mark: That’s one word for it. Sometimes they say politician, but anyway, the idea of transparency, and devices, and software being available to us in real time to increase that transparency of human interaction with people we know, people we don’t know, it’s gonna get big.

Christina: Yeah, fascinating, and definitely something that can perhaps foster more honest communication. But at the same time, you know, really keep people from falling into traps because of lack of transparency.

Mark: So, it’s gonna get worse. You know, today, Mastercard can tell when a man is having an affair, they have hotels being purchased in the same city, and then within three hours, flowers being purchased, and then within a week or two of the first hotel purchased in the same city, that man joining health club. So, these are three indicators that are actually almost perfect indicators of the guy having an affair. So, what other data points does Mastercard gather about you and I when we use plastic that they can impute and create a conclusion? So, can a spouse ask Mastercard for data points about various misbehavior of their spouse and get it from Mastercard? Of course, they’ll be able to. So, I guess the moral of the story is don’t do bad things because it’s gonna find you out.

Christina: Because it’s gonna find you out, yeah. Which, ideally, we want people to live on as transparent and ethical lives, so hopefully AI is less of a mistake…

Mark: I mean, look, I’ve been in politics long enough. I’ve spent a lot of time politics as you probably solved in the background, and I tell people, if you turn the high beams up on anyone, anyone, you’re gonna find something that can be construed as very disconcerting about them. And one last analogy, I know we’re gonna wrap up here, is I tell people that Justice Ginsburg was nominated in the Supreme Court, I think, by George Herbert Walker Bush and stepped away from the nomination because he admitted to smoking marijuana. Then Bill Clinton was president and had to disavow smoking it by making the ridiculous excuse of he didn’t inhale. And then Barack Obama runs in 2008, writes a book and he says he did cocaine and smoked dope. So, the acceptance of, you know, behavior, the arc, and it’s an age thing too, the arc is going to be more forgiving. But still, the arc of transparency, and the demand for transparency, and the availability of transparency tools to interpret what you are like, or what I’m like, and what we’re saying and all that, it’s gonna continue. So, I think we’re gonna either have to be way more forgiving of foibles in our past, or people are gonna start to have to be stupendously conscious of avoiding behavior that can be tracked.

Christina: Well, that’s a great way to end the conversation, Mark. I really appreciate you taking the time and hopefully we’ll have a chance to check in with you again as FactSquared grows and becomes something that we may all be running into in our daily lives even, right?

Mark: For good reasons.

Christina: For good reasons. Exactly. Okay.

Mark: You’re the turtle.

Christina: Yes, we’re the turtle. The Inc. Tank sponsor, the Ed Snider Center for Enterprise and Markets, is dedicated to exploring knowledge about innovative technology that inspires broadly-shared prosperity. There’s so much more to uncover in the field of artificial intelligence, and we’re excited to continue this journey of discovery with you. Thanks to Mark Walsh for talking with me today. Until next time, this is Christina Elson, in The Inc. Tank.

Announcer: Subscribe to The Inc. Tank on Spotify, Google Play, and Apple Podcasts. A special thank you to the Kauffman Foundation for their support, from the Robert H. Smith School of Business at the University of Maryland, thank you for joining us in The Inc. Tank.

This episode of The Inc. Tank would not be possible without:

Christina Elson, Host and Executive Producer
Stevi Calandra, Executive Producer
Podcast Village Studios, Production/Edit/Sound Design

The Inc. Tank Theme Song “Key to the Foot” provided by Clean Cuts Music Library
The Inc. Tank logo was designed by Kasia Burns

This podcast is brought to you by The Ed Snider Center for Enterprise & Markets and the Kauffman Foundation.