
LLMs, AI Agents, Deepfakes, and Psychographics

AI For All | E3 | With Cyrano.ai's Scott Sandland
July 6, 2023 (updated Oct 6, 2023)
On this episode of the AI For All Podcast, Scott Sandland, CEO of Cyrano.ai, joins Ryan Chacon and Neil Sahota to discuss LLMs, AI agents, deep fakes, and psychographics in AI. They talk about how companies will utilize LLMs, public trust in LLMs, human-like AI, digital twins, inducing oxytocin over dopamine in future technology, sentiment analysis, and how AI can reduce loneliness in society.
About Scott Sandland
Scott Sandland is formerly the world's youngest hypnotherapist and the former CEO of a mental health clinic helping at-risk teens and drug-addicted adolescents. Because of his vision to help more people at scale, Scott shifted his focus and has become a multi-patent inventor in artificial intelligence. Now, as the CEO of a company focusing on strategic empathy and linguistic analysis, Scott uses his AI system to empower people having high-value conversations. He has been published in numerous peer-reviewed medical journals and has had his work at Cyrano mentioned in the Harvard Business Review, Psychology Today, Forbes, Entrepreneur Magazine, and more. Many tens of thousands of people have used his AI software to date.
Interested in connecting with Scott? Reach out on LinkedIn!
About Cyrano.ai
Cyrano.ai uses proprietary language models to understand a person's values, priorities, motivations, and commitment levels in real time. From there, it gives actionable insights to increase rapport and understanding as well as strategic advice to increase conversion or follow through. While the commercial applications of Cyrano are obvious in sales, the primary goal is to empower the conversations around healthcare and mental health.
Transcript
- [Ryan] Welcome everyone to another episode of the AI For All Podcast. I'm Ryan Chacon. With me today is my co-host, Neil Sahota, one of the founders of AI for Good and AI Advisor to the UN. Neil, how's it going?
- [Neil] I'm doing all right. I hope all our listeners are having a good time.
- [Ryan] Yeah, absolutely. Also have our producer, Nikolai, who's here to jump in with questions throughout the conversation.
- [Nikolai] Hello.
- [Ryan] On today's episode, so we're gonna be focused on large language models and conversational agents. So ChatGPT has really captivated the world, and with it a host of challenges are becoming more apparent.
So issues ranging from trust to loneliness have entered the public discourse. LLMs that are personalized and learn our values and motivations might be part of the solution. To discuss this, we have Scott Sandland, the CEO of Cyrano. Scott, how's it going?
- [Scott] Good. Thanks for having me back.
- [Ryan] Absolutely. Just to give a little background on the company, they're very much focused on building empathetic AI systems through the use of proprietary language models.
So let's start this off, Scott, and have you talk to us and just high level it for our audience, what are LLMs?
- [Scott] That's actually a moving target more than you think it would be. What LLMs were in February is different than what they are today. Which is a testament to the advancements. But functionally at the most simple level, LLMs are conversational AI systems that are really good at just chit chatting and following commands based on just plain English.
- [Ryan] Let me ask you this. If I'm a company out there listening, and I've been reading about LLMs and interacting with things like ChatGPT, how do you envision companies being able to really utilize LLMs? Whether building their own or using ones that exist, what are the biggest benefits for companies when it comes to that?
How are they gonna be using them? Just talk to us a little bit about that.
- [Scott] There's ChatGPT, which is the household name, but there are a lot of other ones coming out. Vicuna, Orca, a bunch of other names that sound silly but are pretty darn impressive. And I think what we're gonna see from SMBs and enterprise companies is that they're not gonna build their own LLM.
They're going to build on top of these LLMs. So whether it's the one that Meta puts out or Google puts out or Microsoft puts out, et cetera, companies are going to tune those and then build their own tooling and solutions, maybe based on their own dataset, maybe not, so that they can have new features, internal and external, and grow their company. Not just in terms of efficiencies, which I think is the easiest thing to talk about, but actually auditing their existing KPIs and looking at what is now possible that wasn't possible this time last year.
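To make the "build on top, don't build your own" pattern concrete, here is a minimal sketch of the kind of tooling Scott describes: a hosted base model wrapped with company-specific instructions, using the OpenAI Python SDK as it worked in mid-2023. The company, product, and policy text are invented for illustration.

```python
# A hypothetical support assistant built "on top of" a rented base LLM.
# Assumes the pre-1.0 openai SDK and an OPENAI_API_KEY set in the environment.
import openai

COMPANY_CONTEXT = """You are the support assistant for Acme Widgets.
Answer only from the policy below; escalate anything else to a human.
Policy: returns accepted within 30 days with a receipt."""

def answer_customer(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the rented base model; no training from scratch
        messages=[
            # Company-specific behavior comes from the prompt, not a new LLM
            {"role": "system", "content": COMPANY_CONTEXT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep support answers conservative
    )
    return response["choices"][0]["message"]["content"]

print(answer_customer("Can I return a widget I bought six weeks ago?"))
```

Swapping in an internal dataset usually means adding retrieval or fine-tuning on top of the same hosted model, which is the tuning-plus-tooling path Scott describes, rather than pretraining anything in-house.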
- [Neil] When you talk about new KPIs, are these language-based KPIs? Or is it something else?
- [Scott] Both. There are definitely language-based KPIs. An easy one is one that's related to Cyrano, which is psychographic segmentation. One of the things that Facebook made a lot of money doing, and Google made a lot of money doing, is having datasets on individuals. Browser history, email content, or their behaviors on Facebook, respectively.
And with LLMs and using Cyrano as an example of what can be done with an LLM, we can segment your audience based on very little data, like a hundred words of data, like a couple tweets, and we've got enough to create a really robust profile on the person. And then customize all outbound communication with that person, whether it be marketing, support service, et cetera.
All of that can be customized at scale to that person's psychographics. So when you talk about KPIs and customer satisfaction, an existing one, CSAT, the way that's measured and the way that's addressed changes dramatically. Because you can have really one-to-one customization of those interactions.
So there's things like that you can do. But then also hiring practices, teaming practices, a lot of HR resources can use a handful of these technologies in concert and in doing so, create more efficient teams, better scores in terms of happiness in your work culture. So again, something that's already a KPI, but they're going to create subcategories within those KPIs that are much more actionable.
- [Neil] So maybe we can just, for a second, pause. I'm not sure all our listeners may be familiar with psychographic profiles. Maybe you could define that and help them understand what's actually in it.
- [Scott] The psychographic profiling. There's a few pieces to it, and there's a bunch of different flavors of it. As it relates to Cyrano specifically, and I'll just talk about this as an example and sort of a microcosm of it. No matter what a person is talking about, they are telling you about themselves.
They're telling you, just in plain English, what their priorities are, how they build relationships, how they navigate the world, what their commitment level is right now, what matters to them. These sorts of things are just layered into the subtext of our conversations.
And the more you know a person, the more you can predict the way they're going to behave based on your internal assessment of conversations you've had and actions you've seen from them. And a psychographic assessment of a person is basically the official version of that. Something that we all naturally do, but this is showing its work.
So we can say, people who like this movie also like that movie, and people who purchase these products on Amazon will also search for these products on Amazon. Or people who like this TikTok video should be served up this next TikTok video. It's that on steroids.
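Cyrano's models are proprietary, so the following is only an illustrative sketch of the general idea: asking a generic LLM to infer a rough profile from a short writing sample, on the order of the "hundred words" Scott mentions. The profile fields here are invented for the example and are not Cyrano's actual schema.

```python
# Illustrative psychographic-style profiling via a generic LLM prompt.
# Assumes the pre-1.0 openai SDK and an OPENAI_API_KEY in the environment.
import json
import openai

PROFILE_PROMPT = """From the text below, infer a rough profile as JSON with keys:
"priorities" (list of strings), "communication_style" (string), "motivators" (list of strings).
Return only the JSON object.
Text: {sample}"""

def rough_profile(sample: str) -> dict:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROFILE_PROMPT.format(sample=sample)}],
        temperature=0.0,  # keep output stable enough to parse
    )
    # A real system would validate the JSON and retry on malformed output
    return json.loads(response["choices"][0]["message"]["content"])

tweets = "Shipped v2 last night. Team pulled together when it mattered. Grateful."
print(rough_profile(tweets))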
- [Neil] I'm happy to throw myself in as a guinea pig. I've known Scott a long time, and I think the first time we really seriously had a conversation, he had actually profiled me.
- [Scott] I did. Yeah, I decided to grab your TED Talk. We hadn't- I don't think we'd met in person yet. Maybe we had but it was I think one of the first times we had like a real conversation. I went and grabbed your TED talk, ran it through our system, and created a linguistic fingerprint of you, and then compared it to other speeches.
And then used that to understand how to approach you in our conversation and how to make it more productive. And your TED Talk, just- I'll say it for you. Your TED Talk -
- [Neil] I don't know how I feel about it.
- [Scott] Your TED Talk was the closest speech we have found, still to date actually, to Steve Jobs' initial iPhone announcement. So we graph all these conversations in dozens of different categories, and Neil's TED Talk and the original iPhone announcement from Steve Jobs look incredibly proportionally similar, which is in some ways just the hallmark of good storytelling, but also talking about what is possible and getting an audience excited without just being hype, while also proving it.
- [Ryan] When we talk about LLMs, that obviously brings up conversational agents, chatbots, things like that. How do you feel the public's trust is with conversational agents built on LLMs? Where are we as a society and industry there?
- [Scott] I think it's appropriate where it is, which is optimistic skepticism. So there's this, oh my gosh, this is so cool, this is the future, and who's in charge of these things? I don't want it to have my data. And I think the big companies, especially the ones that have been related to surveillance capitalism, Google, Facebook, the ones I was talking about earlier, they have trained us all to not trust them.
We understand that our data is their product, and we understand that them knowing us is what's making them rich. And we get great services for it. It's a fair exchange, and we're all open about that. I also think we were all there for the original chatbot hype cycle, when it was gonna be amazing, and how much that was underwhelming.
And I think that was just a function of people rushing to be first to market, when this is the thing that we really should have called a real chatbot. What ChatGPT is, I think, is the true standard of what a chatbot should be. Those other things were very much rules-based and on rails, forcing people toward binary choices and specific outcomes.
Whereas ChatGPT, and Pi I think is another great example, even though it's not an LLM, it's a chat interface, just a different architecture. Those kinds of tools are much more open-ended in how they can converse and the topics they can talk about. We're now at what I think is the first square on the board of this board game.
And we're gonna make it through the board game of chat interfaces. And I think we just started a couple months ago.
- [Ryan] Totally agree with you. We've actually talked to a couple other people about this, and one of the things that always comes up is how do you make these chat interfaces, or just the experiences for people, how do you grow that trust? How do you make them more human-like? How do you make them empathetic?
How do you help them build rapport? All those kinds of things that matter for people to form a connection with and have a more trusting experience with what they're interacting with. What are ways that you've seen or see that we're going in or maybe stuff that you all do to help develop that human-like element to make the experience better for the end user?
- [Scott] There's a counterintuitive piece here. We all want to anthropomorphize these systems because they speak well, because they articulate their ideas. We anthropomorphize them way more than we should. We think of them as entities and creatures and beings rather than tools. And so I think the first thing that needs to be done is to really call out: this is a machine. This is not trying to trick you into thinking, hey, I'm a real person doing your customer service. As soon as that facade gets broken and that Wizard of Oz, man-behind-the-curtain moment happens, it shatters a lot of trust and a lot of enthusiasm in the tool. But if we can say from the get-go, hey, I'm an AI, that means you can spend as much time talking with me as you like, you won't burn me out, you won't bore me, I exist to support and serve you. Owning that first and really bringing that back as a touchstone is, I think, the first thing that needs to happen. And I think without fail, there might be an exception to this but I can't think of it, anytime a company has been using chat systems and tried to be deceptive about that, it has backfired.
And so we need to start with those companies earning the trust by calling out what they are and what they aren't. And I also think these chat systems need to be aware about what their- or disclose what their intended outcomes are. Lazy example is, hey, I'm a sales robot. My goal is to make you buy more of product X.
And I'm here to answer your questions about product X and the competitors if you like. The same way when you go into an Audi dealership, you don't expect them to say really nice things about a BMW. You expect them to talk about the Audi. And I think these chat systems, because they have the ability to be relatively objective, there's bias in there, but if they can overtly disclose what their intended purpose is, I think that helps people as well.
- [Neil] I agree with what you're saying, Scott. In my experience, there's a slight difference, I think, in what people's expectations are. In the early days of AI, the early days with the Watson robots and stuff, we intentionally made them sound robotic, with jerky motions, because we didn't want to freak people out.
- [Scott] Yeah, uncanny valley stuff.
- [Neil] Yeah. Because back then we could have made the motions smooth, there were like 2000 human voices. We didn't want to freak people out. But we'd start showing people this stuff, and their whole first reaction was, yeah, that's pretty cool, but can you make it sound more human? Can you make it look more human, act more human, walk more human? They wanted human, human, human, which we were totally caught off guard by. So why do we anthropomorphize it? I think it's not so much that people want to feel that sense of familiarity. I think they're just more comfortable with the known, which is the human. And what I've seen with people interacting with chatbots over the past 12 years, and other channels with AI, is that they still expect the AI to behave like a human normally would in certain circumstances. Even in sales, their only real expectation is that they don't believe the AI is actually judging them and their behaviors. So it's this weird kind of dichotomy that's emerged from this.
- [Scott] I think you're right. I think we want it to behave like a person, but own that it isn't. And so we want it to be more increasingly familiar and less rigid. And- again, to the honesty, dishonesty piece, with deep fakes, we want those deep fakes to look incredibly realistic when it's entertainment and fun, and we're doing a stable diffusion thing and someone's making renaissance paintings of their friends riding horses.
And the more realistic it looks, the better. But as soon as it's something that is deceptive, where it's a person they know in a situation that person hasn't been in, or it's easy to talk about it in politics, then the distrust and the uneasiness and the fears show up and there's a lot of understandable latent fears that people have about the terminators and robot uprisings and all that we've gotten from Hollywood.
And so anytime someone trips on that fear, I think there's a pretty big reaction from people. And I think visuals- the stable diffusions and things like that have run into that a little bit more because it's easier to see stable diffusions and deep fakes being used for harm than good, whereas a chat system, it's- I think easier to see them being used for good than harm, even though both of them have potential in both directions.
- [Neil] Well, we've already seen chat systems and AI audio systems be deep fakes. And just for the listeners' clarification, a deep fake is really a bad actor taking somebody's image, likeness, sound, whatever it is, and using it for malicious intent. The twin of that is a digital twin, where the person or entity that owns the location, object, or likeness gives their permission for a specific use, which might be to chat with someone or do a virtual commercial or whatever it is. But they're twins of each other, unfortunately: digital twins are the mirror images of deep fakes, or vice versa.
- [Scott] Seeing the value of a digital twin requires a little bit more creativity than seeing the harm of a deep fake. Or maybe that's just the people I'm talking with, I don't know. But I think it's scarier, even though linguistic analysis and customizing those interfaces has, I think, a lot more potential to impact the world.
- [Nikolai] I'm curious to get your thoughts on agents. We've seen all these large language models start to get augmented with agents. ChatGPT can search Bing, and there are all these add-ons, essentially, to extend the capabilities of these language models. How do these agents differ from the language model itself technologically? What are they doing that the language model isn't? What's going on with all that?
- [Scott] Yeah, this is the thing that I am simultaneously really excited about and concerned by how fast it's happening. I think it's exactly the right direction, but its rate of implementation is pretty alarming, because you have APIs and now function calling, and function calling is an even more extreme version of these language models being able to effect change in the real world.
So an easy example is, I think it was Expedia who built an extension, an API extension, into ChatGPT. So you can just say, hey, plan a trip for me to somewhere in the Mediterranean, and I wanna spend this much money, and it'll come up with an itinerary and then you can say, now go buy this, and it opens up windows and makes it possible for that to happen.
So it's functionally entering into financial contracts for you. And the next level of that is, with certain APIs and function calling, you can have it actually effect change. A lazy example is turning your lights on and off via ChatGPT, which feels like a Google Assistant, so it's not so scary. But it has the ability to do things. Stock trades for you, things like that.
That means single people can simultaneously have greater agency, be more productive, and have their good ideas come to life more readily and easily, which is it being used for good, which is fantastic. The problem is that unforeseen consequences and bad actors both require more guardrails, regulations, and frameworks, and these things are being innovated so fast that we don't have those yet.
And I think we need them. I'll just pick on Facebook again. When they created Facebook, they said they were going to digitize and democratize the college experience so everyone could have it. And that's not what Facebook is. And it hasn't been that for a long, long time, because it changed, it evolved naturally. It's not about the college experience, it's about dopamine. It's a dopamine-generating casino. That was an unintended consequence, and the algorithms pulled it in that direction. YouTube's algorithms pulled it in a direction that was more radicalizing, in both those cases. We've created more loneliness, like Ryan brought up a moment ago. And so when we think about this opportunity we have right now: the last time we had really cool new social tech, we blew it, and we created deaths of despair and suicide rates skyrocketing and people under 40 being sadder than they've ever been.
And this is the next moment where we're interacting with algorithms, the 2.0 of those algorithms. And we need to be careful that as we're doing that, we learn from our history. Like I said, I'm really excited about the APIs and function calling and what that's gonna mean for productivity, and especially education and healthcare. I think those two things are gonna be incredible, and I'm excited about that. And I wish there was a way to help slow things down just a little bit, so some sort of regulation, whether it be private or public, can catch up.
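For readers who haven't seen it, here is a minimal sketch of the function-calling flow Scott is describing, using the interface OpenAI shipped in mid-2023. The key property is that the model doesn't act directly: it returns a structured request that your own code can inspect, and veto, before anything executes. The set_lights function and its wiring are hypothetical.

```python
# Function calling with the mid-2023 OpenAI API (gpt-3.5-turbo-0613).
# Assumes the pre-1.0 openai SDK and an OPENAI_API_KEY in the environment.
import json
import openai

functions = [{
    "name": "set_lights",
    "description": "Turn the living-room lights on or off.",
    "parameters": {
        "type": "object",
        "properties": {"on": {"type": "boolean"}},
        "required": ["on"],
    },
}]

def set_lights(on: bool) -> str:
    # A real integration would call a smart-home API here
    return f"lights {'on' if on else 'off'}"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "It's movie time, kill the lights."}],
    functions=functions,
)
message = response["choices"][0]["message"]

# The model only *proposes* a call; our code decides whether to run it.
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    print(set_lights(**args))
```

That inspection point between the model's proposal and its execution is where the guardrails Scott is asking for would live; the concern is that nothing currently forces developers to put anything meaningful there.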
- [Nikolai] A threat that's been flying under the radar a bit is agents doing stock trading and handling finance. You can foresee an AI crashing the economy one day because it's just unleashed on the stock markets.
- [Scott] Yeah, and years ago we saw predecessors of that, where robo-trading crashed a stock rapidly, and companies lost 80% of their value in 20 minutes because a handful of algorithms interacted with each other, and it got shut down and halted.
But that was tech from 10 years ago, and it didn't have the ability of a handful of guys on Wall Street Bets each getting their own LLM, putting together a tool with Orca or something similar, building it homebrewed on their own computers, and being able to run it at a scale that a hedge fund was able to three years ago.
So I think it's a really interesting space, and I think financial services has a history of moving fast and breaking things because of the arms race that's inherent in it. Healthcare moves slow and education moves slow, but financial services tends to move fast.
And so I think it's gonna be an area where the rubber's gonna meet the road before the others.
- [Ryan] So I read the article you posted. I forget exactly when you posted it, but it was on LinkedIn, talking about dopamine versus oxytocin, I think that's how it was framed. And you've already mentioned dopamine a good bit, but when it comes to the human connection side, like we were talking about earlier, what changes do you think need to happen in AI to get us more focused on that oxytocin side of things, as opposed to dopamine being the big driver? And I think you said something like a race to the bottom of the brain stem, I might be paraphrasing it incorrectly, but I just wanna have you break that down, because I found it super fascinating to think about from a user experience standpoint. You already talked about the dopamine side with the casinos and stuff like that, but talk about the other side of it.
- [Scott] Yeah, so when you think dopamine, think D for dopamine, D for drugs. Drugs, slot machines, addictive, impulsive behaviors. There's good sides of dopamine too. But for the sake of this conversation, it's stimulating, it's exciting, it's fun. That's why people like going to Vegas and losing money and feel good about it, oddly.
And a lot of these tech tools have been driving towards dopamine, but dopamine also creates isolation and withdrawal. And when you look at people who are addicted to anything, there is a receding pocket of things that bring them pleasure including relationships. And so that reduction is not good.
Oxytocin is the hormone of intimacy, connection, trust, love. The highest moment of oxytocin in a person's life is when they're giving birth, things like that. That tribe, pack-animal, primal, best version of ourselves is where oxytocin is. And Elon Musk gets a lot of credit for building Tesla around first-principles engineering, an expression that gets thrown around a lot.
And I think we need to do first-principles engineering for AI, especially LLMs, because we run the world through conversations and communication. There are a lot of animals much bigger than us and stronger than us and scarier, but because we can talk it out and plan together and coordinate, we can do amazing things and go to the moon. So the power is in communication. And as we're talking about a tool that is that powerful, we want to plug it into the first principles of the best parts of humanity, and that's oxytocin generation more than it is dopamine. The things that give us real pleasure and real honesty are the opposite of a Snapchat filter.
A filter that makes you look prettier than you actually are. That's great that you get a picture where you look nicer, but you also know I'm clicking this button because I feel I'm not good enough in reality, and I need a fake version of me that is good enough. And that's saccharine. That feels really hollow, and we have a massive economy built around that. And this is, I think, the moment where we reset, and we build tools that are the opposite of a Snapchat filter. Tools where people are better understood for who they really are and not at a surface level, but at a deeper level that creates real bond and friendship.
And you're a better friend with someone after 10 conversations than one. And what if we can help accelerate that and really understand each other at a deeper, more respectful level. I think that does a lot for discourse too. And a big part of the internal conversations at Cyrano are about oxytocin and how we can be auditing ourselves and our frameworks for is this a dopamine driver or an oxytocin driver? And as much as possible be pointing the algorithms and the company at oxytocin generation, which means trust, intimacy, connection, and all that. And that could be through a digital agent. So you're talking to a chatbot that helps generate oxytocin, and that's fine.
But even better is person to person communication being facilitated so that two people get more oxytocin off the same amount of compute.
- [Ryan] How does sentiment analysis play into this? Being able to recognize intent, emotion, negative and positive comments. How does that play into all of this? Because I imagine in order to create that connection, you need to understand, let's say through the text, what the emotion is from the individual. Can you talk about that a little bit?
- [Scott] Yes, I would say sentiment and summary. There's a crawl, walk, run, and sentiment and summary are the crawl of this. So if we can understand what's being said, the nouns and verbs, and whether the person likes it or not, that's a pretty easy level. It took a lot of work to get here, but it can summarize a conversation and tell if the person liked it or not.
That's great. Then we know the nouns and verbs and what we're actually talking about. And that's crawling. The next step is psychographics, and that's walking: understanding the person on a deeper level, appreciating the whys and hows of their decisions and how those extrapolate out to other topics.
So you can say, okay, this person didn't like that. Why not? And what should we do about it? I don't know that, I just know he was unhappy. Psychographics allows us to say, this is why they're unhappy, and this is what they'll be happy about in the future, and here's what you can do to help make that happen.
That feels a lot more like empathy. That feels a lot more like a sincere, useful set of assessments. And then the run is multimodal, so speech, video, an assessment of the person's micro gestures and pupil dilation and rate of speech as well as the content of it. So all of those things become the run.
And so what we have right now with ChatGPT is somewhere in between that crawl and walk. There's some stuff that it's doing that is flirting with psychographic assessment. But right now it can be an API that gets added to any of these LLMs, and I see that being a big part of the direction hopefully towards oxytocin.
It could be used with dopamine incorrectly. But if it's done right, then it'll be a real good thing.
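The "crawl" stage Scott describes, summary plus sentiment, is achievable today with off-the-shelf models. Here is a minimal sketch using Hugging Face pipelines, which download default pretrained models on first use; the sample text is invented.

```python
# Crawl stage: summary plus sentiment with off-the-shelf models.
# Requires: pip install transformers torch
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model
summarize = pipeline("summarization")       # default summarization model

transcript = (
    "The onboarding took three calls and I still can't export my data. "
    "Your support rep was patient, but I'm frustrated with the product."
)

print(summarize(transcript, max_length=30, min_length=5)[0]["summary_text"])
print(sentiment(transcript)[0])  # e.g. {'label': 'NEGATIVE', 'score': ...}
# Crawl tells us *that* the customer is unhappy; the walk stage Scott
# describes (psychographics) would try to answer *why* and what to do next.
```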
- [Neil] Psychographics and linguistics. These are sciences that have been around for several decades. Why haven't we seen more widespread adoption or development of AI with them?
- [Scott] A few reasons, and here's an example of why. This isn't the whole reason, but it's an example. There's this brilliant guy named Richard Socher. He's a professor at Stanford. He wrote a bunch of the AI that Salesforce bought from him and turned into Einstein and all that stuff. He is for sure a thought leader in AI and machine learning, and he trains a lot of data scientists to become machine learning and AI experts.
And he trained them to optimize for complete accurate responses. Which is not the same thing as effective communication. Because people were optimizing for legacy KPIs, because people were optimizing for efficient nouns and verbs on minimal compute, they were throwing out stuff that wasn't mission critical.
So all the nuance and subtext, they said, we're gonna table that. We'll get back to that. Let's just know what kind of car they're talking about, what color car they're talking about, and if they want to buy or lease. And if we can do that, we're providing value. And it wasn't until we started really labeling all the words in the sentence, instead of just the nouns and verbs, that the subtext started showing up.
Richard Socher using the phrase, complete accurate responses are what we should optimize for, was a big driver behind me starting Cyrano, because I saw that and thought, there's an opportunity there. That's a zig, and there's a zag in the other direction: the idea of the why and the empathy is what's needed. People are just getting to that. I'd say in the last year and a half it's taken off a lot, and since ChatGPT, in the last six months, it's gone even more so.
- [Neil] So you alluded to one of the reasons you started Cyrano. Scott, you're actually a therapist by trade. Why are you in this space at all?
- [Scott] I'm in this space because I was- I spent 20 years working with at-risk teens and drug-addicted adolescents, and I watched that problem scale, and I wanted to build solutions that could scale proportionally. And it's not training more therapists one at a time or even a classroom at a time. It is giving tools that can be simultaneously a copilot for the therapist, which is great.
And also tools that can be upstream from that in peer support, in teaching, in any other space when the problems are still small, and to help those kids not feel isolated, to not feel less than, to not feel dopamine centric and just doom scrolling like they can do. Instead to build resources and tools that can help be part of the solution in the opposite direction.
And so we built Cyrano to build linguistic models that can help people having high value conversations build intimacy and rapport.
- [Ryan] One of the things we've talked about a couple times throughout this is looking for solutions to the increase in loneliness that people are experiencing. A lot of that happened during the COVID time period. I have friends who are therapists, and they've brought that up with me directly: they've noticed this increasing loneliness in society.
And we talked about obviously using chatbots and interacting with different experiences like that online. Are there other solutions that you see that AI can play a role in bringing to the table to help solve that problem or other problems related to that?
- [Scott] There's gonna be some well-intentioned people who really screw this up. Just look at Greek mythology. Greek mythology is so good at teaching us lessons, and the two to pay attention- well, there's three to pay attention to. Two bad, one good. Narcissus is the story of the dude who fell in love with himself and fell in love with his own reflection and just stayed staring at himself and ignored everybody else.
And we've all seen that. We've all seen people review their own Facebook feed to see how many likes they're getting and revisit their LinkedIn post that got the most comments, just self-aggrandizing moments of vanity. And that's dopamine-producing, by the way. That's one of the things we've heard about from, oh geez, Tristan Harris, who made The Social Dilemma, which is where I originally heard the phrase race to the bottom of the brain stem. I think it came from there. He talked about that, this idea of that isolation. The other side of this, though, is Icarus. Great tool. Add flight to a person. Let's give them some wings and let 'em go places they couldn't go before.
But we didn't understand the limitations of the technology. There were unknown unknowns, and in our ego and self-celebration, we fly too high, the sun melts the wings, and you fall and die. I see that happening as well. Those are the cautionary tales that we have seen play out over the last, call it, 15 years, where over this period of time we have all this loneliness, all these deaths of despair. 85% of therapists have the longest waiting lists of their careers right now. And it's because of those two stories. When you give teenagers something that is all-knowing and all-present, because it is connected to Wikipedia and Google and FaceTime, you get a little rectangle that is two thirds of God. When you give that to 14 year old girls, they start dying, and that's not good. And now, with an LLM with APIs and function calling, you're giving it some of the all-powerful too. So we're actually increasing what this technology can do, but our brain is still caveman primitive. When I say the rate at which this is growing is concerning to me, it is concerning because of the difference between our evolution cycles and theirs. Our evolution cycle is a couple centuries if not more. Their evolution cycle can be measured in hours, over the last six months.
And so that difference is scary.
- [Neil] Any words of optimism, Scott?
- [Scott] So there's the third Greek tale, one of the only stories in Greek mythology with a happy ending. And it's Cupid. Cupid, he's a god, falls in love with a mortal woman, and Apollo's wife, it might be Aphrodite, I don't know, I haven't read it in a while, whoever Apollo's wife is, she gets mad at Cupid and says no, this is not okay, we hate her. And the mortal and the god work together, go through all these trials and tribulations, help each other, and end up loving each other happily ever after, which is, again, oxytocin. But it is this idea of man and machine, or like a Deus Ex Machina, because that's what Greek mythology is all about all the time, which means god from the machine. So we have this opportunity in that story to model how man and god-like power can coexist happily. That is absolutely there. And going back to it, that's the alignment problem. This is why OpenAI says they were founded in the first place: to solve this alignment problem.
And I think oxytocin, and I know I'm beating that drum a lot, is a first-principles piece that we need to be paying attention to. And fortunately we've got a ton of smart people working on the alignment problem, and ChatGPT has done a fantastic job of making the planet take it seriously.
So increasingly we have all these brilliant minds pointed at the same thing and brilliant AIs pointed at this thing. We have an opportunity to create that alignment, and if we do, then the happy version of Star Trek is possible. The happy version of Sam Altman's, I think it was a blog post, Moore's Law for Everything.
It's an essay at least. The upside of those things are happening in our lifetime. Not the crazy parts of Star Trek where we're going to different galaxies or whatever, but the socioeconomics of it become possible and on the horizon quickly when we get that alignment figured out and the people are happier, more productive, more well educated, more respectful of each other, and more connected to the world and the people in it.
And I think that's actually an optimistic but reasonable outcome given the enthusiasm of people to solve the alignment problem together right now.
- [Ryan] Scott, I really appreciate you taking the time. This has been a fantastic conversation. I want to give you an opportunity to tell our audience where they can learn more about your company, everything you have going on, as well as if they have any questions or follow up after this conversation, what's the best way to do that?
- [Scott] Yeah, if you've got questions, just find me on LinkedIn. My name, Scott Sandland, is the easiest way to find me. If you wanna know more about Cyrano, you can go to cyrano.ai, specifically cyrano.ai/trial, and you can see a demo of the software that we did live on stage at a conference.
And then you can use two different trials of our software, one that you copy and paste and one that integrates into your email and you can see for yourself the kinds of stuff that Cyrano is doing.
- [Ryan] Fantastic. Neil, Nikolai, any last words from your side?
- [Neil] I think we could talk for a few more hours, but maybe, Scott, if you're open to it, we'll have you come back towards the end of the year, when GPT-5 comes out, and continue the conversation.
- [Scott] Yeah, I'd love to.
- [Ryan] Scott, thanks again. Nikolai, Neil, great questions as always. And yeah, our audience, thanks for listening to this. We're excited to have this- get this out to you and yeah, we'll talk to you guys next time.
Special Guest
Scott Sandland, CEO, Cyrano.ai

Hosted By
AI For All