Summary
In this episode of the AI Engineering podcast, Tobias Macey interviews Adam Honig, founder of Spiro AI, about using AI to automate CRM systems, particularly in the manufacturing sector. Adam shares his journey from running a consulting company focused on Salesforce to founding Spiro, and discusses the challenges of traditional CRM systems where data entry is often neglected. He explains how Spiro addresses this issue by automating data collection from emails, phone calls, and other communications, providing a rich dataset for machine learning models to generate valuable insights. Adam highlights how Spiro's AI-driven CRM system is tailored to the manufacturing industry's unique needs, where sales are relationship-driven rather than funnel-based, and emphasizes the importance of understanding customer interactions and order histories to predict future business opportunities. The conversation also touches on the evolution of AI models, leveraging powerful third-party APIs, managing context windows, and platform dependencies, with Adam sharing insights into Spiro's future plans, including product recommendations and dynamic data modeling approaches.
Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Adam Honig about using AI to automate CRM maintenance
- Introduction
- How did you get involved in machine learning?
- Can you describe what Spiro is and the story behind it?
- What are the specific challenges posed by the manufacturing industry with regards to sales and customer interactions?
- How does the type of manufacturing and target customer influence the level of effort and communication involved in the sales and customer service cycles?
- Before we discuss the opportunities for automation, can you describe the typical interaction patterns and workflows involved in the care and feeding of CRM systems?
- Spiro has been around since 2014, long pre-dating the current era of generative models. What were your initial targets for improving efficiency and reducing toil for your customers with the aid of AI/ML?
- How have the generational changes of deep learning and now generative AI changed the ways that you think about what is possible in your product?
- Generative models reduce the level of effort to get a proof of concept for language-oriented workflows. How are you pairing them with more narrow AI that you have built?
- Can you describe the overall architecture of your platform and how it has evolved in recent years?
- While generative models are powerful, they can also become expensive, and the costs are hard to predict. How are you thinking about vendor selection and platform risk in the application of those models?
- What are the opportunities that you see for the adoption of more autonomous applications of language models in your product? (e.g. agents)
- What are the confidence building steps that you are focusing on as you investigate those opportunities?
- What are the most interesting, innovative, or unexpected ways that you have seen Spiro used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI in the CRM space?
- When is AI the wrong choice for CRM workflows?
- What do you have planned for the future of Spiro?
Parting Question
- From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
[00:00:05]
Tobias Macey:
Hello, and welcome to the AI Engineering podcast, your guide to the fast-moving world of building scalable and maintainable AI systems. Your host is Tobias Macey, and today I'm interviewing Adam Honig about using AI to automate CRM and the maintenance of that. So, Adam, can you start by introducing yourself?
[00:00:29] Adam Honig:
Sure. I'm Adam. I'm the founder of Spiro AI, and we're mostly known for really hating CRM.
[00:00:37] Tobias Macey:
And do you remember how you first got started working in the ML and AI space?
[00:00:41] Adam Honig:
Yeah. Well, I think, you know, like many people, it kinda started with the movie Her, but I'll put a little bit more context around it. At the time, I was running a rather large consulting company that focused on helping organizations use salesforce.com. We had, like, 750 consultants. We worked for pretty big companies like MetLife or Charles Schwab, and everybody kinda hated Salesforce. And so we did all this stuff. Like, we had a PhD in organizational behavior come in and try to come up with, like, models for how to get people to use it better and stuff. And it was just really a labor. And I wound up selling that business, and it ultimately became part of Accenture.
And after I sold it, a friend of mine said, hey, you gotta see this movie, Her. And I watched the movie, and I'm assuming everybody knows the movie. But what really struck me was the fact that, like, the Scarlett Johansson character could just, like, learn about the environment and then make recommendations and do things. And I'm like, I'm struggling with that same issue, you know, like, right here in the real world, trying to get all these salespeople all around to enter data into Salesforce. And why don't, you know, why don't we just have Scarlett Johansson enter all the data for them? I mean, it's probably not a good use of her time, but that was sort of the thing that got me, and what would ultimately become the founding team of Spiro, thinking, okay, well, you know, can't we use machine learning to, you know, just do a lot of these tasks that people seem to be doing manually?
[00:02:14] Tobias Macey:
And so now bringing us around to Spiro, can you give an overview about what it is that you're building and some of the story behind how you came into that as the particular problem that you wanted to solve?
[00:02:25] Adam Honig:
Yeah. Happy to do that. So, you know, Spiro today, we call it the anti-CRM CRM. And, you know, the idea is, like, imagine if you could have something like CRM that would help you keep track of all your customers and communications with them and really understand everything about them, but nobody had to, like, enter any data. And this data entry thing is sort of a big deal, and I just wanna give you a minute on why it's so big, because salespeople traditionally only get paid when they bring in business. And so if they're spending their time entering data, they see it as very negative. And, you know, they often will just not do it. So CRM systems tend to be ghost towns of information. There's nothing really in there. And so, you know, what Spiro does today is it literally reads all of the emails of our customers, and it looks for data in the email, like, you know, who they're emailing, which customers, and it can automatically create contacts in itself. It reads the email signatures and updates the phone numbers, and all of this stuff just happens in the background so that the experience of our customers using Spiro is that they get all this rich data about customers coming from emails, phone calls, video meetings, what have you, ERP order data, and nobody has to do anything. You know? And then, you know, the problem with machine learning, of course, as everybody knows, is that in order to make good, you know, predictions and insights, you need to have good data. And since we collect the data ourselves, without kind of any human interference coming into that process, we tend to get a much stronger signal there than if everybody's just kinda typing in their own notes.
So I don't know if that's sort of the level you wanted to hear about it, but that's sort of the idea behind Spiro. We focus on the manufacturing industry, and that's part of our journey. You asked me about the journey. So when we first started the business, you know, we had this kind of idea, and we literally sold, you know, to anybody that we could find. And so even today, I have two minor league baseball teams using Spiro. But over time, what we realized is that manufacturing companies have a very acute need for this, because most of their sales teams are, I'm trying to think of a nice way to put this, not tech forward.
You know, they're the kind of guys and ladies who need help getting on the Zoom call sometimes. And so the need for, you know, a high-tech company, like a software company, to use AI to automate this stuff is medium. But if you're not a technology company, your need is much higher. And so that's why we're really focused on the manufacturing space. And that's our business today.
[00:05:04] Tobias Macey:
And in terms of the specifics of the manufacturing sector, you mentioned that the people who are working with the CRM aren't necessarily going to be all gung ho about technology like you might get in a start up. But in terms of the overall industry, the sales process, the customer personas, I'm wondering if there's anything germane to that area that also lends itself to a particular problem and solution statement for how to actually facilitate the increase and kind of care and feeding of their customers and their onboarding flows and the types of ongoing communication that are required and some of the ways that you've structured your product to be able to accommodate those particular requirements versus a HubSpot or a Salesforce that is just everything to everyone?
[00:05:55] Adam Honig:
Yeah. No. It's a super interesting question, and it was frankly a bit of an eye-opening experience for me, because of the way that a manufacturing company typically sells. So I have a customer that makes electrical conduit. They have a big factory in Illinois. They make electrical conduit. They take metal and turn it into wire. That's what they do. And the way that they sell is that they have hundreds of distributors all around the country that sell their product, and they take them to lunch and they buy them some beers and they become friends with them. And eventually, when those guys need to buy electrical conduit, they call up my customer and they're like, hey, we need electrical conduit. There's no, like, funnel. There's no, like, sales stages like you think of in CRM. It's like they're building a relationship. And based upon the strength of the relationship, people are gonna buy their goods versus the other guys'. And that's kinda what it comes down to. And so from a technology perspective, we had to really rethink, like, the core CRM model, because it's not about how do I acquire new customers. In most cases, I'm never gonna acquire a new customer. I'm just gonna keep doing more and more business with the same people. And so what Spiro does is it taps into the order history, which is a great predictor of, you know, future volume of business, and it looks for gaps.
And, you know, when gaps occur unexpectedly, the software can flag it for the sales team to be like, hey, what's going on over here? And, you know, our friends at the electrical conduit manufacturer, they'd been trying to do it manually, but they literally have so many customers that, on a monthly basis, they couldn't do the analysis to figure out where these gaps were, just manually trying to do it in spreadsheets and stuff like that. And so that became, like, a very core use case for Spiro: to look at, you know, falloff and patterns in the ordering behavior, but combined with activity data. Because if I'm already talking to you and your orders fall off, that means one thing. But if I'm not talking to you and the orders fall off, that definitely means something else. Right? And so it's combining those kinds of datasets that I think really unlocked a lot of value for the manufacturing sector.
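The pattern Adam describes, flagging unexpected gaps in ordering behavior and weighting them by recent contact, could be sketched roughly as follows. This is an illustrative sketch, not Spiro's actual implementation; the field names, the 30-day "in touch" window, and the gap threshold are all assumptions.

```python
from datetime import date

# Illustrative sketch of order-gap detection combined with activity
# data. Field names, the 30-day contact window, and the gap_factor
# threshold are assumptions, not Spiro's actual schema or logic.

def order_gap_alerts(customers, today, gap_factor=2.0):
    alerts = []
    for c in customers:
        orders = sorted(c["order_dates"])
        if len(orders) < 3:
            continue  # too little history to estimate an ordering cadence
        # Expected cadence = average days between historical orders
        intervals = [(b - a).days for a, b in zip(orders, orders[1:])]
        cadence = sum(intervals) / len(intervals)
        days_since_order = (today - orders[-1]).days
        if days_since_order > gap_factor * cadence:
            # Same gap, different meaning: a quiet account is higher risk
            in_touch = (today - c["last_contact"]).days <= 30
            alerts.append({
                "customer": c["name"],
                "days_overdue": round(days_since_order - cadence),
                "severity": "low" if in_touch else "high",
            })
    return alerts
```

The key design point from the conversation is the combination of the two datasets: the same order gap is scored differently depending on whether the account has gone quiet.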
[00:08:08] Tobias Macey:
Another aspect of the overall CRM use case is also the ongoing customer support where you want to be able to see what are all the different points of interaction from, hey. This is the first time I've ever emailed this person to, hey. I just sold them a million dollars worth of product, and now they have some issue because something got damaged in shipment or what have you. And so then I need to bring my customer service team in as well, and I'm wondering how the overall life cycle of the customer and some of the different ways that it feeds back around into, okay, they got either really good treatment from our customer service. They were happy, and then they ordered another bulk of merchandise versus they got their merchandise, but our customer service department dropped the ball. And so now I see that they're not going to order anything else from us again to some of the ways that you're able to feed some of those signals back in to help the overall life cycle of that customer beyond just acquisition and sales.
[00:09:05] Adam Honig:
Yeah. Yeah. Super interesting. So one of the things that Spiro does is it automatically tiers the customers. And this isn't something that you need machine learning for, to figure out who your top 10% of customers are and stuff like that, but it does automatically tier them, because the response that you should be giving a tier A customer is just a lot different than what you should be doing for a tier C or tier D customer. And the way that it works in Spiro is, you know, we use generative AI to consolidate all of the communication into what we call an executive summary. So when you go to look at, let's say you're doing business with Staples, you look at the Staples company page in Spiro. And the first thing when you pull it up, it just gives you the, you know, overview of everything that's been happening in the past week. It could be customer service issues. It could be new orders. It could be just communication. And then you have the option to dig deeper into that by asking follow-up questions like, hey, I'm gonna go meet with these guys. What are the three things I should talk about? And again, based upon emails, phone calls, order history, and so on, it can build that for you pretty rapidly so that you're well prepared when you show up. And post-sale, you know, account management is about 80% of a manufacturer's business.
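As Adam notes, the automatic tiering doesn't require machine learning; a rank-and-bucket pass over trailing revenue is enough. A minimal sketch, with purely illustrative percentile cutoffs and tier labels:

```python
# Illustrative sketch of automatic customer tiering by trailing revenue.
# The percentile cutoffs and tier labels are assumptions; the point is
# that a tier-A account can justify a richer, more expensive AI
# treatment than a tier-C or tier-D one.

def tier_customers(revenue_by_customer):
    """Return {customer: tier}: A = top 10%, B = next 20%,
    C = next 30%, D = the rest, ranked by revenue."""
    ranked = sorted(revenue_by_customer, key=revenue_by_customer.get, reverse=True)
    n = len(ranked)
    tiers = {}
    for i, name in enumerate(ranked):
        pct = (i + 1) / n  # this customer's rank as a fraction of the base
        if pct <= 0.10:
            tiers[name] = "A"
        elif pct <= 0.30:
            tiers[name] = "B"
        elif pct <= 0.60:
            tiers[name] = "C"
        else:
            tiers[name] = "D"
    return tiers
```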
[00:10:23] Tobias Macey:
And so digging now into the areas of automation, you talked about automatically bringing in some of the different communications and populating the CRM based on that, versus having to do the manual data entry. I'm wondering what are some of the other areas for taking some of the toil off of the sales and customer service staff, and just being able to say, don't worry about it, we're doing this for you, I'm just going to tell you when you need to take an action, and other than that, you go ahead and do your job and talk to the customers, because that's what really matters, not the care and feeding of some technical system that you don't ever wanna have to think about.
[00:11:12] Adam Honig:
Yeah. Well, one of the things that we learned is that it's generally considered best practice for these types of organizations that when you have a meeting with a customer, you send a follow-up email and you document, you know, kind of what was discussed. A lot of our customers are like the electrical conduit guys, who make kind of standard, off-the-shelf products, but we have quite a few customers that make custom engineered solutions. So one of our customers is called Wanco. They make these, if you're driving on the highway and there's, like, a flashing sign that says merge left, you know, like a big message board, that's the kind of thing that they make. And these are all custom specified. And so, you know, if you have a meeting with a customer and you're talking to them about this really engineered product, you really wanna lay out in detail kind of what you talked about, to make sure that you're getting it right. Because if you build the wrong thing, it's fantastically expensive to fix it. And so one of the things that Spiro does is, you know, because we're joining all of that communication, we can draft that follow-up email for you automatically.
We don't send it. I don't feel like the technology is at the point where we should have it send it. And I think that there's always a level of personalization that should be involved in this communication, but it'll draft the follow-up response based upon the signals. And this one thing alone, you know, people tell me, saves them hours a week, because it can be pretty complicated, and they should always check to a certain level that we're getting everything right in it. But that's a use case that we've seen a lot of success with as well. I mean, Spiro does other things. It'll draft emails from prompts. It'll, you know, understand your product catalog so that when it's generating these kinds of communications, it'll use the right terms and stuff like that. But, you know, the follow-up from a meeting has definitely been a big winner for us.
[00:12:54] Tobias Macey:
Digging now into some of the implementation of what you're building, as I was preparing for the show, I noticed that you have been around as a business since around 2014, give or take. Obviously, that predates the current epoch of everything generative AI all the time. And so I imagine that you had already developed a fairly substantial set of models and automation routines prior to the advent of everything as a language model. And I'm wondering how you first approached the different areas of automation, some of the types of machine learning that you were doing, and some of the ways that you were thinking about managing the data input and turning that into useful signal and features for reducing some of that toil that your customers were struggling with.
[00:13:45] Adam Honig:
Yeah. Yeah. So it's kind of embarrassing, I'm gonna say. But when we started the business in 2014, we did totally the wrong thing. And I mean, like, totally. So we had this vision of, like, Scarlett Johansson, you know, doing all this work for people automatically. And we made a design decision that we were gonna build on top of the Salesforce platform, and we were gonna automate all the data into Salesforce. And, you know, based upon our backgrounds and having worked with different companies, we signed up some pilot situations pretty quickly to get people testing this out. And it didn't work at all. And the reason why was, like, multifold.
One was that each company was different. So Comcast was one of our pilot companies. They had a very custom Salesforce implementation. And then we went to another big company, and they had a different, very custom solution. And we couldn't generalize off of the data models that were in these custom Salesforce solutions. It just wasn't working. And then, combined with that, for Comcast, for example, there just wasn't enough data in the system. Their sales teams were entering one and a half records per user per week in the system, and, I don't know, it just wasn't enough to do anything with. You know? And so we spent about four years beating our heads on that particular thing before we were like, okay, well, if we own the whole stack, if we own the environment, if we're not building on somebody else's platform, you know, I think we can control the data better, and we can, like, set up metadata to describe the fields and stuff like that so that everything is, you know, well understood within it. So around 2018, we kinda did a complete reset of what we were doing. So even though we've been doing it for about eleven years, I think only seven productively. And you're right, we did. Once we got to that point, we built our own models, which were fine, but nothing really fabulous.
You know? But, you know, I think for us, it was only really with version two and then three of the OpenAI platform that everything really started clicking together for us. You know, we work with some of the Google models. We work with some of the Amazon models, and, again, they were good, but just not really killing it for us.
[00:16:12] Tobias Macey:
And in terms of the generational shift and the step change in terms of off the shelf capabilities for these models, obviously, going from I have to build my own random forest or logistic regression up to I've got my own deep learning workflow, which is complicated and requires a lot of care and feeding to now I just throw my credit card at the problem and throw some data, and I get something useful. Obviously, there are more iterative steps that you can take beyond that initial proof of concept of, oh my goodness. It actually understands what I'm talking about. And I'm wondering how you worked through some of that transition from spending time and energy and toil on building your own custom models to, okay. Now I've got something that does half of what I was already doing but requires less than half the time, and in particular, what your decision process was for deciding which pieces to keep and which pieces to augment and then which to totally replace.
[00:17:12] Adam Honig:
Yeah. So from a decision-making process, it seemed pretty clear very early for us that trying to compete at a fundamental level with the large AI players was gonna be a mistake. I mean, we're 35 people. You know, we have some great team members, super smart, whatever, but, like, we're not gonna compete with Google. We're not gonna compete with OpenAI. And, you know, some of the early challenges, and still some things that we struggle with, I'm hoping that they're gonna solve, because they seem like pretty big problems to me. One of them being the context window. So, you know, our typical customer, like, if you think about a 100-person sales team, and you think about the amount of email communication that they're generating, and you think about a 128k-token context window, you're like, okay, well, how do we make something useful out of that? And what kind of techniques can we use to preprocess the data to pull out, you know, the best bits to be feeding into the model? So our focus became much more practical, in a way, instead of fundamental. And I don't think there's anything left in the Spiro platform, you know, that we built from 2018 to 2021 at this point. I think we've pretty much replaced everything with stuff from one of the big players. So I like to think of us as more of an applied AI company, you know, than a fundamental one. But this context window, don't get me talking too much about it, because I'm really hung up on the context window.
Because, like, for me, a lot of the work that we're doing is, like, how do we fit stuff in, you know, and how do we make it fast? Because, like, if you go to Spiro and you're like, hey, I'm gonna go meet with this customer, tell me what I need to know, and it takes ten minutes, I mean, it would still be amazing that it could actually do it, but it's useless in a practical sense. You know, people expect a two-, three-second maximum response time when they ask the system something. You know?
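The context-window problem Adam keeps returning to, fitting a large sales team's communication into a fixed token budget, comes down to preprocessing: selecting the best bits before calling the model. A minimal recency-first sketch, assuming a crude 4-characters-per-token estimate rather than a real tokenizer:

```python
# Illustrative sketch of fitting communications into a fixed context
# budget. The 4-chars-per-token estimate and the recency-first policy
# are assumptions; a production system would use a real tokenizer and
# smarter relevance filtering.

def rough_tokens(text):
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def pack_context(messages, budget_tokens):
    """messages: list of (timestamp, text). Keep the most recent
    messages that fit the budget, then return them oldest-first so
    the model reads them chronologically."""
    picked, used = [], 0
    for ts, text in sorted(messages, key=lambda m: m[0], reverse=True):
        cost = rough_tokens(text)
        if used + cost > budget_tokens:
            break  # budget exhausted; everything older is dropped
        picked.append((ts, text))
        used += cost
    return [text for ts, text in sorted(picked)]
```

Real pipelines would layer smarter selection on top, such as summarizing older threads, but the budget-enforcement shape stays the same.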
[00:19:15] Tobias Macey:
Another aspect of the transition is just in terms of the messaging and expectation setting to your customers. So to your point of, hey. I want this system to do this thing. Why is it taking ten minutes versus, hey. I want the system to do this thing. I don't wanna have to explain it. Just figure it out for me. But also going from building your own narrow AI models that can handle some of the automation steps of, okay. I can parse out some of the semantics of this email and be able to populate that into the different fields in the CRM to now I can actually have a conversation with my CRM.
How did you think about the implementation, the transition stages, and being able to set expectations and kind of handhold your customers through that generational shift in capabilities?
[00:20:07] Adam Honig:
Well, I think the nice thing for our customer base, so if, again, you're thinking about manufacturing, which is not traditionally the most tech-forward industry in the world, is that we have the advantage of setting the first expectation with them about a lot of this stuff. Let's talk about email communication and AI responses and stuff like that. So, going on about three years now, the very first feature that we rolled out was really just allowing people to reply to an email with AI. Like, you would get an email, you could reply with AI, you could give it a little bit of direction as to, you know, where you wanted to go with it, and it would generate it. And that was, like, a really, really small step. And so what we try to do, when we think about customers adopting new technology, is we try to do very incremental, very small steps and make it just seem like a natural extension of their existing workflow.
You know, so there's already a reply button. What if there was a reply-with-AI button? Then we graduated to the AI being able to draft full emails for you from a prompt. Well, we have email templates in Spiro. That's always been a core part of our offering. So the drafting by the AI, you know, should look to the user a little bit like they're using a template, in a way, something that they're already familiar with. And even though we label it as AI, because, you know, we want people to be excited by the technology, we're not screaming it at them. You know, we don't want them to feel threatened or worried about it. And, you know, one of the breakthroughs, I remember I was at a customer in Chicago, a company that makes vision-related products, and half their sales team is international. And I think one of the things that really got them on the AI train in this way was the realization that it could draft in Japanese or Korean or whatever, just in the same mode. And I think, you know, once people started seeing it as something that was exciting for them and not a threat, which I think there's a little bit of today, that really helped that particular customer kinda move forward with things. So, in terms of, like, the methodology that we use, it's just slowly being incremental.
[00:22:22] Tobias Macey:
To your point of how you surface it, so going from a natural workflow of send an email, to send it with AI, and then increasing the level of autonomy that it can gain. What are some of the areas where maybe you've tried to bring AI to bear on a particular workflow and gotten pushback from your customers saying, no, that should never be the job of AI, or, I don't understand how AI is gonna be helpful, or, you know, any variation thereof?
[00:22:49] Adam Honig:
Yeah. So we did a prototype of a full chat interface for the UI of Spiro. So, like, it's basically just, you log in, it's a chat interface. There's no buttons. You know, it's just, you chat. And I was like, this is perfect. We don't need a front end team. We can just have everything be in the chat. You know? And you could just be like, hey, you know, tell me about Tobias, or, I'm gonna go talk to this company, or whatever, and it would just give you what you needed. And that was a complete bomb. You know? People were just, like, they looked at it like, what am I supposed to do with this thing? You know? Like, we're the anti-CRM CRM, but we're still a CRM.
You know, it still needed to have sort of the basic, you know, navigation in the system to get people to feel comfortable with it. So that's one example of something that just really didn't go well for us. And that's okay. You know, we try things. We see what works and what doesn't work.
[00:23:49] Tobias Macey:
So now that you're at a state where you're relying largely on these vendor APIs for being able to incorporate these language model capabilities, there are a lot of sophisticated workflows that you can build. You need to be judicious about how you're doing that so that customers are delighted and not displeased. The other challenge that comes to bear when you're bringing in these language model providers is cost management, because depending on how verbose your customers are and how verbose the models are, you might spend 5¢ or you might spend $5 for the same interaction, as well as the challenge of making sure that the language model response is actually relevant and doesn't fall prone to hallucinations. So then you have to make sure that you're managing the context, bringing us back to your point of context windows. And I'm wondering what your process is for managing some of those cost controls and output validation, and just some of the guardrails that you've put around those systems to make sure that your customers are happy and that you're not running over your allocated budget and eating into your profit margin because of the costs of operating that utility.
[00:25:00] Adam Honig:
Yeah. So it's super interesting. So when we first started working with, like, OpenAI, for example, they had an ability to limit the amount of spend that you had. You could be like, I don't wanna spend more than, like, $200 a month or something like that. As a matter of fact, they wouldn't even let you spend more than $200 a month, I think, at some point, because they were throttling, you know, their own usage. So I'll give you an example. So when we first started, we had this vision of the executive summary, and we wanted that, you know, when you go to a company, you're always gonna see, you know, just at a high level, exactly what's been happening. So you don't have to look at all the details; the system is just gonna tell you. But a lot of our customers have ten, twenty thousand customer records in Spiro. You know, we have hundreds of customers. And to run that for every single one of them, when people might not even look at all those customers on a weekly basis, would have been crazy expensive. So we engineered an approach where our customers would follow certain companies in Spiro, and that following told us that that company was important enough that we should run this, at the time, expensive AI process to provide that executive summary. And that was sort of the first approach to that. But, you know, what's happened as the different models have come out is that the price points have really fallen radically.
And we've been doing a great job of working to use the lowest cost models where possible, you know, kind of across the board. And so today, for example, we do actually just summarize: we create that executive summary for every single company across the board, and it's not costing me more than it did when I only had the selected records being processed. So that's just an example of how much, you know, the cost has come down. But there are still some things that we wanna do in the future. So I'll give you another example: we're really only summarizing the last week's worth of activity at this point. But, you know, I think of it as, like, I've got this budget that I want to spend.
And as the models improve and the context windows go up and, you know, the prices are going down, we'll put more in. So I'm kinda managing to a steady state budget with the amount of tokens that I think we're gonna be consuming. I don't know if that makes sense, but that's kind of the way that I think about it.
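The gating approach Adam describes, running the expensive summary only for companies a user has chosen to follow and stopping at a fixed token budget, could be sketched roughly like this. All names, budget numbers, and the callback shapes here are illustrative assumptions, not Spiro's actual code:

```python
# Sketch: gate an expensive LLM summary behind a "followed" flag and a
# steady-state weekly token budget. Purely illustrative.

WEEKLY_TOKEN_BUDGET = 5_000_000  # hypothetical budget, not a real figure

def summarize_followed(companies, summarize, estimate_tokens):
    """companies: iterable of dicts with 'name', 'followed', 'activity'.

    summarize: callable that produces the executive summary text.
    estimate_tokens: callable that prices the activity before running it.
    Returns (summaries, tokens_spent).
    """
    spent = 0
    summaries = {}
    for company in companies:
        if not company["followed"]:
            continue  # nobody is watching this record; skip the cost
        cost = estimate_tokens(company["activity"])
        if spent + cost > WEEKLY_TOKEN_BUDGET:
            break  # budget exhausted; remaining records wait for next week
        summaries[company["name"]] = summarize(company["activity"])
        spent += cost
    return summaries, spent
```

As model prices fall, the budget constant stays fixed and more records simply fit under it, which matches the "managing to a steady state budget" framing.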
[00:27:19] Tobias Macey:
The other challenge, as you said, is these models are constantly changing. The price points are constantly changing. The capabilities are evolving, and that can be a pretty challenging moving target as well as the fact that by relying on this other vendor as part of your core functionality, it brings in a measure of platform risk of, oh, hey. What happens if they all of a sudden decide that they're going to cap the number of requests that I can make again, and they're not going to let me increase that. And then you have to say, okay. Well, I know that this particular set of prompting works with this model. But if I try to do the same thing over here, everything blows up, and it starts trying to tell all my customers bogus data about their prospects.
And so there's that element of platform risk as well beyond just the cost factor. And you mentioned that you were using OpenAI. You've used some Google. But what are some of the risk mitigations that you're doing as a business that is reliant on these other third party systems to make sure that they're not gonna pull the rug out from under you? Or if they do, that you have some soft landing?
[00:28:26] Adam Honig:
Yeah. Well, it's a good point. So we built a whole bunch of functionality on the OpenAI assistant API, for example, which they have basically just replaced with another API. And, you know, for good reason, because the other one, we really struggled to get it to work well. It was not very performant. It had a level of complication I don't think it needed. But, you know, we built for about a year on that thing. And suddenly, we're like, okay, so we gotta now rip it out and replace it, and we're in the process of doing that. So it definitely is a problem. But, you know, when you're building commercial software, you're exposed to that all the time. Like, to write an email, we use a third party, you know, editor for that. And those guys, you know, change stuff and break stuff all the time for us. You know, we have an embedded analytics solution as a component of Spiro, and we have another third party provider that we use for that. And, you know, nobody, I mean, I guess, obviously, Microsoft and Google and people like that can, but nobody at the midsize scale can build their own everything anyway. So I think it's a matter of, you know, where do you wanna put your bets?
And, you know, you look at the funding of these companies, and you look at their size and their reputation, and you try to make your best guess at it. So one of the things that we do is, you know, we're using a third party API to be transcribing video meetings, telephone calls, and stuff like that. We're working with a small provider company called Deepgram. They've done an excellent job for us. We used the Google API for that initially, and these guys just seemed a lot more responsive. There's certain technical challenges with transcription, especially when people start speaking multiple languages.
I don't wanna get into it, but they were just much more responsive and helpful for us during the process. And even though they're not anywhere near Google, they were, you know, still a good bet to work with. So pick your poison, I guess, is the answer. No good solution there.
[00:30:23] Tobias Macey:
In terms of your journey from building your own models, to retiring those in favor of these more capable language models, to now having rolled that out with a set of features based on those: what are some of the opportunities that you're investigating for bringing more autonomy to bear on the CRM? Letting the AI models take more control from the humans that are involved, being able to be more proactive in maybe doing their own research, identifying possible prospects of, okay, this is a company that maybe could benefit from the particular product that you're manufacturing, here is some information based on their public earnings reports, maybe this is a good opportunity for you to try and get in front of them, etcetera, etcetera, to be able to expand the set of offerings and the ways that your platform is able to grow the capabilities of your customers.
[00:31:21] Adam Honig:
So for a lot of our customers, kind of the holy grail is recommended products. It doesn't work so well in electrical conduit, but a lot of our customers have thousands, tens of thousands of SKUs of products. And they all, you know, want the Amazon experience of, oh, customer X is buying this, we should offer them that. And that is still a very tricky thing to get right. So this is something that, you know, I think we're definitely looking at for improvements in the future. I mean, Amazon has the benefit of such scale that it works great for them. But also, if they're recommending the wrong thing, it's not as bad as in a corporate setting. If you show up at a customer and you're like, hey, I think you guys should really be interested in this, and it's dumb, you're gonna look like an idiot pretty quick. So we have to be a little bit more accurate there. The other thing that we're looking at, which is a little, let me see if I can describe it properly, I'm kinda calling it fieldless CRM. Business applications, by and large, are okay, we got a table, we got some columns in it that represent fields on the screen.
Right? And let's say, you know, in Spiro, you wanna add a new field because you wanna capture the person's favorite color. So you add a new field, and it's called color, and you got a pick list, and you can... well, that just feels super old fashioned to me today. Why are we, from a business systems perspective, just putting everything in fields that way when the way that this information is captured is not like that? You know what I mean? And so you might have fields that don't exist. You might have fields that don't need to be filled in. You might have things that aren't relevant. And so why can't, you know, the database structure and the user experience accommodate that dynamically based upon what they're seeing? And, you know, if you think about some of the design experiences that people will see in the future, like custom personally fit clothes and stuff like that, I kind of see that as the metaphor for the way business applications could be working, all underpinned by the language models that understand the relationships of these things to the company or contact object or something like that. So I don't think we're anywhere near making that a practical reality, you know, but we'll see, because I think that's where it should be.
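One minimal way to sketch the "fieldless CRM" idea is a record that keeps an open-ended dict of facts rather than fixed columns, where anything an extraction step pulls out of communications simply appears as a new attribute. This is purely an illustration of the concept, not Spiro's implementation; the class and method names are invented:

```python
# Sketch: a record with no fixed schema. "Fields" are created on demand
# when extracted key/value pairs are absorbed, so adding "favorite color"
# needs no migration. Illustrative only.

import json

class DynamicRecord:
    def __init__(self, record_id):
        self.record_id = record_id
        self.facts = {}  # field name -> value, created as they arrive

    def absorb(self, extracted):
        """Merge extracted key/value pairs; unknown keys just become fields."""
        self.facts.update(extracted)

    def to_json(self):
        # The UI could render whatever keys happen to be present.
        return json.dumps({"id": self.record_id, **self.facts}, sort_keys=True)

contact = DynamicRecord("contact-42")
contact.absorb({"favorite_color": "green"})      # no schema change needed
contact.absorb({"preferred_shipping": "rail"})
```

In practice this is the classic schemaless/JSON-column trade-off: flexible writes, but queries and validation move from the database schema into application logic.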
[00:33:48] Tobias Macey:
And that brings to mind a lot of the recent conversations around things like agentic memory, short versus long term memory, and also the potential for auto generating knowledge graphs based on contextual input and having the LLM use that for providing better context and relevancy to the RAG workflows. And I'm wondering what are some of the areas of experimentation that you've started on there to be able to start to bring in some of those more data modeling and data storage focused elements of the overall AI application?
[00:34:23] Adam Honig:
Yeah. I wish we'd done more on that. It's so exciting. But we're at the very beginning of it. You know? I mean, one of the great things about having customers is we get a lot of feedback from them. We get a lot of input and ideas, and nobody's asking for this yet. And so we have to do a little bit of what people ask us for too. So that's the current state of play, but I'm looking to be carving out some research time to be focused on that, because you're totally right. Like, this is where it's gonna be. And I have no idea how to do it, though.
[00:34:57] Tobias Macey:
Well, as a shameless plug, I will recommend that you go back and listen to the episode that I did about Cognee, which is one of those agentic memory systems that helps with managing some of that data ingest and builds the graph context for you. Yeah. Right on. I'll definitely do that. And going back to the confidence building aspect that we touched on earlier, of figuring out what are these models capable of, how do I make sure that they continue to be capable of that as new versions of the model come out, as they tweak the system prompts, etcetera. What are some of the ways that you are being deliberate in the experimentation that you're doing, the ways that you're spending time and money on potential features and speculative capabilities of your platform, to make sure that you're not just burning a lot of time and energy on something that ultimately has no real future, and some of the ways that you're focusing on kind of failing fast to identify what are the capabilities that you can't build or don't want to build or that your customers really don't care about?
[00:36:01] Adam Honig:
Yeah. Well, you know, I think it's okay to have a certain amount of failure in the system, first of all. I think, you know, if you don't have that amount of failure, you're probably not innovating hard enough. But we do have a pretty extensive beta process as well. And, you know, we run everything in beta for about six weeks before we go live with anything in production. And usually, you know, the kinks start to show up in a couple of weeks of things being live. And when things go wrong in AI, they go wrong pretty bad pretty quick. You know? It's not usually a subtle thing. And so, you know, we do a lot of work with prompt management, kinda what are the core prompts that the system should be using, how does it get impacted by user decisions. And that's an immense amount of testing that we're doing during the beta phase to make sure that everything's coming out right. So I'd say that's probably the best strategy that we have at this point. And, you know, we try to only be in beta with customers that are very understanding of these kinds of things. It's not a generalized approach.
[00:37:10] Tobias Macey:
And as far as identifying those customers, what are some of the ways that you evaluate and onboard them to set expectations for, hey. This is a beta. It will probably fail spectacularly. I warned you.
[00:37:24] Adam Honig:
Well, I mean, we're we're definitely gonna use everything ourselves first. You know? There's no question. So we run the whole company on Spiro as you would in this kind of circumstance. So we see a lot of things ourselves. There's no magic guide. It's it's people that we have really good relationships with that have shown an appetite for being out on the edge, and these people raise their hands pretty quickly. And then you have to determine whether or not, you know, they're really gonna give you the good feedback that's required to to make it successful. But but you're gonna know that when you see it for sure.
[00:37:56] Tobias Macey:
And as you have been building and growing and iterating on Spiro and working with your customers so that they can fulfill the needs of their customers, what are some of the most interesting or innovative or unexpected ways that you've seen your product applied?
[00:38:13] Adam Honig:
Unexpected ways. Yeah. That's a really good question. I'll give you one example. So in the platform, for a long time, we had a business card scanner that didn't do really any AI. It was a very basic thing. You would take a picture of a card. It would try to map words and put the words into the right fields to kinda create a contact from the business card. We upgraded that to be using AI and literally reading the image and just automatically updating everything from there, and that was working really well. And then one of our customers realized that instead of taking a picture of a business card, they could actually just take a picture of a notebook that they had been writing notes in, and it would find the contact information there and just do it. You know? And I was like, wait a second.
So we could actually just automate the whole note ingestion process from handwriting on this stuff? And that happened for me, like, in the past two weeks. So that's pretty new. I'm pretty excited about that, actually. And then what that's leading us to is now customers are like, hey, we wanna take a picture of the sign of a building and just have that automatically create a company based upon the sign and the geolocation that's available in the photo. And I'm like, you're totally right. Why the hell not? Why shouldn't you do something like that? You know? What could possibly go wrong?
[00:39:35] Tobias Macey:
Exactly.
[00:39:37] Adam Honig:
You can still edit it. It's okay.
[00:39:41] Tobias Macey:
And in terms of your experience of working in the space, building this business, trying to stay up to date with the constantly moving target that is AI, what are some of the most interesting or unexpected or challenging lessons that you've learned in the process?
[00:39:56] Adam Honig:
I don't know. I think, you know, kinda going back to this OpenAI assistant model. So the assistant API that OpenAI came out with was designed to help you solve the problem of keeping state in a conversation and build up, you know, the ability to look at different files and structures in a contained space to make everything work better. And it seemed so smart, and they're obviously very smart guys and ladies working on this, but it just wasn't performant enough for us. And so it was kind of a surprise: they came out with it, and we were like, okay, seems like the right approach, I'm sure they're gonna make it faster. And it just never got faster for us, even though we thought it must, because they're always trying to improve these things. And so I'd say, like you said, we are dependent on some level on third parties to deliver our solution.
And when they don't always live up to that, you know, it it can be a surprise even for the best companies that we work with.
[00:40:58] Tobias Macey:
As people are evaluating the use of CRMs, which ones to choose, and specifically as they're interacting with Spiro and understanding its capabilities, what are some of the cases where you've decided that AI is just absolutely the wrong choice for a given workflow?
[00:41:16] Adam Honig:
Yeah. Well, first of all, our point of view is that nobody should use CRM. It's terrible. It's outdated. You know, we're against it. That's why we're the anti CRM. But if you had to use a CRM, some of the things that AI probably shouldn't do for you... I'm gonna use a simple example of contacts. Like, contact scoring is well known; there's a well known approach to do contact scoring. It's simple math. You know, you don't need AI to be doing anything about that in your CRM. And if anybody's telling you that about, oh, contact scoring or territory assignment or kind of things that we've known and figured out, that should be a red flag to you, I would say.
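To make Adam's "simple math" point concrete, classic contact scoring is just a transparent weighted checklist. The signal names and weights below are invented for illustration; the point is that no model is involved:

```python
# Sketch: contact scoring as plain arithmetic, no AI required.
# Signals and weights are made up for the example.

WEIGHTS = {
    "opened_email": 5,
    "replied": 15,
    "meeting_held": 30,
    "placed_order": 50,
}

def score_contact(events):
    """events: list of signal names observed for this contact.

    Unknown signals contribute nothing; the score is just a weighted sum.
    """
    return sum(WEIGHTS.get(e, 0) for e in events)

print(score_contact(["opened_email", "replied", "meeting_held"]))  # → 50
```

Because the rule is a sum of known weights, it is auditable and tunable, which is exactly why reaching for an LLM here would be a red flag.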
[00:41:56] Tobias Macey:
And as you continue to iterate on and grow Spiro and try to take advantage of the constantly evolving set of capabilities for these different AI models, what are some of the things you have planned for the near to medium term, or any particular projects or problem areas you're excited to explore?
[00:42:13] Adam Honig:
One of the things that we do is we like to do a lot of experiments, just trying things and seeing what happens. And our approach to helping our customers unlock better customer insights has been very focused around individual companies. You know, I wanna know what's going on with Staples. I wanna know what are they unhappy about. I wanna know what did they buy recently. These kinds of things. But expanding the viewport to be taking in all of my customers, that's, you know, the next evolution of what we're focusing on. You know, the amount of data that's required to resolve those kinds of inquiries is frankly too much for us to manage at this point. So we're working hard to figure out how do we put things into the right sized boxes to make it work. But that's, you know, where I start seeing the kind of holy grail coming into view, where you can be like, okay, well, Staples bought this, what is Walmart gonna be interested in? Or what would Target be interested in? And at least give you a good basis for answering that question.
[00:43:20] Tobias Macey:
And are there any other aspects of the work that you're doing at Spiro, the ways that you're applying AI, the overall space of CRMs and the capabilities for automating them, that we didn't discuss yet that you'd like to cover before we close out the show?
[00:43:45] Adam Honig:
Well, the only other thing that I'm super interested in, and this is more of a societal thing than a technology thing, because the technology is definitely available. So I feel like the world has kind of gotten used to, like, AI agents joining Zoom and Teams meetings and stuff like that. They're very prevalent at this point. And I'm really wondering when it's really gonna be acceptable to have an AI agent with you in a meeting. Most of my customers don't do Zoom or Teams calls. They go and visit people in person. That's how you build strong relationships, but we still want that data. So when is it gonna be okay for us to collect that and not be super weird and creepy? Like, I think it'd be super weird and creepy right now, but maybe in a few years, it won't be. I don't know.
[00:44:18] Tobias Macey:
Yeah. The trying to pass off AI as some sort of anthropomorphized entity is definitely something that can easily go terribly wrong, and I think we definitely need to continue to just make it very obvious that it is just a machine and treat it as such.
[00:44:36] Adam Honig:
No doubt. Yeah. And you can see this in, like, the AI generated movies and images too. Like, my daughter, who's in high school, anytime she sees something in the real world, she's like, that's AI generated. It immediately destroys credibility. And I think in the business context, it's even at a higher level. You know, like you said, I don't know the level of sophistication you'd need to get something to pass as a person to make that happen. I can't even get my brain around how well done that would have to be.
[00:45:09] Tobias Macey:
Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And as the final question, I'd like to get your perspective on what you see as being the biggest gaps in the tooling, technology, or human training that's available for AI systems today.
[00:45:25] Adam Honig:
Yeah. You know, it changes from day to day for me, frankly. But the area that I'm stuck on today is the context window. I just want as big a context window as possible, because I have all this data I need to put into it, and I don't wanna be spending time figuring out how do we reduce the amount of data. I just wanna get to the answer. So I would say, for me, that's where I'm at. So if you got a big context window, I'm your guy.
[00:45:50] Tobias Macey:
Alright. Well, thank you very much for taking the time today to join me and share the work that you're doing at Spiro and some of the ways that you're bringing AI to bear on CRMs and ways to relegate them to the dustbin of history. So I appreciate all of the time and energy you're putting into that, and I hope you enjoy the rest of your day. Alright, Tobias. Thanks so much.
[00:46:12] Tobias Macey:
Thank you for listening. And don't forget to check out our other shows, the Data Engineering Podcast, which covers the latest in modern data management, and podcast.init, which covers the Python language, its community, and the innovative ways it is being used. You can visit the site at the machinelearningpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@themachinelearningpodcast.com with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Hello, and welcome to the AI Engineering podcast, your guide to the fast moving world of building scalable and maintainable AI systems. Your host is Tobias Macey, and today I'm interviewing Adam Honig about using AI to automate CRM and the maintenance of it. So, Adam, can you start by introducing yourself?
[00:00:29] Adam Honig:
Sure. I'm Adam. I'm the founder of Spiro AI, and we're mostly known for really hating CRM.
[00:00:37] Tobias Macey:
And do you remember how you first got started working in the ML and AI space?
[00:00:41] Adam Honig:
Yeah. Well, I think, you know, like many people, it kinda started with the movie Her, but I'll put a little bit more context around it. At the time, I was running a rather large consulting company that focused on helping organizations use salesforce.com. We had, like, 750 consultants. We worked for pretty big companies like MetLife or Charles Schwab, and everybody kinda hated Salesforce. And so we did all this stuff. Like, we had a PhD in organizational behavior come in and try to come up with, like, models for how to get people to use it better and stuff. And it was just really a labor. And I wound up selling that business, and it ultimately became part of Accenture.
And after I sold it, a friend of mine said, hey, you gotta see this movie, Her. And I watched the movie, and I'm assuming everybody knows the movie. But what really struck me was the fact that, like, the Scarlett Johansson character could just, like, learn about the environment and then make recommendations and do things. And I'm like, I'm struggling with that same issue, you know, right here in the real world, with trying to get all these salespeople all around to enter data into Salesforce. And why don't, you know, why don't we just have Scarlett Johansson enter all the data for them? I mean, it's probably not a good use of her time, but that was sort of the thing that got me, and what would ultimately become the founding team of Spiro, thinking, okay, well, you know, can't we use machine learning to just do a lot of these tasks that people seem to be doing manually?
[00:02:14] Tobias Macey:
And so now bringing us around to Spiro, can you give an overview about what it is that you're building and some of the story behind how you came into that as the particular problem that you wanted to solve?
[00:02:25] Adam Honig:
Yeah. Happy to do that. So, you know, Spiro today, we call it the anti CRM CRM. And, you know, the idea is, like, imagine if you could have something like CRM that would help you keep track of all your customers and communications with them and really understand everything about them, but nobody had to, like, enter any data. And this data entry thing is sort of a big deal, and I just wanna give you a minute on why it's so big: salespeople traditionally only get paid when they bring in business. And so if they're spending their time entering data, they see it as very negative. And, you know, they often will just not do it. So CRM systems tend to be ghost towns of information. There's nothing really in there. And so, you know, what Spiro does today is it literally reads all of the emails of our customers, and it looks for data in the email, like, you know, who they're emailing, what customers, and it can automatically create contacts in itself. It reads the email signatures and updates the phone numbers, and all of this stuff just happens in the background, so that the experience of our customers using Spiro is that they get all this rich data about customers coming from emails, phone calls, video meetings, what have you, ERP order data, and nobody has to do anything. You know? And then, you know, the problem with machine learning, of course, as everybody knows, is in order to make good, you know, predictions and insights, you need to have good data. And since we collected the data ourselves without kind of any human interference coming into that process, we tend to get much stronger signal there than if everybody's just kinda typing in their own notes.
So I don't know if that's sort of the level you wanted to hear about it, but that's sort of the idea behind Spiro. We focus on the manufacturing industry, and that's part of our journey. You asked me about the journey. So when we first started the business, you know, we had this kind of idea, and we literally sold to anybody that we could find. And so even today, I have two minor league baseball teams using Spiro. But over time, what we realized is that manufacturing companies have a very acute need for this, because most of their sales teams are, I'm trying to think of a nice way to put this, not tech forward.
You know, they're the kind of guys and ladies who need help getting on the Zoom call sometimes. And so the need for, you know, a high-tech company, like a software company, to use AI to automate this stuff is medium. But if you're not a technology company, your need is much higher. And so that's why we're really focused on the manufacturing space. And so that's our business today.
[00:05:04] Tobias Macey:
And in terms of the specifics of the manufacturing sector, you mentioned that the people who are working with the CRM aren't necessarily going to be all gung ho about technology like you might get in a start up. But in terms of the overall industry, the sales process, the customer personas, I'm wondering if there's anything germane to that area that also lends itself to a particular problem and solution statement for how to actually facilitate the increase and kind of care and feeding of their customers and their onboarding flows and the types of ongoing communication that are required and some of the ways that you've structured your product to be able to accommodate those particular requirements versus a HubSpot or a Salesforce that is just everything to everyone?
[00:05:55] Adam Honig:
Yeah. No. It's a super interesting question, and it was frankly a bit of an eye opening experience for me, because of the way that a manufacturing company typically sells. So I have a customer that makes electrical conduit. They have a big factory in Illinois. They make electrical conduit. They take, like, metal and turn it into wire. That's what they do. And the way that they sell is that they have hundreds of distributors all around the country that sell their product, and they take them to lunch and they buy them some beers and they become friends with them. And eventually, when those guys need to buy electrical conduit, they call up my customer and they're like, hey, we need electrical conduit. There's no, like, funnel. There's no, like, sales stages like you think of in CRM. They're building a relationship. And based upon the strength of the relationship, people are gonna buy their goods versus the other guys'. And that's kinda what it comes down to. And so from a technology perspective, we had to really rethink the core CRM model, because it's not about how do I acquire new customers. In most cases, I'm never gonna acquire a new customer. I'm just gonna keep doing more and more business with the same people. And so what Spiro does is it taps into the order history, which is a great predictor of, you know, future volume of business, and it looks for gaps.
And, you know, when gaps occur unexpectedly, the software can flag it for the sales team to be like, hey, what's going on over here? And, you know, our friends at the electrical conduit manufacturer, they've been trying to do it manually, but they literally have so many customers that, on a monthly basis, they couldn't do the analysis to figure out where these gaps were, just manually trying to do it in spreadsheets and stuff like that. And so that became, like, a very core use case for Spiro: to look at, you know, falloff and patterns in the ordering behavior, but combined with activity data. Because if I'm already talking to you and your orders fall off, that means one thing. But if I'm not talking to you and the orders fall off, that definitely means something else. Right? And so it's combining those kinds of datasets that I think really unlocked a lot of value for the manufacturing sector.
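The gap detection Adam describes, inferring a customer's normal ordering cadence and interpreting a lapse differently depending on recent contact, can be sketched in a few lines. The thresholds, message strings, and 1.5x slack factor are invented for illustration, not Spiro's actual logic:

```python
# Sketch: flag a customer whose ordering cadence has lapsed, and vary
# the alert based on whether conversations are still happening.
# All thresholds are illustrative assumptions.

from datetime import date

def order_gap_alert(order_dates, last_contact, today, slack=1.5):
    """order_dates: sorted list of past order dates.

    last_contact: date of the most recent conversation, or None.
    Returns None while the cadence looks normal, else an alert string.
    """
    if len(order_dates) < 2:
        return None  # not enough history to infer a cadence
    intervals = [(b - a).days for a, b in zip(order_dates, order_dates[1:])]
    typical = sum(intervals) / len(intervals)  # average days between orders
    days_quiet = (today - order_dates[-1]).days
    if days_quiet <= typical * slack:
        return None  # still within the normal ordering rhythm
    if last_contact and (today - last_contact).days <= 30:
        return "gap despite active conversation: check for a service issue"
    return "gap with no recent contact: relationship may be going cold"
```

Combining the two signals is the key move: the same order gap routes to a different follow-up depending on whether the relationship is still active.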
[00:08:08] Tobias Macey:
Another aspect of the overall CRM use case is also the ongoing customer support where you want to be able to see what are all the different points of interaction from, hey. This is the first time I've ever emailed this person to, hey. I just sold them a million dollars worth of product, and now they have some issue because something got damaged in shipment or what have you. And so then I need to bring my customer service team in as well, and I'm wondering how the overall life cycle of the customer and some of the different ways that it feeds back around into, okay, they got either really good treatment from our customer service. They were happy, and then they ordered another bulk of merchandise versus they got their merchandise, but our customer service department dropped the ball. And so now I see that they're not going to order anything else from us again to some of the ways that you're able to feed some of those signals back in to help the overall life cycle of that customer beyond just acquisition and sales.
[00:09:05] Adam Honig:
Yeah. Yeah. Super interesting. So one of the things that Spiro does is it automatically tiers the customers. And this isn't something that you need machine learning for, to figure out who your top 10% customers are and stuff like that, but it does automatically tier them, because the response that you should be giving a tier A customer is just a lot different than what you should be doing for a tier C or tier D customer. And the way that it works in Spiro is, you know, we use generative AI to consolidate all of the communication into what we call an executive summary. So let's say you're doing business with Staples. You look at the Staples company page in Spiro, and the first thing when you pull it up, it just gives you the, you know, overview of everything that's been happening in the past week. It could be customer service issues. It could be new orders. It could be just communication. And then you have the option to dig deeper into that by asking follow-up questions like, hey, I'm gonna go meet with these guys, what are the three things I should talk about? And again, based upon emails, phone calls, order history, and so on, it can build that for you pretty rapidly so that you're well prepared when you show up. And post sale account management is about 80% of a manufacturer's business. So
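The automatic tiering Adam mentions really is just ranking: sort customers by trailing revenue and cut at fixed percentiles. The 10/30/60% cutoffs and tier labels below are invented for the example:

```python
# Sketch: customer tiering without ML, just a rank-and-cut over revenue.
# Percentile cutoffs are illustrative assumptions.

def tier_customers(revenue_by_customer):
    """revenue_by_customer: dict of customer name -> trailing revenue.

    Returns a dict of customer name -> tier letter (A is the top 10%).
    """
    if not revenue_by_customer:
        return {}
    ranked = sorted(revenue_by_customer, key=revenue_by_customer.get, reverse=True)
    n = len(ranked)
    tiers = {}
    for i, name in enumerate(ranked):
        frac = (i + 1) / n  # this customer's rank as a fraction of the book
        if frac <= 0.10:
            tiers[name] = "A"
        elif frac <= 0.30:
            tiers[name] = "B"
        elif frac <= 0.60:
            tiers[name] = "C"
        else:
            tiers[name] = "D"
    return tiers
```

The tier then drives the response policy: a tier A account might get the expensive weekly LLM summary, while tier D accounts get a cheaper cadence.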
[00:10:23] Tobias Macey:
And so digging into now the areas of automation, you talked about automatically bringing in some of the different communications and populating the CRM based on that, versus having to do the manual data entry. I'm wondering what are some of the other areas for taking the toil off of the sales and customer service staff, and just being able to say: don't worry about it, we're doing this for you, I'm just going to tell you when you need to take an action. Other than that, go ahead and do your job and talk to the customers, because that's what really matters, not the care and feeding of some technical system that you don't ever wanna have to think about.
[00:11:12] Adam Honig:
Yeah. Well, one of the things that we learned is that it's generally considered best practice for these types of organizations that when you have a meeting with a customer, you send a follow-up email and you document kind of what was discussed. A lot of our customers are like the electrical conduit guys who make kind of standard off-the-shelf products, but we have quite a few customers that make custom engineered solutions. So one of our customers is called Wanco. They make the kind of thing you see if you're driving on the highway and there's a flashing sign that says merge left, you know, a big message board. And these are all custom specified. So if you have a meeting with a customer and you're talking to them about this really engineered product, you really wanna lay out in detail what you talked about to make sure that you're getting it right, because if you build the wrong thing, it's fantastically expensive to fix it. And so one of the things that Spiro does is, because we're joining a lot of communication, we can draft that follow-up email for you automatically.
We don't send it. I don't feel like the technology is at the point where we should have it send it, and I think that there's always a level of personalization that should be involved in this communication, but it'll draft the follow-up response based upon the signals. And this one thing alone, people tell me, saves them hours a week, because it can be pretty complicated, and they should always check to a certain level that we're getting everything right in it. But that's a use case that we've seen a lot of success with as well. I mean, Spiro does other things. It'll draft emails from prompts. It'll understand your product catalog, so that when it's generating these kinds of communications, it'll use the right terms and stuff like that. But the follow-up from a meeting has definitely been a big winner for us.
[00:12:54] Tobias Macey:
Digging now into some of the implementation of what you're building: as I was preparing for the show, I noticed that you have been around as a business since around 2014, give or take. Obviously, that predates the current epoch of everything generative AI all the time. And so I imagine that you had already developed a fairly substantial set of models and automation routines prior to the advent of everything as a language model. I'm wondering how you first approached the different areas of automation, some of the types of machine learning that you were doing, and some of the ways that you were thinking about managing the data input and turning that into useful signal and features for reducing some of that toil that your customers were struggling with.
[00:13:45] Adam Honig:
Yeah. Yeah. So it's kind of embarrassing, I'm gonna say. But when we started the business in 2014, we did totally the wrong thing. And I mean, like, totally. So we had this vision of, like, Scarlett Johansson doing all this work for people automatically. And we made a design decision that we were gonna build on top of the Salesforce platform, and we were gonna automate all the data into Salesforce. And, based upon our backgrounds and having worked with different companies, we signed up some pilot situations pretty quickly to get people testing this out. And it didn't work at all. And the reasons why were multifold.
One was that each company was different. So Comcast was one of our pilot companies. They had a very custom Salesforce implementation. And then we went to another big company, and they had a different, very custom solution. And we couldn't generalize off of the data models that were in these custom Salesforce solutions. It just wasn't working. And combined with that, for Comcast, for example, there just wasn't enough data in the system. Their sales teams were entering one and a half records per user per week in the system, and it just wasn't enough to do anything with. And so we spent about four years beating our heads on that particular thing before we were like, okay, well, if we own the whole stack, if we own the environment, if we're not building on somebody else's platform, I think we can control the data better, and we can set up metadata to be describing the fields and stuff like that so that everything is well understood within it. So about 2018, we did a complete reset of what we were doing. So even though we've been doing it for about eleven years, I think only seven productively. And you're right, we did. Once we got to that point, we built our own models, which were fine, but nothing really fabulous.
You know? But I think for us, it was only really with version two and then three of the OpenAI platform that everything really started clicking together. We work with some of the Google models, we work with some of the Amazon models, and again, they were good, but just not really killing it for us.
[00:16:12] Tobias Macey:
And in terms of the generational shift and the step change in off-the-shelf capabilities for these models: obviously, going from "I have to build my own random forest or logistic regression," up to "I've got my own deep learning workflow, which is complicated and requires a lot of care and feeding," to now "I just throw my credit card at the problem, throw some data at it, and I get something useful." Obviously, there are more iterative steps that you can take beyond that initial proof of concept of, oh my goodness, it actually understands what I'm talking about. And I'm wondering how you worked through some of that transition from spending time and energy and toil on building your own custom models to, okay, now I've got something that does half of what I was already doing but requires less than half the time. And in particular, what your decision process was for deciding which pieces to keep, which pieces to augment, and which to totally replace.
[00:17:12] Adam Honig:
Yeah. So from a decision-making process, it seemed pretty clear very early that us trying to compete at a fundamental level with the large AI players was gonna be a mistake. I mean, we're 35 people. We have some great team members, super smart, whatever, but we're not gonna compete with Google. We're not gonna compete with OpenAI. And some of the early challenges, and still some things that we struggle with, I'm hoping that they're gonna solve, because they seem like pretty big problems to me. One of them being the context window. So think about our typical customer: if you think about a 100-person sales team, and you think about the amount of email communication that they're generating, and you think about a 128k-token context window, you're like, okay, well, how do we make something useful out of that? And what kind of techniques can we use to preprocess the data to pull out the best bits to be feeding into the model? So our focus became much more practical, in a way, instead of fundamental. And I don't think there's anything left in the Spiro platform that we built from 2018 to 2021 at this point. I think we've pretty much replaced everything with stuff from one of the big players. So I like to think of us as more of an applied AI company than a fundamental one. But this context window, don't get me talking too much about it, because I'm really hung up on the context window.
Because, like, for me, a lot of the work that we're doing is how do we fit stuff in, and how do we make it fast? Because if you go to Spiro and you're like, hey, I'm gonna go meet with this customer, tell me what I need to know, and it takes ten minutes, I mean, it would still be amazing that it could actually do it, but it's useless in a practical sense. People expect a two- or three-second maximum response time when they ask the system something.
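The preprocessing problem Adam keeps returning to, fitting a team's communication history into a fixed context window quickly, is at heart a selection-under-budget problem. A minimal sketch, assuming messages arrive newest-first and using a rough 4-characters-per-token estimate in place of a real tokenizer:

```python
def fit_to_context(messages, budget_tokens=120_000, reserve=8_000):
    """Greedy selection: keep the newest messages that fit inside the
    model's context window, leaving headroom for the prompt and reply.
    Token counts are approximated as ~4 characters per token; a real
    system would use the model's own tokenizer instead."""
    est = lambda text: len(text) // 4 + 1
    budget = budget_tokens - reserve
    kept, used = [], 0
    # messages are (timestamp, text) pairs, assumed sorted newest-first
    for ts, text in messages:
        cost = est(text)
        if used + cost > budget:
            break
        kept.append((ts, text))
        used += cost
    return list(reversed(kept))  # back to chronological order for the prompt
```

A production version would likely summarize or rank messages by relevance rather than truncate by recency alone, but the budget accounting stays the same.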
[00:19:15] Tobias Macey:
Another aspect of the transition is just in terms of the messaging and expectation setting for your customers. So to your point of: hey, I want this system to do this thing, why is it taking ten minutes? Versus: hey, I want the system to do this thing, I don't wanna have to explain it, just figure it out for me. But also going from building your own narrow AI models that can handle some of the automation steps, okay, I can parse out some of the semantics of this email and populate the different fields in the CRM, to now I can actually have a conversation with my CRM.
How did you think about the implementation, the transition stages, and being able to set expectations and kind of handhold your customers through that generational shift in capabilities?
[00:20:07] Adam Honig:
Well, I think the nice thing for our customer base, again, if you're thinking about manufacturing, which is not traditionally the most tech-forward industry in the world, is that we have the advantage of setting the first expectation with them about a lot of this stuff. Let's talk about email communication and AI responses and stuff like that. Going on about three years now, the very first feature that we rolled out was really just allowing people to reply to an email with AI. You would get an email, you could reply with AI, you could give it a little bit of direction as to where you wanted to go with it, and it would generate it. And that was a really, really small step. So what we try to do when we think about customers adopting new technology is very incremental, very small steps, making it just seem like a natural extension of their existing workflow.
You know, there's already a reply button. What if there was a reply-with-AI button? Then we graduated to the AI being able to draft full emails for you from a prompt. Well, we have email templates in Spiro. That's always been a core part of our offering. So the AI drafting should look to the user a little bit like they're using a template, something that they're already familiar with. And even though we label it as AI, because we want people to be excited by the technology, we're not screaming it at them. We don't want them to feel threatened or worried about it. And one of the breakthroughs, I remember I was at a customer in Chicago, a company that makes vision-related products, and half their sales team is international. I think one of the things that really got them on the AI train was the realization that it could draft in Japanese or Korean or whatever, in just the same mode. And once people started seeing it as something that was exciting for them and not a threat, which I think there's a little bit of today, that really helped that particular customer move forward with things. So in terms of the methodology that we use, it's just being slowly incremental.
[00:22:22] Tobias Macey:
To your point of how you surface it, going from a natural workflow of send an email, to send it with AI, and then increasing the level of autonomy that it can gain: what are some of the areas where maybe you've tried to bring AI to bear on a particular workflow and gotten pushback from your customers saying, no, that should never be the job of AI, or, I don't understand how AI is gonna be helpful, or any variation thereof?
[00:22:49] Adam Honig:
Yeah. So we did a prototype of a full chat interface for the UI of Spiro. Like, basically, you log in and it's a chat interface. There are no buttons. You just chat. And I was like, this is perfect. We don't need a front-end team. We can just have everything be in the chat. And you could just be like, hey, tell me about Tobias, or, I'm gonna go talk to this company, or whatever, and it would just give you what you needed. And that was a complete bomb. People just looked at it like, what am I supposed to do with this thing? We're the anti-CRM CRM, but we're still a CRM.
It still needed to have the basic navigation in the system to get people to feel comfortable with it. So that's one example of something that just really didn't go well for us. And that's okay. We try things. We see what works and what doesn't.
[00:23:49] Tobias Macey:
So now that you're at a state where you're relying largely on these vendor APIs to incorporate language model capabilities, there are a lot of sophisticated workflows that you can build, but you need to be judicious about how you're doing that so that customers are delighted and not displeased. The other challenge that comes to bear when you're bringing in these language model providers is cost management, because depending on how verbose your customers are and how verbose the models are, you might spend 5¢ or you might spend $5 for the same interaction. There's also the challenge of making sure that the language model response is actually relevant and doesn't fall prone to hallucinations, so you have to make sure that you're managing the context, bringing us back to your point about context windows. And I'm wondering what your process is for managing some of those cost controls and output validation, and some of the guardrails that you've put around those systems to make sure that your customers are happy and that you're not running over your allocated budget and eating into your profit margin because of the costs of operating that utility.
[00:25:00] Adam Honig:
Yeah. So it's super interesting. When we first started working with, like, OpenAI, for example, they had the ability to limit the amount of spend that you had. You could be like, I don't wanna spend more than $200 a month or something like that. As a matter of fact, they wouldn't even let you spend more than $200 a month, I think, at some point, because they were throttling their own usage. So I'll give you an example. When we first started, we had this vision of the executive summary, and we wanted that when you go to a company, you're always gonna see, at a high level, exactly what's been happening. You don't have to look at all the details; the system is just gonna tell you. But a lot of our customers have ten, twenty thousand customer records in Spiro, and we have hundreds of customers. To run that for every single one of them, when people might not even look at all those customers on a weekly basis, would have been crazy expensive. So we engineered an approach where our customers would follow certain companies in Spiro, and that following told us that the company was important enough that we should run this at-the-time-expensive AI process to provide that executive summary. That was the first approach. But what's happened as the different models have come out is that the price points have really fallen radically.
And we've been doing a great job of working to use the lowest-cost models where possible, kind of across the board. So today, for example, we do just summarize. We create that executive summary for every single company across the board, and it's not costing me more than it did when I only had the selected records being processed. So that's just an example of how much the cost has come down. But there are things we wanna do in the future. I'll give you another example: we're really only summarizing the last week's worth of activity at this point. So I think of it as, I've got this budget that I want to spend.
And as the models improve, the context windows go up, and the prices go down, we'll put more in. So I'm managing to a steady-state budget with the amount of tokens that I think we're gonna be consuming. I don't know if that makes sense, but that's the way that I think about it.
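Adam's steady-state budget framing reduces to simple arithmetic: hold weekly spend fixed, and let falling per-token prices buy more summaries or longer histories. A sketch with placeholder prices that are not any vendor's actual rates:

```python
def weekly_summary_capacity(budget_usd, in_price_per_mtok, out_price_per_mtok,
                            in_tokens_per_summary, out_tokens_per_summary):
    """How many executive summaries fit into a fixed weekly budget.
    Prices are in dollars per million tokens; all figures illustrative."""
    cost_each = (in_tokens_per_summary * in_price_per_mtok
                 + out_tokens_per_summary * out_price_per_mtok) / 1_000_000
    return int(budget_usd // cost_each)
```

When a cheaper model halves `in_price_per_mtok`, the same budget covers roughly twice the records, which matches the shift Adam describes from summarizing only followed companies to summarizing every company.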
[00:27:19] Tobias Macey:
The other challenge, as you said, is that these models are constantly changing. The price points are constantly changing, the capabilities are evolving, and that can be a pretty challenging moving target. There's also the fact that relying on this other vendor as part of your core functionality brings in a measure of platform risk: oh, hey, what happens if they all of a sudden decide that they're going to cap the number of requests that I can make again, and they're not going to let me increase it? And then you have to say, okay, well, I know that this particular set of prompting works with this model, but if I try to do the same thing over here, everything blows up, and it starts telling all my customers bogus data about their prospects.
And so there's that element of platform risk as well, beyond just the cost factor. You mentioned that you were using OpenAI, and you've used some Google. What are some of the risk mitigations that you're doing as a business that is reliant on these other third-party systems, to make sure that they're not gonna pull the rug out from under you, or if they do, that you have some soft landing?
[00:28:26] Adam Honig:
Yeah. Well, it's a good point. So we built a whole bunch of functionality on the OpenAI Assistants API, for example, which they have basically just replaced with another API. And for good reason, because with the old one, we really struggled to get it to work well. It was not very performant, and it had a level of complication I don't think it needed. But we built on that thing for about a year, and suddenly we're like, okay, so we gotta rip it out and replace it, and we're in the process of doing that. So it definitely is a problem. But when you're building commercial software, you're exposed to that all the time. Like, to write an email, we use a third-party editor, and those guys change stuff and break stuff all the time for us. We have an embedded analytics solution as a component of Spiro, and we have another third-party provider that we use for that. And, I mean, I guess Microsoft and Google and people like that can, but nobody at the midsize scale can build their own everything anyway. So I think it's a matter of where do you wanna put your bets.
And you look at the funding of these companies, you look at their size and their reputation, and you try to make your best guess at it. I'll give you an example of a vendor we're working with. One of the things that we do is use a third-party API to transcribe video meetings, telephone calls, and stuff like that. We're working with a smaller provider called Deepgram, and they've done an excellent job for us. We used the Google API for that initially, but these guys just seemed a lot more responsive. There are certain technical challenges with transcription, especially when people start speaking multiple languages.
I don't wanna get into it, but they were just much more responsive and helpful for us during the process. And even though they're not anywhere near Google, they were still a good bet to work with. So pick your poison, I guess, is the answer. No good solution there.
[00:30:23] Tobias Macey:
In terms of your journey from building your own models, to abandoning those in favor of these more capable language models, to now having rolled that out with a set of features based on them: what are some of the opportunities that you're investigating for bringing more autonomy to bear on the CRM? Letting the AI models take more control from the humans involved, being more proactive in doing their own research, identifying possible prospects: okay, this is a company that maybe could benefit from the particular product that you're manufacturing, here is some information based on their public earnings reports, maybe this is a good opportunity for you to try and get in front of them, etcetera. How might you expand the set of offerings and the ways that your platform is able to grow the capabilities of your customers?
[00:31:21] Adam Honig:
So for a lot of our customers, kind of the holy grail is recommended products. It doesn't work so well in electrical conduit, but a lot of our customers have thousands, tens of thousands of SKUs, and they all want the Amazon experience of, oh, customer X is buying this, we should offer them that. And that is still a very tricky thing to get right. So this is something that we're definitely looking at for improvements in the future. I mean, Amazon has the benefit of such scale that it works great for them, and also, if they're recommending the wrong thing, it's not as bad as in a corporate setting. If you show up at a customer and you're like, hey, I think you guys should really be interested in this, and it's dumb, you're gonna look like an idiot pretty quick. So we have to be a little bit more accurate there. The other thing that we're looking at, let me see if I can describe it properly, I'm kinda calling it fieldless CRM. Business applications, by and large, are: okay, we've got a table, we've got some columns in it that represent fields on the screen.
Right? And let's say in Spiro you wanna add a new field because you wanna capture the person's favorite color. So you add a new field, it's called color, and you've got a pick list. Well, that just feels super old-fashioned to me today. Why are we, from a business systems perspective, putting everything in fields that way, when the way this information is captured is not like that? You might have fields that don't exist. You might have fields that don't need to be filled in. You might have things that aren't relevant. So why can't the database structure and the user experience accommodate that dynamically, based upon what they're seeing? If you think about some of the design experiences people imagine for the future, like custom, personally fit clothes and stuff like that, I kind of see that as the metaphor for the way business applications could be working, all underpinned by the language models that understand the relationships of these things to the company or contact object or something like that. I don't think we're anywhere near making that a practical reality, but we'll see, because I think that's where it should be.
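One common way to approximate the "fieldless" idea today is schemaless attribute storage: each record carries an open bag of attributes instead of fixed table columns, and the UI renders only what a given record actually has. A toy sketch, with hypothetical names, that says nothing about how Spiro itself would implement it:

```python
from dataclasses import dataclass, field

@dataclass
class Contact:
    """A contact with a fixed identity plus an open-ended attribute bag,
    so 'fields' exist per record rather than per table."""
    name: str
    attrs: dict = field(default_factory=dict)

    def set(self, key, value):
        # Any attribute can appear on any record; no schema migration needed.
        self.attrs[key] = value

    def render(self):
        # The UI shows only the attributes this record actually carries.
        lines = [f"Contact: {self.name}"]
        lines += [f"  {k}: {v}" for k, v in sorted(self.attrs.items())]
        return "\n".join(lines)
```

The harder part Adam points at, having a language model decide *which* attributes are worth capturing and how they relate to company and contact objects, sits on top of a storage shape like this.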
[00:33:48] Tobias Macey:
And that brings to mind a lot of the recent conversations around things like agentic memory, short versus long term memory, and also the potential for auto generating knowledge graphs based on contextual input and having the LLM use that for providing better context and relevancy to the RAG workflows. And I'm wondering what are some of the areas of experimentation that you've started on there to be able to start to bring in some of those more data modeling and data storage focused elements of the overall AI application?
[00:34:23] Adam Honig:
Yeah. I wish we'd done more on that. It's so exciting, but we're at the very beginning of it. I mean, one of the great things about having customers is we get a lot of feedback from them. We get a lot of input and ideas, and nobody's asking for that yet, so we have to do a little bit of what people ask us for too. So that's the current state of play, but I'm looking to carve out some research time to be focused on that, because you're totally right: this is where it's gonna be. I have no idea how to do it, though.
[00:34:57] Tobias Macey:
Well, as a shameless plug, I will recommend that you go back and listen to the episode that I did about Cogni, which is one of those agentic memory systems that helps with managing some of that data ingest and builds the graph context for you. Yeah. Right on. I'll definitely do that. And going back to the confidence-building aspect we touched on earlier: figuring out what these models are capable of, and how do I make sure that they continue to be capable of that as new versions of the model come out, as they tweak the system prompts, etcetera. What are some of the ways that you are being deliberate in your experimentation, in the time and money you spend on potential features and speculative capabilities of your platform, to make sure that you're not just burning a lot of time and energy on something that ultimately has no real future? And what are some of the ways that you're emphasizing failing fast, to identify the capabilities that you can't build, or don't want to build, or that your customers really don't care about?
[00:36:01] Adam Honig:
Yeah. Well, I think it's okay to have a certain amount of failure in the system, first of all. If you don't have that amount of failure, you're probably not innovating hard enough. But we do have a pretty extensive beta process as well. We run everything in beta for about six weeks before we go live with anything in production, and usually the kinks start to show up within a couple of weeks of things being live. And when things go wrong in AI, they go wrong pretty bad pretty quick. It's not usually a subtle thing. So we do a lot of work with prompt management: what are the core prompts that the system should be using, how do they get impacted by user decisions. And there's an immense amount of testing that we're doing during the beta phase to make sure that everything's coming out right. So I'd say that's probably the best strategy that we have at this point. And we try to only be in beta with customers that are very understanding of these kinds of things. It's not a generalized approach.
[00:37:10] Tobias Macey:
And as far as identifying those customers, what are some of the ways that you evaluate and onboard them to set expectations for, hey. This is a beta. It will probably fail spectacularly. I warned you.
[00:37:24] Adam Honig:
Well, I mean, we're definitely gonna use everything ourselves first. There's no question. We run the whole company on Spiro, as you would in this kind of circumstance, so we see a lot of things ourselves. There's no magic guide. It's people that we have really good relationships with, that have shown an appetite for being out on the edge, and these people raise their hands pretty quickly. And then you have to determine whether or not they're really gonna give you the good feedback that's required to make it successful. But you're gonna know that when you see it, for sure.
[00:37:56] Tobias Macey:
And as you have been building and growing and iterating on Spiro, and working with your customers so that they can fulfill the needs of their customers, what are some of the most interesting or innovative or unexpected ways that you've seen your product applied?
[00:38:13] Adam Honig:
Unexpected ways. Yeah. That's a really good question. I'll give you one example. So in the platform, for a long time, we had a business card scanner that didn't really do any AI. It was a very basic thing: you would take a picture of a card, it would try to map words, and you'd put the words into the right fields to create a contact from the business card. We upgraded that to use AI, literally reading the image and automatically updating everything from there, and that was working really well. And then one of our customers realized that instead of taking a picture of a business card, they could actually just take a picture of a notebook that they had been writing notes in, and it would find the contact information there and just do it. And I was like, wait a second.
So we could actually just automate the whole note ingestion process from handwriting? That happened for me in the past two weeks, so it's pretty new, and I'm pretty excited about it, actually. And then what that's leading us to is now customers are like, hey, we wanna take a picture of the sign on a building and just have that automatically create a company, based upon the sign and the geolocation that's available in the photo. And I'm like, you're totally right. Why the hell not? Why shouldn't you do something like that? What could possibly go wrong?
[00:39:35] Tobias Macey:
Exactly.
[00:39:37] Adam Honig:
You can still edit it. It's okay.
[00:39:41] Tobias Macey:
And in terms of your experience of working in the space, building this business, trying to stay up to date with the constantly moving target that is AI, what are some of the most interesting or unexpected or challenging lessons that you've learned in the process?
[00:39:56] Adam Honig:
I don't know. I think, kinda going back to this OpenAI Assistants API: the API that OpenAI came out with was designed to help you solve the problem of keeping state in a conversation, and to build up the ability to look at different files and structures in a contained space, to make everything work better. And it seemed so smart, and there are obviously very smart guys and ladies working on this, but it just wasn't performant enough for us. So it was kind of a surprise. They came out with it, and we were like, okay, seems like the right approach, I'm sure they're gonna make it faster. And it just never got faster for us, even though we thought it must, because they're always trying to improve these things. So I'd say, like you said, we are dependent on some level on third parties to deliver our solution.
And when they don't always live up to that, it can be a surprise, even for the best companies that we work with.
[00:40:58] Tobias Macey:
As people are evaluating the use of CRMs, which ones to choose, and specifically as they're interacting with Spiro and understanding its capabilities, what are some of the cases where you've decided that AI is just absolutely the wrong choice for a given workflow?
[00:41:16] Adam Honig:
Yeah. Well, first of all, our point of view is that nobody should use CRM. It's terrible. It's outdated. We're against it. That's why we're the anti-CRM. But if you had to use a CRM, some of the things that AI probably shouldn't do for you... I'm gonna use a simple example: contact scoring. There's a well-known approach to contact scoring. It's simple math. You don't need AI to be doing anything about that in your CRM. And if anybody's telling you that AI should handle contact scoring or territory assignment, kind of things that we've known and figured out, that should be a red flag to you, I would say.
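The "simple math" Adam alludes to can be illustrated with a rule-based scoring sketch. All field names and weights here are hypothetical, chosen only to show the shape of the technique, not Spiro's actual model:

```python
# Hypothetical rule-based contact scoring: plain, explainable arithmetic,
# no ML required. Fields and weights are illustrative only.

def score_contact(contact: dict) -> int:
    """Return a 0-100 score from simple, auditable rules."""
    score = 0
    if contact.get("title", "").lower() in {"owner", "vp", "director"}:
        score += 30                     # decision-making authority
    score += min(contact.get("emails_last_90_days", 0) * 5, 25)  # engagement
    if contact.get("opened_last_email"):
        score += 15
    if contact.get("industry") == "manufacturing":
        score += 20                     # fits the target market
    return min(score, 100)

contact = {
    "title": "VP",
    "emails_last_90_days": 4,
    "opened_last_email": True,
    "industry": "manufacturing",
}
print(score_contact(contact))  # 30 + 20 + 15 + 20 = 85
```

Because every point is traceable to a rule, a salesperson can see exactly why a contact scored the way it did, which is part of the argument for not hiding this behind an opaque model.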
[00:41:56] Tobias Macey:
And as you continue to iterate on and grow Spiro and try to take advantage of the constantly evolving set of capabilities for these different AI models, what are some of the things you have planned for the near to medium term, or any particular projects or problem areas you're excited to explore?
[00:42:13] Adam Honig:
One of the things about us is we like to run a lot of experiments, just trying things and seeing what happens. And so far, our approach to helping our customers unlock better customer insights has been very focused around individual companies. I wanna know what's going on with Staples. I wanna know what they're unhappy about. I wanna know what they bought recently. These kinds of things. But expanding the viewport to take in all of my customers, that's the next evolution of what we're focusing on. The amount of data that's required to resolve those kinds of inquiries is frankly too much for us to manage at this point. So we're working hard to figure out how to put things into the right-sized boxes to make it work. But that's where I start seeing the kind of holy grail coming into view, where you can be like, okay, Staples bought this. What is Walmart gonna be interested in? What would Target be interested in? And at least give you a good basis for answering that question.
[00:43:20] Tobias Macey:
And are there any other aspects of the work that you're doing at Spiro, the ways that you're applying AI, the overall space of CRMs and the capabilities for automating them that we didn't discuss yet that you'd like to cover before we close out the show?
[00:43:45] Adam Honig:
Well, the only other thing that I'm super interested in, and this is more of a societal thing than a technology thing because the technology is definitely available, is this: I feel like the world has kind of gotten used to AI agents joining Zoom and Teams meetings and stuff like that. They're very prevalent at this point. And I'm really wondering when it's gonna be acceptable to have an AI agent with you in an in-person meeting. Most of my customers don't do Zoom or Teams calls. They go and visit people in person. That's how you build strong relationships, but we still want that data. So when is it gonna be okay for us to collect that and not be super weird and creepy? Like, I think it'd be super weird and creepy right now, but maybe in a few years, it won't be. I don't know.
[00:44:18] Tobias Macey:
Yeah. Trying to pass off AI as some sort of anthropomorphized entity is definitely something that can easily go terribly wrong, and I think we need to continue to make it very obvious that it is just a machine and treat it as such.
[00:44:36] Adam Honig:
No doubt. Yeah. And you can see this in the AI-generated movies and images too. Like, my daughter, who's in high school, anytime she sees something in the real world, she's like, that's AI generated. It immediately destroys credibility. And I think in the business context, it's at an even higher level. Like you said, I don't know the level of sophistication you'd need to get something to pass as a person. I can't even get my brain around how well done that would have to be.
[00:45:09] Tobias Macey:
Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And as the final question, I'd like to get your perspective on what you see as being the biggest gaps in the tooling, technology, or human training that's available for AI systems today.
[00:45:25] Adam Honig:
Yeah. You know, it changes from day to day for me, frankly. But the area that I'm stuck on today is the context window. I just want as big a context window as possible, because I have all this data I need to put into it, and I don't wanna be spending time figuring out how to reduce the amount of data. I just wanna get to the answer. So for me, that's where I'm at. If you've got a big context window, I'm your guy.
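The trade-off Adam is describing, fitting as much customer data as possible into a fixed context window, is commonly handled with a token budget. A minimal sketch of the idea (the 4-characters-per-token estimate is a rough heuristic, and the record ordering is an assumption; real systems would use an actual tokenizer such as tiktoken):

```python
# Hypothetical sketch of packing records into a fixed context window.
# The 4-chars-per-token estimate is a crude heuristic, not a tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def pack_context(records: list[str], budget_tokens: int) -> list[str]:
    """Keep records, in priority order, until the token budget is spent."""
    packed, used = [], 0
    for record in records:  # assumes already sorted most-relevant first
        cost = estimate_tokens(record)
        if used + cost > budget_tokens:
            break
        packed.append(record)
        used += cost
    return packed

records = [
    "Staples ordered 500 units in March." * 3,
    "Support ticket: late shipment complaint." * 3,
    "Old note from 2019." * 50,
]
print(len(pack_context(records, budget_tokens=60)))  # 2
```

This is exactly the work Adam says he'd rather not do: a bigger window means a bigger budget, fewer records dropped, and less engineering spent deciding what to leave out.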
[00:45:50] Tobias Macey:
Alright. Well, thank you very much for taking the time today to join me and share the work that you're doing at Spiro, some of the ways that you're bringing AI to bear on CRMs, and ways to relegate them to the dustbin of history. I appreciate all of the time and energy you're putting into that, and I hope you enjoy the rest of your day.
Adam Honig:
Alright, Tobias. Thanks so much.
[00:46:12] Tobias Macey:
Thank you for listening. And don't forget to check out our other shows, the Data Engineering Podcast, which covers the latest in modern data management, and podcast.init, which covers the Python language, its community, and the innovative ways it is being used. You can visit the site at the machinelearningpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@themachinelearningpodcast.com with your story. To help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.
Introduction to AI Engineering Podcast
Interview with Adam Honig: Automating CRM with AI
The Concept and Evolution of Spiro AI
Challenges in the Manufacturing Sector
Automation in CRM: Beyond Data Entry
Transition to Generative AI and Model Management
Customer Adoption and AI Integration
Cost Management and Platform Risks
Future Opportunities and Innovations in CRM
Lessons Learned and Unexpected Applications