The Whole Idea by DCG ONE
The disciplines required to grow market share in a digitally driven marketing landscape are getting broader by the day. Touchpoints are multiplying, and for many consumers, the noise is deafening.
Enter The Whole Idea by DCG ONE: an elixir of strategy, technology, and creativity at work in every campaign and at every touchpoint to set the connection, overcome the clatter, and spur engagement and growth.
Join us for The Whole Idea by DCG ONE for insight and inspiration from industry-leading experts at The Agency and across DCG ONE, and from our many partners with whom we create real-world experiences that are memorable and meaningful.
Email us anytime at podcast@dcgone.com.
The Whole Idea by DCG ONE
How to Win with AI, Part One
This episode of The Whole Idea by DCG ONE is the first of a two-part podcast on artificial intelligence. In Part One, DCG ONE President and CEO, Brad Clarke, and SVP Technology, Chris Geiser, join host Greg Oberst for an in-depth discussion about AI. Learn where game-changing opportunities exist for your business and hear advice on how to manage the daunting security and privacy issues that come with AI.
Other links you may like to check out:
About us - https://www.dcgone.com/about
Strategy - https://www.dcgone.com/strategy
Technology - https://www.dcgone.com/technology
The Agency - https://www.dcgone.com/agency
Let's connect! https://www.dcgone.com/contact
Email us: podcast@dcgone.com
Check us out on social media:
LinkedIn, Instagram, Facebook
Episode: How to Win with Artificial Intelligence, part 1
Greg: Hello again and welcome to The Whole Idea podcast by DCG ONE. I'm your host, Greg Oberst, senior writer at The Agency at DCG ONE. This episode is part one of a two-part series on artificial intelligence. In part one, we're going to hear from DCG ONE President Brad Clarke and DCG ONE Senior Vice President of Technology Chris Geiser. First up is Brad Clarke. We started with how he sees AI influencing marketing in the next few years.
Brad: Well, I think about it in a couple of ways. To me, it seems that these LLMs…
Greg: Large…
Brad: …Language models. Yes, large language model applications of AI are inherently optimized for the marketing industry. To me, it feels like the most creative people will ask the most creative questions and find the most interesting applications. At the end of the day, these are still tools, and it's the creative people who can come up with the most interesting and most diversified questions to ask these LLMs who will get the most interesting and insightful responses. That might be an application of how we look at data and understand our customers so we can market to them more appropriately or more effectively, or how we accelerate content creation. And then, something we talk a lot about around here is what the web experience will look like in six months, or a year, or two years. I feel like we probably won't be going to websites and clicking buttons the way we do today. The team here is giving a lot of thought to what that digital experience will feel like, and we feel these tools will be core to how our customers will interact with their customers online.
Greg: What do you think are some of the questions that business leaders like yourself should be asking about AI these days?
Brad: Well, of course security is the first thing that comes to mind when you ask me that question. I think we all need to be very aware of the security implications and the privacy implications. Just in the last week or two, we've seen some lawsuits around copyright and protection, and there's a bit of an unknown about how we best operate in the public sphere with things that will be sent out beyond our walls. Those are all really big unknowns, and we need to be challenging ourselves and our teams to understand them as the rules are written. Within the business, I think we can develop very safe infrastructures around the things that aren't implicated by some of those copyright or legal challenges. I think a lot about how we help our teams utilize these tools. How do we help them understand what's possible? How do we accelerate with what is, after all, a tool to help us accelerate?
Greg: You touched on copyright laws, but what is your thought? Do we need more regulation or less? How do we keep an eye on that?
Brad: It feels to me like something that, yeah, needs some guardrails. We've seen with other emerging technologies that without those guardrails there are real downstream implications. And I think this technology has a much broader ability to have negative consequences if we aren't managing to some borders and some guardrails. So yes, I do feel it's important that this is a managed technology…
Greg: At a government level…
Brad: Yes, certainly at a government level.
Greg: Does that worry you at all? Too much change too fast?
Brad: I don't think it worries me. I think it is challenging. It challenges me and it challenges us. We have to move at the pace the technology moves, and the companies that do are the ones that are going to flourish.
Greg: Brad, I sometimes wonder if the hype around AI lately is distorting the path forward for businesses. In other words, some of the discussion around AI feels inverted. Do you think it's the right attitude to let AI lead the discussion?
Brad: No, I don't think that's the right attitude, and part of that is because of the speed of change and development. I think you need to organize your business the way you believe you can provide the most value to your customer, and then use all applicable tools to do that: to deliver the best product and the best results, and to do it most efficiently. Certainly these tools are great at those things, but they need to operate in the context of how your business is organized and running. How do you use these tools to advance your product, and perhaps develop new products, to take advantage of these unique capabilities? So I think your word "inverted" is accurate. Some people are treating these tools as the lead, and I don't think they are. They're perhaps an important piece of the infrastructure, but it's the product and the deliverable that still matter.
Greg: I know we have automation, which isn't new, but automation aside, how is AI influencing our print production floor these days?
Brad: Yeah, and this is one of those differentiators. You're right, we've been using machine learning AI for some time on the manufacturing side of the business. A lot of our machines have inherent machine learning capabilities that have delivered great reductions in waste and great quality enhancements. We've seen really rapid development of that over the last several years, and it will continue. When we talk in the context of large language models and these tools, it's really more around process and procedure that we're seeing advantages now. How can we rapidly deploy and improve process, procedure, and training? We're delivering training in multiple languages very effectively and rapidly now by using these tools, where that was expensive and slow a year ago. So if we're talking about LLMs, the advantage we're seeing in the manufacturing world today is more on the human capital side of the business: how do we help our people understand how to do their jobs most efficiently and effectively, and how do we introduce new processes and procedures as efficiently and effectively as we can?
Greg: Around the hype of artificial intelligence lately is this thought that, hey, it's coming for my job. As an executive, how do you think about that notion, the concept of doing more or saving money with fewer employees?
Brad: I would be surprised if there are many business executives who are really leading their organizations with that mindset. These tools are really growth accelerators, right? They give you the ability to deploy more efficiently, more effectively, and faster if done right. They can be kind of the rocket fuel. You have a great idea, a great prospect, a great product, great customers, and this can be the accelerant that makes you faster, more effective, and more efficient. So I don't think of this as a cost-saving application or a labor-reduction application, and I'd be surprised if many leaders are thinking of it that way.
Greg: If you use the technology and work it into your strategy in the right sort of way, smartly, you've created a competitive advantage…
Brad: That's right, and it kind of goes back to a few minutes ago: if the people on our teams can ask these tools the best questions, if they can use them in the most interesting and effective ways, that's a competitive advantage.
Greg: So what gets you most excited about artificial intelligence at the end of the day?
Brad: I think the ultimate possibility is to allow people to do their highest-value work. Let's reduce our repetitive processes and our unneeded, unnecessary work, and really focus people on their highest value in their workday. Ultimately, that's the most rewarding work that we do, and it's the most valuable work we do for the business and for our customers. In relatively short order, we can really tackle a lot of the mundane, non-value-add work we do. That's the most interesting and intellectually intriguing application for me, and ultimately it will lead to great innovation outside of the tools. We will simply have more time to innovate and to work on the things that we should be working on.
Greg: That's Brad Clarke, president of DCG ONE, with his take on artificial intelligence. Next I caught up with Chris Geiser, our senior vice president of technology, and as you can imagine, AI is top of mind with Chris, especially as it applies to security. So I started with Chris by asking him what scares him the most about AI.
Chris: Everything scares me about AI in the context of security, and the issue though is that when we talk about security and we talk about security in the context of the business, the business always has to move forward. And so the best security strategy is always going to be based on what the needs of the business are. In our case, AI could be a big part of developing competitive advantage over the next three to six to nine to 12 months. And so when we see that as part of our technology roadmap, we have to just embrace what all the security issues are and lean into that.
That's from a use-case standpoint: as we're developing things, how are we protecting our data as we use these new tools? How are we making sure that everyone is using these tools responsibly? How are we making sure that we have a policy in place that actually educates our team members on the best uses for artificial intelligence, and how they should use it both to be productive in their work and to make sure they're not putting any company or customer data at risk?
Chris: Those are all critical things. Those scare me less, because any good security plan is going to take into account the needs of the business, look at that first, and say, okay, I see what you need to do here and I see how that's important to the business, so let's talk about how we can best keep that data secure and get the most out of these tools without compromising our position. It doesn't have to be AI; it could be Adobe Photoshop, for that matter, and we would have the same conversation. Businesses need systems, and systems need strategies to stay secure, so that's what we're all about: making sure that we use strategies to keep the business's operations secure. Where AI really terrifies me is in being put in the hands of bad actors. Bad actors now have a much faster route to completion in developing new compromise techniques.
Chris: They can use AI to write malicious code much, much faster. They can use AI to test malicious code much, much faster. They can use AI to find out how businesses are actually protecting themselves so that they can start to circumvent that. The possibilities for AI in the hands of a bad actor are almost limitless, and that's one of the things that terrifies me.
The inverse of that, though, is that the security industry has thankfully been using machine learning and AI-type techniques for a long time to try to combat that, because data security and cybersecurity in the corporate world are under siege by a force of bad actors with a lot of momentum and a lot of resources on their side. They make money with their attacks to fund their next attack; that's their business. So we have to understand that we're competing with a business model that's out to destroy ours.
Chris: And so that's what really, really terrifies me about it. But the cybersecurity industry has tried to keep up with that and use AI in a protective way. We work with intrusion management and other tools that use AI to bubble up the most informative threats we're looking at on a day-to-day basis, so that we're not spending a lot of time sifting through log files and treating every single event as significant. That way there might be dozens of events to look at rather than the thousands we actually receive.
Greg: Chris, what advice can you offer businesses to not just maintain security, but also to stay ahead of the bad actors with AI and their toolkits?
Chris: I would say the first thing would be to start the way you would start any kind of zero-trust program, which is to develop an asset classification. Right now, the basic format of an asset classification for data within your enterprise would center around four levels: things that are public, which might be published on your website and which everybody knows about your company; things that are business information, which only people inside your company should know; things that are confidential, which are for only a specific set of eyes to interact with; and things that are sensitive, which are the top-secret things. That could be trade secrets, personally identifiable information (PII), health information, those types of things. As you do that data classification, you're really just applying another label: is this appropriate to use within a large language model?
Chris: Is this appropriate to use with AI? I think as you climb that ladder of data classification from public all the way to sensitive, you're going to find that less and less of it is, or that you can only use that data within a sequestered field of technology resources that only the company, and the right sets of eyes within the company, have access to. So you're basically applying your data classification, and all the security principles you've built into it, to how you use AI. That would be my first and best piece of advice: it makes decision-making easier without it always having to be a judgment call. If you have a data classification and you have a policy about how you classify your data assets, then you never have to make a judgment call. You can always make the right call based on what you've decided is in that data asset.
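The gating Chris describes can be reduced to a simple lookup rather than a per-case judgment call. Here's a minimal sketch: the four classification levels mirror the episode, but the `POLICY` table, the `may_send` helper, and the model-type names are hypothetical illustrations, not anything DCG ONE uses.

```python
from enum import IntEnum

class Classification(IntEnum):
    """Asset classification ladder, lowest to highest sensitivity."""
    PUBLIC = 0        # e.g., content already on your website
    BUSINESS = 1      # internal-only business information
    CONFIDENTIAL = 2  # restricted to a specific set of eyes
    SENSITIVE = 3     # trade secrets, PII, health information

# Hypothetical policy: the highest classification each model type may receive.
POLICY = {
    "public_llm": Classification.PUBLIC,        # hosted/public model
    "private_llm": Classification.CONFIDENTIAL, # model inside the firewall
}

def may_send(asset: Classification, model_type: str) -> bool:
    """Policy decides; no judgment call needed at the point of use."""
    return asset <= POLICY[model_type]

print(may_send(Classification.PUBLIC, "public_llm"))        # True
print(may_send(Classification.BUSINESS, "public_llm"))      # False
print(may_send(Classification.SENSITIVE, "private_llm"))    # False
```

The point of the sketch is that once assets carry labels and the policy is written down, the "can I paste this into a chatbot?" question becomes a table lookup anyone can apply consistently.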
Greg: So, to be clear, policy shouldn't just be around private AI tools but also extend to public models like ChatGPT and others?
Chris: Right. If you train that model with your company's data, then you've basically turned that data public, and that's a big mistake. That's why the data classification, the asset classification, matters so much. If you're letting AI learn from your public website, that's probably going to be good for your business in the long run, because the AI is learning everything the way you want it to learn. One of the things about AI is that it can develop bias based on how it's taught, so if you're in control of how a public model is taught about who and what you are, you'll be able to propagate that message to anybody who asks about you. However, if you use data that you don't consider public within a public model, again going to the ChatGPT app or the OpenAI website or any of the hundreds of thousands of models that seem to be popping up every five minutes, then that data essentially becomes compromised. It becomes property of the LLM, and the LLM will use it as it sees fit.
Greg: And you're not getting it back…
Chris: And you're not getting it back. It's always going to be there.
Greg: Chris, safe to say that every software company out there is working pretty hard to incorporate some sort of AI function into their software. AI isn't going away, but one day the novelty will wear off. For better or worse.
Chris: The frog-in-the-bathwater moment of AI is going to be that so many of these software companies, especially the largest ones with the biggest budgets, like Microsoft and Google, are going to start incorporating these sorts of tools into everyday applications so that you don't even realize that's what you're doing. Rote tasks that took you three to five minutes are now going to take you one minute, and some things are just going to start to feel easier, to the point that this conversation we're having right now might feel crazy two years from now. Why did we even have to talk about this? Hasn't it always been like that?
And so it's just something to keep in mind. I think about our creative teams that are forever at work in the Adobe suite or other tools of that sort. They will have these watershed moments where things that took them lots and lots of time before take less and less time, because the programs are actually anticipating them. As AI develops, it will get wrapped more and more into things, rather than being this separate thing on the side that I ask questions of and do little parlor tricks with, like getting it to write me a document. Those things will probably continue, but they'll be much more baked into the interfaces we're already used to using.
Greg: Is that okay that AI slips away from top of mind for people who aren't especially technically inclined but still use it every day?
Chris: That's the essential question. I think about the old joke about the older fish that swims up to two younger fish and says, “How's the water today, boys?” And the two fish look at each other and say, “What's water?” The older fish is aware of the water; maybe he's been pulled out of it on a fishhook and is suddenly aware that there's water there. I think we're at that stage of awareness, where we're all thinking about this new technology that's there. But two or three years from now, somebody just onboarding to their digital life, maybe in middle school or high school, will simply assume those functions are part of the tools, the way we assumed calculators in college, or whatever else. We didn't have to use an abacus anymore or work things out in writing.
Greg: It feels to me like some level of awareness is important for security purposes, to understand what tools you're using and how those tools operate.
Chris: Well, more and more those tools are moving off of your desktop and into the cloud, which means that your data is moving along with them. So be cognizant of how that data is being used after the fact. Anybody who has ever read the terms and conditions of a software licensing agreement all the way through can see that your data is usually part of their learning experience. While some say they'll never sell your data, they will use your data to learn from it, and that's been going on for years and years. Now, think about that accelerating, and think about a creative application like Midjourney: if you add a Midjourney-like engine to Photoshop that is now learning from every creative execution out there, suddenly you've got maybe more concerns about it.
Chris: So, it's going to take vigilance to understand what data's going where. For sure. The thought, I think probably on the minds of the software providers is how do we bake this in so that it's really just not a thought for anybody and that we're gaining the advantage of this. We're providing faster tools, which is what we're providing to the customer in return, but that leaves the burden on the consumer. It always has. In the case of business-to-business relationships, the consumer becomes the security operations of the enterprise are responsible for understanding how data flows in and out of an organization.
Greg: And that creates an even greater challenge.
Chris: That's going to be a huge challenge.
Greg: Okay, for companies feeling behind the curve or simply wanting to find a place to start with AI, where is that?
Chris: Any business that feels like they're missing the opportunity with AI should be thinking about use cases where they find work to be time-consuming, trivial, rote, or difficult to execute; things that take a lot of time but don't add a lot of incremental value, yet have to be done. Those are good places to start, because those tend to be the simplest tasks. When you train a large language model, you're starting to train a new brain from the ground up, so to speak. It'll have a great capacity to learn, but it'll have to be taught slowly, and it'll have to be taught thoughtfully.
Chris: It's not the same, let's say as getting a computer to do math for you, although certainly you can do that. Getting a computer to do math for you is a really fun thing to think about because you can solve all kinds of complex problems really quickly, but it could do things like, instead of uploading my spreadsheet to a public model, I'm just going to ask how to get this Excel formula done. Those are the kinds of things that can take up a lot of time, and if I say, okay, I have this Excel formula and now I want to scale that out so that I can do X, Y, or Z with this spreadsheet so that I can turn it into something meaningful for my next meeting with my manager, that might be something that AI could go a long way in helping you with, right?
Chris: So again, refer back to policy. Policy is really the arbiter of all disputes about what you should and shouldn't be able to do with public models, and what you should be able to do with private models that you keep inside the company firewall. But find those use cases. Start with a use case and draw it out. We tend to run to solutions, thinking, I'm going to do this thing and it's going to be the silver bullet that solves all my problems. Instead, start incrementally, start iteratively. Start with one simple task, see how it goes, and then work on your prompt engineering. Maybe send a few people to a course on prompt engineering just so they can learn the basics of how a model reacts, what it'll give you, and how to prevent bias and hallucination and all the other things we worry about. Then stand up a model inside your firewall and start to train it.
Chris: Other advice would be to watch expense. These tools are not cheap. They are very often consumption-based, unless you've developed your own and can stand it up on your own hardware. If you're standing these up within the big cloud platforms, like Google Cloud, Microsoft Azure, or Amazon Web Services, they all have the building blocks for these things, but they are consumption-based, so the cost can escalate really quickly. Keep an eye on that, and make sure you've got your use case nailed down; good preparation will save you money in the long run as you start to get running. Now, for things that are simple and done with public information, it may pay to spring for a service, whether from OpenAI or one of the other providers, that allows you to do these things.
Chris: It may be worth springing for the premium so that you can solve smaller business problems with a public model, not have to develop anything. The minute that you start to develop things, you get into the software business and then you have to support it. And so you have to be ready for that, and you have to know that you are going to need resources to maintain it. You're also going to need to be committed to keeping the data current and making sure that if something changes in your data, that you properly retrain your model so that your employees aren't going to the well and getting the wrong answer or getting yesterday's answer. You're going to have to make sure that you are on top of training that, because if you're restricting it, it's not going to know. It's only going to learn what you teach it.
Greg: So don't be in a big rush. It's more important to be thoughtful in your integration of AI if it makes sense for you and your company.
Chris: Yeah, and there's a pretty good chance that some part of it makes sense, but start simply and you'll win.
Greg: Good advice from the man who knows. My thanks to Chris Geiser, senior vice president of technology here at DCG ONE. Thanks also to DCG ONE President Brad Clarke for his thoughts on this, the first of two episodes on AI. Coming up in part two of our series on artificial intelligence, we'll look at how AI fits into the strategy, technology, and creativity pillars of Whole Idea thinking within The Agency at DCG ONE. If you have questions for our guests or about anything you've heard on this podcast, write us at podcast@dcgone.com. Thanks very much for listening to The Whole Idea podcast. Our producers are Mandy DiCesare and Kelcie Brewer. I'm Greg Oberst. Watch this channel for our next podcast and more expertise, insight, and inspiration for Whole Idea marketing.
Take care.