
Lobcast Podcast: A/B Testing & Apricot Bellinis

Testing, testing! We’re talking about A/B testing in marketing campaigns on this episode of the Lobcast Podcast. Learn more about statistical significance, what elements to test, and what marketing channels to run your A/B tests on.


On this episode of the Lobcast Podcast, we’re discussing all things A/B testing in our marketing strategies, including direct mail. Learn more about running statistically significant A/B tests and improve your overall marketing campaigns by accurately analyzing the results and applying what you’ve learned. 

Key highlights include:

  • A/B testing in both direct mail and email can help improve campaign effectiveness and ROI by providing valuable insights into what resonates with your target audience
  • There are many personalization elements you can A/B test to drive lift; the ones that immediately come to mind are CTAs, CTA placement, offers, copy, and lifestyle imagery. If you partner with tech companies like Lob, the possibilities are nearly endless
  • The general results analysis window for direct mail is 45-60 days
  • Statistical significance is essentially a fancy term for proving whether a result is due to chance or a true result of your testing efforts; in other words, it's a probability measurement

Meet the Speakers

Stephanie Donelson

Senior Content Marketing Manager

Summer Hahnlen

Senior Director - Direct Mail Expert in Residence

STEPHANIE: Hello, and welcome to the Lobcast Podcast: Mixers and Marketing. I'm Stephanie Donelson, your hostess with the marketing mostess, and I'm the senior content marketing manager here at Lob. I'm thrilled to be joined by Summer Hahnlen yet again. Summer, do you mind giving our listeners a quick recap of your professional experience in case they're new to the show?

SUMMER: Sure, and I do feel bad about this for people who have watched our other episodes because I've been on a few, but I've been in the direct marketing space for about twenty years. Agency side, client side, and I've worked in analytics, campaign planning, strategy, even in procurement and what people might call print buying. So I have a lot of experience in not only how you run direct mail programs, but how you integrate them in a more omnichannel fashion.

STEPHANIE: Well, that's a great little recap, and thank you for joining us. Listeners, if you wanna make the complimentary cocktail of this episode, it is an apricot bellini. You are going to need some friends, as this recipe makes about eight to ten servings, but you're gonna need a seven-fifty-milliliter bottle of prosecco, chilled, five hundred ml of apricot nectar, one cup of crushed ice, and fresh mint for garnish if you wish. I am not tarnishing my drink with mint, but that is my choice.

SUMMER: I have some in mine, but I will say also, I just wanna make one serving. You can just put a splash of the juice in the bottom like you would any type of mimosa.

STEPHANIE: There you go. I made the full pitcher. My husband's probably drinking some now. But if you are making the whole thing, you're gonna mix together the apricot nectar and crushed ice, pour into a champagne flute until half full, and then top with the chilled prosecco and a sprig of mint for garnish, if you so wish. Cheers, and welcome back to the show, Summer. Alright, so today we're talking about a topic that many marketers love: A/B testing. But the first thing that comes to mind for me with A/B testing is making sure that we're running a valuable test. I have been on so many teams where we'll be asked to run A/B tests on emails or ads, but then we never schedule time to actually review the results and understand what the data is telling us. So, Summer, how can we as marketers make sure that we're setting up tests correctly, and what metrics should we be tracking, so when we do actually schedule those meetings, we understand how the tests performed?

SUMMER: Well, your story about not reviewing results is actually very common. I think it's fair to say, and I think I've said this before on another podcast, that marketers are always being asked to do more with less and are often almost overfocused on continued program management, and do not necessarily give themselves the time to analyze past campaigns or really strategically set up tests. So I would start off by saying that I always recommend that you do your test setup when you initially plan your campaign, essentially at the same time that you are planning your targeting and your campaign objectives. This way the test is actually part of the entire campaign, it's at the forefront of your mind, and you're more likely to spend the time later to look at the results and prove out whether your theory worked or not, because you've been excited about it in your plan this whole time. The other thing I would call out for metrics is, and you'll love this answer, it depends. That should be a drinking game: how many times am I gonna say "it depends." The metrics you track are completely dependent on the program that you're running and the KPIs that already matter to your program. So again, it just depends.

STEPHANIE: Yeah. And I think, again, that's a very valid answer for everything. We always have to understand what's going to work best for our brand and our customers, because we have to know them best; what works well for another company may totally bomb when we try to replicate the same thing at our place. But thinking of testing, we also have to talk about statistical significance. I feel like I'm back in college and putting together my final term paper, but how can marketers ensure that their tests actually have statistical significance?

SUMMER: And I'm just gonna start by saying, I have a really hard time saying this term without getting tongue-tied, so thank you for putting it at the front of our little drinking podcast. Yes, this is normally harder. But stat sig is essentially just a fancy term for proving whether your result is due to your efforts or just chance. So it's essentially a probability measurement. Most brands look at stat sig success as being ninety to ninety-five percent, and larger brands lean toward ninety-five. And that's because they can afford to lean toward ninety-five: stat sig is really a combination of two things, your volume and your response rate, so these larger brands put a lot more volume behind it. I think anyone who is listening to this is probably thinking, okay, that makes sense, but how does that actually work? So if you're comfortable with it, I'd like to actually share my screen, which I've never done before on a podcast, and I recognize for anyone who's listening to the audio, this isn't going to necessarily translate the same way, but I will talk you through it. So I will give this a shot, and then you let me know, Stephanie, if that works from your side.

STEPHANIE: I see your screen, the common phrase of 2020, right?

SUMMER: Yep. Exactly. Alright, so, and I apologize, this is a very crude, quickly put-together slide, but I figure sometimes it's just easier for people to understand if you have a visual. So when I say that it's important that you put enough volume behind something, what I mean is that if you have something like a million pieces of direct mail that you can send, and you split that fifty-fifty, and say you have an average response rate of around one percent, that means group A could come in at about one percent. If you mailed five hundred thousand, that's five thousand conversions, simple math. Group B comes in at maybe just under one percent, point nine six percent, which means they had four thousand eight hundred conversions. Very close results, but there was roundabout a five percent lift with group number one, or group A, and because of the volume behind it, when you do a calculation, you actually see that it's ninety-eight percent stat sig. Which means, essentially, if you did this again you would have a repeat result. Does that compute okay?

STEPHANIE: Yep. I get you.

SUMMER: Okay, so then on the flip side, let's say you take that exact same response rate, that exact same five percent lift, but you're looking at only fifty thousand for your test. You split it in half, 25K apiece; one percent of your group A responded, point nine six percent of your group B responded. So you have two hundred fifty conversions from group A, two hundred forty from group B. Again, a five percent lift for group A, but because of the volume it's only sixty-eight percent stat sig, which essentially means it's directional. Which means, and that's another fancy term, that if you did this again, you might have the same results, but you could not count on it. So when people ask me how much volume they should put behind something, it really does have quite a bit to do with the volume you're considering and also the benchmarks of what you've normally seen in the past.
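For anyone who wants to reproduce the math behind Summer's slide, here is a minimal sketch of one common way to calculate it: a one-tailed two-proportion z-test, where the resulting confidence level is the probability that the lift is real rather than chance. The function name and formatting are illustrative only and not part of any Lob tooling; the numbers simply mirror the examples above.

```python
from math import sqrt, erf

def ab_confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Confidence (one-tailed) that version A truly outperforms version B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # blended response rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se                               # standardized lift
    return 0.5 * (1 + erf(z / sqrt(2)))                # normal CDF = confidence level

# Large-volume example: 500K per cell, 1.00% vs 0.96% response
print(f"{ab_confidence(5_000, 500_000, 4_800, 500_000):.1%}")  # roughly 98% -> significant
# Same lift at only 25K per cell
print(f"{ab_confidence(250, 25_000, 240, 25_000):.1%}")        # roughly 68% -> directional only
```

Run with the numbers from the slide, the large mailing lands near the ninety-eight percent Summer mentions, while the small mailing stalls in the high sixties, which is why she calls it directional.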

STEPHANIE: No, thank you for sharing that. That was a great visual and a great walkthrough. Again, I haven't had to talk about stat sig since college; I did my final thesis on flirting behaviors between men and women. And --

SUMMER: That's fun.

STEPHANIE: -- it was really fun, but only one of my results actually had statistical significance. So I could still report on the other ones, but only one of them actually had true validation behind it. Don't ask me which one it was, because I don't remember anymore.

SUMMER: You wanna be careful because you get excited about your wins, but are they repeatable? Can you trust them? And that is also why A/B testing is so important because you don't wanna test too many things because then it breaks up the possibility of hitting your stat sig number.

STEPHANIE: Yep. Alright. So we just talked about stat sig. What about time? You know, having worked in email marketing before, I've seen teams that will send the winning email in an A/B test after five hours of running it, some wait an entire day. What do you feel is the right amount of time for an A/B test to run, and does it depend on the marketing channel?

SUMMER: It completely depends on the marketing channel. And I think that was a softball question, by the way. So in general, results analysis for direct mail is forty-five to sixty days. Regardless of your brand, your call to action, your offer, it's forty-five to sixty days. You can certainly look at the results earlier; a lot of people will peek. Those are usually the same people who read the end of the book before they get into the first chapter. But I don't recommend that you make a firm decision on whether something won until you hit at least the forty-five-day mark after your send. The main reason for this is the fact that it takes time for your piece to be produced, get into the mail stream, and reach your recipient. As you and I know, with Lob it's a lot quicker, Lob plug there. But it's also the fact that most Americans keep a mail piece in their home, whether it's on their counter or in a pile somewhere on a desk, for an average of seventeen days. So that means the consideration window is a lot longer, and it's a consideration window that is continuously reconsidered, because as people add to a pile, they go through the pile again, normal human behavior. So it just means it takes a lot longer for people to make the decision and react, and so you should not jump the gun and say that because you do more digital marketing, you should follow those rules for direct mail.

STEPHANIE: No, I think that's very valid. And actually we just had Tuffley on the podcast, and I think I told him about an example recently where I got a piece of direct mail from Bath and Body Works, and it was twenty percent off your entire order and I was like, yeah yeah yeah, like I do need to restock a few of my things. I waited until I got that follow-up email of, hey, some things you've bought in the past are on sale, so I was able to click on that email, get the sale price, and then go find that piece of direct mail that was hidden in my pile somewhere, and get that twenty percent off the day before it expired.

SUMMER: Yep. And you can always find their mail too. I have to give them a plug because they have a very distinct brand look and feel to their mail. You can find it easily.

STEPHANIE: Yep. Alright, since we kind of just talked about how direct mail and email do work together, in your opinion, is there a marketing channel that's strongly suited for A/B testing that we should start with? And if so, what elements should marketers be testing?

SUMMER: There are two that immediately come to mind, and those are direct mail and email, because they're both so highly measurable and they allow for precise targeting of specific audiences. They also allow for testing of multiple variables like subject lines, lifestyle imagery, calls to action, formats, tone, and even your offer. So overall, A/B testing in both direct mail and email can help improve your campaign effectiveness and your ROI by providing those insights that you need, but they also work so well together. So I just feel like they're a great pair.

STEPHANIE: Yeah. They are a great pair. I think that's the first one that comes to mind too. Right? We always just think of the two versions of mail, whether it's electronic or directly in your mailbox. Alright, so I think a lot of marketers are familiar with A/B testing and email marketing, like we just talked about, and they're running tests on their landing pages too, where we're looking to improve an email, potentially that's open rates, click through rates, conversions. On the landing page, it could be decreasing bounce rates. What kind of goals should marketers set when they're A/B testing in direct mail?

SUMMER: It's not entirely different. Direct mail testing can really be used to improve your engagement rates with like a QR code or a personalized vanity URL and your conversion rates. It's also really well suited for testing creative elements like your design and your copy. Just essentially because you have a very captive audience and a lot more real estate to work with. So I don't really see a huge difference in testing possibilities between these two channels and again because they work so well together and you can target similar audiences, they will lift one another up in terms of performance.
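As a concrete illustration of making each direct mail cell measurable the way Summer describes, here is a minimal sketch of building the URL that might sit behind a QR code or pURL, with the variant and a recipient key carried as query parameters. The domain, campaign name, parameter names, and match-back key are hypothetical placeholders, not a prescribed Lob convention.

```python
from urllib.parse import urlencode

def tracking_url(base: str, campaign: str, variant: str, recipient_id: str) -> str:
    """Tag the landing URL so responses can be attributed to an A/B cell."""
    params = {
        "utm_source": "direct_mail",
        "utm_campaign": campaign,
        "utm_content": variant,     # "A" or "B" creative
        "rid": recipient_id,        # hypothetical per-recipient key for match-back
    }
    return f"{base}?{urlencode(params)}"

# Hypothetical usage for one of the two creative cells in a postcard test
print(tracking_url("https://example.com/offer", "spring-winback", "A", "000123"))
```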

STEPHANIE: Definitely. So I wanna keep talking about A/B testing in direct mail for a little bit. You hosted a webinar last year with Evan Metrock from iExit, and I remember that you two were talking about the results from his own A/B test, and how just changing the format increased ROI by one hundred and fifty-five percent. Do you think his switch to letters worked better because he was in the B2B space, or do you think those in the B2C space should test form factor in the same way?

SUMMER: The quick answer is both. I've seen letters work better in the B2B space mostly, and if you think about this from a marketer's or even a prospect's perspective: imagine your office has an initial gatekeeper that will often discard anything that looks promotional, but will pass on plain white envelopes that could be official correspondence or a bill. So with him making that shift, that was his way of getting past that first line of defense. But that being said, letters really work well in the B2C space too, and should be tested if you're finding that your postcard or your self-mailer just isn't working as hard as it used to. But I want you, and the audience, to keep in mind that the amount of copy has been reduced across all formats in the last twenty years by sixty percent. So regardless of format, less is more. So if you decide to do a letter test against your existing champion of, say, a self-mailer or a postcard, make sure you're only testing the format and you're not adding a whole bunch of copy that doesn't exist on the other format, because you don't wanna be testing, you know, copy or tone or approach or offer or your CTA, and then one wins and you say the format did all the heavy lifting.

STEPHANIE: No, definitely, because you don't wanna change your strategy and then realize, oh, nope, it wasn't just the format that made all that change.

SUMMER: Exactly. Only do A/B testing if you can keep it clean.

STEPHANIE: Right. All right, so we're talking about A/B testing that you can do, but then comes the results. How can marketers leverage the results of A/B testing and use that data in other areas of marketing?

SUMMER: So, many companies begin A/B testing actually in their email space because they can get those quick turn results, and then they use the results in their direct mail to further prove out those wins.

STEPHANIE: Yep.

SUMMER: Also, companies will use the data that they get from these tests to optimize their website design, to improve that overall customer experience, and to improve or even inform overall marketing strategy: Who should we be targeting? Why? What kind of messaging really resonates with them?

STEPHANIE: Mhmm.

SUMMER: And we've seen brands use A/B test results to inform new product ideas and pricing that help them roadmap future business decisions.

STEPHANIE: That's really smart. I mean, yeah, I definitely agree with the CX perspective; what people are interacting with physically probably will translate nicely to the screen. I think I've actually written a blog that's the inverse of that, like how to use your landing page best practices to create a great piece of direct mail. Alright, I also think we've talked about this before on the podcast, but I'd love to revisit the topic because I think it's so important: simultaneous A/B testing. Is there any benefit to running simultaneous A/B tests, like an A/B split email and an A/B split direct mail campaign at the same time?

SUMMER: There are not necessarily very clear benefits. I feel like this is a question I wanna really get into just a little bit, because there are also watch-outs with this. Knowing that something works well in both the email channel and the direct mail channel is a win on its own. So if something consistently works in two channels, you want to move forward with that, because that's an omnichannel approach and a strategy, and that's a win for you. But I would definitely recommend, if you're going to do simultaneous testing, that you're not using that same audience for both types of tests at the same time, because you're basically asking them to do a lot of work for you, and one thing that they're receiving could influence their reaction to the other. So it's essentially a test buried in another one, if that makes sense?

STEPHANIE: Yeah.

SUMMER: So I just recommend, if you're going to do any parallel-path testing, that you try to keep your audiences clean of one another, so that you can really focus on that independent result and then be able to make it work for you later on.
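One way to keep parallel-path audiences clean of one another, as Summer suggests, is to assign each customer to exactly one cell deterministically before either campaign goes out. Here is a minimal sketch under that assumption; the salt, ID format, and cell names are made up for illustration.

```python
import hashlib

def assign_cell(customer_id: str, cells: list[str], salt: str = "q3-tests") -> str:
    """Deterministically place a customer in exactly one mutually exclusive cell.

    Hashing the salted ID keeps the assignment stable across channels, so the
    same person never ends up in both the email test and the direct mail test.
    """
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    return cells[int(digest, 16) % len(cells)]

# Hypothetical cells reserved for each channel's test, plus a holdout
cells = ["email_A", "email_B", "mail_A", "mail_B", "holdout"]
print(assign_cell("cust_000123", cells))
```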

STEPHANIE: No, I love that, that is great advice. Alright, let's pivot for a moment and talk about audiences for A/B tests. We want to make sure, like your graphic demonstrated, that we have equally sized audiences to have conclusive results, and we want our groups to most often be split randomly. What if a marketer wants to run an A/B test, but is worried that their sizing is too small to draw conclusive results? Summer, what do you think they could do?

SUMMER: Well, I have to say one thing. You don't need to actually split your list fifty fifty to run an A/B test.

STEPHANIE: Okay.

SUMMER: Because I've seen some larger brands who are reluctant to negatively impact their benchmarks, so their response rate for a campaign. Excuse me, I had to cough there for a second. Where they say, you know, this champion has been really working for me, and I am only willing to spend twenty percent of what I could see to be the impact on a test. So eighty percent will be reserved for the champion and twenty percent will be reserved for the test itself. An A/B test does not have to be fifty-fifty; an A/B test really just means one element against another element, and you keep it down to that element. But when you're worried about volume, what you wanna make sure is that you've done some sort of analytics research, with whatever company you work with or whatever business analytics team you have in house, to tell you that based on your most recent campaign results, you're gonna be able to actually hit a number that makes sense. So if you're mailing a million, you're pretty safe. If you're mailing twenty-five thousand and you wanna do an eighty-twenty split, the difference in response rate would have to be so incredibly polarizing in order to hit stat sig that, to me, it almost says just wait so you have a little bit more volume, or run the test longer so that you can have a result at the end that you feel you can stand behind. And I've seen companies do it both ways; if you don't have a lot of volume, you run it longer so you actually hit that result. But definitely work with a trusted statistician or business analytics provider to help you map that out before you put all of your weight behind it.
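If you want a rough sense of the volume question before bringing in a statistician, here is a minimal sketch of the standard sample-size approximation for an equal split, reusing the hypothetical 1.00% vs 0.96% response rates from earlier. It assumes a one-tailed test at roughly 95% confidence and 80% power; an 80/20 split or different rates would change the answer, so treat it as directional rather than a substitute for an analytics partner.

```python
from math import ceil

def per_cell_sample_size(p_control: float, p_test: float,
                         z_alpha: float = 1.645, z_beta: float = 0.84) -> int:
    """Approximate recipients needed per cell (equal split) to detect the lift."""
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    return ceil(((z_alpha + z_beta) ** 2 * variance) / (p_test - p_control) ** 2)

# Detecting a 0.96% -> 1.00% lift takes on the order of 750,000 names per cell,
# which is why a 25K mailing almost never reaches stat sig on a lift that small.
print(per_cell_sample_size(0.0096, 0.0100))
```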

STEPHANIE: Yeah, no, I love that. Alright, so you were also recently a guest on the podcast where we talked a lot about personalization, another topic I think you love, and we all know that it's important to our customers and prospects. But do you think there are A/B tests that marketers should be running on personalization variables, such as imagery, showing related products instead of the product the customer purchased, or anything like that?

SUMMER: Definitely. Personalization will just give you so much more bang for your buck than I think people realize, and it's important to test into it, because it's also a very buzzworthy topic right now. And as a marketer, you are going to be skeptical of anything that's buzzworthy, or you should be at least. So you want to definitely test into some of these elements and see how much they work for you, because sometimes personalization is a little bit of extra effort on your side. So I recommend that you start with things like your call to action, because that's usually something your consumers spend a lot of time looking at. Even call to action placement: do you put it above the fold, below the fold? Do you put it in both places? Do you make it larger? Test your offer, test even lifestyle imagery, which can count as a personalization element if you leverage it based on what you think your customer looks like or maybe a product that they've purchased in the past. So really, the possibilities are endless, and if you partner with companies that are more tech-focused, like Lob, you are going to find this is a lot easier to do.

STEPHANIE: Yeah. I think you bring up a really great point. On our best direct mail campaigns page, we have pieces of mail that we just love, and I've noticed that a lot of them really make sure to repeat the CTA. A lot of brands also make sure they provide multiple ways for you to take action on it, you know, visit us in store, shop online, call this phone number. And if they have a pURL, which is a personalized URL, they make sure that if it's a postcard, it's in both spots, or I've seen self-mailers where it's on every single panel, just to make sure that people can find it and remember it, because, kinda like the rule of three in writing, if you've read it three times, you're more likely to remember it.

SUMMER: Yes. So that's also good for people to hear because as you design it, you're very close to it.

STEPHANIE: Yep.

SUMMER: It might feel like it's too much. But imagine the consumer's perspective: they're not the one who planned and executed and watched this campaign. They have a split second that they're spending looking at it, so make it easier for them.

STEPHANIE: Yep. Well, because even going back to my Bath and Body Works example, I knew that it was twenty percent off my entire purchase, and I was very excited, and then I got to the back, and I knew that they're gonna be tracking that code that I put in. It was a long code to put in. I was like, is this even worth it?

SUMMER: It is a really long code. And I think they need a little bit of guidance on that one.

STEPHANIE: Bath and Body Works, call us. We'll help you out.

SUMMER: Yeah. Let's improve this customer experience just a little bit.

STEPHANIE: Alright, so we talked a bit about iExit and its experimentation, and I think another neat customer story to highlight is that of thredUP. They did an A/B test with a reactivation campaign. They tested a postcard: one version had a note from the CEO, and one included three reasons to thrift shop. What other creative A/B tests have you seen our customers run that might not be top of mind for marketers to test?

SUMMER: I've seen a couple; two are win-back examples. One I really love because it was one of those meal delivery services that everybody really loved during COVID and still loves now just because we're so busy.

STEPHANIE: I still love them.

SUMMER: Yeah. I mean, they have not gone away by any means. And it was a win-back approach that used an image of a meal that someone had requested three-plus times as the background image for the postcard itself, and it definitely raised the response rate for this customer. There's another win-back example that I recall from my past history of working with telecoms, where they specifically addressed the reason the person left based on how they answered that survey.

STEPHANIE: I love that!

SUMMER: Think about the Domino's example of them just saying, my pizza's crap and I own it, and we're better now, I promise you, kind of thing. If you are looking to win back your customers, sometimes you just have to own what happened, and if you are unabashedly, you know, unapologetic, you are likely to win back that customer, because the customer has also likely had to apologize at some point in their lives, kind of thing. So I feel like those two are great examples of using personalization to win somebody back. Another one that comes to mind is another telecom that used a store locator map in the very early days, when it was incredibly expensive to do and there were not a lot of companies that did it. As you know, with Lob, it's incredibly easy and doesn't cost you more to do it. This was a time when they had to hire an additional vendor, and there was additional cost associated with having the store locator on the postcard.

STEPHANIE: Yeah.

SUMMER: And so they had to be able to quantify: does it make enough impact to justify the cost? And they raised their store traffic by twenty-two percent. And I think that's probably an underestimation, because imagine if you're working in one of those stores, you aren't likely going to ask every customer who comes through the door, did you get a postcard with a store locator map on it?

STEPHANIE: And even if they did, people would be like...

SUMMER: It's not getting keyed in. Yep. Not likely.

But for recipients, that raised it by twenty-two percent, which meant that moving forward, this brand included that map as many times as they could, because they really wanted to drive store traffic back after COVID.

STEPHANIE: Yeah. No, I love both of those examples. And I even think, you know, talking about the win-back example, that is such a good reason for marketing and sales to work more closely than ever, especially in Salesforce. If the deal does not get closed, tell us why, what is the reason, so that when we are planning these future campaigns, we can say, oh, okay, these people said we're too expensive, we just introduced a new tier, we have this whole list of people we can go after, kinda doing the same thing: hey, we've introduced a new tier, and based on the criteria we know about you, it seems like this plan would be a good fit, wanna give us a try? Talk about a win.

SUMMER: But skip the thing about "based on the criteria we know about you" so we don't sound creepy. Just maybe treat them like we know them, without saying we know you.

STEPHANIE: What? They already know we're tracking everything.

SUMMER: Right. They know.

STEPHANIE: But going back to that thredUP example for a second, you know, they not only found that two and a half times more orders were placed with the winning version of their A/B test, but the average order value was also higher. Summer, are there any other metrics that marketers should pay attention to when running A/B tests, outside of simply conversions?

SUMMER: I think engagement rates are becoming more important than ever, especially in the direct mail space because as we know in the digital space it's going to get harder and harder to track those engagement rates.

STEPHANIE: Mhmm.

SUMMER: Also overall lifetime value, if you have a team that can track that for you, based on, again, the increased purchase amount, or potentially even measuring frequency of purchases or shortened time in between purchases. There are all kinds of things that you can study. You just have to take a step back and really think about what metrics matter most to you, and then work with somebody to help you uncover what really is standing out.

STEPHANIE: Since we're kind of talking about metrics already, do you have any software recommendations or preferred methods for tracking the results of A/B testing?

SUMMER: I love to work with internal analytics teams, like BA teams, and I wanna refrain from listing any particular brands because there are a lot out there that will do that for you. So I would say just do a simple Google search to start, but then really dive in and get a feel for what the user experience looks like. How does the cost match up with your budget? How well does the analytics tool match up with your existing marketing tech stack, or your future planned, or hopeful, you know, aspirational marketing tech stack, because a lot of people are in that stage.

STEPHANIE: Yeah.

SUMMER: So it, again, it really depends on your needs and your budget, but just start with a simple search and then just do some digging because there are a lot out there.

STEPHANIE: Definitely. Alright. On a previous podcast, Kim C. mentioned the principle of ABT: Always Be Testing. Do you agree? Or do you think there are some campaigns especially in the direct mail space that can kind of follow the idea of set it and forget it, like many of our email nurture streams?

SUMMER: So set it and forget it, I feel like, should be reserved for... I'm gonna watch my words here, because I don't like terms like always or never. Set and forget should be used for something where you've spent the time to really organize a triggered, response-based type of program, where someone has gone onto your site and, say, put an item in their cart, and you have a saved-cart program, and by set and forget, I mean you've gotten the program off the ground, it is running on a daily basis, you have the right tech partners to make sure that it runs, and you can step back and focus on, how do I then forward-plan any creative testing? How do I forward-plan any cadence testing? Set and forget is a term that really kind of turns me off, if that's fair enough to say, but I also feel like every campaign, especially an ad hoc campaign, is an opportunity to learn something more than what you initially planned to learn. And that's why it's really important, as you're setting up any ad hoc campaigns, that you think about testing at the beginning. I've even seen companies out there map out their roadmap for how they think about kicking off a campaign: most people will start with the data now, thank goodness, because that's where you absolutely have to start, but then they'll do testing at the end. And that drives me crazy, because what kind of test are you gonna come up with at the very end, when you're already thinking about the next thing that you promised your leadership you were gonna do after this one got out the door?

STEPHANIE: Yep.

SUMMER: You're probably gonna slap something together or you're gonna skip it. So this...

STEPHANIE: You can say you ran a test!

SUMMER: Yeah. Right? So. And if anything gets slapped together, you might not even look at the results because you're already focused on the next thing. So I feel like we've gone three sixty on this podcast.

STEPHANIE: Full circle.

SUMMER: Yep. So it's just really important that you think about testing as part of your job. It is part and parcel with the other objectives that you have in your role.

STEPHANIE: Yeah, I guess first thing that comes to mind for me with like the set it and forget it, and I'm sorry I said the bad words, but --

SUMMER: It's okay.

STEPHANIE: -- any, like, you know, membership programs or rewards programs. You're going to have, okay, it's so-and-so's birthday month, let's trigger out that postcard with their standard twenty percent off, and your birthday fell in June, and you can tell that's a variable where they just slap in whatever month applies. But I think campaigns like that kind of can follow that set it and forget it, because again, those ones are based on certain dates or triggers, or just something triggering in your CRM system that says, it's so-and-so's birthday, boop, here we go, and then you can follow it up with the email and make it still work in its own workflow, but it's not something that you're actively testing. I mean, unless you really want to see what slice of birthday cake really captures their attention.

SUMMER: Or unless you realize, especially if you're spending out-of-pocket costs associated with a direct mail piece, and this is where I think my direct mail hat goes back on, that set and forget is a dangerous term, because if that mail piece is costing you money and it's not doing any heavy lifting for you --

STEPHANIE: Yep.

SUMMER: -- you need to know that quickly, and then you need to figure out how to tailor something as simple as a birthday thank-you with an offer to better resonate with the customer. So I think maybe in the digital space it can be a little bit more set and forget, because it doesn't cost you as much to get an email out the door. But with direct mail, you have to always be looking at those results and saying to yourself, what could I be doing better? What could I be tweaking? What could I be learning from this? And so you always have to have your testing hat on.

STEPHANIE: Definitely. Alright. Summer, do you have any final thoughts you wanna share or is there anything that we didn't get to today during our episode?

SUMMER: I think my final thought, and we've covered quite a bit today, is don't let the term stat sig scare you. If you do some research online, there are even calculators that will help you figure out whether you've reached stat sig. And I'm saying stat sig because I can't ever say statistical significance twice in a row. There are plenty of free online tools that will help you figure out how to map this. And don't be scared of it; it is essentially a fancy term to help you understand whether something worked and whether it's repeatable. And there are always companies out there that can help you, people, resources, and what I've seen in this environment of marketers is that people really wanna lift one another up, not put one another down.

STEPHANIE: No. What is it, when the tide rises, all boats float --

SUMMER: Yes.

STEPHANIE: -- kind of thing? Alright. Well, to our listeners, thank you so much for joining us. Summer, thank you for joining us as well to talk all things A/B testing. If you do want to dive deeper into the topic of A/B testing and experimentation in direct mail, please feel free to download your own copy of our ebook, Optimizing Direct Mail for Maximum Results, at tinyurl.com/optimizingdm, that's tinyurl.com/optimizingdm. As always, you can browse our library of episodes over at lobdemo.co/lobcast. Thanks for listening, and that's all, folks.