

By Lob
Direct mail performance can look different depending on when you start the clock.
If you measure from drop date, you’re starting at the point mail enters the postal network. If you measure from in-home date, you’re starting when the piece actually reaches the mailbox. If you measure from response date, you’re starting from the moment a customer takes action.
None of these approaches are wrong, but they answer different questions. And if you’re not consistent about which one you’re using, it’s easy to misread what worked, compare campaigns unfairly, or draw conclusions that don’t match what customers experienced.
In this guide, we’ll break down what each date means, what it’s best for, and how to choose a measurement approach you can confidently use across campaigns.
A direct mail campaign moves through three timelines. Each one can be a valid “start date,” depending on what you’re trying to learn.
A simple way to think about it: drop date includes transit time, while in-home date starts at exposure. That difference can change how you interpret “fast response” and how fairly you compare markets.
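To make the difference concrete, here's a minimal sketch (all dates are hypothetical) showing how the same response reads when the clock starts at drop date versus in-home date:

```python
from datetime import date

# Hypothetical campaign piece; every date below is illustrative.
drop_date = date(2024, 3, 1)       # handed off to USPS
in_home_date = date(2024, 3, 6)    # arrived in the mailbox
response_date = date(2024, 3, 8)   # customer scanned the QR code

days_from_drop = (response_date - drop_date).days        # includes transit time
days_from_in_home = (response_date - in_home_date).days  # starts at exposure

print(f"Measured from drop date: {days_from_drop} days to respond")
print(f"Measured from in-home date: {days_from_in_home} days to respond")
```

Same customer, same action: one anchor says a week, the other says two days. That gap is transit time, not hesitation.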
Drop date is when your mail pieces enter the USPS system, typically when your print partner hands them off at a USPS acceptance point (such as a Business Mail Entry Unit, or BMEU).
Drop date is easy to track and helpful for operational reporting because it’s tied to your production timeline. It’s less useful for understanding customer behavior on its own, since it doesn’t tell you when the piece was actually seen.
Best used for: internal timelines, vendor performance, and execution benchmarks.
In-home date is when mail arrives in the recipient’s mailbox. This is when a customer can actually see and act on your offer, which makes it a strong anchor for performance analysis.
Many teams plan around estimated delivery windows. If you have access to USPS scan-event data (often enabled via Intelligent Mail barcodes), you can get closer to actual delivery timing and reduce guesswork.
Best used for: response velocity, offer and creative performance, and cross-channel sequencing.
Response date is when the recipient takes action, such as scanning a QR code, visiting a tracked URL, calling, filling out a form, or purchasing.
Response date is useful when you’re focused on outcomes and reconciliation. The nuance is that response date alone doesn’t explain whether a late conversion was due to late delivery or longer decision-making.
Best used for: outcome reporting, ROI reconciliation, and total conversion capture.
Measuring from drop date can help you report quickly, but it blends operational timing with customer behavior.
The common issue: because drop date includes transit time, response curves can look slow when pieces were simply still in the network, and markets with longer delivery times can appear to underperform when they didn't.
When drop date works best: when your goal is operational accountability and execution consistency.
Anchoring to in-home date typically makes performance easier to interpret because it starts when customers had a real chance to respond.
The benefit: response timing reflects customer behavior rather than transit time, so comparisons across markets, formats, and campaigns become fairer.
When in-home date works best: when you want to understand how the mail performed as a marketing touch.
Measuring from response date works backward from outcomes. It’s a practical approach when you need a complete view of conversions tied to a campaign, including those that come in later.
The tradeoff is that response date doesn’t naturally show you delivery patterns. If you’re trying to understand timing, response date is strongest when paired with in-home visibility.
When response date works best: when you’re reconciling conversions and revenue against spend.
A quick decision framework:
- In-home date is often the clearest anchor, because you're measuring from exposure and looking for true offer performance.
- Response date can be important when you're measuring actions against lifecycle timing (renewals, reactivation windows, repeat purchases).
- In-home date becomes especially useful when sequencing matters: it's hard to align mail with email, paid, or SMS timing without knowing when the piece actually arrived.
To move beyond estimated delivery ranges, you need delivery visibility.
Intelligent Mail barcodes (IMb) can generate scan events as pieces move through the USPS network. Those events can improve visibility into delivery timing, depending on how the data is captured and interpreted.
Platforms like Lob can surface USPS scan-event data so delivery timing is usable for reporting, measurement, and campaign optimization.
Consistency is what makes your reporting comparable over time.
Measurement timing might feel like a technical detail, but it shapes what your results appear to say. When you choose an anchor that matches your goal and apply it consistently, it’s easier to evaluate performance, explain results, and make clean optimizations.
Ready to see exactly when your mail lands? Book a demo to explore how Lob’s delivery visibility can support clearer measurement.
FAQs about direct mail measurement timing
How long should you wait after in-home date before measuring final results?
It depends on your offer and buying cycle. The key is choosing a consistent window you can apply across campaigns so comparisons stay clean.
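As a rough illustration, a fixed response window anchored to in-home date might look like the sketch below. The 30-day window, the function name, and the dates are all assumptions for the example, not a recommendation:

```python
from datetime import date, timedelta

# Example window length; the point is to pick one and apply it
# to every campaign so comparisons stay clean.
RESPONSE_WINDOW_DAYS = 30

def counts_toward_campaign(in_home: date, response: date,
                           window_days: int = RESPONSE_WINDOW_DAYS) -> bool:
    """A response counts if it lands within the window after in-home date."""
    return in_home <= response <= in_home + timedelta(days=window_days)

# A response 28 days after delivery counts; one 35 days out does not.
print(counts_toward_campaign(date(2024, 3, 6), date(2024, 4, 3)))
print(counts_toward_campaign(date(2024, 3, 6), date(2024, 4, 10)))
```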
Does mail format affect the response window you should use?
Often, yes. Some formats tend to prompt faster action than others, so it’s worth reviewing your own data for patterns by format.
How do you handle attribution when direct mail timing differs from digital timing?
Use in-home date as your mail anchor, then align it with digital impression or click timing so your cross-channel reporting follows a consistent timeline.
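One way to sketch that alignment (with hypothetical dates and channels) is to normalize every touch to days since first exposure, using in-home date as the mail event:

```python
from datetime import date

# Hypothetical cross-channel touches, each anchored to the date the
# customer could actually see it: in-home date for mail, send date
# for email and SMS.
events = [
    ("mail", date(2024, 3, 6)),   # in-home date (assumed)
    ("email", date(2024, 3, 7)),  # send date (assumed)
    ("sms", date(2024, 3, 9)),    # send date (assumed)
]

anchor = min(d for _, d in events)
timeline = [(channel, (d - anchor).days) for channel, d in events]
print(timeline)  # each channel as days since first exposure
```

With every channel on the same "days since exposure" axis, sequencing and overlap are much easier to reason about than when mail is dated from drop and digital from send.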