The current message system is binary — the entire nag is either "on track" or "behind." Both metrics (photos + reviews) are lumped together into a single status, and there's no awareness of when in the week the message fires.
Real impact: A business owner can be crushing it on reviews but missing one image upload — and the message they get says "You are behind this week" in a flat, critical tone. On Monday morning, before anyone has even had a chance to act, the first touch of the week is negative. That's killing the vibe.
The messages are hardcoded in four places inside worker.py:
Duplicated again at lines 718-720 and 843-845 for the single-metric code path.
Each metric gets its own evaluation and its own message line. The tone adapts to three things: the metric (photos or reviews), the progress level (below expectation, meeting, or exceeding), and the time of week (beginning, mid-week, or end).
This creates a 2 × 3 × 3 = 18 template slots per nag. Each slot is a configurable string with variable interpolation.
| Time of Week | Below Expectation | Meeting | Exceeding |
|---|---|---|---|
| Beginning | "It's early! Let's aim for {target} {metric} this week. You've got this." | "Great start — already at {current}/{target} {metric}! Keep it rolling." | "Incredible start! {current}/{target} {metric} already. Let's see how high we can push this!" |
| Mid-week | "Halfway check-in: {current}/{target} {metric}. {remaining} more to hit the goal — we can close this gap." | "Right on pace with {current}/{target} {metric}. Solid work this week." | "You're ahead of pace! {current}/{target} {metric}. Outstanding effort." |
| End of week | "Final stretch: {current}/{target} {metric}. Let's push to close the gap before the week ends." | "Goal met! {current}/{target} {metric}. Great week — let's keep this momentum." | "Amazing week! {current}/{target} {metric} — you blew it out of the water." |
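A sketch of how such a slot table could be consumed in the worker. The dict layout and `render_line` helper are hypothetical (only one of the 18 slots is filled in), but the interpolation variables match the templates above:

```python
# Hypothetical template store: one entry per (metric, progress_level,
# time_of_week) slot; only one of the 18 slots is shown here.
TEMPLATES = {
    ("reviews", "meeting", "mid"): (
        "Right on pace with {current}/{target} {metric}. Solid work this week."
    ),
    # ... remaining 17 slots
}

def render_line(metric, current, target, progress_level, time_of_week):
    """Pick the slot for this metric/level/time and interpolate the variables."""
    template = TEMPLATES[(metric, progress_level, time_of_week)]
    return template.format(
        current=current,
        target=target,
        metric=metric,
        remaining=max(target - current, 0),
    )
```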
Templates live in a `message_templates` table keyed by `nag_id`, `metric`, `progress_level`, and `time_of_week`. The worker derives `time_of_week` by checking the message's position among the nag's scheduled sends for that interval.

When a customer leaves a review with attached photos, those images are not counted toward the image goal. Only images from the dedicated `google/by-profile/images` Plepper endpoint are counted.
Image counting at worker.py:244 only pulls from the images endpoint:
Meanwhile, review processing at worker.py:348-353 only extracts rating and time — any image data in the review response is ignored.
First, check whether the `google/by-profile/reviews` response includes image/photo data within each review record: pull a sample response and inspect the JSON structure. If there's an `images` array or `photo_url` field on review records, the fix is straightforward. If not, we move on.
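If the inspection does turn up photo data on review records, the counting change could look like this sketch — the `images` field name is an assumption pending that check:

```python
def count_interval_images(image_records, review_records):
    """Count images from the dedicated images endpoint plus any photos
    attached to reviews (the "images" field name is hypothetical)."""
    total = len(image_records)
    for review in review_records:
        total += len(review.get("images", []))
    return total
```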
A business owner received 3 reviews during the interval. The system reported only 1. They got a message saying they were failing when they should have been congratulated. This damaged trust in the system — the person who was actually doing great work got chewed out by their group.
I verified the reviews were visible on the GBP profile and within the interval dates. Something in the pipeline is dropping or miscounting them.
There are several places where this could go wrong. Here's what I need audited end-to-end:
1. Date formatting in image counting
At worker.py:274, the image date is constructed as:
This can produce 2024-1-5 instead of 2024-01-05. The is_date_in_range() function at worker.py:60-72 parses with %Y-%m-%d, which should handle single digits — but this is worth verifying.
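A quick check confirms the hedge above: Python's `strptime` with `%Y-%m-%d` accepts non-zero-padded fields, so `2024-1-5` parses fine. The danger would only surface if dates are ever compared as raw strings anywhere in the pipeline:

```python
from datetime import datetime

fmt = "%Y-%m-%d"
# strptime is lenient about zero-padding, so both forms parse identically
assert datetime.strptime("2024-1-5", fmt) == datetime.strptime("2024-01-05", fmt)

# But raw string comparison is NOT safe with unpadded dates:
# "2024-1-5" sorts after "2024-09-01" lexicographically, even though
# January 5 comes before September 1.
assert ("2024-1-5" < "2024-09-01") is False
```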
2. Review date parsing
At worker.py:352, review time is split differently:
If the Plepper time field format changes or has timezone information appended, the .split(" ")[0] could produce something is_date_in_range() can't parse — and the bare except: pass would silently skip the review.
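One defensive option is to try a small list of known formats and log anything unparseable instead of silently dropping the review. The candidate formats here are guesses at what Plepper might send, not confirmed:

```python
import logging
from datetime import datetime

logger = logging.getLogger("worker")

def parse_review_date(raw_time):
    """Extract a date from a review time string, logging failures
    instead of swallowing them (candidate formats are assumptions)."""
    token = raw_time.split(" ")[0]
    for fmt in ("%Y-%m-%d", "%Y-%m-%dT%H:%M:%S"):
        try:
            return datetime.strptime(token, fmt).date()
        except ValueError:
            continue
    logger.warning("unparseable review time: %r", raw_time)
    return None
```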
3. Stale Plepper results
When the scheduler creates a batch job, Plepper may not have finished scraping by the time the worker polls. If the worker picks up a "Finished" response that contains stale cached data from a previous scrape, it marks the batch "completed" and moves on — the new reviews never get counted.
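One mitigation sketch: reject a "Finished" payload whose scrape timestamp predates our request. The `scraped_at` field name is hypothetical — whether Plepper exposes anything like it needs checking:

```python
from datetime import datetime, timedelta

def is_fresh(job_response, requested_at, grace=timedelta(minutes=5)):
    """Treat a Finished payload as stale if it was scraped before we
    asked for it (the "scraped_at" field name is an assumption)."""
    scraped_at = datetime.fromisoformat(job_response["scraped_at"])
    return scraped_at >= requested_at - grace
```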
4. run_track deduplication
At job_scheduler.py:216:
This prevents re-evaluation when a nag has multiple scheduled sends per day. If the first send fires with stale Plepper data, the second send (which might have fresh data) gets skipped entirely.
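If we decide each send should get its own evaluation, the dedup key would include the schedule id. A sketch with illustrative names (not the actual job_scheduler.py code):

```python
def should_run(completed, nag_id, schedule_id, run_date):
    """Dedup on (nag_id, schedule_id, date) so each scheduled send
    in a day is evaluated independently; record it once it runs."""
    key = (nag_id, schedule_id, run_date)
    if key in completed:
        return False
    completed.add(key)
    return True
```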
Also audit:

- Whether `interval_start_at` and `interval_end_at` are being set correctly when a new week begins.
- Whether `is_date_in_range()` is inclusive on both boundaries and handles all date formats from Plepper.
- The `run_track` dedup logic — should multiple sends per day be allowed? Should dedup key on `schedule_id`, not just `nag_id` + date?
- The `except: pass` blocks that silently swallow counting errors.

Right now, you can't see how the system is performing without checking Heroku logs. I can't easily correlate the text messages I receive with what the system actually computed. When something seems off, our feedback loop is slow — I describe a symptom, you dig through logs, and we go back and forth.
What I'd like instead: a Discord webhook integration in our shared server, with 2–3 channels (e.g. testing, log, and alerts).
Discord webhooks are the simplest path — no bot framework needed. Just store webhook URLs as env vars (DISCORD_WEBHOOK_TESTING, DISCORD_WEBHOOK_LOG, DISCORD_WEBHOOK_ALERTS) and POST to them from the worker and scheduler. A simple helper function is all it takes:
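A minimal sketch of that helper using only the standard library. The env-var names match the ones above; the payload shape (`{"content": ...}`, capped at 2000 characters) is the documented Discord webhook body:

```python
import json
import os
import urllib.request

def build_payload(message):
    # Discord webhook bodies take a "content" string, max 2000 chars
    return {"content": message[:2000]}

def notify(channel, message):
    """POST to the webhook URL in DISCORD_WEBHOOK_<CHANNEL>; failures
    are swallowed so observability can never break the worker itself."""
    url = os.environ.get(f"DISCORD_WEBHOOK_{channel.upper()}")
    if not url:
        return  # channel not configured; skip quietly
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass
```

The worker could then call something like `notify("log", "nag 7: reviews 3/6, images 1/2")` after each evaluation.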
This isn't urgent — just sharing where things are headed so it's on your radar.
The nudge_server already has a solid REST API. The next evolution would be:
The practical effect: instead of clicking through the admin UI to adjust a nag's targets or add a contact to a group, I could just say it and the agent handles the API calls. Makes managing multiple offices much faster.
We can scope this as a separate project once Issues 1–4 are solid. No rush.
Mustajab, please take a look through this and come back with:
Budget is ready for all of this. I'm excited to get The Nag to where it needs to be so we can start rolling it out to more offices. Let's make it happen.
— Gordon