CacheCore

Invalidation Patterns

Production-ready recipes for keeping your cache in sync with changing data. Each pattern covers a specific data lifecycle scenario.

Pattern 1: Database webhook

Trigger invalidation when a database record changes. The most common pattern.

```python
from cachecore import CachecoreClient

# `settings`, `app` (e.g. a FastAPI instance), and the DocumentEvent model
# come from your application.
cc = CachecoreClient(
    gateway_url="https://api.cachecore.it",
    tenant_jwt=settings.CACHECORE_TOKEN,
)

@app.post("/webhooks/document-updated")
async def on_document_updated(event: DocumentEvent):
    result = await cc.invalidate(
        dep_id=f"doc:{event.document_id}",
        new_hash=event.new_version,
    )
    return {"ok": result.ok}
```

When to use: Any record update that should invalidate cached summaries, classifications, or extractions derived from that record.
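The webhook above assumes a payload carrying the record id and its new version. A minimal sketch of that shape (the field names are illustrative, not prescribed by the SDK):

```python
from dataclasses import dataclass

@dataclass
class DocumentEvent:
    """Webhook payload emitted when a document changes (illustrative shape)."""
    document_id: str   # primary key of the updated record
    new_version: str   # version tag or content hash of the new revision

    @property
    def dep_id(self) -> str:
        # Dependency id in the same "doc:<id>" convention used above.
        return f"doc:{self.document_id}"

event = DocumentEvent(document_id="42", new_version="v7")
print(event.dep_id)  # → doc:42
```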

Pattern 2: Batch table invalidation

Invalidate all entries that depend on a table after a bulk update or ETL run.

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger(__name__)

results = await cc.invalidate_many(
    dep_ids=["table:products", "table:prices"],
    new_hash=f"batch-{datetime.now(timezone.utc).date()}",
)
failed = [r for r in results if not r.ok]
if failed:
    logger.error("Invalidation failed: %s", [r.dep_id for r in failed])
```

When to use: Nightly data refreshes, migrations, or bulk imports.
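Logging the failures is usually not enough for an ETL run; a stale entry can linger until the next refresh. One option is to retry just the failed dep_ids with backoff. The helper below is a sketch (not part of the SDK); it takes the `invalidate_many` coroutine as a parameter and relies only on the result shape shown above:

```python
import asyncio

async def invalidate_with_retry(invalidate_many, dep_ids, new_hash,
                                attempts=3, base_delay=0.5):
    """Retry only the dep_ids whose invalidation failed, with exponential backoff.

    Returns the list of dep_ids that still failed after all attempts (empty on
    full success).
    """
    pending = list(dep_ids)
    for attempt in range(attempts):
        results = await invalidate_many(dep_ids=pending, new_hash=new_hash)
        pending = [r.dep_id for r in results if not r.ok]
        if not pending:
            return []
        await asyncio.sleep(base_delay * 2 ** attempt)
    return pending
```

Anything still pending after the final attempt should be escalated (alert, dead-letter queue) rather than silently dropped.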

Pattern 3: Version-pinned dependencies

Tag requests with a specific prompt or model version. Invalidate when you cut a new version.

```python
from cachecore import Dep

PROMPT_VERSION = "v3"

with cc.request_context(deps=[Dep("classifier-prompt", hash=PROMPT_VERSION)]):
    response = await openai.chat.completions.create(...)

# After updating the prompt:
await cc.invalidate("classifier-prompt", new_hash="v4")
```

When to use: Prompt engineering iterations, model upgrades, or any change to the LLM task definition.
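Manual version strings are easy to forget to bump. An alternative is to derive the dep hash from the prompt text itself, so any edit changes the hash and old entries miss automatically. A sketch using only the standard library (the `Dep` usage mirrors the snippet above):

```python
import hashlib

def prompt_hash(prompt_text: str) -> str:
    # Short, stable digest of the prompt content; any edit produces a new value.
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:12]

CLASSIFIER_PROMPT = "Classify the following ticket into billing/support/other."

# with cc.request_context(deps=[Dep("classifier-prompt",
#                                   hash=prompt_hash(CLASSIFIER_PROMPT))]):
#     ...
```

The trade-off: you lose the explicit invalidation step (Pattern 3's `cc.invalidate` call), because a changed prompt simply stops matching old entries instead of marking them stale.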

Pattern 4: Policy-version global invalidation

Roll a new policy version to invalidate the entire namespace. No per-request dep tags needed; the cc:policy-version dep is added automatically to every cache entry.

```bash
curl -X POST https://api.cachecore.it/v1/invalidate \
  -H "Authorization: Bearer cc_live_xxxxx.eyJ..." \
  -H "Content-Type: application/json" \
  -d '{"dep_id": "cc:policy-version", "new_hash": "v2"}'
```

Then issue new JWTs with policy_version: v2. Old tokens still work but will miss cache (different namespace).
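How the new tokens are minted depends on your auth setup. As one illustration only, an HS256 JWT carrying a `policy_version` claim can be built with the standard library (the secret, tenant claim, and claim layout here are placeholders, not the CacheCore token format):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = mint_jwt({"tenant": "acme", "policy_version": "v2"}, b"replace-me")
```

In production you would use your existing JWT library and key management rather than hand-rolling signatures.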

When to use: Major rollouts, security incidents requiring full cache purge, or breaking configuration changes.

Pattern 5: Event-driven queue

For high-throughput systems, publish invalidation events to a queue and process them asynchronously.

```python
# Worker consuming from SQS / Kafka / etc.
# `cc`, `queue`, and InvalidationError come from your application;
# raising on failure lets the queue redeliver the event.
async def process_invalidation_event(event: dict):
    dep_id = event["dep_id"]
    new_hash = event.get("new_hash")
    result = await cc.invalidate(dep_id, new_hash=new_hash)
    if not result.ok:
        raise InvalidationError(f"Failed: {result.error}")

# Producer (e.g. ORM save hook):
def on_contract_saved(contract):
    queue.publish({
        "dep_id": f"doc:{contract.id}",
        "new_hash": contract.version,
    })
```

When to use: High-write workloads where synchronous invalidation would add latency to the write path.
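Under bursty writes the same record is often saved several times in quick succession, and only the latest version matters. Coalescing events per dep_id before publishing keeps the queue small; a sketch of a simple in-memory coalescer (the class and its API are illustrative, not part of the SDK):

```python
class InvalidationCoalescer:
    """Buffer invalidation events, keeping only the latest hash per dep_id."""

    def __init__(self):
        self._pending = {}

    def add(self, dep_id: str, new_hash: str) -> None:
        # A later write to the same dep_id overwrites the earlier one.
        self._pending[dep_id] = new_hash

    def flush(self) -> list[dict]:
        # Call on a timer (e.g. every few hundred ms) and publish the batch.
        events = [{"dep_id": d, "new_hash": h}
                  for d, h in self._pending.items()]
        self._pending.clear()
        return events
```

Three rapid saves of the same contract then produce a single queued event instead of three, at the cost of a small flush-interval delay before invalidation lands.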

Pattern 6: Bypass for write operations

Do not cache LLM calls that generate unique output. Use bypass instead of invalidation.

```python
with cc.request_context(bypass=True):
    response = await openai.chat.completions.create(
        model="gpt-5.4-mini",
        messages=[{"role": "user", "content": "Generate a unique confirmation for order #12345"}],
    )
```

When to use: Personalised responses, order confirmations, one-time codes, or any task where uniqueness matters.

Choosing a pattern

| Scenario | Pattern |
|----------|---------|
| Single record update | 1. Webhook |
| Bulk table update or ETL | 2. Batch invalidation |
| Prompt or model version change | 3. Version-pinned deps |
| Breaking config change or security incident | 4. Policy-version rollout |
| High-throughput writes | 5. Event-driven queue |
| Unique generative output | 6. Bypass |