Originally published in Russian on abramov.blog
Audit logging almost always comes later than it should. At first, regular logs are enough. Then suddenly it becomes important to understand who exactly changed the order status, why a user lost their role, or where the strange data in the system came from.
In most projects, audit either gets scattered across business code or tries to live on top of a regular logger without a clear model. I extracted this into a separate library: audit.
What audit does
The library solves a specific problem: storing entity change history.
At the model level:
- an entity is identified by a string key (order:12345, user:42)
- each event has an author, description, and timestamp
- changes are tracked per field
- values can be visible or hidden
No magic, no reflection, no automatic diffs. All changes are described explicitly.
Basic scenario
Simple example - order lifecycle:
logger := audit.New()
logger.Create(
	"order:12345",
	"john.doe",
	"Order created",
	map[string]audit.Value{
		"status":        audit.PlainValue("pending"),
		"total":         audit.PlainValue(99.99),
		"payment_token": audit.HiddenValue(),
	},
)
Sensitive data is explicitly marked as hidden - it participates in the event but doesn't appear in logs in plain text.
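The masking idea can be sketched with a minimal local model. Note that the types and method names below are illustrative stand-ins, not the library's actual internals:

```go
package main

import "fmt"

// Value is an illustrative stand-in for an audit value:
// a plain value exposes its contents, a hidden one only a mask.
type Value struct {
	raw    any
	hidden bool
}

func PlainValue(v any) Value { return Value{raw: v} }
func HiddenValue() Value     { return Value{hidden: true} }

// Display returns what would end up in the log output.
func (v Value) Display() string {
	if v.hidden {
		return "***"
	}
	return fmt.Sprintf("%v", v.raw)
}

func main() {
	fmt.Println(PlainValue("pending").Display()) // pending
	fmt.Println(HiddenValue().Display())         // ***
}
```

The point of the design is that hiding is decided at write time by the caller, so a secret never reaches the log sink in plain form.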
Further changes:
logger.Update(
	"order:12345",
	"warehouse.system",
	"Order shipped",
	map[string]audit.Value{
		"status":          audit.PlainValue("shipped"),
		"tracking_number": audit.PlainValue("TRK123456789"),
	},
)
Change history
The library provides two levels of data access.
Full history:
logs := logger.Logs("order:12345")
Each record contains a list of fields with from and to values, plus the author and description:
[
	{
		"Fields": [
			{"Field": "status", "From": null, "To": "pending"},
			{"Field": "total", "From": null, "To": 99.99},
			{"Field": "payment_token", "From": "***", "To": "***"}
		],
		"Description": "Order created",
		"Author": "john.doe",
		"Timestamp": "2026-01-18T17:06:29.126233926+07:00"
	}
]
Filtering by fields:
events := logger.Events("order:12345", "status")
Useful for building timelines - for example, an order status chain.
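As an illustration of building such a timeline, here is a self-contained sketch with a local Event struct loosely mirroring the JSON shape above (the library's exported types may differ):

```go
package main

import "fmt"

// Event is a local illustration of one field change,
// mirroring the From/To/Author fields in the JSON output.
type Event struct {
	From   any
	To     any
	Author string
}

// timeline renders a status chain like "pending -> shipped"
// from the To side of each change.
func timeline(events []Event) string {
	out := ""
	for i, e := range events {
		if i > 0 {
			out += " -> "
		}
		out += fmt.Sprintf("%v", e.To)
	}
	return out
}

func main() {
	events := []Event{
		{From: nil, To: "pending", Author: "john.doe"},
		{From: "pending", To: "shipped", Author: "warehouse.system"},
	}
	fmt.Println(timeline(events)) // pending -> shipped
}
```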
Custom storage
audit is not tied to a storage method. The Storage interface allows you to plug in a file, database, Kafka, or any other backend:
storage := NewJSONFileStorage("audit_events.json")
logger := audit.New(audit.WithStorage(storage))
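The exact Storage interface is defined by the library; as a rough illustration of what a pluggable backend looks like, here is a self-contained in-memory implementation of a hypothetical append/load contract (the interface shape here is an assumption, not the library's actual signature):

```go
package main

import (
	"fmt"
	"sync"
)

// Record is an illustrative event payload keyed by entity.
type Record struct {
	Entity      string
	Description string
}

// Storage is a hypothetical shape of a pluggable backend:
// append an event, read back all events for an entity.
type Storage interface {
	Append(r Record) error
	Load(entity string) ([]Record, error)
}

// MemoryStorage keeps records in a map guarded by a mutex —
// the simplest backend to later swap for a file, a database,
// or a Kafka producer behind the same interface.
type MemoryStorage struct {
	mu   sync.Mutex
	data map[string][]Record
}

func NewMemoryStorage() *MemoryStorage {
	return &MemoryStorage{data: make(map[string][]Record)}
}

func (s *MemoryStorage) Append(r Record) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[r.Entity] = append(s.data[r.Entity], r)
	return nil
}

func (s *MemoryStorage) Load(entity string) ([]Record, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.data[entity], nil
}

func main() {
	var st Storage = NewMemoryStorage()
	st.Append(Record{Entity: "order:12345", Description: "Order created"})
	recs, _ := st.Load("order:12345")
	fmt.Println(len(recs), recs[0].Description)
}
```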
slog integration
The audit/slog package extracts audit events from structured logs:
handler := auditslog.NewHandler(auditLogger, auditslog.HandlerOptions{
	Handler: slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: slog.LevelInfo,
	}),
	KeyExtractor: auditslog.AttrExtractor(auditslog.AttrEntity),
	ShouldAudit: func(record slog.Record) bool {
		return record.Level >= slog.LevelInfo
	},
})
A regular log automatically becomes an audit event:
slog.Info(
	"User account created",
	auditslog.AttrEntity, "user:123",
	auditslog.AttrAction, "create",
	auditslog.AttrAuthor, "admin",
	"email", "alice@example.com",
	"role", "editor",
)
The benefit: the same records stay visible in log collection systems and are also accessible via .Logs() and .Events().
Repository: github.com/w0rng/audit
audit is a small, isolated layer for cases where auditing matters as infrastructure, not as a formality.