# Best Practices

Guidelines for using capitan effectively in production systems.
## Signal Design

### Define Signals as Package Constants

Signals create internal state (workers, registries). Define them once, at package level:
```go
// signals.go
package orders

import "github.com/zoobz-io/capitan"

var (
    OrderCreated   = capitan.NewSignal("order.created", "New order placed")
    OrderConfirmed = capitan.NewSignal("order.confirmed", "Order confirmed")
    OrderShipped   = capitan.NewSignal("order.shipped", "Order shipped")
    OrderCanceled  = capitan.NewSignal("order.canceled", "Order canceled")
)
```
Never create signals dynamically:

```go
// Bad: unbounded signal creation
signal := capitan.NewSignal(fmt.Sprintf("user.%s.action", userID), "...")

// Good: use fields for variable data
capitan.Emit(ctx, userAction, userID.Field(id))
```
Signals are compared by identity, not by name. Two instances with the same name will not match each other:

```go
// Wrong: different signal instances
var SignalA = capitan.NewSignal("order.created", "...")
var SignalB = capitan.NewSignal("order.created", "...") // Different instance

capitan.Hook(SignalA, handler)
capitan.Emit(ctx, SignalB, fields...) // Won't match SignalA
```

```go
// Correct: import and use the same signal
import "myapp/orders"

capitan.Hook(orders.OrderCreated, handler)
capitan.Emit(ctx, orders.OrderCreated, fields...)
```
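The identity rule can be illustrated with a stand-in type (a sketch only; capitan's actual `Signal` representation is an assumption here — the point is pointer identity versus name equality):

```go
package main

import "fmt"

// Signal is a stand-in for the library's signal type: each NewSignal
// call yields a distinct allocation, compared by pointer identity.
type Signal struct{ name, desc string }

func NewSignal(name, desc string) *Signal {
    return &Signal{name: name, desc: desc}
}

func main() {
    a := NewSignal("order.created", "...")
    b := NewSignal("order.created", "...")

    fmt.Println(a == b)           // false: distinct instances, same name
    fmt.Println(a.name == b.name) // true: the names match, the identities don't
    fmt.Println(a == a)           // true: the same instance always matches itself
}
```

This is why signals belong in one package that every emitter and listener imports.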
### Use Hierarchical Naming

Name signals with dot-separated hierarchies:

```go
// Domain.action pattern
"order.created"
"order.shipped"
"payment.processed"
"payment.failed"
"inventory.reserved"
"inventory.released"
```
Benefits:

- Grep-friendly (`grep "order\."`)
- Observer whitelisting by prefix (manual filtering)
- Clear ownership boundaries
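Prefix whitelisting in an observer comes down to plain string matching. A minimal sketch, assuming the observer sees the signal name as a string (`allowPrefixes` is illustrative, not a capitan API):

```go
package main

import (
    "fmt"
    "strings"
)

// allowPrefixes returns a predicate reporting whether a dot-separated
// signal name falls under one of the given hierarchy prefixes.
func allowPrefixes(prefixes ...string) func(name string) bool {
    return func(name string) bool {
        for _, p := range prefixes {
            // Match the prefix exactly or as a parent segment; requiring
            // the trailing dot keeps "orders.*" from matching "order".
            if name == p || strings.HasPrefix(name, p+".") {
                return true
            }
        }
        return false
    }
}

func main() {
    allowed := allowPrefixes("order", "payment")
    fmt.Println(allowed("order.created"))      // true
    fmt.Println(allowed("payment.failed"))     // true
    fmt.Println(allowed("inventory.reserved")) // false
    fmt.Println(allowed("orders.created"))     // false: "orders" != "order"
}
```

An observer would call the predicate first and return early for names outside its whitelist.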
### Write Descriptive Descriptions

The description appears in logs and debugging output. Make it human-readable:

```go
// Good: describes what happened
capitan.NewSignal("order.created", "New order placed by customer")
capitan.NewSignal("payment.failed", "Payment processing failed")

// Bad: repeats the name
capitan.NewSignal("order.created", "order.created")
capitan.NewSignal("payment.failed", "PaymentFailed")
```
## Key Design

### Define Keys Alongside Signals

Keep keys with their signals for discoverability:

```go
// orders/signals.go
package orders

var (
    OrderCreated = capitan.NewSignal("order.created", "New order placed")
    // ... other signals
)

var (
    OrderID    = capitan.NewStringKey("order_id")
    CustomerID = capitan.NewStringKey("customer_id")
    Total      = capitan.NewFloat64Key("total")
)
```
### Use Consistent Key Names

Establish a naming convention across your codebase:

```go
// Consistent: snake_case
orderID := capitan.NewStringKey("order_id")
customerID := capitan.NewStringKey("customer_id")
createdAt := capitan.NewTimeKey("created_at")

// Inconsistent: mixed styles
orderID := capitan.NewStringKey("orderId")
customer_id := capitan.NewStringKey("customer-id")
```
### Prefer Specific Types

Use the most specific key type available:

```go
// Good: typed keys
count := capitan.NewIntKey("count")
total := capitan.NewFloat64Key("total")
active := capitan.NewBoolKey("active")
createdAt := capitan.NewTimeKey("created_at")

// Avoid: stringly-typed
count := capitan.NewStringKey("count") // "42" instead of 42
```
## Lifecycle Management

### Close Listeners on Shutdown

In long-running services, close listeners during graceful shutdown:

```go
func main() {
    // Register listeners
    orderListener := capitan.Hook(orderCreated, handleOrder)
    paymentListener := capitan.Hook(paymentProcessed, handlePayment)
    observer := capitan.Observe(logEvent)

    // Set up shutdown handling
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)

    // Run server...
    <-stop

    // Close listeners (stops new event delivery)
    orderListener.Close()
    paymentListener.Close()
    observer.Close()

    // Drain pending events
    capitan.Shutdown()
}
```
### Scope Listeners to Components

Close listeners when their owning component stops:

```go
type OrderProcessor struct {
    listener *capitan.Listener
}

func NewOrderProcessor() *OrderProcessor {
    p := &OrderProcessor{}
    p.listener = capitan.Hook(orderCreated, p.handle)
    return p
}

func (p *OrderProcessor) handle(ctx context.Context, e *capitan.Event) {
    // Process order...
}

func (p *OrderProcessor) Stop() {
    p.listener.Close()
}
```
### Use Isolated Instances for Modules

Separate subsystems can use separate instances:

```go
// Billing module
var billingCapitan = capitan.New(capitan.WithBufferSize(64))

// Analytics module
var analyticsCapitan = capitan.New(capitan.WithBufferSize(256))

func ShutdownAll() {
    billingCapitan.Shutdown()
    analyticsCapitan.Shutdown()
}
```
## Error Handling

### Always Check Field Extraction

Fields may be missing or carry the wrong type:

```go
capitan.Hook(orderCreated, func(ctx context.Context, e *capitan.Event) {
    orderID, ok := orderIDKey.From(e)
    if !ok {
        log.Printf("Missing order_id in %s event", e.Signal().Name())
        return
    }
    // Proceed with orderID...
})
```
### Use Severity Appropriately

Reserve error severity for actual failures:

```go
// Info: normal operations
capitan.Emit(ctx, orderCreated, fields...)

// Warn: unusual but handled
capitan.Warn(ctx, lowStock, fields...)

// Error: failures requiring attention
capitan.Error(ctx, paymentFailed, fields...)

// Debug: development only
capitan.Debug(ctx, queryExecuted, fields...)
```
### Configure Panic Handlers in Production

Never run production without panic visibility:

```go
capitan.Configure(
    capitan.WithPanicHandler(func(sig capitan.Signal, recovered any) {
        log.Printf("PANIC in %s: %v", sig.Name(), recovered)
        metrics.Increment("capitan_panics_total", "signal", sig.Name())

        // Optional: alert on-call
        if shouldAlert(sig) {
            alerting.Page("Listener panic", sig.Name(), recovered)
        }
    }),
)
```
## Performance

### Don't Block in Listeners

Listeners run in worker goroutines. Blocking in one listener delays every event for that signal:

```go
// Bad: blocking calls in a listener
capitan.Hook(signal, func(ctx context.Context, e *capitan.Event) {
    time.Sleep(5 * time.Second)                  // Blocks worker
    http.Post(url, "application/json", body)     // Blocking I/O
})

// Good: offload blocking work
capitan.Hook(signal, func(ctx context.Context, e *capitan.Event) {
    go func() {
        // Blocking work in a separate goroutine
        http.Post(url, "application/json", body)
    }()
})

// Better: use a bounded work queue
capitan.Hook(signal, func(ctx context.Context, e *capitan.Event) {
    workQueue <- extractWork(e)
})
```
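The work-queue approach can be sketched with the standard library alone. This is an illustration, not part of capitan: `Work`, `drain`, and the worker count are all assumptions standing in for whatever your listeners extract:

```go
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

// Work is whatever data a listener extracts from an event.
type Work struct{ OrderID string }

// drain starts a fixed pool of workers, feeds them the given items,
// and returns how many were processed.
func drain(items []Work, workers int) int {
    workQueue := make(chan Work, 64) // bounded buffer between listeners and workers
    var processed atomic.Int64
    var wg sync.WaitGroup

    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for range workQueue {
                // Slow, blocking work (HTTP calls, DB writes) happens here,
                // off the signal's worker goroutine.
                processed.Add(1)
            }
        }()
    }

    // In a real listener, this send is the only work done inline:
    //   workQueue <- extractWork(e)
    for _, w := range items {
        workQueue <- w
    }
    close(workQueue)
    wg.Wait()
    return int(processed.Load())
}

func main() {
    n := drain([]Work{{"a-1"}, {"a-2"}, {"a-3"}}, 4)
    fmt.Println("processed", n, "items") // processed 3 items
}
```

The bounded channel gives you back-pressure: if the workers fall behind, the listener's send blocks rather than growing an unbounded backlog.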
### Don't Hold Event References

Events are pooled and reused after delivery. Copy any data you need to retain:

```go
// Bad: holding an event reference
var lastEvent *capitan.Event
capitan.Hook(signal, func(ctx context.Context, e *capitan.Event) {
    lastEvent = e // Dangerous: event will be reused
})
```

```go
// Good: copy the needed data
var lastOrderID string
capitan.Hook(signal, func(ctx context.Context, e *capitan.Event) {
    if id, ok := orderIDKey.From(e); ok {
        lastOrderID = id
    }
})
```

```go
// Also good: use Clone() when you need the full event
var lastEvent *capitan.Event
capitan.Hook(signal, func(ctx context.Context, e *capitan.Event) {
    lastEvent = e.Clone() // Safe: independent copy
})
```
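Why a held reference goes stale can be shown with a hand-rolled free list. This is a deliberately simplified stand-in (capitan's real pooling is internal; `Event`, `acquire`, and `release` here are illustrative only):

```go
package main

import "fmt"

// Event stands in for a pooled object.
type Event struct{ OrderID string }

// pool is a tiny free list standing in for the library's event pool.
var pool = make(chan *Event, 1)

// acquire reuses a pooled event if one is available, overwriting its contents.
func acquire(orderID string) *Event {
    select {
    case e := <-pool:
        e.OrderID = orderID // reuse: old contents are clobbered
        return e
    default:
        return &Event{OrderID: orderID}
    }
}

// release returns an event to the pool for reuse.
func release(e *Event) {
    select {
    case pool <- e:
    default:
    }
}

func main() {
    first := acquire("order-1")
    held := first // Bad: keeping a reference past release
    release(first)

    acquire("order-2") // reuses the same backing object

    fmt.Println(held.OrderID) // prints "order-2", not "order-1"
}
```

The held pointer still points at valid memory, so nothing crashes — the data is just silently replaced, which is exactly why copying (or `Clone()`) is required.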
### Size Buffers Appropriately

Monitor queue depths and adjust:

```go
// Start with defaults
capitan.Configure(capitan.WithBufferSize(16))

// Monitor in production
go func() {
    for range time.Tick(time.Minute) {
        stats := capitan.Stats()
        for signal, depth := range stats.QueueDepths {
            if depth > 10 {
                log.Printf("High queue depth: %s = %d", signal.Name(), depth)
            }
        }
    }
}()

// Increase if queues consistently fill
capitan.Configure(capitan.WithBufferSize(128))
```
## Testing

### Use Sync Mode for Unit Tests

Sync mode eliminates timing dependencies:

```go
func TestOrderHandler(t *testing.T) {
    c := capitan.New(capitan.WithSyncMode())
    defer c.Shutdown()

    // Events process synchronously - no waiting needed
}
```
### Use Isolated Instances

Avoid cross-test contamination:

```go
func TestA(t *testing.T) {
    c := capitan.New(capitan.WithSyncMode())
    defer c.Shutdown()
    // ...
}

func TestB(t *testing.T) {
    c := capitan.New(capitan.WithSyncMode())
    defer c.Shutdown()
    // Separate instance, clean state
}
```
### Never Use the Default Instance in Tests

The default singleton persists across tests:

```go
// Bad: uses the shared singleton
func TestBad(t *testing.T) {
    capitan.Hook(signal, handler) // Leaks into other tests
}

// Good: isolated instance
func TestGood(t *testing.T) {
    c := capitan.New(capitan.WithSyncMode())
    c.Hook(signal, handler)
    defer c.Shutdown()
}
```