# Node.js Logging Patterns with Loguro
Getting logs flowing is the easy part. The difference between a codebase where you can debug a production issue in 30 seconds and one where you’re blind comes down to how you log. Here are the patterns that actually matter.
## The base helper
Start with a thin wrapper around `fetch`. No dependencies, no install step:
```js
// lib/logger.js
const ENDPOINT = 'https://ingest.logu.ro';
const API_KEY = process.env.LOGURO_API_KEY;
const SERVICE = process.env.SERVICE_NAME;
const NODE_ENV = process.env.NODE_ENV;

export function log(level, message, context = {}, traceId) {
  fetch(ENDPOINT, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json',
      // X-Request-Id is optional — Loguro generates one if not present, searchable via trace:"..."
      ...(traceId ? { 'X-Request-Id': traceId } : {}),
    },
    body: JSON.stringify({
      level,
      message,
      context: { service: SERVICE, env: NODE_ENV, ...context },
      timestamp: new Date().toISOString(),
    }),
  }).catch(() => {}); // fire-and-forget: a logging failure must never break the app
}

export const logger = {
  debug: (msg, ctx, traceId) => log('debug', msg, ctx, traceId),
  info: (msg, ctx, traceId) => log('info', msg, ctx, traceId),
  warn: (msg, ctx, traceId) => log('warning', msg, ctx, traceId),
  error: (msg, ctx, traceId) => log('error', msg, ctx, traceId),
  critical: (msg, ctx, traceId) => log('critical', msg, ctx, traceId),
};
```

Notice that `service` and `env` are injected from environment variables on every log, so every entry is filterable by `context.service:"payments"` with no extra work at call sites. The optional `traceId` argument is sent as the `X-Request-Id` header; Loguro stores it internally and makes it searchable via `trace:"<id>"` in the filter bar.
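One subtlety worth knowing: because the call-site context is spread last, it can deliberately override the injected `service` and `env` fields. A quick sketch of the merge semantics (`buildPayload` is a hypothetical stand-in for the body that `log()` builds; the env values are hard-coded here for illustration):

```js
// Hypothetical stand-in for the body built inside log() above
function buildPayload(level, message, context = {}) {
  const SERVICE = 'payments';    // from process.env.SERVICE_NAME in the real helper
  const NODE_ENV = 'production'; // from process.env.NODE_ENV
  return {
    level,
    message,
    // Base fields first, call-site context last: the call site wins on conflicts
    context: { service: SERVICE, env: NODE_ENV, ...context },
    timestamp: new Date().toISOString(),
  };
}

// Base fields are merged in automatically...
console.log(buildPayload('info', 'Checkout started', { userId: 42 }).context);
// ...and a call site can override them, e.g. when relaying logs for another service
console.log(buildPayload('info', 'relayed', { service: 'legacy-app' }).context.service); // → 'legacy-app'
```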
## Structure your context — always
The single highest-leverage habit: never log a string when you can log an object.
```js
// ❌ Hard to search, hard to aggregate
logger.error('Payment failed for user 42 with code card_declined');

// ✓ Every field is independently searchable and aggregatable
logger.error('Payment failed', {
  userId: 42,
  orderId: 'ord_9xk2',
  gateway: 'stripe',
  errorCode: 'card_declined',
  amount: 9900,
});
```

With the second pattern you can ask Loguro: how many unique users hit `card_declined` in the last 24 hours?

```
level:error context.errorCode:"card_declined" @last-24h --unique:context.userId
```

You can't do that with strings.
## HTTP middleware (Express / Fastify)
Log every request and response. This gives you full observability on your API without instrumenting individual handlers.
Express:

```js
import { randomUUID } from 'crypto';
import { logger } from './lib/logger.js';

app.use((req, res, next) => {
  const requestId = randomUUID();
  const start = Date.now();
  req.requestId = requestId;

  res.on('finish', () => {
    const level = res.statusCode >= 500 ? 'error'
      : res.statusCode >= 400 ? 'warning'
      : 'info';
    logger[level === 'warning' ? 'warn' : level]('HTTP request', {
      method: req.method,
      path: req.path,
      statusCode: res.statusCode,
      duration: Date.now() - start,
      userAgent: req.headers['user-agent'],
      ip: req.ip,
    }, requestId); // passed as X-Request-Id header → searchable via trace:"<id>"
  });

  next();
});
```

Now in Loguro you can find every slow request with `--slow:500`, every 5xx with `level:error context.statusCode:500`, group by path with `--top:context.path`, and trace a full request lifecycle with `trace:"<requestId>"`.
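The status-to-level mapping is duplicated across both framework middlewares, and because the logger method is named `warn` while the level string is `'warning'`, each one needs the bracket lookup. As a refactor sketch (these helpers are not part of the snippets above, just a tidying suggestion), the mapping can be pulled out:

```js
// Map an HTTP status code to a log level string (mirrors the ternary in the middleware)
function levelForStatus(statusCode) {
  if (statusCode >= 500) return 'error';
  if (statusCode >= 400) return 'warning';
  return 'info';
}

// Map a level string to the logger method name ('warning' → 'warn', all others match)
function methodForLevel(level) {
  return level === 'warning' ? 'warn' : level;
}
```

With these, the call site reduces to `logger[methodForLevel(levelForStatus(res.statusCode))]('HTTP request', { ... }, requestId)`.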
Fastify:

```js
fastify.addHook('onResponse', (request, reply, done) => {
  const level = reply.statusCode >= 500 ? 'error'
    : reply.statusCode >= 400 ? 'warning'
    : 'info';
  logger[level === 'warning' ? 'warn' : level]('HTTP request', {
    method: request.method,
    path: request.routeOptions.url,
    statusCode: reply.statusCode,
    duration: reply.elapsedTime,
  }, request.id); // passed as X-Request-Id header → searchable via trace:"<id>"
  done();
});
```

## Uncaught error capture
Surface crashes that would otherwise disappear into a PM2 log file or a Docker restart loop:
```js
process.on('uncaughtException', (err) => {
  logger.critical('Uncaught exception', {
    error: err.message,
    stack: err.stack,
    name: err.name,
  });
  // Give the fetch a moment to complete before exiting
  setTimeout(() => process.exit(1), 500);
});

process.on('unhandledRejection', (reason) => {
  logger.critical('Unhandled rejection', {
    reason: String(reason),
    stack: reason instanceof Error ? reason.stack : undefined,
  });
});
```

These are `critical` — they mean something crashed unexpectedly. Wire up a Loguro alert on `level:critical` and you'll know within seconds.
## Heartbeats for background jobs
Any long-running job or worker should send a heartbeat on a regular interval. If it stops arriving, that’s your signal the job is stuck or crashed — without waiting for user complaints.
```js
// Send a heartbeat every 60 seconds
setInterval(() => {
  log('heartbeat', 'invoice-worker alive', {
    queueDepth: queue.size(),
    processedSinceStart: counter,
  });
}, 60_000);
```

Then in Loguro, create an embed widget that monitors this heartbeat with a 2-minute timeout. If no heartbeat arrives, the widget flips to degraded and (optionally) fires a Slack or Discord notification.
```
--embed::status:create:invoice-worker
```

## What pros skip
A few patterns that feel helpful but cause problems at scale:
**Don’t put log level in context.** Use the `level` field — it’s already there. `context.logLevel: "error"` is redundant and pollutes your filters.
**Don’t log in tight loops without sampling.** A loop that runs 10,000 times per second will generate 600k logs per minute. Sample at the loop level or aggregate before logging:
```js
// ❌ Floods your quota
for (const item of items) {
  logger.debug('Processing item', { id: item.id });
}

// ✓ Log the batch, not the item
logger.info('Batch processed', { count: items.length, duration: Date.now() - start });
```

**Don’t put PII in context.** User emails, card numbers, SSNs — none of it belongs in logs. Log IDs, not values. If you need to correlate a complaint to a log, use `context.userId` to look it up.
**Don’t swallow errors silently.** If you catch an error and don’t log it, it never happened as far as your observability is concerned. At minimum, log at `warning`:
```js
try {
  await sendEmail(user);
} catch (err) {
  logger.warn('Email delivery failed', { userId: user.id, error: err.message });
}
```

## What’s next
- Set up Alerting so critical errors page you automatically
- Use `--slow` to find performance regressions without code changes
- Wire up Heartbeat Monitoring for your background workers