Serilog in .NET: Structured Logs for APIs and Workers
Introduction
Modern applications require consistent, searchable, and easily correlated logs, especially in APIs, Workers, and distributed environments. In this context, Serilog stands out in the .NET ecosystem by offering structured logging, integration with multiple destinations, and flexible configuration based on sinks, enrichers, and filters. This article presents the use of Serilog in ASP.NET Core applications and Worker services, showing how the structured approach improves observability, failure diagnosis, and event tracking in production.
Fundamentals of Structured Logs
Definition and Benefits
Structured logs are an evolution of traditional text logging. Instead of recording messages as loose text, they store each event with a template, named fields, and typed values. This makes searches, filters, aggregations, and correlation between events from different services easier, without relying on fragile regular expressions (CHUVAKIN; SCHMIDT; PHILLIPS, 2012).
Among the most important benefits are querying by properties, generating metrics from the events themselves, correlating with trace ids and correlation ids, and reducing the time spent on operational investigations. In larger environments, this improves both observability and governance of recorded data (BEYER et al., 2016).
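As a quick illustration, a Serilog call records a message template together with named, typed properties; the property names below are just examples, not a fixed schema:

```csharp
using Serilog;

// The event stores OrderId as a number, Country as a string, and
// Elapsed as a double, alongside the rendered text. A sink can then
// be queried with e.g. OrderId = 1042 instead of a regular expression.
Log.Information("Order {OrderId} shipped to {Country} in {Elapsed} ms",
    1042, "BR", 37.5);
```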
Context in .NET
In .NET, logging is already part of the platform infrastructure through the ILogger<T> abstraction and dependency injection. Serilog extends this model by allowing APIs and Workers to record events with more context, including request data, authenticated user, execution environment, and correlation identifiers.
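A common wiring sketch, assuming the Serilog.AspNetCore package: `UseSerilog` replaces the default provider, so existing `ILogger<T>` injections flow through the Serilog pipeline unchanged.

```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Read sinks, levels, and enrichers from appsettings.json and DI.
builder.Host.UseSerilog((context, services, configuration) => configuration
    .ReadFrom.Configuration(context.Configuration)
    .Enrich.FromLogContext());

var app = builder.Build();
app.Run();
```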
Serilog Architecture and Operation
Serilog Logging Pipeline
The Serilog pipeline revolves around three elements: Logger, Sink, and Enricher. The Logger receives the event, the Enrichers add context, and the Sinks send the result to destinations such as console, file, database, Seq, or Elasticsearch. This separation makes the configuration clearer and allows the log flow to be adapted to the system's needs (CHUVAKIN; SCHMIDT; PHILLIPS, 2012).
Multi-Sink Configuration Example
var logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .Enrich.WithMachineName()
    .Enrich.WithEnvironmentName()
    .Enrich.WithProperty("Application", "MyApiService")
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj} {Properties:j}{NewLine}{Exception}")
    .WriteTo.File("logs/app-.log",
        rollingInterval: RollingInterval.Day,
        retainedFileCountLimit: 30,
        shared: true)
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();
Integration with ASP.NET Core APIs
Logging Middleware
The integration of Serilog with the HTTP pipeline helps observe requests and responses in a standardized way. With UseSerilogRequestLogging, it becomes easier to log latency, status code, exceptions, and user or call information, which improves failure analysis in REST APIs, gRPC, and Minimal APIs.
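For reference, a minimal UseSerilogRequestLogging setup might look like the sketch below; the message template and the extra properties pushed through the diagnostic context are illustrative choices, not defaults:

```csharp
// Emits one summarized event per request instead of several
// framework-level events, enriched with custom properties.
app.UseSerilogRequestLogging(options =>
{
    options.MessageTemplate =
        "HTTP {RequestMethod} {RequestPath} responded {StatusCode} in {Elapsed:0.0000} ms";
    options.EnrichDiagnosticContext = (diagnosticContext, httpContext) =>
    {
        diagnosticContext.Set("ClientIp",
            httpContext.Connection.RemoteIpAddress?.ToString());
        diagnosticContext.Set("UserAgent",
            httpContext.Request.Headers.UserAgent.ToString());
    };
});
```

The custom middleware shown next covers the same ground with full control over correlation and timing.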
Middleware Example Using Serilog
public class SerilogRequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<SerilogRequestLoggingMiddleware> _logger;

    public SerilogRequestLoggingMiddleware(RequestDelegate next, ILogger<SerilogRequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        var correlationId = httpContext.Request.Headers["X-Correlation-ID"].FirstOrDefault()
            ?? Guid.NewGuid().ToString();

        using (LogContext.PushProperty("CorrelationId", correlationId))
        using (LogContext.PushProperty("ClientIp", httpContext.Connection.RemoteIpAddress?.ToString()))
        {
            var sw = Stopwatch.StartNew();
            try
            {
                await _next(httpContext);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Unhandled exception on {Method} {Path}",
                    httpContext.Request.Method, httpContext.Request.Path);
                throw;
            }
            finally
            {
                sw.Stop();
                _logger.LogInformation(
                    "HTTP {Method} {Path} responded {StatusCode} in {ElapsedMs:0.0000} ms, User: {User}",
                    httpContext.Request.Method,
                    httpContext.Request.Path,
                    httpContext.Response.StatusCode,
                    sw.Elapsed.TotalMilliseconds,
                    httpContext.User?.Identity?.Name ?? "anonymous");
            }
        }
    }
}
Best Practices for Structuring API Logs
- Use contextual properties such as request id, authenticated user, tenant, and source IP.
- Log both start and end events for each request, as well as unhandled exceptions, with structured stack traces.
- Adopt appropriate severity levels (Information for normal flows, Warning for recoverable conditions, and Error/Critical for failures).
- Propagate correlation via the X-Correlation-ID header for tracking across microservices, integrating it with .NET Activity.Current.
- Avoid logging complete payloads without masking; prefer reduced and secure schemas.
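One possible way to propagate the correlation header on outgoing calls is a DelegatingHandler attached to HttpClient; the handler name below is a hypothetical example:

```csharp
// Forwards a correlation id on every outgoing request, reusing the
// current trace id when an Activity is active.
public class CorrelationIdHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var correlationId = System.Diagnostics.Activity.Current?.TraceId.ToString()
            ?? Guid.NewGuid().ToString();
        request.Headers.TryAddWithoutValidation("X-Correlation-ID", correlationId);
        return base.SendAsync(request, cancellationToken);
    }
}

// Registration sketch:
// services.AddTransient<CorrelationIdHandler>();
// services.AddHttpClient("downstream").AddHttpMessageHandler<CorrelationIdHandler>();
```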
Applying Serilog in .NET Worker Services
Worker Services Contextualization
Worker-type services, typically implemented with IHostedService or BackgroundService, execute asynchronous tasks, queue consumption, and scheduled routines. In this scenario, structured logs are even more important because diagnostics almost always depend on what the system recorded. It is therefore worth logging start, end, duration, failures, retry attempts, and business context for each processed item (MAJORS; FONG-JONES; MIRANDA, 2022).
Complete Example with Background Worker and Business Event Logs
public class QueueProcessorService : BackgroundService
{
    private readonly IServiceProvider _serviceProvider;
    private readonly ILogger<QueueProcessorService> _logger;

    public QueueProcessorService(IServiceProvider serviceProvider, ILogger<QueueProcessorService> logger)
    {
        _serviceProvider = serviceProvider;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogInformation("Queue Processor Service started at {StartTime}", DateTime.UtcNow);

        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                using var scope = _serviceProvider.CreateScope();
                var queue = scope.ServiceProvider.GetRequiredService<IMessageQueue>();
                var message = await queue.DequeueMessageAsync(stoppingToken);

                if (message is null)
                {
                    await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
                    continue;
                }

                var correlationId = message.CorrelationId ?? Guid.NewGuid().ToString();

                using (LogContext.PushProperty("CorrelationId", correlationId))
                using (LogContext.PushProperty("MessageId", message.Id))
                {
                    var sw = Stopwatch.StartNew();
                    _logger.LogInformation("Processing message {MessageId}", message.Id);
                    // ... business rules
                    sw.Stop();
                    _logger.LogInformation(
                        "Message {MessageId} processed in {ElapsedMs:0.00} ms",
                        message.Id, sw.Elapsed.TotalMilliseconds);
                }
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                _logger.LogInformation("Queue Processor Service stopping gracefully");
                break;
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error processing message at {Time}", DateTime.UtcNow);
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
            }
        }
    }
}
Enrichers and Context Customization
Enriching Logs with Dynamic Properties
Serilog allows each event to be automatically enriched with additional properties. Among the most useful data are machine name, environment, application version, tenant, transaction identifier, and execution metrics. This avoids code repetition and keeps log context more consistent (CHUVAKIN; SCHMIDT; PHILLIPS, 2012).
Creating a Custom Enricher: Capturing Host Name
public class HostNameEnricher : ILogEventEnricher
{
    private readonly LogEventProperty _property;

    public HostNameEnricher()
    {
        // The machine name does not change at runtime, so the property
        // can be built once and reused for every event.
        _property = new LogEventProperty("HostName", new ScalarValue(Environment.MachineName));
    }

    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        logEvent.AddPropertyIfAbsent(_property);
    }
}

// Usage in configuration:
var logger = new LoggerConfiguration()
    .Enrich.With<HostNameEnricher>()
    .WriteTo.Console()
    .CreateLogger();
Correlation ID Capture by Enricher
A good practice is to define a correlation id per request or per processed message. In APIs, this is usually done in middleware; in Workers, in the local context of the consumed item. With this identifier present in all sinks, tracking across services becomes much simpler (SIGELMAN et al., 2010).
Log Persistence: File, Database and Search Engines
File Persistence with Rolling Logs
When persisting logs to file, it is important to define rotation, size limit, and retention. These parameters prevent uncontrolled data growth and keep the operational history available long enough for analysis.
.WriteTo.File("logs/api-events-.log",
    rollingInterval: RollingInterval.Day,
    fileSizeLimitBytes: 10_485_760, // 10 MB
    rollOnFileSizeLimit: true,
    retainedFileCountLimit: 30,
    buffered: true,
    flushToDiskInterval: TimeSpan.FromSeconds(2))
Integration with SQL Server and Advanced Queries
Storing logs in relational databases can be useful when there is a need for auditing, operational reports, or integration with BI tools. In the case of SQL Server, the sink allows you to map important properties to specific columns and keep event queries more organized.
.WriteTo.MSSqlServer(
    connectionString: Configuration.GetConnectionString("LogsDb"),
    sinkOptions: new MSSqlServerSinkOptions
    {
        TableName = "AppLogs",
        AutoCreateSqlTable = true,
        BatchPostingLimit = 100,
        BatchPeriod = TimeSpan.FromSeconds(5)
    },
    columnOptions: new ColumnOptions
    {
        AdditionalColumns = new List<SqlColumn>
        {
            new SqlColumn { ColumnName = "Application", DataType = SqlDbType.NVarChar, DataLength = 100 },
            new SqlColumn { ColumnName = "RequestId", DataType = SqlDbType.UniqueIdentifier },
            new SqlColumn { ColumnName = "Severity", DataType = SqlDbType.NVarChar, DataLength = 20 }
        }
    })
Elasticsearch, Grafana and Kibana: Contemporary Observability
Integrating structured logs with the Elastic stack is a common practice in distributed environments. This way, events become queryable in real time, allowing the creation of dashboards, searches by properties, and automatic alerts based on operational patterns.
.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    AutoRegisterTemplate = true,
    AutoRegisterTemplateVersion = AutoRegisterTemplateVersion.ESv7,
    IndexFormat = "apilogs-{0:yyyy.MM.dd}",
    CustomFormatter = new ExceptionAsObjectJsonFormatter(),
    NumberOfShards = 2,
    NumberOfReplicas = 1,
    EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog
        | EmitEventFailureHandling.RaiseCallback,
    FailureCallback = e => Console.WriteLine($"Unable to submit log: {e.MessageTemplate}")
})
Logs and Observability in Distributed Systems
Tracing, Metrics, and Correlation
In distributed architectures, logging alone is not enough. Ideally, you should combine logs, metrics, and traces so that an incident can be analyzed from different perspectives. When logs include tracing identifiers, it becomes easier to reconstruct the path of a request across multiple services.
Resilience in Log Transmission
The reliability of the pipeline also depends on how sinks deliver events. Buffers, asynchronous writing, retry, and local retention help reduce losses when a remote destination becomes unavailable. In production, this attention is part of the system's own reliability strategy.
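As a sketch of these ideas: Serilog.Sinks.Async moves sink writes off the hot path, and the Seq sink's durable buffer keeps events on local disk until the endpoint is reachable again (file paths here are illustrative):

```csharp
using Serilog;

var logger = new LoggerConfiguration()
    // Asynchronous wrapper: file writes happen on a background worker.
    .WriteTo.Async(a => a.File("logs/app-.log", rollingInterval: RollingInterval.Day))
    // Durable buffer: events survive process restarts and outages of Seq.
    .WriteTo.Seq("http://localhost:5341",
        bufferBaseFilename: "logs/seq-buffer")
    .CreateLogger();

// On shutdown, flush pending events so buffered writes are not lost.
Log.CloseAndFlush();
```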
Security and Compliance in Logging
Handling Sensitive Data
Logs can expose sensitive data if there is no caution in modeling the events. Passwords, tokens, personal documents, and financial data should be masked or removed before being persisted. In Serilog, this can be addressed with filters and destructuring policies, reducing security risks and regulatory issues.
Filtering Sensitive Properties via Serilog
public class SensitiveDataDestructuringPolicy : IDestructuringPolicy
{
    private static readonly HashSet<string> SensitiveFields = new(StringComparer.OrdinalIgnoreCase)
    {
        "Password", "Token", "Authorization", "CreditCard", "Cpf", "Ssn"
    };

    public bool TryDestructure(object value, ILogEventPropertyValueFactory factory, out LogEventPropertyValue result)
    {
        // Let Serilog handle nulls, strings, and primitives normally.
        if (value is null || value is string || value.GetType().IsPrimitive)
        {
            result = null;
            return false;
        }

        var properties = value.GetType().GetProperties()
            .Select(p => new LogEventProperty(
                p.Name,
                SensitiveFields.Contains(p.Name)
                    ? new ScalarValue("***")
                    : factory.CreatePropertyValue(p.GetValue(value), true)));

        result = new StructureValue(properties);
        return true;
    }
}

// Configuration:
var logger = new LoggerConfiguration()
    .Destructure.With<SensitiveDataDestructuringPolicy>()
    .WriteTo.Console()
    .CreateLogger();
Compliance, Auditing, and LGPD
Logging infrastructures need to consider access control, anonymization, retention, and record disposal. In practical terms, this means balancing traceability and legal requirements, such as those provided by the LGPD, without turning logs into an unnecessary source of data exposure.
Testing and Validating Logging Strategies
Automated Tests for Logs
Logging itself can also be tested. In critical scenarios, it is worth verifying that the system emits the expected events, at the correct level and with the most important properties populated. This helps avoid incomplete observability in production.
using (TestCorrelator.CreateContext())
{
    var logger = new LoggerConfiguration()
        .WriteTo.TestCorrelator()
        .CreateLogger();

    logger.Information("Testing event with Id {EventId}", 42);

    var logEvents = TestCorrelator.GetLogEventsFromCurrentContext().ToList();
    Assert.Contains(logEvents, e =>
        e.MessageTemplate.Text.Contains("Testing event") &&
        e.Properties["EventId"].ToString() == "42");
}
Monitoring, Fault Diagnosis and Alerts
Dynamic Dashboards and Alerts
When logs are properly structured, tools such as Kibana, Grafana, and Application Insights can turn events into useful dashboards and alerts, allowing earlier detection of error spikes, slowdowns, and unusual behavior.
Real Case with Alerts in Elastic/Kibana
// Example: alert when the number of errors exceeds 100 in 15 minutes
// (note the range filter, which limits the count to the last window)
PUT _watcher/watch/exception_alert
{
  "trigger": { "schedule": { "interval": "15m" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "apilogs-*" ],
        "body": {
          "query": {
            "bool": {
              "must": { "match": { "Level": "Error" } },
              "filter": { "range": { "@timestamp": { "gte": "now-15m" } } }
            }
          },
          "size": 0,
          "aggs": {
            "errors_count": { "value_count": { "field": "Exception" } }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.aggregations.errors_count.value": { "gt": 100 }
    }
  },
  "actions": {
    "notify-slack": {
      "webhook": {
        "method": "POST",
        "url": "https://hooks.slack.com/services/xxxx/yyyy",
        "body": "{\"text\": \"Number of exceptions in 15min exceeded 100!\"}"
      }
    }
  }
}
Integration with OpenTelemetry and Distributed Tracing
Propagating Traces and Correlation IDs
Integration with OpenTelemetry brings logs, metrics, and traces together in the same observability flow. By enriching events with TraceId and SpanId, the application provides a more complete view of the path taken by each request between services.
Example of Advanced Logging with Trace Info Enrichment
public class OpenTelemetryEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        var span = System.Diagnostics.Activity.Current;
        if (span is null) return;

        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("TraceId", span.TraceId.ToString()));
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("SpanId", span.SpanId.ToString()));
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("ParentSpanId", span.ParentSpanId.ToString()));
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("OperationName", span.OperationName));
    }
}

// Configuration combining Serilog + OpenTelemetry
var logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .Enrich.With<OpenTelemetryEnricher>()
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] [{TraceId}/{SpanId}] {Message:lj}{NewLine}{Exception}")
    .WriteTo.OpenTelemetry(options =>
    {
        options.Endpoint = "http://otel-collector:4317";
        options.Protocol = OtlpProtocol.Grpc;
        options.ResourceAttributes = new Dictionary<string, object>
        {
            ["service.name"] = "MyApiService",
            ["service.version"] = "1.0.0"
        };
    })
    .CreateLogger();
Final Considerations
Serilog is a solid choice for making .NET applications more observable. With structured logs, contextual enrichment, proper persistence, and integration with tracing, APIs and Workers gain greater operational predictability. In practice, this means investigating incidents more quickly, reducing blind spots, and turning logging into a real part of the system's reliability strategy.
References
- CHUVAKIN, Anton; SCHMIDT, Kevin; PHILLIPS, Chris. Logging and log management: the authoritative guide to understanding the concepts surrounding logging and log management. Newnes, 2012.
- MAJORS, Charity; FONG-JONES, Liz; MIRANDA, George. Observability engineering. O'Reilly Media, 2022.
- BEYER, Betsy et al. Site reliability engineering: how Google runs production systems. O'Reilly Media, 2016.
- SIGELMAN, Benjamin H. et al. Dapper, a large-scale distributed systems tracing infrastructure. 2010.