
Serilog in .NET: Structured Logs for APIs and Workers

Introduction

Modern applications require logs that are consistent, searchable, and easy to correlate, especially in APIs, Worker services, and distributed environments. In this context, Serilog stands out in the .NET ecosystem by offering structured logs, integration with multiple destinations, and flexible configuration based on sinks, enrichers, and filters. This article presents the use of Serilog in ASP.NET Core applications and Worker services, showing how the structured approach improves observability, failure diagnosis, and event tracing in production (CHUVAKIN; SCHMIDT; PHILLIPS, 2012; MAJORS; FONG-JONES; MIRANDA, 2022).

Structured Log Fundamentals

Definition and Benefits

Structured logs are an evolution of traditional text logging. Instead of recording messages as plain text, they store each event with a template, named fields, and typed values. This makes it easier to search, filter, aggregate, and correlate events from different services without relying on fragile regular expressions (CHUVAKIN; SCHMIDT; PHILLIPS, 2012).

Among the most important benefits are querying by properties, generating metrics from the events themselves, correlation with trace ids and correlation ids, and reducing the time spent on operational investigations. In larger environments, this improves both observability and governance of the recorded data (MAJORS; FONG-JONES; MIRANDA, 2022; BEYER et al., 2016).
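To make the contrast concrete, the following sketch compares a flat text message with a Serilog message template; the property names OrderId, Elapsed, and Payment are illustrative:

```csharp
using Serilog;

// Bootstrap a minimal logger for the example.
Log.Logger = new LoggerConfiguration().WriteTo.Console().CreateLogger();

// Flat text: the values are frozen into a string and must be regex-parsed later.
// Log.Information("Order 1042 processed in 87 ms");

// Structured: the template is stored with named, typed properties, so a sink
// such as Seq can filter on Elapsed > 100 or group by OrderId directly.
Log.Information("Order {OrderId} processed in {Elapsed} ms", 1042, 87);

// The @ operator destructures the object into a structured property
// instead of calling ToString() on it.
Log.Information("Payment received {@Payment}", new { OrderId = 1042, Amount = 59.90m });

Log.CloseAndFlush();
```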

Contextualization in .NET

In .NET, logging is already part of the platform infrastructure through the ILogger<T> abstraction and dependency injection. Serilog expands this model by allowing APIs and Workers to record events with more context, including request data, authenticated user, runtime environment, and correlation identifiers.
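As a minimal sketch of that integration (assuming the Serilog.AspNetCore and Serilog.Settings.Configuration packages and .NET 6+ minimal hosting), the generic host can be redirected to Serilog so existing ILogger<T> injections flow through its pipeline:

```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Replace the default logging providers with Serilog; classes that inject
// ILogger<T> keep working unchanged, but their events now flow through
// Serilog's sinks and enrichers.
builder.Host.UseSerilog((context, services, configuration) => configuration
    .ReadFrom.Configuration(context.Configuration)
    .Enrich.FromLogContext()
    .WriteTo.Console());

var app = builder.Build();
app.MapGet("/", () => "ok");
app.Run();
```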

Architecture and Operation of Serilog

Serilog Logging Pipeline

The Serilog pipeline revolves around three elements: Logger, Sink, and Enricher. The Logger receives the event, the Enrichers add context, and the Sinks send the result to destinations such as console, file, database, Seq, or Elasticsearch. This separation makes configuration clearer and allows you to adapt the log flow to the system's needs (CHUVAKIN; SCHMIDT; PHILLIPS, 2012).

Multi-Sink Configuration Example

var logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .MinimumLevel.Override("Microsoft.AspNetCore", LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .Enrich.WithMachineName()
    .Enrich.WithEnvironmentName()
    .Enrich.WithProperty("Application", "MyApiService")
    .WriteTo.Console(outputTemplate: "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj} {Properties:j}{NewLine}{Exception}")
    .WriteTo.File("logs/app-.log",
        rollingInterval: RollingInterval.Day,
        retainedFileCountLimit: 30,
        shared: true)
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();

Integration with ASP.NET Core APIs

Logging Middleware

The integration of Serilog with the HTTP pipeline helps observe requests and responses in a standardized way. With UseSerilogRequestLogging, it becomes easier to log latency, status code, exceptions, and user or call information, which improves failure analysis in REST APIs, gRPC, and Minimal APIs.
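A typical registration, sketched under the assumption of the Serilog.AspNetCore package, emits one summary event per request and attaches extra properties to it:

```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);
builder.Host.UseSerilog(); // assumes Log.Logger was configured at startup

var app = builder.Build();

// One summary event per request: method, path, status code, and elapsed time.
app.UseSerilogRequestLogging(options =>
{
    options.MessageTemplate =
        "HTTP {RequestMethod} {RequestPath} responded {StatusCode} in {Elapsed:0.0000} ms";

    // Attach extra properties to the completion event.
    options.EnrichDiagnosticContext = (diagnosticContext, httpContext) =>
    {
        diagnosticContext.Set("ClientIp", httpContext.Connection.RemoteIpAddress?.ToString());
        diagnosticContext.Set("UserAgent", httpContext.Request.Headers.UserAgent.ToString());
    };
});

app.Run();
```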

Example of Middleware Using Serilog

public class SerilogRequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<SerilogRequestLoggingMiddleware> _logger;

    public SerilogRequestLoggingMiddleware(RequestDelegate next, ILogger<SerilogRequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task Invoke(HttpContext httpContext)
    {
        var correlationId = httpContext.Request.Headers["X-Correlation-ID"].FirstOrDefault()
                            ?? Guid.NewGuid().ToString();

        using (LogContext.PushProperty("CorrelationId", correlationId))
        using (LogContext.PushProperty("ClientIp", httpContext.Connection.RemoteIpAddress?.ToString()))
        {
            var sw = Stopwatch.StartNew();
            try
            {
                await _next(httpContext);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Unhandled exception on {Method} {Path}",
                    httpContext.Request.Method, httpContext.Request.Path);
                throw;
            }
            finally
            {
                sw.Stop();
                _logger.LogInformation(
                    "HTTP {Method} {Path} responded {StatusCode} in {ElapsedMs:0.0000} ms, User: {User}",
                    httpContext.Request.Method,
                    httpContext.Request.Path,
                    httpContext.Response.StatusCode,
                    sw.Elapsed.TotalMilliseconds,
                    httpContext.User?.Identity?.Name ?? "anonymous");
            }
        }
    }
}
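The middleware above is then registered early in the pipeline, before routing, so that downstream components log with the pushed CorrelationId and ClientIp:

```csharp
// In Program.cs, before routing and endpoint execution.
app.UseMiddleware<SerilogRequestLoggingMiddleware>();
```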

Best Practices for Structuring API Logs

  • Use contextual properties such as request id, authenticated user, tenant, and originating IP.
  • Log both start and end events of a request, as well as unhandled exceptions with structured stack traces.
  • Adopt proper severity levels (Information for normal flows, Warning for recoverable conditions, and Error/Critical for failures).
  • Propagate correlation through the X-Correlation-ID header for tracking across microservices, integrating it with Activity.Current in .NET (SIGELMAN et al., 2010).
  • Avoid logging full payloads without masking — prefer reduced and secure schemas.
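One way to implement the propagation item is a DelegatingHandler that reads Activity.Current, as sketched below; the handler name and the choice of RootId as the header value are illustrative:

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Forwards the current trace root id as X-Correlation-ID on outgoing calls,
// so downstream services can push the same value into their own log context.
public class CorrelationIdHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var rootId = System.Diagnostics.Activity.Current?.RootId;
        if (rootId is not null && !request.Headers.Contains("X-Correlation-ID"))
        {
            request.Headers.TryAddWithoutValidation("X-Correlation-ID", rootId);
        }
        return base.SendAsync(request, cancellationToken);
    }
}
```

Registered with `services.AddTransient<CorrelationIdHandler>()` and attached via `AddHttpClient(...).AddHttpMessageHandler<CorrelationIdHandler>()`, every typed or named client then inherits the propagation.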

Applying Serilog in .NET Worker Services

Contextualization of Worker Services

Worker services, usually implemented with IHostedService or BackgroundService, execute asynchronous tasks, queue consumption, and scheduled routines. In this scenario, structured logs are even more important because diagnostics almost always depend on what was recorded by the system. Therefore, it is worth recording start, end, duration, failures, retry attempts, and business context for each processing cycle (MAJORS; FONG-JONES; MIRANDA, 2022).

Complete Example with Background Worker and Business Event Logging

public class QueueProcessorService : BackgroundService
{
    private readonly IServiceProvider _serviceProvider;
    private readonly ILogger<QueueProcessorService> _logger;

    public QueueProcessorService(IServiceProvider serviceProvider, ILogger<QueueProcessorService> logger)
    {
        _serviceProvider = serviceProvider;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogInformation("Queue Processor Service started at {StartTime}", DateTime.UtcNow);

        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                using var scope = _serviceProvider.CreateScope();
                var queue = scope.ServiceProvider.GetRequiredService<IMessageQueue>();
                var message = await queue.DequeueMessageAsync(stoppingToken);

                if (message is null)
                {
                    await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
                    continue;
                }

                var correlationId = message.CorrelationId ?? Guid.NewGuid().ToString();
                using (LogContext.PushProperty("CorrelationId", correlationId))
                using (LogContext.PushProperty("MessageId", message.Id))
                {
                    var sw = Stopwatch.StartNew();
                    _logger.LogInformation("Processing message {MessageId}", message.Id);

                    // ... business rules

                    sw.Stop();
                    _logger.LogInformation(
                        "Message {MessageId} processed in {ElapsedMs:0.00} ms",
                        message.Id, sw.Elapsed.TotalMilliseconds);
                }
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                _logger.LogInformation("Queue Processor Service stopping gracefully");
                break;
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error processing message at {Time}", DateTime.UtcNow);
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
            }
        }
    }
}

Enrichers and Context Customization

Enriching Logs with Dynamic Properties

Serilog allows you to enrich each event with additional properties automatically. Among the most useful data are machine name, environment, application version, tenant, transaction identifier, and execution metrics. This avoids repetition in the code and keeps the log context more consistent (CHUVAKIN; SCHMIDT; PHILLIPS, 2012).

Creating a Custom Enricher: Capturing Host Name

public class HostNameEnricher : ILogEventEnricher
{
    private readonly LogEventProperty _property;

    public HostNameEnricher()
    {
        _property = new LogEventProperty("HostName", new ScalarValue(Environment.MachineName));
    }

    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        logEvent.AddPropertyIfAbsent(_property);
    }
}

// Usage in configuration:
var logger = new LoggerConfiguration()
    .Enrich.With<HostNameEnricher>()
    .WriteTo.Console()
    .CreateLogger();

Capturing the Correlation ID with an Enricher

A good practice is to define a correlation id per request or per processed message. In APIs, this is usually done in middleware; in Workers, in the local context of the consumed item. With this identifier present in all sinks, tracing between services becomes much simpler (SIGELMAN et al., 2010).
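A sketch of such an enricher, which falls back to the current Activity's root id when no CorrelationId was pushed explicitly:

```csharp
using Serilog.Core;
using Serilog.Events;

public class CorrelationIdEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        var rootId = System.Diagnostics.Activity.Current?.RootId;
        if (rootId is null) return;

        // AddPropertyIfAbsent keeps any CorrelationId already pushed via LogContext.
        logEvent.AddPropertyIfAbsent(
            propertyFactory.CreateProperty("CorrelationId", rootId));
    }
}
```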

Log Persistence: File, Database, and Search Engines

File Persistence with Rolling Logs

When persisting logs to a file, it is important to set up rotation, size limit, and retention. These parameters prevent uncontrolled growth of data volume and ensure the operational history remains available long enough for analysis.

.WriteTo.File("logs/api-events-.log",
    rollingInterval: RollingInterval.Day,
    fileSizeLimitBytes: 10_485_760, // 10 MB
    rollOnFileSizeLimit: true,
    retainedFileCountLimit: 30,
    buffered: true,
    flushToDiskInterval: TimeSpan.FromSeconds(2))

Integration with SQL Server and Advanced Queries

Storing logs in relational databases can be useful when there is a need for auditing, operational reports, or integration with BI tools. In the case of SQL Server, the sink allows mapping important properties to specific columns and keeps event querying more organized.

.WriteTo.MSSqlServer(
    connectionString: Configuration.GetConnectionString("LogsDb"),
    sinkOptions: new MSSqlServerSinkOptions
    {
        TableName = "AppLogs",
        AutoCreateSqlTable = true,
        BatchPostingLimit = 100,
        BatchPeriod = TimeSpan.FromSeconds(5)
    },
    columnOptions: new ColumnOptions
    {
        AdditionalColumns = new List<SqlColumn>
        {
            new SqlColumn { ColumnName = "Application", DataType = SqlDbType.NVarChar, DataLength = 100 },
            new SqlColumn { ColumnName = "RequestId",   DataType = SqlDbType.UniqueIdentifier },
            new SqlColumn { ColumnName = "Severity",    DataType = SqlDbType.NVarChar, DataLength = 20 }
        }
    })

Elasticsearch, Grafana and Kibana: Contemporary Observability

Integrating structured logs with the Elastic stack is a common practice in distributed environments. This makes events queryable in real time, allowing dashboards to be created, searches by properties, and automated alerts based on operational patterns (MAJORS; FONG-JONES; MIRANDA, 2022).

.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    AutoRegisterTemplate = true,
    AutoRegisterTemplateVersion = AutoRegisterTemplateVersion.ESv7,
    IndexFormat = "apilogs-{0:yyyy.MM.dd}",
    CustomFormatter = new ExceptionAsObjectJsonFormatter(),
    NumberOfShards = 2,
    NumberOfReplicas = 1,
    EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog
                       | EmitEventFailureHandling.RaiseCallback,
    FailureCallback = e => Console.WriteLine($"Unable to submit log: {e.MessageTemplate}")
})

Logs and Observability in Distributed Systems

Tracing, Metrics, and Correlation

In distributed architectures, logging alone is not enough. Ideally, logs, metrics, and traces should be combined so that an incident can be analyzed from different perspectives. When logs include tracing identifiers, it becomes easier to reconstruct the path of a request across multiple services (MAJORS; FONG-JONES; MIRANDA, 2022).

Resilience in Log Transmission

The reliability of the pipeline also depends on how sinks deliver the events. Buffers, asynchronous writing, retries, and local retention help to reduce loss when a remote destination is unavailable. In production, this precaution is part of the system's own reliability strategy (BEYER et al., 2016).
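One concrete measure, assuming the Serilog.Sinks.Async package, is to move sink I/O to a background worker with a bounded queue and to surface delivery failures through Serilog's SelfLog:

```csharp
using Serilog;

var logger = new LoggerConfiguration()
    // File writes happen on a background thread, so a slow disk or a
    // blocked remote destination no longer stalls request threads.
    .WriteTo.Async(a => a.File("logs/app-.log", rollingInterval: RollingInterval.Day),
        bufferSize: 10_000,      // events queued before the oldest are dropped
        blockWhenFull: false)    // prefer dropping logs over blocking the app
    .CreateLogger();

// Serilog reports its own internal failures here, which makes delivery
// problems with remote sinks visible instead of silent.
Serilog.Debugging.SelfLog.Enable(Console.Error);
```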

Security and Compliance in Logging

Handling Sensitive Data

Logs can expose sensitive data if care is not taken in the modeling of events. Passwords, tokens, personal documents, and financial data should be masked or removed before being persisted. In Serilog, this can be handled with filters and destructuring policies, reducing security risks and regulatory issues (CHUVAKIN; SCHMIDT; PHILLIPS, 2012).

Filtering Sensitive Properties with Serilog

public class SensitiveDataDestructuringPolicy : IDestructuringPolicy
{
    private static readonly HashSet<string> SensitiveFields = new(StringComparer.OrdinalIgnoreCase)
    {
        "Password", "Token", "Authorization", "CreditCard", "Cpf", "Ssn"
    };

    public bool TryDestructure(object value, ILogEventPropertyValueFactory factory, out LogEventPropertyValue result)
    {
        // Let Serilog handle nulls, strings, and primitives normally;
        // destructuring them into StructureValue would be incorrect.
        if (value is null || value is string || value.GetType().IsPrimitive)
        {
            result = null;
            return false;
        }

        var properties = value.GetType().GetProperties()
            .Select(p => new LogEventProperty(
                p.Name,
                SensitiveFields.Contains(p.Name)
                    ? new ScalarValue("***")
                    : factory.CreatePropertyValue(p.GetValue(value), true)));

        result = new StructureValue(properties);
        return true;
    }
}

// Configuration:
var logger = new LoggerConfiguration()
    .Destructure.With<SensitiveDataDestructuringPolicy>()
    .WriteTo.Console()
    .CreateLogger();

Compliance, Auditing, and LGPD

Logging infrastructures need to consider access control, anonymization, retention, and disposal of records. In practical terms, this means balancing traceability and legal requirements, such as those stipulated by the LGPD (Brazil's general data protection law), without turning logs into a source of unnecessary data exposure.

Testing and Validating Logging Strategies

Automated Tests for Logs

It is also possible to test logging. In critical scenarios, it is worth verifying if the system is emitting the expected events, with the correct level and the most important properties filled. This helps avoid incomplete observability in production.

using (TestCorrelator.CreateContext())
{
    var logger = new LoggerConfiguration()
        .WriteTo.TestCorrelator()
        .CreateLogger();

    logger.Information("Testing event with Id {EventId}", 42);

    var logEvents = TestCorrelator.GetLogEventsFromCurrentContext().ToList();
    Assert.Contains(logEvents, e =>
        e.MessageTemplate.Text.Contains("Testing event") &&
        e.Properties["EventId"].ToString() == "42");
}

Monitoring, Failure Diagnosis, and Alerts

Dynamic Dashboards and Alerts

When logs are properly structured, tools such as Kibana, Grafana, and Application Insights can turn events into useful dashboards and alerts. This allows for earlier detection of error spikes, slowdowns, and abnormal behaviors (MAJORS; FONG-JONES; MIRANDA, 2022).

Real Case with Alerts in Elastic/Kibana

// Example: alert for an abnormal number of exceptions in 15 minutes
PUT _watcher/watch/exception_alert
{
  "trigger": { "schedule": { "interval": "15m" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "apilogs-*" ],
        "body": {
          "query": { "match": { "Level": "Error" } },
          "size": 0,
          "aggs": {
            "errors_count": { "value_count": { "field": "Exception" } }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.aggregations.errors_count.value": { "gt": 100 }
    }
  },
  "actions": {
    "notify-slack": {
      "webhook": {
        "method": "POST",
        "url": "https://hooks.slack.com/services/xxxx/yyyy",
        "body": "{\"text\": \"Number of exceptions in 15min exceeded 100!\"}"
      }
    }
  }
}

Integration with OpenTelemetry and Distributed Tracing

Propagating Traces and Correlation IDs

The integration with OpenTelemetry brings logs, metrics, and traces together in the same observability flow. By enriching events with TraceId and SpanId, the application offers a more complete view of the journey taken by each request between services (MAJORS; FONG-JONES; MIRANDA, 2022).

Advanced Logging Example with Trace Info Enrichment

public class OpenTelemetryEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        var span = System.Diagnostics.Activity.Current;
        if (span is null) return;

        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("TraceId", span.TraceId.ToString()));
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("SpanId", span.SpanId.ToString()));
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("ParentSpanId", span.ParentSpanId.ToString()));
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty("OperationName", span.OperationName));
    }
}

// Configuration combining Serilog + OpenTelemetry
var logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .Enrich.With<OpenTelemetryEnricher>()
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] [{TraceId}/{SpanId}] {Message:lj}{NewLine}{Exception}")
    .WriteTo.OpenTelemetry(options =>
    {
        options.Endpoint = "http://otel-collector:4317";
        options.Protocol = OtlpProtocol.Grpc;
        options.ResourceAttributes = new Dictionary<string, object>
        {
            ["service.name"] = "MyApiService",
            ["service.version"] = "1.0.0"
        };
    })
    .CreateLogger();

Final Considerations

Serilog is a solid choice for making .NET applications more observable. With structured logs, contextual enrichment, proper persistence, and integration with tracing, APIs and Workers gain greater operational predictability. In practice, this means investigating incidents more quickly, reducing blind spots, and turning logging into a real part of the system's reliability strategy.

References

  • CHUVAKIN, Anton; SCHMIDT, Kevin; PHILLIPS, Chris. Logging and log management: the authoritative guide to understanding the concepts surrounding logging and log management. Newnes, 2012.
  • MAJORS, Charity; FONG-JONES, Liz; MIRANDA, George. Observability engineering. O'Reilly Media, 2022.
  • BEYER, Betsy et al. Site reliability engineering: how Google runs production systems. O'Reilly Media, 2016.
  • SIGELMAN, Benjamin H. et al. Dapper, a large-scale distributed systems tracing infrastructure. 2010.