Learn about sending logs to VMware Aria Operations for Applications (formerly known as Tanzu Observability by Wavefront).

You can send logs to the Wavefront proxy from your log shipper or directly from your application. The Wavefront proxy sends the log data to our service.

Diagram: data flows from the log shipper to the Wavefront proxy, and from the proxy to the Wavefront instance.

Install the Wavefront Proxy

Our logging solution currently requires a Wavefront proxy and does not support direct ingestion. The Wavefront proxy accepts logs as JSON array or JSON lines payloads over HTTP or HTTPS and forwards them to the service.
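For example, assuming the proxy listens for logs on the default port 2878, a JSON-array payload can be posted to the proxy along these lines (a sketch; the `message` and `source` attribute values shown are illustrative):

```shell
curl -X POST "http://<proxy url>:2878/logs?f=logs_json_arr" \
  -H "Content-Type: application/json" \
  -d '[{"message": "service started", "source": "app01"}]'
```

For JSON lines payloads, use the `/logs?f=logs_json_lines` endpoint instead.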

Proxy System Requirements:
  • 2 CPUs
  • 4 GB memory
  • Additional proxy configuration settings:
    - name: JAVA_HEAP_USAGE
      value: 2G
    - name: JVM_USE_CONTAINER_OPTS
      value: "false"
Proxy Kubernetes Requirements:
  • Request resources: 1 CPU and 2 GB memory
  • Limit resources: 2 CPUs and 4 GB memory
  • 1 GB heap memory
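Expressed as Kubernetes container settings, the requirements above correspond to a manifest fragment along these lines (a hedged sketch of the relevant fields only, not the full proxy deployment):

```yaml
resources:
  requests:
    cpu: 1
    memory: 2Gi
  limits:
    cpu: 2
    memory: 4Gi
env:
  - name: JAVA_HEAP_USAGE
    value: "1G"
  - name: JVM_USE_CONTAINER_OPTS
    value: "false"
```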

To install and configure a new proxy:

  1. Select Browse > Proxies.
  2. Click Add new proxy and follow the instructions on the screen.
  3. Edit the wavefront.conf file to open the pushListenerPorts to receive logs from the log shipper.
    For example:
    • If you installed the proxy on Linux, Mac, or Windows, open the wavefront.conf file, uncomment the pushListenerPorts configuration property, and save the file. The port is set to 2878 by default.
    • If you installed the proxy on Docker, the command you use opens the pushListenerPorts and sets it to 2878.
  4. Optionally, uncomment or add other logs-related proxy configuration properties in the wavefront.conf file.
  5. Optionally, configure preprocessor rules for logs in the preprocessor_rules.yaml file.
  6. Start the proxy.
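After step 3, the logs-related lines in wavefront.conf might look like the following minimal sketch (only the uncommented property is shown):

```conf
## Listener port(s) for incoming data, including logs from the log shipper
pushListenerPorts=2878
```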

Option 1: Use Our Integrations

You can monitor your Kubernetes clusters or Linux hosts using our built-in integrations and send logs to our system.

  • Linux host integration: Install the Wavefront proxy and configure the log shipper.
  • Kubernetes integration: Enable logs while you set up the integration, generate the script, and run it on your Kubernetes cluster.
  • AWS CloudWatch integration: If you have already configured the AWS CloudWatch integration, you can create an AWS Lambda function to send logs to our service.

Option 2: Configure a Log Shipper

The log shipper sends your data to the Wavefront proxy. We support the Fluentd and Fluent Bit log shippers, which scrape and buffer your logs before sending them to the Wavefront proxy.

If you want to use a different log shipper, contact technical support.


Add the VMware domain (*.vmware.com) to the allowlist in your environment. Because our service uses a VMware log cluster, you must allow the VMware domain for log data to be sent successfully. If you want to narrow down the domain, contact your account representative.

Configure your log shipper:

  1. Install the log shipper. For example, install Fluentd or install Fluent Bit.

  2. Configure the log shipper to send data to the Wavefront proxy.

    1. Add the hostname of the host where the proxy runs.
    2. Add the pushListenerPorts that you configured in the proxy.

    For example:

    • Edit the Fluentd configuration file (fluent.conf) to send data to a proxy as follows:

      <match wf.**>
        @type copy
        <store>
          @type http
          endpoint http://<proxy url>:<proxy port (example: 2878)>/logs?f=logs_json_arr
          open_timeout 2
          json_array true
          <buffer>
            flush_interval 10s
          </buffer>
        </store>
      </match>
    • Edit the Fluent Bit configuration file (fluent-bit-<os>.conf) to send data to a proxy as follows:

      [OUTPUT]
          Name   http
          Match  *
          Host   <proxy url>
          Port   <proxy port (example: 2878)>
          URI    /logs?f=logs_json_lines
          Format json_lines
  3. As part of preprocessing, tag the logs with the application and service name so that you can drill down from traces to logs.
  4. (Optional) If you’re already using a logging solution, specify alternate strings for required and optional log attributes in the proxy configuration file. See also My Logging Solution Doesn’t Use the Default Attributes.
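For step 3 above, with Fluentd the tags can be attached using the built-in record_transformer filter, for example (a sketch; my-application and my-service are placeholder values):

```conf
<filter wf.**>
  @type record_transformer
  <record>
    application my-application
    service my-service
  </record>
</filter>
```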

Learn More!