Netskope Log Streaming Custom Data Collection Integration

In this article, you will find out how to configure your Netskope Log Streaming subscription and the Lumu Custom Data Collection integration to pull, transform, and inject the web transaction logs generated by Netskope Log Streaming into Lumu to enhance the detection & response capabilities of your organization.

Requirements

  • An active Netskope subscription with the Log Streaming feature.
    • You need a Netskope administrator user to perform the configurations in the document.
  • Lumu Custom Collector API configuration for Proxy Logs.
    • A Lumu custom collector ID and client key are required to set up the collection process. Information on how to create a custom collector in your Lumu portal can be found in Manage Custom Collectors.
  • Script host.
    • A scripting host is required to deploy the integration. This host must have Internet visibility of the Lumu Custom Collector API and your cloud storage provider's endpoints. Depending on the deployment model you select, you will need a host with:
      • Python 3.13+, or
      • A Docker-enabled host
  • Script package.
    • Contact the Lumu support team to request the package we created to deploy the required files.

Contacted hosts

Ensure your script host can communicate with the following hosts. These are required for the operation of this integration.

  • Your cloud storage provider
  • api.lumu.io
  • docker.io
  • ghcr.io
  • *.ubuntu.com
  • *.launchpad.net
  • canonical.com
  • debian.org
  • *.debian.org
  • debian-security.org
  • pypi.python.org
  • pypi.org
  • pythonhosted.org
  • files.pythonhosted.org
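
If you want to confirm reachability from the script host before deploying, the following is a minimal sketch (not part of the integration package) that opens a TLS connection on port 443 to a few of the required hosts. The hostnames in the list are illustrative; adjust it to include your cloud storage provider endpoint and any mirrors your host uses.

import socket
import ssl

# Hosts to probe; edit the list to match your environment.
HOSTS = ["api.lumu.io", "docker.io", "pypi.org", "files.pythonhosted.org"]

context = ssl.create_default_context()
for host in HOSTS:
    try:
        # Open a TCP connection and complete the TLS handshake on port 443.
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}: reachable ({tls.version()})")
    except OSError as err:
        print(f"{host}: NOT reachable ({err})")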

Integration’s overview

The Lumu Custom Data Collection integration with Netskope Log Streaming collects the log files that Netskope pushes to your cloud storage provider, processes them as Lumu events, and sends them to the Lumu Cloud.

Preliminary Setup - Netskope Log Streaming

To set up the integration, you must prepare your Netskope Log Streaming instance to communicate with the Lumu integration. To do this, you need the following:

  • Create a transaction log stream
  • Create credentials for the selected cloud provider.
Notes This guide outlines the steps for setting up credentials to store transaction logs across three major cloud providers: AWS, Azure, and GCP. Each provider uses a unique identity and access management (IAM) system, so the process for generating credentials differs.

Create a transaction log stream

We encourage you to follow the official documentation. This guide outlines the three main steps for setting up a log stream in Netskope, from configuring the stream itself to selecting the data and choosing a destination.

Configure a New Stream

  1. Log in to your Netskope tenant and navigate to Settings > Tools > Event Streaming.
  2. On the Event Streaming page, click Add Stream.
  3. Provide a unique name and description for your log stream. Keep it at hand as it will be used during the integration.
  4. Choose the Destination Type where the logs will be sent. Options include major cloud storage providers like Amazon S3, Microsoft Azure Blob, and Google Cloud Storage.

Choose Data Sets

Select the types of log data you want to stream. The two primary data sets available are:

  • Transaction Events: This includes logs for all web and cloud transactions.
  • Events & Alerts: This is a comprehensive data set that bundles alerts, incidents, page events, application events, network events, and endpoint events.

Select Transaction Events for the integration with Lumu.

Configure the Destination

Once you have selected your destination type and data sets, you must provide the necessary credentials to connect your Netskope stream to the cloud storage bucket. The information required varies depending on the destination type selected. Follow the instructions based on your preferred cloud storage provider.

Amazon S3

For the Amazon S3 destination, fill in the following fields:

    • Name of Destination: A human-readable description for the destination.
    • Bucket: The name of the user’s Amazon S3 bucket.
    • Folder Path: The path to the folder within the bucket where the user wants to store and save their logs. If the folders don’t exist in the bucket, Amazon creates them—for example, logs or logs/diagnostics. Amazon treats objects that end with ‘/’ as folders.
    • Access Key ID: The Access Key ID to the S3 bucket, provided by AWS.
    • Secret Access Key: The Secret Access Key to the S3 bucket, provided by AWS.
    • Region: The AWS services region where your S3 bucket is hosted. You can find more information about Regions on AWS Website.
    • Delivery Options: Logs are pushed on an ongoing basis every 240 seconds.

Microsoft Azure Blob

For the Azure Blob destination, fill in the following fields:

    • Name of Destination: A human-readable description for the destination.
    • Storage account name: The name of the storage account where the logs will be stored. To learn more, go to Azure documentation.
    • Container Name: The name of the container within a user’s account where the logs will be stored. To learn more about naming conventions, go to Azure documentation.
    • Path: The path to the folder within the container where the user wants to store and save their logs. If the folders do not exist in the container, Azure creates them, for example, logs or logs/diagnostics.
    • Access Key: Either of the access keys associated with the user's Azure account.
    • Delivery Options: Logs are pushed on an ongoing basis every 240 seconds.

Google Cloud Storage (GCS)

For the Google Cloud Storage destination, fill in the following fields:

    • Name of Destination: A human-readable description for the destination.
    • Bucket: The name of the storage bucket you created in your Google Cloud account. To learn more: Bucket naming conventions for Google Cloud Storage.
    • Path (optional): The path to the folder within your Google Cloud bucket where the user wants to store logs. In Google Cloud Storage, paths work as object names. When a user enters a custom path, such as netskope/logs/{%Y}, Google Cloud Storage doesn’t create new folders for netskope, logs, and {%Y} in the bucket. Instead, the objects are stored in one bucket and named netskope/logs/{%Y}/filename. To learn more about this topic, go to About Cloud Storage objects.
    • Private Key: The private_key value from the JSON key that the user generated and downloaded from their Google Cloud account. Enter the private key in PEM format with line-break (\n) symbols, e.g., -----BEGIN PRIVATE KEY-----\nprivate_key\n-----END PRIVATE KEY-----\n.
    • Delivery Options: Logs are pushed on an ongoing basis every 240 seconds.

Activate the Streams

Activate the stream upon saving. Once you have two active streams, you cannot create more; the Create Stream button will be grayed out once you reach the limit. You can click the ellipsis at the end of the stream name to edit its configuration.

Create credentials for the selected cloud provider

This is a necessary, independent step for listing and downloading the log files previously uploaded by the Netskope log stream.

Notes Existing authentication methods may be reused; however, the administrator has the discretion to either reuse them or create new ones for download purposes.

For Amazon S3

If your integration host runs on AWS resources, using an IAM Role is the recommended option. If not, you need to create a user with minimal privileges and generate an Access Key ID and Secret Access Key pair, as follows.

Establish an IAM Policy with the minimum necessary permissions.

Navigate to IAM > Policies and select Create policy. Then follow these instructions:

1. In the first step, paste the following snippet, as the next image shows:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::temp-eparra-test"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::NAME/*"
    }
  ]
}
Notes Replace NAME with your bucket name.

2. Then, continue to the next step and save the policy. You will see the following confirmation.


Create a user and assign the created permissions.

Navigate to IAM > Users and select Create User. Then, follow these steps:

1. For Step 1, make sure the Provide access to the AWS Management Console box remains unchecked. Then, click Next.


2. On Step 2, select the policy created previously. Then, click Next followed by Create in the last step.


3. After creating the user, access its details, navigate to the Security credentials tab, and click Create access key (1).


4. Select the Application running outside AWS option, then click Next.


5. Copy the Access key and Secret access key. They are required as input parameters for the integration.
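
Optionally, you can verify that the new key pair grants the expected read access before configuring the integration. The snippet below is a minimal sketch, assuming the boto3 library is installed on the script host; NAME, the region, and the credential values are placeholders you must replace.

# Illustrative check only: lists a few objects in the bucket with the new key pair.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AWS-ACCESS-KEY-ID",
    aws_secret_access_key="AWS-SECRET-ACCESS-KEY",
    region_name="us-east-1",  # placeholder: use your bucket's region
)

response = s3.list_objects_v2(Bucket="NAME", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])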


For GCP

Follow these instructions to create the credentials.

Create a Role with the least permissions

From the GCP console, go to IAM > Roles and create a new role. Assign only the storage.objects.get and storage.objects.list permissions as shown below.


Create a Service Account

Within the GCP console, go to IAM > Service Accounts and create a new service account.

1. First provide a name for the account.


2. Then, add the role created previously for the integration and click Done (1) to create the account.


3. Open the newly created service account and navigate to the Keys tab. Then, click Add Key and select the JSON format in the creation pop-up.


4. Once you create the new key, download it and save the path of the file for the integration. You will get a file that looks as follows:
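
Optionally, you can verify that the service account key grants the expected read access before configuring the integration. The snippet below is a minimal sketch, assuming the google-cloud-storage library is installed on the script host; the bucket name and key path are placeholders you must replace.

# Illustrative check only: lists a few objects using the downloaded JSON key.
from google.cloud import storage

client = storage.Client.from_service_account_json("./credentials.json")
for blob in client.list_blobs("BUCKET-NAME", max_results=5):
    print(blob.name, blob.size)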


For Azure Blob Storage

We support three access methods: Entra ID client credentials, the Storage connection string, and the Storage key.

Client Credentials (Recommended)

Follow these steps to create the credentials for the integration.

1. Within Entra ID, navigate to Integration Tools > App Registration and select New registration.

2. Within the creation window, provide a name, choose the single tenant option, and then click Register to finish.

3. Go to the overview section of the new app registration and copy the Application (client) ID and the Directory (tenant) ID. Keep them at hand as they will be required for the integration.


4. Now, navigate to the Manage section and select Certificates & secrets (1). Go to the Client secrets tab, then click New client secret (2) and configure a name and an expiration date to generate the client secret. Copy the Client Secret Value. This value will only be visible once, so save it for the integration.


5. Now, access the Storage Account to which you will assign the Storage Blob Data Contributor Role, then select Access Control (IAM) (1). Click + Add (2) and assign the Storage Blob Data Contributor role to the previously created app as a member.


Connection string

Access the storage account and select Access keys (1) under Security + networking. Then copy the Connection string. Save it for later as it will be needed for the integration.


Account key

Access the storage account and select Access keys (1) under Security + networking. Then copy the Key and the Storage account name. Save them for later as they will be needed for the integration.


Notes Although using the Connection string or the Storage key is viable, we highly recommend using the Client credentials method, which relies on Microsoft Entra authentication.
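
If you want to validate the chosen access method before configuring the integration, the following is a minimal sketch, assuming the azure-identity and azure-storage-blob libraries are installed on the script host; all uppercase values are placeholders you must replace.

# Illustrative sketch of the three access methods; uncomment the one you use.
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://AZURE-STORAGE-ACCOUNT-NAME.blob.core.windows.net"

# 1. Client credentials (recommended): uses the Entra ID app registration.
credential = ClientSecretCredential(
    tenant_id="AZURE-TENANT-ID",
    client_id="AZURE-CLIENT-ID",
    client_secret="AZURE-CLIENT-SECRET",
)
service = BlobServiceClient(account_url=ACCOUNT_URL, credential=credential)

# 2. Connection string.
# service = BlobServiceClient.from_connection_string("AZURE-CONNECTION-STRING")

# 3. Account key (passed directly as the credential).
# service = BlobServiceClient(account_url=ACCOUNT_URL, credential="AZURE-ACCOUNT-KEY")

# Quick check: list a few blobs from the container used by the log stream.
container = service.get_container_client("CONTAINER-NAME")
for index, blob in enumerate(container.list_blobs()):
    print(blob.name)
    if index >= 4:
        break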

Preliminary setup - Lumu portal

The integration setup process requires you to collect the following information from your Lumu portal:

  • Lumu Collector Key
  • Lumu Collector ID
  • Company UUID

Log in to your Lumu portal and run the following procedures to collect this data.

Collect your Lumu Collector Key

To collect the Lumu Collector key, please refer to the Collector key section of the Custom Collector API document.

Collect your Lumu Collector ID

To collect the Lumu Collector ID, please refer to the Collector ID section of the Custom Collector API document.

Collect your Lumu company UUID

To collect your Lumu company UUID, log in to your Lumu portal. Once you are in the main window, copy the string below your company name.


Preliminary Setup - Choose your integration environment

There are 2 environment options to deploy the script. Select the one that best fits your current infrastructure.

  • Run it as a Python script (Unix-based systems and Windows)
  • Run it as a Docker container.
    • By using the Makefile model (Unix-based systems).
    • By using Docker commands (Unix-based systems and Docker Desktop for Windows).

Whichever alternative you select, you must unpack the integration package shared by our Support team.

Unpack the deployment package provided by Lumu in your preferred path/folder. Keep in mind this location, as it will be required for further configurations. From now on, we will refer to this folder as <app_lumu_root>.

Prepare your integration environment

Notes Before starting, ensure your integration environment can communicate with the hosts listed in the Contacted hosts section.

You can deploy your integration using the following alternatives:

  • Run your integration on Python
  • Run your integration on Docker

Follow the instructions based on the selected deployment method.

Prepare Python on your environment

Notes If Docker is your chosen deployment method, you may skip this step.

If Python is your chosen deployment method, you will need to create a Virtual environment for each integration to avoid conflicts between them and your operating system tools. Make sure you follow the steps in our Preparing Environment for Custom Integrations article.

Prepare Docker on your environment

Notes If Python is your chosen deployment method, you may skip this step.

If Docker is your chosen deployment method, you must follow the Docker installation documentation that corresponds to your OS. Ensure you follow the Post-installation steps for Linux before deploying the integration.

Notes For Windows users, follow the Install Docker Desktop for Windows documentation to install the Docker Engine.

Set up the configuration files

You need to create and edit the integrations.yml configuration file to set up the integration.

Notes You will find the integrations_template.yml sample file inside the integrations package. Use it to build your configuration file.

Complete the integrations file

Notes All the parameters in red should be replaced with the real data necessary for your integration deployment. For example, the parameter “COMPANY-UUID” should end up as something similar to “aa11bb22bb33-123a-456b-789c-11aa22bb33cc”. Follow these indications for all similar parameters.

The integrations.yml file contains the information required by the integration to collect the network activity data from your Netskope Log Streaming console, transform it, and send it to the Lumu Cloud.

lumu:
  uuid: "COMPANY-UUID"
  collector_key: "COLLECTOR-KEY"
  collector_id: "COLLECTOR-ID"
app:
  name: "UNIQUE-NAME"
  cloud_provider: "CLOUD-PROVIDER"
  stream_id: "STREAM-ID"
storage:
  name: "BUCKET-NAME"
csv:
  delimiter: "DELIMITER"
api:
  aws_access_key_id: "AWS-ACCESS-KEY-ID"
  aws_secret_access_key: "AWS-SECRET-ACCESS-KEY"
  aws_region: "AWS-REGION"
  gcp_credentials_json_file: "PATH-TO-GCP-CREDENTIALS-JSON-FILE"
  azure_storage_account_name: "AZURE-STORAGE-ACCOUNT-NAME"
  azure_tenant_id: "AZURE-TENANT-ID"
  azure_client_id: "AZURE-CLIENT-ID"
  azure_client_secret: "AZURE-CLIENT-SECRET"
  azure_account_key: "AZURE-ACCOUNT-KEY"
  azure_connection_string: "AZURE-CONNECTION-STRING"

Replace the highlighted placeholders as follows:

Notes You should only provide the api fields that correspond to the selected cloud provider. In the case of Azure, take into account the different access methods. As an example, if you selected AWS, your integration file should look as follows:

lumu:
  uuid: "COMPANY-UUID"
  collector_key: "COLLECTOR-KEY"
  collector_id: "COLLECTOR-ID"
app:
  name: "UNIQUE-NAME"
  cloud_provider: "CLOUD-PROVIDER"
  stream_id: "STREAM-ID"
storage:
  name: "BUCKET-NAME"
csv:
  delimiter: "DELIMITER"
api:
  aws_access_key_id: "AWS-ACCESS-KEY-ID"
  aws_secret_access_key: "AWS-SECRET-ACCESS-KEY"
  aws_region: "AWS-REGION"

  • COMPANY-UUID with the Company UUID collected in the Collect your Lumu company UUID step, under the Preliminary setup - Lumu portal section.
  • COLLECTOR-KEY with the Collector Key collected in the Collect the Lumu Collector Key step, under the Preliminary setup - Lumu portal section.
  • COLLECTOR-ID with the Collector ID collected in the Collect the Lumu Collector ID step, under the Preliminary setup - Lumu portal section.
  • UNIQUE-NAME with a distinctive name for the integration. We recommend using the customer's or Netskope Log Streaming tenant's name here.
  • CLOUD-PROVIDER with the cloud storage provider in use: AWS, Azure, or Google Cloud Storage.
  • STREAM-ID obtained during the Preliminary Setup - Netskope Log Streaming step.
  • BUCKET-NAME with either the Azure container name, the Google Cloud Storage bucket name, or the AWS S3 bucket name.
  • DELIMITER with the CSV delimiter. The default is a comma (",").
  • AWS-ACCESS-KEY-ID with your Access Key ID if you are using AWS S3. Optional: it can be null if you use IAM roles or other automatic authentication methods.
  • AWS-SECRET-ACCESS-KEY with your Secret Access Key if you are using AWS S3. Optional: it can be null if you use IAM roles or other automatic authentication methods.
  • AWS-REGION with the AWS region where your S3 bucket is hosted, e.g., us-east-1, us-west-2, eu-west-1.
  • PATH-TO-GCP-CREDENTIALS-JSON-FILE with the path where you stored the credentials file obtained during the Create credentials for the selected cloud provider step, corresponding to the GCP configuration. For example, ./credentials.json. We recommend leaving it in the data folder.
  • AZURE-STORAGE-ACCOUNT-NAME for the Storage account name obtained during the Create credentials for the selected cloud provider step.
  • AZURE-TENANT-ID for your Azure Tenant ID.
  • AZURE-CLIENT-ID for your Client ID obtained during the Create credentials for the selected cloud provider step.
  • AZURE-CLIENT-SECRET for the client secret obtained during the Create credentials for the selected cloud provider step.
  • AZURE-ACCOUNT-KEY for the account key obtained during the Create credentials for the selected cloud provider step.
  • AZURE-CONNECTION-STRING for the Connection string obtained during the Create credentials for the selected cloud provider step.
Alert You must fill in the configuration data carefully. If there are any mistakes or missing data, you’ll receive errors. Please refer to the Troubleshooting section at the end of this article for further reference.
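
Before running the integration, you can quickly confirm that the file parses as valid YAML. This is a minimal sketch, assuming the PyYAML package is available in your environment and that integrations.yml is in the current folder.

# Illustrative check only: parses integrations.yml and prints the top-level keys.
import yaml

with open("integrations.yml") as config_file:
    config = yaml.safe_load(config_file)

print(sorted(config.keys()))  # expected: ['api', 'app', 'csv', 'lumu', 'storage']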

Deploy via Makefile as a Docker container

Lumu introduced the Makefile model to assist customers in deploying the integration as a Docker container. To deploy the integration, locate yourself in the <app_lumu_root> folder, and run the following command:

make docker-run-build
Notes Please monitor the console output for any unexpected errors. Fix them based on the command output and run the command again.

Deploy Integration as a Python script

Notes In some Python installations, the executable name could vary from python to python3. If any Python command shows an error, replace python with python3 in the presented commands.

We encourage you to create a Python environment to deploy the integration as a Python script. You will find specific instructions in the Create a Virtual Environment document. Install the required dependencies by running the following commands:

For Windows environments:

ENV_FOLDER\Scripts\activate.bat
python -m pip install -r requirements

For Unix-based environments:

source ENV_FOLDER/bin/activate
python -m pip install -r requirements

Replace the ENV_FOLDER placeholder with the name of your virtual environment folder.

Script details

To use the script, you must locate yourself on the path selected for deployment (<app_lumu_root>). Use the following command to show all options available for the package:

python run.py --help

Usage: run.py [OPTIONS]

╭─ Options ─────────────────────────────────────────────────────────────────────────────────╮
│ --verbose -v Enable verbose mode.                                                         │
│ --logging-type -l [screen|file] Logging output type: 'screen' or 'file' [default: screen] │
│ --config TEXT Path to the configuration file. [default: integrations.yml]                 │
│ --help Show this message and exit.                                                        │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

Options
  • --help, -h: Show this message and exit.
  • --config TEXT: Path to the configuration file. [default: integrations.yml]
  • --logging-type, -l [screen|file]: Logging output type: 'screen' or 'file'. [default: screen]
  • --verbose, -v: Enable verbose mode.

Usage Examples

Task: poll and inject Netskope Log Streaming Traffic logs into Lumu

Run the following command to poll all the Netskope Log Streaming logs and push them into the Lumu custom data collector.

python run.py

Deploy as a Docker container (Optional)

If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a Docker container, locate yourself in the <app_lumu_root> folder, and follow these instructions:

1. To build the container, run the following command. Change all the flags based on the reference given in the script section above.

docker build --tag img-netskope-log-streaming-collection --file Dockerfile .
Notes Do not forget the dot "." at the end of the line

2. To run the container, run the following command:

docker run -v ./integrations.yml:/app/integrations.yml -v ./data:/app/data -d --restart unless-stopped --log-driver json-file --log-opt max-size=30m --log-opt max-file=3 --name lumu-netskope-log-streaming-collection img-netskope-log-streaming-collection

Expected results

After you configure the integration, you will see the processed events in the custom collector created in the Lumu portal. The integration will process events generated within the 10 minutes prior to its activation time.

Lumu Portal


Troubleshooting

The commands defined in this section will allow you to troubleshoot the operation of your integration. Keep in mind that you must locate yourself in the <app_lumu_root> folder before running any of them.

Deployment via Makefile as a Docker container

The following are the troubleshooting commands for this deployment option:

  • Checking integration logs
    Run the following command to check your integration logs.
    make docker-logs

  • Checking integration errors
    Run the following command to check errors in your integration.
    make docker-errors
  • Check the status of the integration
    Run the following command to check the status of the integration.
    make docker-ps

  • Stopping the integration
    Run the following command if you need to stop the integration.
    make docker-stop
  • Starting the integration
    Run the following command to start the integration.
    make docker-start
  • Fixing issues with sudo for Docker
    If you cannot run Docker commands with your current user, run the following command.
    make docker-fix-sudo
  • Reinstalling the integration from scratch
    Run the following command to reinstall the integration from scratch.
    make docker-reset-force
  • Collecting and packaging logs for Lumu support
    Run the following command to collect and package the integration logs to share them with the Lumu support team. This command will create the support.tar package file that contains relevant information for the Lumu support team.
    make docker-support

Deployment via Python script

To identify failures in the script execution, run it with the -v (verbose) flag.

Deployment as a Docker container

For troubleshooting purposes, you can run the following commands:

  • Logging in to the container using an interactive shell
docker exec -it lumu-netskope-log-streaming-collection bash
  • Collecting integration logs
docker logs -f lumu-netskope-log-streaming-collection
