In this article, you will find out how to configure your Netskope Log Streaming subscription and its Lumu Custom Data Collection integration to pull, transform, and inject Web Transactions logs from Netskope Log Streaming into Lumu, enhancing your organization's detection and response capabilities.
Ensure your script host can communicate with the following hosts. These are required for the operation of this integration.
The Lumu Custom Data Collection integration with Netskope Log Streaming collects the logs that Netskope pushes to your cloud provider's storage, processes them as Lumu events, and sends them to the Lumu Cloud.
To set up the integration, you must prepare your Netskope Log Streaming instance to communicate with the Lumu integration. To do this, you need the following:
This guide outlines the steps for setting up credentials to store transaction logs across three major cloud providers: AWS, Azure, and GCP. Each provider uses a unique identity and access management (IAM) system, so the process for generating credentials differs.
We encourage you to follow the official documentation. This guide outlines the three main steps for setting up a log stream in Netskope, from configuring the stream itself to selecting the data and choosing a destination.
Configure a New Stream
Choose Data Sets
Select the types of log data you want to stream. The two primary data sets available are:
Select Transaction Events for the integration with Lumu.
Configure the Destination
Once you have selected your destination type and data sets, you must provide the necessary credentials to connect your Netskope stream to the cloud storage bucket. The information required varies depending on the destination type selected. Follow the instructions based on your preferred cloud storage provider.
Amazon S3
For the Amazon S3 destination, fill in the following fields:
Microsoft Azure Blob
For the Azure Blob destination, fill in the following fields:
Google Cloud Storage (GCS)
For the Google Cloud Storage destination, fill in the following fields:
If you specify a prefix such as netskope/logs/{%Y}, Google Cloud Storage doesn't create new folders for netskope, logs, and {%Y} in the bucket. Instead, the objects are stored in one bucket and named netskope/logs/{%Y}/filename. To learn more about this topic, go to About Cloud Storage objects.

Activate the Streams
Activate the stream upon saving. Netskope allows a maximum of two active streams; once you reach the limit, the Create Stream button is grayed out. You can click the ellipsis at the end of the stream name to edit its configuration.
This is a necessary, independent step: the integration needs its own credentials to list and download the log files previously uploaded by the Netskope log stream.
You may reuse existing authentication methods or create new ones dedicated to download purposes, at the administrator's discretion.
For Amazon S3
If your integration host runs on AWS resources, using an IAM Role is the recommended option. Otherwise, create a user with minimal privileges and generate an access key ID and secret access key pair as follows.
Establish an IAM Policy with the minimum necessary permissions.
Navigate to IAM > Policies and select Create policy. Then follow these instructions:
1. In the first step, paste the following snippet into the JSON policy editor:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::NAME"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::NAME/*"
        }
    ]
}

Replace NAME with your bucket name.
2. Then, continue to the next step and save the policy.
Create a user and assign the created permissions.
Navigate to IAM > Users and select Create User. Then, follow these steps:
1. For Step 1, make sure the Provide access to the AWS Management Console box remains unchecked. Then, click Next.
2. On Step 2, select the policy created previously. Then, click Next followed by Create in the last step.
3. After creating the user, access its details, navigate to the Security credentials tab, and click Create access key (1).
4. Select the Application running outside AWS option, then click Next.
5. Copy the Access key and Secret access key. They are required as input parameters for the integration.
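Optionally, you can sanity-check the new key pair before configuring the integration. The following is a minimal check, assuming the AWS CLI is installed and NAME is your bucket name; the key values shown are placeholders:

# Export the key pair created for the integration (placeholder values)
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
# Listing the bucket exercises the s3:ListBucket permission granted by the policy
aws s3 ls s3://NAME/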
For GCP
Follow these instructions to create the credentials.
Create a Role with the least permissions
From the GCP console, go to IAM > Roles and create a new role. Assign only the storage.objects.get and storage.objects.list permissions as shown below.

Create a Service Account
Within the GCP console, go to IAM > Service Accounts and create a new service account.
1. First, provide a name for the account.
2. Then, add the role created previously for the integration and click Done (1) to create the account.
3. Open the service account you created and navigate to the Keys tab. Then, click Add Key and select JSON format in the creation pop-up.
4. Once you create the new key, download it and save the path of the file for the integration. You will get a file that looks as follows:
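The downloaded key follows Google's standard service account key format. The snippet below is only an illustration of its structure; all values are placeholders, and your project ID, key ID, and service account email will differ:

{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "0123456789abcdef0123456789abcdef01234567",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "netskope-reader@your-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/netskope-reader%40your-project-id.iam.gserviceaccount.com"
}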
For Azure Blob Storage
We support three access methods: Entra ID client credentials, the Storage connection string, and the Storage key.
Client Credentials (Recommended)
Follow these steps to create the credentials for the integration.
1. Within Entra ID, navigate to Integration Tools > App Registration and select New registration.
2. Within the creation window, provide a name, choose the single tenant option, and then click Register to finish.
3. Go to the overview section of the new app registration and copy the Application (client) ID. Keep it at hand, as it will be required for the integration.
4. Now, navigate to the Manage section and select Certificates & secrets (1). Go to the Client secrets tab, click New client secret (2), and configure a name and an expiration date to generate the client secret. Copy the client secret Value; it will only be visible once, so save it for the integration.
5. Now, access the Storage Account to which the role will be assigned, then select Access Control (IAM) (1). Click + Add (2) and assign the Storage Blob Data Contributor role to the previously created app as a member.
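If you prefer the command line, the same role assignment can be sketched with the Azure CLI. The values below are placeholders for your own application ID, subscription, resource group, and storage account:

# Assign the Storage Blob Data Contributor role to the registered app (placeholder values)
az role assignment create \
  --assignee "<APPLICATION_CLIENT_ID>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT>"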
Connection string
Access the storage account and select Access keys (1) under Security + networking. Then, copy the Connection string. Save it for later, as it will be needed for the integration.
Account key
Access the storage account and select Access keys (1) under Security + networking. Then, copy the Key and the Storage account name. Save them for later, as they will be needed for the integration.
Although using the Connection string or the Storage key is viable, we highly recommend using the Client credentials, which utilizes Microsoft Entra Authentication.
The integration setup process requires you to collect the following information from your Lumu portal:
Log in to your Lumu portal and run the following procedures to collect this data.
To collect the Lumu Collector key, please refer to the Collector key section of the Custom Collector API document.
To collect the Lumu Custom Collector ID, please refer to the Collector ID section of the Custom Collector API document.
To collect your Lumu company UUID, log in to your Lumu portal. Once you are in the main window, copy the string below your company name.
There are two environment options to deploy the script. Select the one that best fits your current infrastructure.
Whichever alternative you select, you must unpack the integration package shared by our Support team.
Unpack the deployment package provided by Lumu in your preferred path/folder. Keep in mind this location, as it will be required for further configurations. From now on, we will refer to this folder as <app_lumu_root>.
Before starting, ensure your integration environment can communicate with the hosts listed in the Contacted hosts section.
You can deploy your integration using the following alternatives:
Follow the instructions based on the selected deployment method.
If Docker is your chosen deployment method, you may skip this step.
If Python is your chosen deployment method, you will need to create a Virtual environment for each integration to avoid conflicts between them and your operating system tools. Make sure you follow the steps in our Preparing Environment for Custom Integrations article.
If Python is your chosen deployment method, you may skip this step.
If Docker is your chosen deployment method, you must follow the Docker installation documentation that corresponds to your OS. Ensure you follow the Post-installation steps for Linux before deploying the integration.
For Windows users, follow the Install Docker Desktop for Windows documentation to install the Docker Engine.
You need to create and edit the integrations.yml configuration file to set up the integration.
You will find the integrations_template.yml sample file inside the integrations package. Use it to build your configuration file.
All the placeholder parameters should be replaced with the real data necessary for your integration deployment. For example, the parameter “COMPANY-UUID” should end up as something similar to “aa11bb22bb33-123a-456b-789c-11aa22bb33cc”. Follow these indications for all similar parameters.
The integrations.yml file contains the information required by the integration to collect the network activity data from your Netskope Log Streaming console, transform it, and send it to the Lumu Cloud.
lumu:
  uuid: "COMPANY-UUID"
  collector_key: "COLLECTOR-KEY"
  collector_id: "COLLECTOR-ID"
app:
  name: "UNIQUE-NAME"
  cloud_provider: "CLOUD-PROVIDER"
  stream_id: "STREAM-ID"
  storage:
    name: "BUCKET-NAME"
  csv:
    delimiter: "DELIMITER"
  api:
    aws_access_key_id: "AWS-ACCESS-KEY-ID"
    aws_secret_access_key: "AWS-SECRET-ACCESS-KEY"
    aws_region: "AWS-REGION"
    gcp_credentials_json_file: "PATH-TO-GCP-CREDENTIALS-JSON-FILE"
    azure_storage_account_name: "AZURE-STORAGE-ACCOUNT-NAME"
    azure_tenant_id: "AZURE-TENANT-ID"
    azure_client_id: "AZURE-CLIENT-ID"
    azure_client_secret: "AZURE-CLIENT-SECRET"
    azure_account_key: "AZURE-ACCOUNT-KEY"
    azure_connection_string: "AZURE-CONNECTION-STRING"
Replace the highlighted placeholders as follows:
You should only provide the api fields that correspond to the selected cloud provider. For Azure, provide only the fields required by the access method you chose. For example, if you selected AWS, your integration file should look as follows:
lumu:
  uuid: "COMPANY-UUID"
  collector_key: "COLLECTOR-KEY"
  collector_id: "COLLECTOR-ID"
app:
  name: "UNIQUE-NAME"
  cloud_provider: "CLOUD-PROVIDER"
  stream_id: "STREAM-ID"
  storage:
    name: "BUCKET-NAME"
  csv:
    delimiter: "DELIMITER"
  api:
    aws_access_key_id: "AWS-ACCESS-KEY-ID"
    aws_secret_access_key: "AWS-SECRET-ACCESS-KEY"
    aws_region: "AWS-REGION"
You must fill in the configuration data carefully. If there are any mistakes or missing data, you’ll receive errors. Please refer to the Troubleshooting section at the end of this article for further reference.
Lumu introduced the Makefile model to assist customers in deploying the integration as a Docker container. To deploy the integration, go to the <app_lumu_root> folder and run the following command:
Please monitor the console output for any unexpected errors. Fix them based on the command output and run the command again.
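The deployment target itself is defined in the Makefile shipped inside the integration package. If you are unsure which target to invoke, you can list the targets declared in that file, for example:

# List the targets declared in the provided Makefile
grep -E '^[A-Za-z0-9_.-]+:' Makefile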
In some Python installations, the executable name could be python3 instead of python. If any Python command shows an error, replace python with python3 in the presented commands.
We encourage you to create a Python environment to deploy the integration as a Python script. You will find specific instructions in the Create a Virtual Environment document. Install the required dependencies by running the following commands:
For Windows environments:
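The following is a minimal sketch, assuming a standard virtual environment created in ENV_FOLDER and a requirements.txt file shipped with the integration package:

rem Activate the virtual environment (Command Prompt)
ENV_FOLDER\Scripts\activate
rem requirements.txt is assumed to ship with the integration package
pip install -r requirements.txt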
For Unix-based environments:
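The equivalent sketch for Unix-based systems, under the same assumptions:

# Activate the virtual environment
source ENV_FOLDER/bin/activate
# requirements.txt is assumed to ship with the integration package
pip install -r requirements.txt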
Replace the ENV_FOLDER placeholder with the name of your virtual environment folder.
To use the script, you must be in the path selected for deployment (<app_lumu_root>). Use the following command to show all options available for the package:
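Assuming your virtual environment is active and you are in <app_lumu_root>, the help output shown below can be displayed as follows:

python run.py --help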
Usage: run.py [OPTIONS]
╭─ Options ─────────────────────────────────────────────────────────────────────────────────╮
│ --verbose -v Enable verbose mode. │
│ --logging-type -l [screen|file] Logging output type: 'screen' or 'file' [default: screen] │
│ --config TEXT Path to the configuration file. [default: integrations.yml] │
│ --help Show this message and exit. │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
| Options | Description |
| -h, --help | Show this message and exit. |
| --config TEXT | Path to the configuration file. [default: integrations.yml] |
| --logging-type -l [screen|file] | Logging output type: 'screen' or 'file' [default: screen] |
| --verbose, -v | Enable verbose mode. |
Task: poll and inject Netskope Log Streaming Traffic logs into Lumu
Run the following command to poll all the Netskope Log Streaming logs and push them into the Lumu custom data collector.
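A minimal sketch of the invocation, assuming integrations.yml sits in <app_lumu_root> (the --config flag defaults to integrations.yml, so it is shown only for clarity):

python run.py --config integrations.yml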
If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a Docker container, go to the <app_lumu_root> folder and follow these instructions:
1. To build the container, run the following command. Change all the flags based on the reference given in the script section above.
Do not forget the dot "." at the end of the line
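A minimal sketch of the build command, assuming you tag the image with the same name used for the container in the troubleshooting section below:

docker build -t lumu-netskope-log-streaming-collection .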
2. To run the container, run the following command:
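A minimal sketch of the run command, assuming the container runs as a long-lived process; the image tag matches the build step above, and the in-container configuration path /app/integrations.yml is an assumption you may need to adjust to the package's actual layout:

docker run -d --name lumu-netskope-log-streaming-collection -v "$(pwd)/integrations.yml:/app/integrations.yml" lumu-netskope-log-streaming-collection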
After you configure the integration, you will see the processed events in the custom collector created in the Lumu portal. The integration will process events generated during the 10 minutes prior to its activation time.
The commands defined in this section will allow you to troubleshoot the operation of your integration. Keep in mind that you must be in the <app_lumu_root> folder before running any of them.
The following are the troubleshooting commands for this deployment option:
Run the following command to reinstall the integration from scratch.
make docker-reset-force
Run the following command to collect and package the integration logs to share them with the Lumu support team. This command will create the support.tar package file that contains relevant information for the Lumu support team.
make docker-support
To identify failures in the script execution, use the -v (verbose) flag.
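For example, to run the collection with verbose logging enabled:

python run.py -v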
For troubleshooting purposes, you can run the following commands:
# Open a shell inside the integration container
docker exec -it lumu-netskope-log-streaming-collection bash

# Follow the integration container logs
docker logs -f lumu-netskope-log-streaming-collection