Microsoft Azure NSG Flow Logs Custom Data Collection Integration

Note: Microsoft Azure Active Directory is now called Microsoft Entra ID. The NSG flow logs feature described in this article belongs to Microsoft Azure.

In this article, you will learn how to configure your Microsoft Azure subscription and its Lumu Custom Data Collection integration to pull, transform, and inject Azure Network Security Group (NSG) flow logs into Lumu to enhance the detection & response capabilities of your organization.

Requirements

  • An active Azure subscription.
    • An Azure subscription is required to enable the NSG flow logs. Administrator access to the subscription is also required.
  • Lumu Custom Collector API configuration.
    • A Lumu custom collector ID and client key are required to set up the collection process. Information on how to create a custom collector in your Lumu portal can be found in Manage Custom Collectors.
  • Script host.
    • A scripting host is required to deploy the integration. This host must have Internet visibility over the Lumu Custom Collector endpoints and Microsoft Azure Blob storage. Depending on the deployment model you select, you will need a host with:
      • Python 3.10+, or
      • A Docker-enabled host
  • Script Package

Set up your Azure subscription to collect NSG flow logs

Network Security Group (NSG) flow logs are a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a network security group (NSG). For more information about this feature, please refer to Flow logs for network security groups in the Azure documentation.

Enable NSG flow logs

First, you need to configure your Azure subscription to collect logs for your Network Security Groups. The overall process to enable NSG flow logs is:

  • Register the Microsoft.insights provider.
  • Enable flow logging for a network security group using Network Watcher NSG flow logs.

For a detailed step-by-step guide on how to enable this feature, go to Tutorial: Log network traffic to and from a virtual machine using the Azure portal.
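If you prefer the command line, both steps can also be performed with the Azure CLI. The following is only a minimal sketch; the resource names (NetworkWatcherRG, myFlowLog, myNSG, myStorageAccount) and region are placeholders, so check the tutorial above for the exact parameters of your environment.

az provider register --namespace Microsoft.Insights
az network watcher flow-log create --location <REGION> --resource-group NetworkWatcherRG --name myFlowLog --nsg myNSG --storage-account myStorageAccount --enabled true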

Configure the Shared Access Signature for the Storage Account

All the Azure Network Security Group (NSG) flow logs are stored in a Storage account. To grant only the required access to the integration, you need to enable and configure a Shared Access Signature (SAS) in the Storage account where you are recording your flow logs. To do so, follow these steps:

1. In your Azure portal, use the search bar to look for storage accounts, and click on the result.

2. In the Storage Accounts screen, click on the account used to store the NSG flow logs.
3. Using the left navigation bar, click on the Shared access signature menu under the Security + networking section.

4. Set up read-only storage permissions for the account:
    1. Allowed services: Blob.
    2. Allowed resource types: Service, Container, Object.
    3. Allowed permissions: Read, List.
    4. No Blob versioning permissions.
    5. Set the expiration period. We recommend at least one year.
    6. Allowed protocols: HTTPS only.
    7. Signing key: select any of the options.
5. Click on the Generate SAS and connection string button.


Take note of the resulting SAS token string, as it will be required later.
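Optionally, you can verify that the SAS token grants the expected read and list access before configuring the integration. The following is a minimal sketch in Python, assuming the azure-storage-blob package is installed; the account name and token values are placeholders:

# pip install azure-storage-blob
from azure.storage.blob import ContainerClient

account = "<STORAGE_ACCOUNT>"  # placeholder: your storage account name
sas_token = "<SAS_TOKEN>"      # placeholder: the token generated above

# NSG flow logs are stored in this well-known container
client = ContainerClient(
    account_url=f"https://{account}.blob.core.windows.net",
    container_name="insights-logs-networksecuritygroupflowevent",
    credential=sas_token,
)

# Listing blobs exercises the Read/List permissions; a failure here
# usually means the SAS permissions or the expiration period are wrong.
for blob in client.list_blobs():
    print(blob.name)
    break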

After setting up your Azure NSG flow logs, it will take some time for Azure to show the files with the stored logs. You can check the log files by following these steps:

  1. Go to Storage Accounts in your Azure portal.
  2. Click on the storage account you created to store your NSG logs.
  3. Using the left navigation pane, click on the Containers menu under the Data storage section.
  4. In the Containers window, click on the insights-logs-networksecuritygroupflowevent container. There, you will find your NSG flow logs.

All the logs are stored in a long-named hierarchy. The actual files with the NSG flow log entries will be stored following the structure y=<YEAR>/m=<MONTH>/d=<DAY>/h=<HOUR>/m=<MINUTE>/macAddress=<MAC ADDRESS>/PT1H.json. Each file stores the NSG flow logs for a particular interface inside your Network Security Group.
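Each PT1H.json file contains JSON records whose flow tuples carry the logged connections (timestamp, source and destination IP and port, protocol, direction, and decision as comma-separated values). As an illustration of the data the integration transforms, the sketch below reuses the ContainerClient from the previous example to download one file and print its raw tuples; the field names follow the public NSG flow log schema, but this is not the integration's actual code:

import json

# 'client' is the ContainerClient from the previous sketch
for item in client.list_blobs():
    if not item.name.endswith("PT1H.json"):
        continue
    data = json.loads(client.download_blob(item.name).readall())
    for record in data.get("records", []):
        for rule in record["properties"]["flows"]:
            for flow in rule["flows"]:
                for flow_tuple in flow["flowTuples"]:
                    # e.g. "1663200000,10.0.0.4,8.8.8.8,44331,53,U,O,A"
                    print(flow_tuple)
    break  # one file is enough for a spot check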

Deploy the integration

There are two environment options to deploy the script; select the one that fits better with your current infrastructure. Whichever alternative you select, you first need to unpack the integration package shared by our Support team. Unpack the deployment package provided by Lumu in your preferred path/folder. Keep this location in mind, as it will be required for further configurations. From now on, we will refer to this folder as <nsg_lumu_root>.

The integration works with Python 3.10. If your environment has an earlier version, we recommend deploying the integration as a Docker container.

Common steps to deploy your integration

Whichever alternative you choose to deploy the integration, you need to set up your configuration parameters first. In the package, you will find a file named .config_sample with the required parameters. Rename it to .config and modify it according to the information resulting from the steps depicted above. The following table describes the parameters you need to provide.

Parameter      Description                                                        Default value
client-key     Lumu Custom Collector API key                                      NA
collector-id   Lumu Custom Collector ID                                           NA
account        Azure Storage account name                                         NA
token          Azure Shared Access Signature (SAS) for the Storage account       NA
logging        Logging model for the integration                                  file
delta-unit     The type of offset to use for the first run (hours, days, weeks)  hours
delta-value    The number of offset units to use for the first run               1

The .config file will look as follows:

## Configuration file
# Lumu
client-key=<CLIENT_KEY>
collector-id=<COLLECTOR_ID>
# Azure
account=<AZURE_STORAGE_ACCOUNT>
token=<AZURE_SAS_TOKEN>
# Misc
logging=[screen|file]
#delta-unit=[hours|days|weeks](default hours)
#delta-value=<NUMBER>(default 1)
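The file uses plain key=value lines with # comments. If you want to sanity-check your .config before the first run, the short Python sketch below shows how such a file can be read and validated (the integration ships its own parser; this is only illustrative):

config = {}
with open(".config") as fh:
    for line in fh:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()

# Fail fast if a required parameter is missing
for required in ("client-key", "collector-id", "account", "token"):
    assert config.get(required), f"missing {required} in .config"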

Deploy as script

The package contains the integration script. To use the script, change to the path selected for deployment (<nsg_lumu_root>). In the following sections, you will find the directions to deploy it.

Install requirements

If you run other Python scripts on the selected host, we recommend creating a Python virtual environment to preserve the integrity of those tools. To do so, follow these steps:

1. Using a command line tool, go to the <nsg_lumu_root> folder.

2. Run the following command to create the virtual environment:

python3 -m venv <venv_folder>

3. Activate the virtual environment by running the following:

source <venv_folder>/bin/activate
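On Windows hosts, activate the virtual environment with the equivalent script instead:

<venv_folder>\Scripts\activate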

The file requirements.txt contains the list of requirements for this integration. After deploying the package locally, run the following command from the deployment folder:

pip install -r ./requirements.txt

Script details

To use the script, change to the path selected for deployment (<nsg_lumu_root>). Use the following command to show all options available for the package:

python3 main.py -h

usage: main.py [options]

Options and descriptions:

-h, --help
    Show this help message and exit.
--proxy-host PROXY_HOST, --proxy_host PROXY_HOST
    Proxy host (if required).
--proxy-port PROXY_PORT, --proxy_port PROXY_PORT
    Proxy port (if required).
--proxy-user PROXY_USER, --proxy_user PROXY_USER
    Proxy user (if required).
--proxy-password PROXY_PASSWORD, --proxy_password PROXY_PASSWORD
    Proxy password (if required).
--client-key CLIENT_KEY, --client_key CLIENT_KEY
    Lumu Client Key (Custom Collector API key).
--collector-id COLLECTOR_ID, --collector_id COLLECTOR_ID
    Lumu Custom Collector ID.
--logging {screen,file}
    Logging option (default screen).
--verbose, -v
    Verbosity level.
--wait WAIT_TIME, --wait-time WAIT_TIME, --wait_time WAIT_TIME
    Set the wait time in seconds between iterations of each process (default 120).
--account ACCOUNT, -a ACCOUNT
    Azure Storage account name.
--container CONTAINER, -c CONTAINER
    Azure Storage container name.
--token TOKEN, -t TOKEN
    SAS token for the Azure storage container.
--unit DELTA_UNIT, --delta-unit DELTA_UNIT, --delta_unit DELTA_UNIT
    Delta unit to use to collect initial data (default hours; supported values: hours, days, weeks).
--value DELTA_VALUE, --delta-value DELTA_VALUE, --delta_value DELTA_VALUE
    Unit to be used as delta value (default 1).

These parameters are displayed for reference. Remember to use the .config file provided and described in the steps above.

Usage Examples

Task: collect and push NSG flow logs from Azure to Lumu generated one hour before the initial runtime (default)

By default, the script will collect and push NSG flow logs from Azure to Lumu generated one hour before the initial runtime.

python3 main.py

Task: collect and push NSG flow logs from Azure to Lumu generated from a specific timeframe and unit before the initial runtime

If you need to collect logs from a time unit with a defined value before the initial runtime, use the arguments --unit and --value to set this timeframe according to your needs.

python3 main.py --unit [hours|days|weeks] --value <value>

For example, if you want to collect data from the last 30 days, you can use the following line:

python3 main.py --unit days --value 30

Further considerations

The script is intended to be used as a daemon process. We recommend running it with complementary tools like nohup. Use the following lines as examples:

If you are using a Python virtual environment

nohup <venv_path>/bin/python <nsg_lumu_root>/main.py &

If you are NOT using a Python virtual environment

nohup python3 <nsg_lumu_root>/main.py &
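Keep in mind that nohup appends the script's standard output to a nohup.out file in the working directory. If you prefer an explicit location, you can redirect the output yourself; for example (the log file name here is only illustrative):

nohup <venv_path>/bin/python <nsg_lumu_root>/main.py >> <nsg_lumu_root>/nohup.log 2>&1 &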

To guarantee that this script recovers from unexpected behaviors, you need to implement a Scheduled Task in Windows or a Cron job in Unix-based systems. We recommend that the scheduled job runs every 15 minutes.

Below is an example of how this Cron job should look using the recommended interval.

If you are using a Python virtual environment

*/15 * * * * <venv_path>/bin/python <nsg_lumu_root>/main.py

If you are NOT using a Python virtual environment

*/15 * * * * python3 <nsg_lumu_root>/main.py

If you need to work with another scheduling time, you can use the crontab guru service.

To avoid race conditions, only one instance of the integration can run at a time. If one instance is already running, a second one will quit immediately.
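This single-instance guard is typically implemented with a PID file; the integration keeps one named pid.pid, as described in the Troubleshooting section below. The following Python sketch only illustrates the mechanism and is not the integration's actual code (the liveness check shown is Unix-oriented):

import os
import sys

PID_FILE = "pid.pid"

if os.path.exists(PID_FILE):
    with open(PID_FILE) as fh:
        pid = int(fh.read().strip())
    try:
        os.kill(pid, 0)  # signal 0 checks existence without killing (Unix)
        print("Error: Another instance is running. Quitting.")
        sys.exit(1)
    except OSError:
        pass  # stale PID file: the previous run ended unexpectedly

# Record our own PID so later runs can find us
with open(PID_FILE, "w") as fh:
    fh.write(str(os.getpid()))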

Deploy as a Docker container (Optional)

If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a Docker container, go to the <nsg_lumu_root> folder and follow these instructions:

  1. Modify the .config file with the parameters according to your environment.
  2. To build the container image, run the following command. The configuration parameters are taken from the .config file described in the sections above.
    docker build --tag lumu-nsg .
    Do not forget the dot "." at the end of the line.
  3. To run the container, run the following command (see the restart-policy note below):
    docker run -d --name nsg-collector lumu-nsg
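Docker itself can also restart the integration after failures or host reboots. If you prefer that over an external watchdog, add a restart policy when starting the container:

docker run -d --restart unless-stopped --name nsg-collector lumu-nsg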

Troubleshooting

For troubleshooting purposes, you can run the following commands:

To log in to your container using an interactive shell:

docker exec -it nsg-collector bash

To collect integration logs:

docker logs -f nsg-collector

Expected results

After running the integration, you will see new events processed by the custom collector you have created.


Troubleshooting and known issues

To identify failures in the script execution, use the -v flag. The script execution log will show more detailed information.

Another instance is running

If you receive the following error:

Error: Another instance is running. Quitting.

There could be another instance already running. To check this, open the pid.pid file in the integration folder. This file stores the process ID of the running instance. Search for this process in your system, for example, using the Task Manager in Windows or the ps command in Linux.

If the previous validation indicates that another instance is running, check its progress using the integration's log file, lumu.log.
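On Linux, for example, both checks can be done from the <nsg_lumu_root> folder:

ps -p $(cat pid.pid)
tail -f lumu.log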

Deploy a different container with different parameters

If you need to deploy a new container with different parameters, rebuild the image after changing the .config file parameters. Please refer to the Deploy as a Docker container section for further reference.

