First, you need to configure your Entra ID subscription to collect logs for your Network Security Groups (NSGs). The overall process to enable NSG flow logs is:
For a detailed step-by-step guide on how to enable this feature, see Tutorial: Log network traffic to and from a virtual machine using the Entra ID portal.
All the Entra ID Network Security Group (NSG) logs are stored in a Storage account. To grant only the required access to the integration, you need to enable and configure Shared Access Signature in the Storage Account where you are recording your flow logs. To do so, follow these steps:
1. In your Entra ID portal, use the search bar to look for Storage accounts. Click on the result.
2. In the Storage Accounts screen, click on the account used to store the NSG flow logs.
3. In the left navigation bar, click on the Shared access signature menu under the Security + networking section.
4. Set up read-only storage permissions for the account.
5. Click on the Generate SAS and connection string button. Take note of the resulting SAS token string; it will be required later.
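If you prefer the command line, a read-only SAS token for the Storage Account can also be generated with the Azure CLI. This is a sketch, not part of the official steps above; it assumes you are signed in with az login, and the account name and expiry date are placeholders you must replace:

```shell
# Generate an account-level SAS with read and list permissions only
# (blob service, all resource types); adjust --expiry to your needs
az storage account generate-sas \
  --account-name <Entra ID_STORAGE_ACCOUNT> \
  --services b \
  --resource-types sco \
  --permissions lr \
  --expiry 2025-12-31T00:00Z
```

The command prints the SAS token string, which you can use in the integration configuration just like the one generated in the portal.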
After setting up your Entra ID NSG flow logs, it will take some time for Entra ID to show the files with the stored logs. You can check the log files by following these steps:
There are 2 environment options to deploy the script; select the one that best fits your current infrastructure. Whichever alternative you select, you must first unpack the integration package shared by our Support team into your preferred path/folder. Keep this location in mind, as it will be required for further configurations. From now on, we will refer to this folder as <nsg_lumu_root> .
Whichever alternative you choose to deploy the integration, you need to set up your configuration parameters first. In the package, you will find a file named .config_sample with the required parameters. Rename it to .config and modify it according to the information gathered in the steps above. The following table describes the parameters you need to provide.
Parameter | Description | Default value
---|---|---
client-key | Lumu Custom Collector API key | NA |
collector-id | Lumu Custom Collector ID | NA |
account | Entra ID Storage account name | NA |
token | Entra ID Shared Access Signature (SAS) for the Storage Account | NA |
logging | Logging model for the integration | file |
delta-unit | The type of offset to use for the first run (hours, days, weeks) | hours |
delta-value | The number of offset units to use for the first run | 1 |
The .config file will look as follows:

```
## Configuration file
# Lumu
client-key=<CLIENT_KEY>
collector-id=<COLLECTOR_ID>
# Entra ID
account=<Entra ID_STORAGE_ACCOUNT>
token=<Entra ID_SAS_TOKEN>
# Misc
logging=[screen|file]
#delta-unit=[hours|days|weeks] (default hours)
#delta-value=<NUMBER> (default 1)
```
The package contains the integration script. To use the script, change to the folder selected for deployment ( <nsg_lumu_root> ). In the following sections, you will find directions to deploy it.
We recommend creating a Python virtual environment if you run different Python scripts on the selected host, to preserve the integrity of other tools. To do so, follow these steps:
1. Using a command-line tool, go to the <nsg_lumu_root> folder.
2. Run the following command to create the virtual environment:

```
python3 -m venv <venv_folder>
```

3. Activate the virtual environment by running:

```
source <venv_folder>/bin/activate
```
The file requirements.txt contains the list of requirements for this integration. After deploying the package locally, run the following command from the deployment folder:
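The installation command itself is the standard one for a requirements.txt file; if you created a virtual environment, activate it first so the dependencies land there:

```
pip3 install -r requirements.txt
```

If pip3 is not available on your host, python3 -m pip install -r requirements.txt is equivalent.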
To use the script, change to the folder selected for deployment ( <nsg_lumu_root> ). Use the following command to show all options available for the package:
usage: main.py [options]
Options | Description
---|---
-h, --help | Show this help message and exit
--proxy-host PROXY_HOST, --proxy_host PROXY_HOST | Proxy host (if required)
--proxy-port PROXY_PORT, --proxy_port PROXY_PORT | Proxy port (if required)
--proxy-user PROXY_USER, --proxy_user PROXY_USER | Proxy user (if required)
--proxy-password PROXY_PASSWORD, --proxy_password PROXY_PASSWORD | Proxy password (if required)
--client-key CLIENT_KEY, --client_key CLIENT_KEY | Lumu Client Key (Custom Collector API key)
--collector-id COLLECTOR_ID, --collector_id COLLECTOR_ID | Lumu Custom Collector ID
--logging {screen,file} | Logging option (default screen)
--verbose, -v | Verbosity level
--wait WAIT_TIME, --wait-time WAIT_TIME, --wait_time WAIT_TIME | Wait time in seconds between iterations of each process (default 120)
--account ACCOUNT, -a ACCOUNT | Entra ID Storage account name
--container CONTAINER, -c CONTAINER | Entra ID Storage container name
--token TOKEN, -t TOKEN | SAS token for the Entra ID storage container
--unit DELTA_UNIT, --delta-unit DELTA_UNIT, --delta_unit DELTA_UNIT | Delta unit used to collect initial data (default hours; supported values: hours, days, weeks)
--value DELTA_VALUE, --delta-value DELTA_VALUE, --delta_value DELTA_VALUE | Number of delta units to use (default 1)
By default, the script collects and pushes to Lumu the NSG flow logs generated in Entra ID during the hour before the initial runtime.
If you need to collect logs from a time unit with a defined value before the initial runtime, you can use the arguments --unit and --value to set this timeframe according to your needs.
For example, if you want to collect data from the last 30 days, you can use the following line:
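Based on the options table above, such an invocation could look like the following; the client key, collector ID, account, and token values are placeholders you must replace with your own:

```
python3 main.py --client-key <CLIENT_KEY> --collector-id <COLLECTOR_ID> \
  --account <Entra ID_STORAGE_ACCOUNT> --token <Entra ID_SAS_TOKEN> \
  --unit days --value 30
```

Subsequent runs resume from where the previous run stopped, so the --unit and --value arguments matter only for the first run.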
The script is intended to run as a daemon process. We recommend running it with complementary tools like nohup. Use the following lines as examples:
If you are using a Python virtual environment
If you are NOT using a Python virtual environment
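The exact nohup command lines depend on your configuration; a minimal sketch, assuming main.py is the entry point inside <nsg_lumu_root> and the .config file holds the remaining parameters, would be:

```
# With a Python virtual environment
nohup <venv_folder>/bin/python3 main.py --logging file &

# Without a virtual environment
nohup python3 main.py --logging file &
```

Using --logging file keeps the execution log in the integration folder instead of the terminal, which suits a background process.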
To guarantee that this script recovers from unexpected behaviors, you need to implement a Scheduled Task in Windows or a cron job in Unix-based systems. We recommend that the scheduled job runs every 15 minutes.
Below is an example of how this cron job should look using the recommended interval.
If you are using a Python virtual environment
If you are NOT using a Python virtual environment
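As a sketch, assuming main.py is the entry point and the script exits immediately when another instance is running, the crontab entries could look like this:

```
# Every 15 minutes, with a Python virtual environment
*/15 * * * * cd <nsg_lumu_root> && <venv_folder>/bin/python3 main.py --logging file

# Every 15 minutes, without a virtual environment
*/15 * * * * cd <nsg_lumu_root> && python3 main.py --logging file
```

Edit the entries with crontab -e on the host where the integration is deployed.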
If you need to work with another scheduling time, you can use the crontab guru service.
To avoid race conditions, only one instance can run at a time. If one instance is already running, a second one will be canceled immediately.
If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a Docker container, go to the <nsg_lumu_root> folder and follow these instructions:
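The build and run commands depend on the Dockerfile shipped in the package; a typical sequence, with a hypothetical image and container name, is:

```
# Build the image from the integration folder (reads the .config file)
docker build -t lumu-nsg-integration .

# Run the integration as a detached background container
docker run -d --name lumu-nsg-integration lumu-nsg-integration
```

Because the .config file is read at build time, changing its parameters requires rebuilding the image.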
For troubleshooting purposes, you can run the following commands:
To log in to your container using an interactive shell:
To collect integration logs:
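These are the standard Docker CLI commands for both tasks, assuming a container named lumu-nsg-integration (substitute your container's name):

```
# Open an interactive shell inside the running container
docker exec -it lumu-nsg-integration /bin/bash

# Collect the integration logs from the container
docker logs lumu-nsg-integration
```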
After running the integration, you will see new events processed by the custom collector you have created.
To identify failures in the script execution, use the -v flag. The script execution log will show more detailed information.
If you receive the following error, there could be another instance running. To check this, open the pid.pid file in the integration folder; this file stores the process ID if the script is running. Search for this process in your system. The following pictures show the process in Windows and Linux.
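On Linux, this check can be done from the terminal. The following self-contained demo writes the current shell's PID to pid.pid, the same way the integration records its own, and then verifies whether that process is alive; when checking the real integration, skip the first line and use the pid.pid file already in <nsg_lumu_root>:

```shell
# Demo only: record the current shell's PID as the integration would
echo $$ > pid.pid

# Read the stored process ID and check whether it is still running
PID=$(cat pid.pid)
if ps -p "$PID" > /dev/null 2>&1; then
  echo "running"
else
  echo "not running"
fi

# Clean up the demo file
rm -f pid.pid
```

If the process is not running but pid.pid still exists, it is a leftover from an abnormal termination and can be removed.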
If you need to deploy a new container with different parameters, you need to rebuild the image by changing the .config file parameters first. Please, refer to the Deploy as a Docker container section for further reference.