In this article, you will learn how to configure your Cloudflare Enterprise subscription and the Lumu Custom Data Collection integration to pull, transform, and inject the DNS Gateway logs recorded by Cloudflare into Lumu, enhancing your organization's detection and response capabilities.
To create an S3-compatible bucket in Oracle Cloud Infrastructure (OCI), follow the directions given in the Oracle Cloud Infrastructure Documentation.
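If you prefer the command line, the bucket can also be created with the OCI CLI. This is a minimal sketch, assuming the OCI CLI is installed and configured, and using <bucket_name> and <compartment_OCID> as placeholders for your own values:
oci os bucket create --name <bucket_name> --compartment-id <compartment_OCID>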
To set up access to your bucket using the Amazon S3 Compatibility API, you need to collect the following information from your Oracle Cloud Infrastructure tenancy:
Follow the instructions in the Oracle Cloud Infrastructure Documentation for the Amazon S3 Compatibility API to collect the required information. To create the Access and Secret key pair, refer to Managing User Credentials in the Oracle Cloud Infrastructure Documentation.
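As an alternative to the console, the Access and Secret key pair (a Customer Secret Key, in OCI terms) can be generated with the OCI CLI. A minimal sketch, assuming the OCI CLI is configured, where <user_OCID> is the OCID of the user that will access the bucket and the display name is just an example; note that the secret key is returned only once, so store it safely:
oci iam customer-secret-key create --user-id <user_OCID> --display-name "cloudflare-logpush"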
Now, it’s time to set up the Cloudflare Logpush feature to collect your DNS Gateway logs. Log in to your Cloudflare dashboard and follow these steps:
1. Using the left navigation panel, click on the Logpush menu under the Logs section.
2. On the Logpush screen, click the Connect a service button. Fill in the required data as indicated, then click the Next button.
a. Type a Job name.
b. Select Gateway DNS as Data set.
c. In the Data fields section, select at least: Date time, Source IP, Query name, and Query type name. Optionally, select Device name, Email, and Location.
d. In the Timestamp format field under the Advanced settings, select RFC3339.
3. As the Cloud service, click the Select button under the S3 Compatible box.
4. In the Connect a storage service screen, fill in the requested information using the data collected in the Set up your S3-compatible service step. Click the Push button.
After some time, your Cloudflare DNS Gateway logs will be stored in your S3-compatible bucket.
There are 2 environment options to deploy the script; select the one that fits better in your current infrastructure. Whichever alternative you select, you first need to unpack the integration package shared by our Support team in your preferred path/folder. Keep this location in mind, as it will be required for further configurations. From now on, we will refer to this folder as <cloudflare_lumu_root>.
In the package, you will find the script required to run the integration. To use the script, you must locate yourself on the path selected for deployment (<cloudflare_lumu_root>). Specific directions are included in the next sections.
If you are running different Python scripts on the selected host, it’s recommended to create a virtual environment to preserve the integrity of other tools. To do so, follow these steps:
1. Using a command line tool, locate yourself in the <cloudflare_lumu_root> folder.
2. Run the following command to create the virtual environment:
python3 -m venv <venv_folder>
3. Activate the virtual environment by running the following command:
source <venv_folder>/bin/activate
To use the script, you must locate yourself on the path selected for deployment (<cloudflare_lumu_root>). Use the following command to show all options available for the package:
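The exact invocation ships with the integration package; assuming the main script is cloudflare_lumu.py (the name shown in the troubleshooting output further below), the options can be listed with:
python cloudflare_lumu.py --help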
Usage:
| Options | Description |
|---|---|
| -h, --help | Show this help message and exit |
| --aws_access_key_id AWS_ACCESS_KEY_ID | AWS Access Key |
| --aws_secret_access_key AWS_SECRET_ACCESS_KEY | AWS Secret Key |
| --aws_region AWS_REGION | AWS region |
| --aws_bucket_name AWS_BUCKET_NAME | AWS bucket name |
| --aws_bucket_s3_compatible_url AWS_BUCKET_S3_COMPATIBLE_URL | S3-compatible bucket URL, e.g. https://<OCI-Tenancy-Namespace>.compat.objectstorage.<OCI-Region>.oraclecloud.com |
| --aws_s3_marker_key AWS_S3_MARKER_KEY | OPTIONAL: Object key of the S3-compatible bucket |
| -key LUMU_CLIENT_KEY, --lumu_client_key LUMU_CLIENT_KEY | Lumu Client key for the collector |
| -cid LUMU_COLLECTOR_ID, --lumu_collector_id LUMU_COLLECTOR_ID | Lumu Collector ID |
| --logging {screen,file} | Logging option (default: screen) |
| --verbose, -v | Verbosity level |
Run the following command to poll all the Cloudflare-Oracle logs and push them into the Lumu custom data collector.
lumu_client_key=<LUMU_CLIENT_KEY>
lumu_collector_id=<LUMU_COLLECTOR_ID>
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
aws_region=<AWS_REGION>
aws_bucket_name=<AWS_BUCKET_NAME>
aws_bucket_s3_compatible_url=<AWS_BUCKET_S3_COMPATIBLE_URL>
[aws_s3_marker_key=<AWS_S3_MARKER_KEY>]
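The command line itself is not reproduced above; based on the options table, a likely form of the invocation is the following sketch, where the bracketed marker-key argument is optional:
python cloudflare_lumu.py --lumu_client_key <LUMU_CLIENT_KEY> --lumu_collector_id <LUMU_COLLECTOR_ID> --aws_access_key_id <AWS_ACCESS_KEY_ID> --aws_secret_access_key <AWS_SECRET_ACCESS_KEY> --aws_region <AWS_REGION> --aws_bucket_name <AWS_BUCKET_NAME> --aws_bucket_s3_compatible_url <AWS_BUCKET_S3_COMPATIBLE_URL> [--aws_s3_marker_key <AWS_S3_MARKER_KEY>]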
To redirect all the output from the execution process to a file, use the --logging file argument. The integration output will be stored in a file called lumu.log.
It’s recommended to set this flag. The script runs as a daemon process. The information stored in the file lumu.log is useful for tracing progress or troubleshooting.
The script is intended to be used as a daemon process. It is recommended to run it with a complementary tool like nohup; example sketches for the following scenarios are given after the list:
If you are using a Python virtual environment
If you are NOT using a Python virtual environment
If you are using a Python virtual environment and Lumu Virtual Appliance with .config File
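The exact command lines ship with the integration package; as an illustration, assuming the script and folders described above (with <arguments> standing in for the parameters listed earlier), the first two scenarios could look like the sketch below. The .config-file variant depends on how the package reads its configuration, so follow the packaged instructions for it:
# With a Python virtual environment (illustrative)
nohup <venv_folder>/bin/python cloudflare_lumu.py <arguments> --logging file &
# Without a virtual environment (illustrative)
nohup python3 cloudflare_lumu.py <arguments> --logging file &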
Set a cron job to wake the script up at your preferred interval using these indications. Remember, only one instance of the integration will run per host machine.
Where the placeholders correspond to the values described in the script section above. An example with these values follows:
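As an illustration only, a crontab entry waking the script every 5 minutes from <cloudflare_lumu_root>, using the virtual environment created earlier, might look like this (the interval, paths, and <arguments> are placeholders to adjust). Since only one instance runs per host, frequent wake-ups are safe:
*/5 * * * * cd <cloudflare_lumu_root> && <venv_folder>/bin/python cloudflare_lumu.py <arguments> --logging file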
To identify failures in the script execution, use the -v flag to increase the verbosity of the output.
The integration is intended to run as a single instance per host machine. If another instance tries to run, the outcome will look like this:
Stopping the integration 755294 , it might have another older instance running, check if is feasible or not
older pid: 738408 - cwd: /home/lumu/Documents/repos/cflare-s3comp-collection - since: 2023-08-20 07:30:56.490000
/home/lumu/.local/share/virtualenvs/cflare-s3comp-collection-AME_8FaP/bin/python /home/lumu/Documents/repos/cflare-s3comp-collection/cloudflare_lumu.py
If you have a Docker environment, you can select this option to run the integration as a Docker container. To deploy and run it, locate yourself in the <cloudflare_lumu_root> folder and follow these instructions:
1. To build the container, run the following command. Change all the flags based on the reference given in the script section above.
docker build --build-arg aws_access_key_id='xxx' --build-arg aws_secret_access_key='xxx' --build-arg aws_region='xxx' --build-arg aws_bucket_name='xxx' --build-arg aws_bucket_s3_compatible_url='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cflare-s3comp .
Do not forget the dot "." at the end of the line.
2. To run the container, run the following command:
docker run -d --restart unless-stopped --name lumu-cflare-s3comp python-lumu-cflare-s3comp
For troubleshooting purposes, you can run the following commands:
To log in to your container using an interactive shell:
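The command itself is not shown above; the standard Docker way, assuming the container name used in the run command and that the image includes bash (use sh otherwise), is:
docker exec -it lumu-cflare-s3comp /bin/bash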
To collect integration logs:
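Assuming the container logs to standard output, the standard Docker command is:
docker logs lumu-cflare-s3comp
If the script inside the container was started with --logging file, the output would instead be in the lumu.log file within the container's working directory.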