Cloudflare - S3 Compatible Storage Custom Data Collection Integration

In this article, you will find out how to configure your Cloudflare Enterprise subscription and the Lumu Custom Data Collection integration to pull, transform, and inject the DNS Gateway logs recorded by Cloudflare into Lumu to enhance the detection & response capabilities of your organization.

This procedure covers how to set up your Cloudflare subscription to push DNS Gateway logs into Oracle Cloud. The same procedure should apply to other S3-compatible services; refer to your cloud provider's documentation for the details of setting up this type of service.

Requirements

  • An active Cloudflare Enterprise subscription.
  • Lumu Custom Collector API configuration for DNS Logs.
    • A Lumu custom collector ID and client key are required to set up the collection process. Information on how to create a custom collector in your Lumu portal can be found in Manage Custom Collectors.
  • Script host.
    • A scripting host is required to deploy the integration. This host must have Internet visibility over the Lumu Custom Collector API and your S3-compatible storage API endpoints. According to the deployment model you select, you will need a host with:
      • Python 3.10+, or
      • A Docker-enabled host
  • Script package.
    • Contact the Lumu support team to request the package we created to deploy the required files.

Set up your S3 compatible service

The following steps show how to set up the Oracle Cloud Infrastructure (OCI) Object Storage feature to store raw logs from Cloudflare. These instructions are applicable to other S3-compatible technologies; make sure you follow your vendor's directions for setting up your service. For further reference, you can read the Enable Logpush to S3-compatible endpoints reference from Cloudflare.

Creating an S3-compatible bucket

To create an S3-compatible bucket in Oracle Cloud Infrastructure (OCI), follow the directions given in the Oracle Cloud Infrastructure Documentation.

Set up the S3-compatible bucket access through API

To set up access to your bucket using your S3-compatible API, you need to collect the following information from your Oracle Cloud Infrastructure tenancy:

  • Bucket URL
  • Namespace
  • Region
  • Access and Secret key pair

Follow the instructions given in the Oracle Cloud Infrastructure Documentation for the Amazon S3 Compatibility API to collect the required information. To create the Access and Secret key pair, refer to Managing User Credentials in the Oracle Cloud Infrastructure Documentation.

Save this information for setting up your Cloudflare service in the next steps.
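
Before moving on, you can optionally verify that the collected values grant access to the bucket. The check below is not part of the integration; it is a minimal sketch that assumes you have the AWS CLI installed on any workstation:

# Export the key pair collected from OCI
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
# List the bucket contents through the OCI S3 compatibility endpoint
aws s3 ls s3://<bucket_name> --region <OCI-Region> --endpoint-url https://<OCI-Tenancy-Namespace>.compat.objectstorage.<OCI-Region>.oraclecloud.com

If the command returns without an authorization error, the key pair, region, namespace, and bucket name are ready to be used by Cloudflare and by the integration.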

Set up Cloudflare Logpush to collect DNS Gateway logs in your S3-compatible bucket

Now, it’s time to set up your Cloudflare Logpush feature to collect its DNS Gateway logs. Log in to your Cloudflare account and follow these steps:

1. Using the left navigation panel, click on the Logpush menu under the Logs section.

2. Under the Logpush screen, click on the Connect a service button. Fill in the required data as indicated, then click the Next button.




a. Type a Job name.

b. Select Gateway DNS as Data set.

c. In the Data fields section, select at least: Date time, Source IP, Query name, and Query type name. Optionally, select Device name, Email, and Location.

d. In the Timestamp format field under the Advanced settings, select RFC3339.

3. As the Cloud service, click the Select button under the S3 Compatible box.

4. In the Connect a storage service screen, fill in the requested information using the data collected in the Set up your S3 compatible service step. Click on the Push button.

After some time, the DNS Gateway logs from your Cloudflare deployment will start to appear in your S3-compatible bucket.
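
If you want to confirm that Logpush delivery is working before deploying the integration, you can list the objects written by Cloudflare. As in the previous check, this assumes the AWS CLI and is not required by the integration:

# Recursively list the log objects pushed by Cloudflare
aws s3 ls s3://<bucket_name>/ --recursive --region <OCI-Region> --endpoint-url https://<OCI-Tenancy-Namespace>.compat.objectstorage.<OCI-Region>.oraclecloud.com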

Deploy the integration

There are two environment options to deploy the script; select the one that fits better into your current infrastructure. Whichever alternative you select, you first need to unpack the integration package shared by our Support team. Unpack the deployment package provided by Lumu in your preferred path/folder. Keep this location in mind, as it will be required for further configurations. From now on, we will refer to this folder as <cloudflare_lumu_root> .

The integration works with Python 3.10+. If your environment has prior versions, we recommend deploying the integration as a Docker Container.

Deploy as script

In the package, you will find the script required to run the integration. To use the script, you must run it from the path selected for deployment ( <cloudflare_lumu_root> ). Specific directions are included in the next sections.

Install requirements

If you are running different Python scripts on the selected host, it’s recommended to create a virtual environment to preserve the integrity of other tools. To do so, follow these steps:

1. Using a command line tool, locate yourself in the <cloudflare_lumu_root> folder

2. Run the following command to create the virtual environment

python3 -m venv <venv_folder>

3. Activate the virtual environment by running the following

source <venv_folder>/bin/activate

The file requirements.txt contains the list of requirements for this integration. After deploying the package locally, run the following command from the deployment folder:

pip install -r ./requirements.txt
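
Optionally, confirm that the interpreter available in the virtual environment meets the Python 3.10+ requirement mentioned above:

python3 --version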

Script details

To use the script, you must run it from the path selected for deployment ( <cloudflare_lumu_root> ). Use the following command to show all the options available for the package:

python3 cloudflare_lumu.py -h

Usage: 

cloudflare_lumu [-h] [--aws_access_key_id AWS_ACCESS_KEY_ID] [--aws_secret_access_key AWS_SECRET_ACCESS_KEY] [--aws_region AWS_REGION] [--aws_bucket_name AWS_BUCKET_NAME] [--aws_bucket_s3_compatible_url AWS_BUCKET_S3_COMPATIBLE_URL] [--aws_s3_marker_key AWS_S3_MARKER_KEY] [-key LUMU_CLIENT_KEY] [-cid LUMU_COLLECTOR_ID] [-v] [-l {screen,file}]

Options

  • -h, --help: show this help message and exit
  • --aws_access_key_id AWS_ACCESS_KEY_ID: AWS Access Key
  • --aws_secret_access_key AWS_SECRET_ACCESS_KEY: AWS Secret Key
  • --aws_region AWS_REGION: AWS region
  • --aws_bucket_name AWS_BUCKET_NAME: AWS bucket name
  • --aws_bucket_s3_compatible_url AWS_BUCKET_S3_COMPATIBLE_URL: S3-compatible bucket URL, e.g. https://<OCI-Tenancy-Namespace>.compat.objectstorage.<OCI-Region>.oraclecloud.com
  • --aws_s3_marker_key AWS_S3_MARKER_KEY: OPTIONAL: Object Key of the S3-compatible bucket
  • -key LUMU_CLIENT_KEY, --lumu_client_key LUMU_CLIENT_KEY: Lumu Client key for the collector
  • -cid LUMU_COLLECTOR_ID, --lumu_collector_id LUMU_COLLECTOR_ID: Lumu Collector id
  • -l {screen,file}, --logging {screen,file}: Logging option (default: screen)
  • -v, --verbose: Verbosity level

Usage Examples

Task: poll and inject Cloudflare logs into Lumu

Run the following command to poll all the Cloudflare-Oracle logs and push them into the Lumu custom data collector.

python3 cloudflare_lumu.py --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_s3_compatible_url AWS_BUCKET_S3_COMPATIBLE_URL -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID

Task: poll and inject Cloudflare logs into Lumu with the input parameters within a .config file

Run the script with the input parameters stored in the integration's .config file to poll all the Cloudflare-Oracle logs and push them into the Lumu custom data collector.

1. Fill in the .config file with the following parameters (the aws_s3_marker_key entry is optional):

lumu_client_key=<LUMU_CLIENT_KEY>
lumu_collector_id=<LUMU_COLLECTOR_ID>

aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
aws_region=<AWS_REGION>
aws_bucket_name=<AWS_BUCKET_NAME>
aws_bucket_s3_compatible_url=<AWS_BUCKET_S3_COMPATIBLE_URL>

[aws_s3_marker_key=<AWS_S3_MARKER_KEY>]

2. Run the script without command-line arguments:

python cloudflare_lumu.py

Task: store execution records in a file

To redirect all the output from the execution process to a file, use the --logging file argument. The integration output will be stored in a file called lumu.log.

python3 cloudflare_lumu.py --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_s3_compatible_url AWS_BUCKET_S3_COMPATIBLE_URL -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID -v -l file 

Setting this flag is recommended. Since the script runs as a daemon process, the information stored in the lumu.log file is useful for tracing progress or troubleshooting.

Further considerations

The script is intended to be used as a daemon process. It is recommended to run it with complementary tools like nohup. Use the following lines as examples:

If you are using a Python virtual environment

nohup <venv_path>/bin/python <cloudflare_lumu_root>/cloudflare_lumu.py <flags and arguments> &

If you are NOT using a Python virtual environment

nohup python3 <cloudflare_lumu_root>/cloudflare_lumu.py <flags and arguments> &
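
After launching the script with nohup, you can confirm that the daemon is running and, if you used the -l file flag, follow its output. These are standard Linux commands, shown here only as a reference:

# Check that the integration process is alive
ps aux | grep cloudflare_lumu.py
# Follow the integration log
tail -f <cloudflare_lumu_root>/lumu.log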

If you are using a Python virtual environment and a Lumu Virtual Appliance with a .config file

Set a cron job to wake the script up periodically (every 6 minutes in the example below) using these indications. Remember, only one instance of the integration will run per host machine.

*/6 * * * * cd <IWD> && <IWD>/<PYENV>/bin/python cloudflare_lumu.py -l file 2>&1

Where

  • IWD: Integration working directory
  • PYENV: Python virtual environment folder

The following example uses these values:

  • IWD: /home/applianceadmin/lumuio-cflare-s3comp-collection
  • PYENV: .venv

*/6 * * * * cd /home/applianceadmin/lumuio-cflare-s3comp-collection && /home/applianceadmin/lumuio-cflare-s3comp-collection/.venv/bin/python cloudflare_lumu.py -l file 2>&1
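
One way to add the entry to the crontab of the user that will run the integration is shown below. The paths match the example values above; adjust them to your own deployment:

# Append the integration entry to the current user's crontab
(crontab -l 2>/dev/null; echo '*/6 * * * * cd /home/applianceadmin/lumuio-cflare-s3comp-collection && /home/applianceadmin/lumuio-cflare-s3comp-collection/.venv/bin/python cloudflare_lumu.py -l file 2>&1') | crontab -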

Troubleshooting

To identify failures in the script execution, use the -v flag to increase the verbosity of the output.

The integration is intended to run as a single instance per host machine. If another instance tries to run, the output will look like this:

20-08-2023 09:13:24 - lumu-at - [139966867072832]:[INFO] - ----------------- Lumu CloudFlare Custom Collector -----------------

Stopping the integration 755294 , it might have another older instance running, check if is feasible or not

older pid: 738408 - cwd: /home/lumu/Documents/repos/cflare-s3comp-collection - since: 2023-08-20 07:30:56.490000  

/home/lumu/.local/share/virtualenvs/cflare-s3comp-collection-AME_8FaP/bin/python /home/lumu/Documents/repos/cflare-s3comp-collection/cloudflare_lumu.py
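
If the older instance reported in the message should no longer be running, you can inspect it and stop it before relaunching the integration. Replace <older_pid> with the PID shown in your own log (738408 in the example above):

# Show details of the older instance
ps -fp <older_pid>
# Stop it if it is no longer needed
kill <older_pid>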

Deploy as a Docker container (Optional)

If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a docker container, locate yourself in the <cloudflare_lumu_root> folder, and follow these instructions:

1. To build the container, run the following command. Change all the flags based on the reference given in the script section above.

docker build --build-arg aws_access_key_id='xxx' --build-arg aws_secret_access_key='xxx' --build-arg aws_region='xxx' --build-arg aws_bucket_name='xxx' --build-arg aws_bucket_s3_compatible_url='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cflare-s3comp .

Do not forget the dot "." at the end of the line.

2. To run the container, run the following command:

docker run -d --restart unless-stopped --name lumu-cflare-s3comp python-lumu-cflare-s3comp
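
You can verify that the container is up and running with:

docker ps --filter name=lumu-cflare-s3comp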

Troubleshooting

For troubleshooting purposes, you can run the following commands:

To log in to your container using an interactive shell:

docker exec -it lumu-cflare-s3comp bash

To collect integration logs:

docker logs -f lumu-cflare-s3comp

