Cato Networks Custom Data Collection Integration

In this article, you will find out how to configure your Cato Networks subscription and the Lumu Custom Data Collection integration to pull, transform, and inject the firewall (FW) logs recorded by Cato Networks into Lumu to enhance your organization's detection & response capabilities.

There are 2 ways to collect Cato Networks logs:

  • Generate an API key to use the Cato Networks GraphQL API
  • Configure your Cato Networks subscription to store logs in an AWS S3 bucket.

Requirements

  • An active Cato Networks subscription.
  • Lumu Custom Collector API configuration for FW Logs.
    • A Lumu custom collector ID and client key are required to set up the collection process. Information on how to create a custom collector in your Lumu portal can be found in Manage Custom Collectors.
  • Script host.
    • A scripting host is required to deploy the integration. This host must have Internet visibility over the Lumu Custom Collector API and the Cato Networks API endpoints. Depending on the deployment model you select, you will need a host with:
      • Python 3.10+, or
      • A Docker-enabled host
  • Script package.
    • Contact the Lumu support team to request the package we created to deploy the required files.

Set up Cato Networks

Option 1: Create an API for collecting logs using Cato GraphQL API

This method is recommended for small and medium deployments where the overall traffic does not exceed 6 billion records per week.

To create an API Key for collecting events using the Cato GraphQL integration, go to the Cato Networks portal and follow these directions:

1. Go to the Administration tab at the top.

2. In the left submenu, click API & Integrations.

3. Create an API Key with View permissions. For improved security, you can restrict data collection to specific IP addresses.

If you decide to define a specific IP address to collect Cato logs, remember that the address must be the public IP the scripting host uses to connect to the Internet, and it must be a static IP.

Remember to copy the API Key. It will be used to configure the integration script.
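
If you want to confirm that the key works before configuring the integration, the following minimal Python sketch can help. It assumes the publicly documented Cato GraphQL endpoint (https://api.catonetworks.com/api/v1/graphql2), the x-api-key header, and an illustrative eventsFeed query; the exact query shape may differ in your account, so adjust it as needed.

import requests  # pip install requests

# Assumptions: the endpoint URL, header name, and query below follow Cato's
# public GraphQL API conventions; verify them against the Cato documentation.
CATO_GRAPHQL_URL = "https://api.catonetworks.com/api/v1/graphql2"
CATO_API_KEY = "<your-api-key>"        # the key created in step 3
CATO_ACCOUNT_ID = "<your-account-id>"  # the account ID you will pass to the integration

# Illustrative query: the field selection is an assumption and may need adjusting
query = """
query ($accountIDs: [ID!]!) {
  eventsFeed(accountIDs: $accountIDs) {
    marker
    fetchedCount
  }
}
"""

response = requests.post(
    CATO_GRAPHQL_URL,
    headers={"x-api-key": CATO_API_KEY, "Content-Type": "application/json"},
    json={"query": query, "variables": {"accountIDs": [CATO_ACCOUNT_ID]}},
    timeout=30,
)
print(response.status_code)
print(response.json())  # errors in the response body usually point to a key or permission issue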

Option 2: Configure your Cato Networks subscription to store logs in AWS

To configure your Cato deployment to store your logs in an AWS S3 bucket, log in to your Cato Networks portal and follow these directions:

1. Set up an AWS S3 bucket following the directions given in the Configuring the AWS S3 Bucket section of the Integrating Cato Events with AWS S3 article. Take note of the bucket details; they will be needed to set up your Cato deployment and, later, the integration.

2. In your Cato Networks portal, create a new application on the Event Integration tab.

3. Follow the steps depicted in the Adding Amazon S3 Integration for Events section of the Integrating Cato Events with AWS S3 article.

After configuring your Cato deployment, Cato log files should start appearing in your AWS S3 bucket.


It is your responsibility to manage the bucket's lifecycle so that old log files are not preserved indefinitely.
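
As a reference, here is a minimal boto3 sketch (with hypothetical bucket and folder names) that lists the most recent Cato log objects and attaches a lifecycle rule so old files expire automatically. Adjust the prefix, region, and retention window to your own deployment and retention requirements.

import boto3  # pip install boto3

# Hypothetical values: replace with the bucket and folder configured for Cato events
BUCKET = "my-cato-logs-bucket"
PREFIX = "cato-events/"

s3 = boto3.client("s3", region_name="us-east-1")

# 1. Confirm that Cato log objects are being written to the bucket
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["LastModified"], obj["Size"])

# 2. Expire objects older than 7 days so old log files are not preserved indefinitely
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-cato-logs",
                "Filter": {"Prefix": PREFIX},
                "Status": "Enabled",
                "Expiration": {"Days": 7},
            }
        ]
    },
)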

Deploy the integration

There are 2 environment options for deploying the script; select the one that best fits your current infrastructure. Whichever alternative you select, you first need to unpack the integration package shared by our Support team into your preferred path/folder. Keep this location in mind, as it will be required for further configuration. From now on, we will refer to this folder as <cato_lumu_root>.

The integration works with Python 3.10 or later. If your environment has an earlier version, we recommend deploying the integration as a Docker container.

Deploy as script

In the package, you will find the script required to run the integration. To use the script, change to the path selected for deployment (<cato_lumu_root>). Specific directions are included in the next sections.

Install requirements

If you are running other Python scripts on the selected host, it's recommended to create a virtual environment to preserve the integrity of those tools. To do so, follow these steps:

1. Using a command-line tool, change to the <cato_lumu_root> folder.

2. Run the following command to create the virtual environment:

python3 -m venv <venv_folder>

3. Activate the virtual environment by running the following command:

source <venv_folder>/bin/activate

The file requirements.txt contains the list of requirements for this integration. After deploying the package locally, run the following command from the deployment folder:

pip install -r ./requirements.txt

Script details

As mentioned above, there are 2 ways to collect logs, so there are 2 sets of CLI commands to run the integration, one for each option: API Key via GraphQL, and AWS S3 bucket.

To use the script, change to the path selected for deployment (<cato_lumu_root>). Use the following command to show all options available for the package:

python3 cato_lumu.py -h

usage: cato_lumu [-h] -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID [-v] [-l {screen,file}] {GraphQL,S3Bucket}


Options and descriptions:

-h, --help: show this help message and exit
-key LUMU_CLIENT_KEY, --lumu_client_key LUMU_CLIENT_KEY: Lumu client key for the collector
-cid LUMU_COLLECTOR_ID, --lumu_collector_id LUMU_COLLECTOR_ID: Lumu collector ID
-l {screen,file}, --logging {screen,file}: logging option (default: screen)
-v, --verbose: verbosity level

python3 cato_lumu.py GraphQL --help

usage: cato_lumu GraphQL [-h] -acc CATO_ACCOUNTS_IDS -ckey CATO_API_KEY -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID [-v] [-l {screen,file}]

Options and descriptions:

-h, --help: show this help message and exit
-acc CATO_ACCOUNTS_IDS, --cato_accounts_ids CATO_ACCOUNTS_IDS: Cato account IDs, e.g. 8012 or 8013,8012,8015
-ckey CATO_API_KEY, --cato_api_key CATO_API_KEY: Cato API key for querying GraphQL

python3 cato_lumu.py S3Bucket --help

usage: cato_lumu S3Bucket [-h] --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_folder AWS_BUCKET_FOLDER [--aws_s3_obj_last_updated AWS_S3_OBJ_LAST_UPDATED]

Options and descriptions:

-h, --help: show this help message and exit
--aws_access_key_id AWS_ACCESS_KEY_ID: AWS access key
--aws_secret_access_key AWS_SECRET_ACCESS_KEY: AWS secret key
--aws_region AWS_REGION: AWS region
--aws_bucket_name AWS_BUCKET_NAME: AWS bucket name
--aws_bucket_folder AWS_BUCKET_FOLDER: AWS bucket folder
--aws_s3_obj_last_updated AWS_S3_OBJ_LAST_UPDATED: optional; the datetime (UTC-0, as a string) you want to start collecting from, e.g. 2023-08-08 16:20:00

Usage Examples

Task: poll and inject Cato logs into Lumu

Run the command that matches your log source to poll the Cato Networks logs and push them into the Lumu custom data collector.

API KEY via GraphQL:

python cato_lumu.py -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID GraphQL --cato_accounts_ids CATO_ACCOUNTS_IDS --cato_api_key CATO_API_KEY

S3 Source:

python cato_lumu.py -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID S3Bucket --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_folder AWS_BUCKET_FOLDER
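
If you need to start collecting from a specific point in time, S3Bucket mode also accepts the optional --aws_s3_obj_last_updated argument described above, for example:

python cato_lumu.py -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID S3Bucket --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_folder AWS_BUCKET_FOLDER --aws_s3_obj_last_updated "2023-08-08 16:20:00"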

Task: poll and inject Cato logs into Lumu with the input parameters in a .config file

Build the .config file using the following syntax:

    lumu_client_key=<LUMU-CLIENT-KEY>
    lumu_collector_id=<LUMU-COLLECTOR-ID>

    # S3Bucket mode
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
    aws_region=<AWS_REGION>
    aws_bucket_name=<AWS_BUCKET_NAME>
    aws_bucket_folder=<AWS_BUCKET_FOLDER>

    # GraphQL mode
    cato_accounts_ids=<ACCOUNT_ID(S)> # COMMA-SEPARATED IF THERE IS MORE THAN ONE
    cato_api_key=<CATO-API-KEY>

Then, use one of the following commands, depending on your collection method:

python cato_lumu.py GraphQL

or

python cato_lumu.py S3Bucket

Task: store execution records in a file

To redirect all the output from the execution process to a file, use the --logging file argument. The integration output will be stored in a file called lumu.log.

API KEY via GraphQL

python cato_lumu.py -v -l file -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID GraphQL --cato_accounts_ids CATO_ACCOUNTS_IDS --cato_api_key CATO_API_KEY

S3 Source

python cato_lumu.py -v -l file -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID S3Bucket --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_folder AWS_BUCKET_FOLDER

It is recommended to set this flag. Since the script runs as a daemon process, the information stored in the lumu.log file is useful for tracing progress or troubleshooting.

Further considerations

The script is intended to run as a daemon process. It is recommended to run it with complementary tools like nohup. Use one of the following lines as an example:

If you are using a Python virtual environment

nohup <venv_path>/bin/python <cato_lumu_root>/cato_lumu.py <flags and arguments> &

If you are NOT using a Python virtual environment

nohup python3 <cato_lumu_root>/cato_lumu.py <flags and arguments> &

Troubleshooting

To identify failures in the script execution, use the -v flag to increase the logging verbosity.

Deploy as a Docker container (Optional)

If you have a Docker environment, you can select this option to run the integration as a Docker container. To deploy and run it, change to the <cato_lumu_root> folder and follow these instructions:

1. To build the container image, run the command that matches your log source. Change all the placeholder values based on the reference given in the script section above.

GraphQL

docker build --build-arg cato_source='GraphQL' --build-arg cato_accounts_ids='xxx' --build-arg cato_api_key='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cato .

S3Bucket

docker build --build-arg cato_source='S3Bucket' --build-arg aws_access_key_id='xxx' --build-arg aws_secret_access_key='xxx' --build-arg aws_region='xxx' --build-arg aws_bucket_name='xxx' --build-arg aws_bucket_folder='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cato .

Do not forget the dot "." at the end of the line.

2. To run the container, run the following command:

docker run -d --restart unless-stopped --name lumu-cato python-lumu-cato

Troubleshooting

For troubleshooting purposes, you can run the following commands:

To log in to your container using an interactive shell:

docker exec -it lumu-cato bash

To collect integration logs:

docker logs -f lumu-cato

