In this article, you will find out how to configure your Cato Networks subscription and its Lumu Custom Data Collection integration to pull, transform, and inject the firewall (FW) logs recorded by Cato Networks into Lumu, enhancing your organization's detection & response capabilities.
There are 2 ways to collect Cato Networks logs: through the Cato GraphQL API using an API key, or from an AWS S3 bucket where Cato stores its events.
To create an API Key for collecting events using the Cato GraphQL integration, go to the Cato Networks portal and follow these directions:
1. Go to Administration on the top tab.
2. On the left submenu, click API & Integrations.
3. Create an API Key with View permissions. For improved security, you can allow specific IP addresses for collecting the data.
If you decide to define a specific IP address to collect Cato logs, remember that the address must be the public IP used by the scripting host to connect to the Internet. This address must be a static IP.
To configure your Cato deployment to store your logs in an AWS S3 bucket, log in to your Cato Network portal and follow these directions:
1. Set up an AWS S3 bucket following the directions given in the Configuring the AWS S3 Bucket section of the Integrating Cato Events with AWS S3 article. Take note of the bucket details, as they will be needed to set up your Cato deployment and, later, the integration.
2. In your Cato Networks portal, create a new application on the Event Integration tab.
3. Follow the steps depicted in the Adding Amazon S3 Integration for Events section of the Integrating Cato Events with AWS S3 article.
After configuring your Cato deployment, you should see logs in your AWS S3 bucket as follows:
There are 2 environment options to deploy the script; select the one that fits best with your current infrastructure. Whichever alternative you select, you must first unpack the integration package shared by our Support team into your preferred path/folder. Keep this location in mind, as it will be required for further configurations. From now on, we will refer to this folder as <cato_lumu_root>.
In the package, you will find the script required to run the integration. To use the script, you must locate yourself on the path selected for deployment ( <cato_lumu_root> ). Specific directions are included in the next sections.
If you are running different Python scripts in the selected host, it’s recommended to create a virtual environment to preserve the integrity of other tools. To do so, follow these steps:
1. Using a command line tool, locate yourself in the <cato_lumu_root> folder
2. Run the following command to create the virtual environment
3. Activate the virtual environment running the following
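As a reference, a minimal sketch using Python's built-in venv module (the environment name venv is an assumption; your package may specify different commands):

python3 -m venv venv
source venv/bin/activate

On Windows, the activation command would be venv\Scripts\activate instead.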
The file requirements.txt contains the list of requirements for this integration. After deploying the package locally, run the following command from the deployment folder:
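As a reference, and assuming pip is available in the active environment, the installation command would typically be:

pip install -r requirements.txt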
As noted above, there are 2 ways to collect logs, so there are 2 sets of CLI commands to run the integration, depending on the option chosen: API Key via GraphQL or AWS S3 bucket.
To use the script, you must locate yourself on the path selected for deployment ( <cato_lumu_root> ). Use the following command to show all options available for the package:
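For instance, assuming the entry point included in the package is a script named cato_lumu.py, the options can be listed with:

python cato_lumu.py --help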
usage: cato_lumu [-h] -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID [-v] [-l {screen,file}] {GraphQL,S3Bucket}
| Options | Description |
|---|---|
| -h, --help | show this help message and exit |
| -key LUMU_CLIENT_KEY, --lumu_client_key LUMU_CLIENT_KEY | Lumu Client key for the collector |
| -cid LUMU_COLLECTOR_ID, --lumu_collector_id LUMU_COLLECTOR_ID | Lumu Collector id |
| -l {screen,file}, --logging {screen,file} | Logging option (default: screen). |
| -v, --verbose | Verbosity level. |
usage: cato_lumu GraphQL [-h] -acc CATO_ACCOUNTS_IDS -ckey CATO_API_KEY -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID [-v] [-l {screen,file}]
| Options | Description |
|---|---|
| -h, --help | show this help message and exit |
| -acc CATO_ACCOUNTS_IDS, --cato_accounts_ids CATO_ACCOUNTS_IDS | Cato Account IDs, e.g. 8012 or 8013,8012,8015 |
| -ckey CATO_API_KEY, --cato_api_key CATO_API_KEY | Cato API key used to query the GraphQL API |
usage: cato_lumu S3Bucket [-h] --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_folder AWS_BUCKET_FOLDER [--aws_s3_obj_last_updated AWS_S3_OBJ_LAST_UPDATED]
| Options | Description |
|---|---|
| -h, --help | show this help message and exit |
| --aws_access_key_id AWS_ACCESS_KEY_ID | AWS Access Key |
| --aws_secret_access_key AWS_SECRET_ACCESS_KEY | AWS Secret Key |
| --aws_region AWS_REGION | AWS region |
| --aws_bucket_name AWS_BUCKET_NAME | AWS S3 bucket name |
| --aws_bucket_folder AWS_BUCKET_FOLDER | AWS S3 bucket folder |
| --aws_s3_obj_last_updated AWS_S3_OBJ_LAST_UPDATED | Optional. The datetime (UTC-0, string format) to start collecting from, e.g. 2023-08-08 16:20:00 |
Run the command corresponding to your deployment method to poll the Cato Networks logs and push them into the Lumu custom data collector.
API KEY via GraphQL:
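A sketch of this command, built from the usage lines above (the script name cato_lumu.py and the placeholder values in angle brackets are assumptions):

python cato_lumu.py GraphQL -acc <CATO_ACCOUNTS_IDS> -ckey <CATO_API_KEY> -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID>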
S3 Source:
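A sketch under the same assumptions, with the global Lumu flags placed before the S3Bucket subcommand as in the general usage line:

python cato_lumu.py -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID> S3Bucket --aws_access_key_id <AWS_ACCESS_KEY_ID> --aws_secret_access_key <AWS_SECRET_ACCESS_KEY> --aws_region <AWS_REGION> --aws_bucket_name <AWS_BUCKET_NAME> --aws_bucket_folder <AWS_BUCKET_FOLDER>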
Build the .config file using the following syntax:
lumu_client_key=<LUMU-CLIENT-KEY>
lumu_collector_id=<LUMU-COLLECTOR-ID>
# S3Bucket Mode
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
aws_region=<AWS_REGION>
aws_bucket_name=<AWS_BUCKET_NAME>
aws_bucket_folder=<AWS_BUCKET_FOLDER>
# GraphQL mode
cato_accounts_ids=<ACCOUNT_ID(S)> # COMMA-SEPARATED IF THERE ARE MORE THAN ONE
cato_api_key=<CATO-API-KEY>
Then, use one of the following commands, depending on your deployment method:
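As a sketch only, and assuming the script picks up the .config file from <cato_lumu_root> when the corresponding flags are omitted, the commands would look like:

python cato_lumu.py GraphQL
python cato_lumu.py S3Bucket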
To redirect all the output from the execution process to a file, use the --logging file argument. The integration output will be stored in a file called lumu.log.
API KEY via GraphQL
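For example, under the same assumptions as the sketches above:

python cato_lumu.py GraphQL -acc <CATO_ACCOUNTS_IDS> -ckey <CATO_API_KEY> -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID> --logging file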
S3 Source
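For example:

python cato_lumu.py -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID> --logging file S3Bucket --aws_access_key_id <AWS_ACCESS_KEY_ID> --aws_secret_access_key <AWS_SECRET_ACCESS_KEY> --aws_region <AWS_REGION> --aws_bucket_name <AWS_BUCKET_NAME> --aws_bucket_folder <AWS_BUCKET_FOLDER>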
Since the script runs as a daemon process, it's recommended to set this flag; the information stored in lumu.log is useful for tracing progress and troubleshooting.
The script is intended to be used as a daemon process, so it is recommended to run it with complementary tools like nohup. Use the following lines as examples:
If you are using a Python virtual environment
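A sketch, assuming the virtual environment lives in <cato_lumu_root>/venv and GraphQL mode is used:

nohup venv/bin/python cato_lumu.py GraphQL -acc <CATO_ACCOUNTS_IDS> -ckey <CATO_API_KEY> -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID> --logging file &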
If you are NOT using a Python virtual environment
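A sketch under the same assumptions, using the system Python interpreter instead:

nohup python3 cato_lumu.py GraphQL -acc <CATO_ACCOUNTS_IDS> -ckey <CATO_API_KEY> -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID> --logging file &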
To identify failures in the script execution, use the -v flag to increase the verbosity of the output.
If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a docker container, locate yourself in the <cato_lumu_root> folder, and follow these instructions:
1. To build the container, run the following command. Change all the flags based on the reference given in the script section above.
GraphQL
docker build --build-arg cato_source='GraphQL' --build-arg cato_accounts_ids='xxx' --build-arg cato_api_key='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cato .
S3Bucket
docker build --build-arg cato_source='S3Bucket' --build-arg aws_access_key_id='xxx' --build-arg aws_secret_access_key='xxx' --build-arg aws_region='xxx' --build-arg aws_bucket_name='xxx' --build-arg aws_bucket_folder='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cato .
Do not forget the dot "." at the end of the line.
2. To run the container, run the following command:
docker run -d --restart unless-stopped --name lumu-cato python-lumu-cato
For troubleshooting purposes, you can run the following commands:
To log in to your container using an interactive shell:
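For instance, assuming a shell is available inside the image and using the container name defined in the run command above:

docker exec -it lumu-cato /bin/bash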
To collect integration logs:
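For instance:

docker logs -f lumu-cato

If the --logging file option was set, the lumu.log file inside the container can also be inspected from the interactive shell.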