In this article, you will find out how to configure your Cato Networks subscription and its Lumu Custom Data Collection integration to pull, transform, and inject the firewall (FW) logs recorded by Cato Networks into Lumu, enhancing your organization's detection and response capabilities.
There are 2 ways to collect Cato Networks logs:
To create an API Key for collecting events using the Cato GraphQL integration, go to the Cato Networks portal and follow these directions:
a. Enter a distinctive Key Name.
b. Set the API Permission to View.
c. Set the Allow access from IPs configuration based on your security needs. You must know the public IP address your integration host will use to set this property. If you are not sure, leave its value as Any IP.
d. Set the Expired at value based on your security policies. We encourage you to define this date to no less than 180 days in the future.
e. When finished, click on the Apply button to save your API key.
To configure your Cato deployment to store your logs in an AWS S3 bucket, first set up an AWS S3 bucket following the directions given in the Configuring the AWS S3 Bucket section of the Integrating Cato Events with AWS S3 article. Take note of the bucket data; it will be needed for setting up your Cato deployment and later the integration. Then, log in to your Cato Networks portal and follow these directions:
a. Select Amazon S3 as the Integration.
b. Give the integration a distinctive Name.
c. Enter the Bucket Name as created in AWS.
d. Enter the Folder if you defined one in AWS.
e. Select the Region where you created your AWS S3 bucket.
f. Enter the Role ARN as you created it in AWS IAM.
g. When finished, click on the Apply button to save your new event integration.
In the package, you will find the script required to run the integration. To use the script, you must work from the folder selected for deployment (<cato_lumu_root>). Specific directions are included in the next sections.
If you are running other Python scripts on the selected host, it’s recommended to create a virtual environment to preserve the integrity of other tools. To do so, follow these steps:
1. Using a command-line tool, navigate to the <cato_lumu_root> folder
2. Create the virtual environment
3. Activate the virtual environment (a sketch of both commands follows these steps)
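A minimal sketch of steps 2 and 3 on Linux or macOS, assuming the virtual environment is created in a folder named venv (the folder name is an arbitrary choice):

python3 -m venv venv
source venv/bin/activate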
The file requirements.txt contains the list of requirements for this integration. After deploying the package locally, run the following command from the deployment folder:
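For example, assuming pip is available in the active environment:

pip install -r requirements.txt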
As described above, there are 2 ways to collect logs; accordingly, there are 2 sets of CLI commands to run the integration: one for the API key (GraphQL) option and one for the AWS S3 bucket option.
To use the script, you must work from the folder selected for deployment (<cato_lumu_root>). Use the following command to show all options available for the package:
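For instance, assuming the entry script is named cato_lumu.py (the exact file name may differ in your package):

python3 cato_lumu.py --help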
usage: cato_lumu [-h] -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID [-v] [-l {screen,file}] {GraphQL,S3Bucket}
Options | Description
---|---
-h, --help | Show this help message and exit
-key LUMU_CLIENT_KEY, --lumu_client_key LUMU_CLIENT_KEY | Lumu Client key for the collector
-cid LUMU_COLLECTOR_ID, --lumu_collector_id LUMU_COLLECTOR_ID | Lumu Collector ID
-l {screen,file}, --logging {screen,file} | Logging option (default: screen)
-v, --verbose | Verbosity level
usage: cato_lumu GraphQL [-h] -acc CATO_ACCOUNTS_IDS -ckey CATO_API_KEY -key LUMU_CLIENT_KEY -cid LUMU_COLLECTOR_ID [-v] [-l {screen,file}]
Options | Description
---|---
-h, --help | Show this help message and exit
-acc CATO_ACCOUNTS_IDS, --cato_accounts_ids CATO_ACCOUNTS_IDS | Cato account IDs, e.g. 8012 or 8013,8012,8015
-ckey CATO_API_KEY, --cato_api_key CATO_API_KEY | Cato API key for querying GraphQL
usage: cato_lumu S3Bucket [-h] --aws_access_key_id AWS_ACCESS_KEY_ID --aws_secret_access_key AWS_SECRET_ACCESS_KEY --aws_region AWS_REGION --aws_bucket_name AWS_BUCKET_NAME --aws_bucket_folder AWS_BUCKET_FOLDER [--aws_s3_obj_last_updated AWS_S3_OBJ_LAST_UPDATED]
Options | Description
---|---
-h, --help | Show this help message and exit
--aws_access_key_id AWS_ACCESS_KEY_ID | AWS access key
--aws_secret_access_key AWS_SECRET_ACCESS_KEY | AWS secret key
--aws_region AWS_REGION | AWS region
--aws_bucket_name AWS_BUCKET_NAME | AWS S3 bucket name
--aws_bucket_folder AWS_BUCKET_FOLDER | AWS S3 bucket folder
--aws_s3_obj_last_updated AWS_S3_OBJ_LAST_UPDATED | Optional. The UTC datetime (as a string) from which to start collecting, e.g. 2023-08-08 16:20:00
Run one of the following commands, depending on the collection method you chose, to poll the Cato Networks logs and push them into the Lumu custom data collector.
API KEY via GraphQL:
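A sketch of the full command, assuming the entry script is named cato_lumu.py (replace the angle-bracket placeholders with your own values):

python3 cato_lumu.py GraphQL -acc <CATO_ACCOUNTS_IDS> -ckey <CATO_API_KEY> -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID>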
S3 Source:
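A sketch under the same assumptions; here -key and -cid are placed before the subcommand, following the top-level usage shown earlier:

python3 cato_lumu.py -key <LUMU_CLIENT_KEY> -cid <LUMU_COLLECTOR_ID> S3Bucket --aws_access_key_id <AWS_ACCESS_KEY_ID> --aws_secret_access_key <AWS_SECRET_ACCESS_KEY> --aws_region <AWS_REGION> --aws_bucket_name <AWS_BUCKET_NAME> --aws_bucket_folder <AWS_BUCKET_FOLDER>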
Build the .config file using the following syntax:
lumu_client_key=<LUMU-CLIENT-KEY>
lumu_collector_id=<LUMU-COLLECTOR-ID>

# S3Bucket Mode
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
aws_region=<AWS_REGION>
aws_bucket_name=<AWS_BUCKET_NAME>
aws_bucket_folder=<AWS_BUCKET_FOLDER>

# GraphQL mode
cato_accounts_ids=<ACCOUNT_ID(S)> # COMMA-SEPARATED IF THERE IS MORE THAN ONE
cato_api_key=<CATO-API-KEY>
Then, use one of the following commands, depending on your deployment method:
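For example, assuming the script reads the .config file from <cato_lumu_root> so the credentials no longer need to be passed as flags (use ./venv/bin/python3 instead of python3 if you created a virtual environment):

python3 cato_lumu.py GraphQL
python3 cato_lumu.py S3Bucket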
To redirect all the output from the execution process to a file, use the --logging file argument. The integration output will be stored in a file called lumu.log.
API KEY via GraphQL
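A minimal sketch, assuming the entry script is cato_lumu.py and the .config file from the previous section provides the credentials:

python3 cato_lumu.py GraphQL --logging file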
S3 Source
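The equivalent sketch for the S3 bucket option, under the same assumptions:

python3 cato_lumu.py S3Bucket --logging file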
Setting this flag is recommended: since the script runs as a daemon process, the information stored in the file lumu.log is useful for tracing progress or troubleshooting.
The script is intended to be used as a daemon process. It is recommended to run it with complementary tools like nohup. Use the following lines as examples:
If you are using a Python virtual environment
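For instance, assuming the virtual environment created earlier is located at <cato_lumu_root>/venv and the GraphQL mode is used (swap in S3Bucket and its flags as needed):

nohup ./venv/bin/python3 cato_lumu.py GraphQL --logging file &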
If you are NOT using a Python virtual environment
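Under the same assumptions, using the system interpreter:

nohup python3 cato_lumu.py GraphQL --logging file &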
If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a Docker container, navigate to the <cato_lumu_root> folder and follow these instructions:
1. To build the container, run the following command. Change all the flags based on the reference given in the script section above.
GraphQL
docker build --build-arg cato_source='GraphQL' --build-arg cato_accounts_ids='xxx' --build-arg cato_api_key='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cato .

S3Bucket
docker build --build-arg cato_source='S3Bucket' --build-arg aws_access_key_id='xxx' --build-arg aws_secret_access_key='xxx' --build-arg aws_region='xxx' --build-arg aws_bucket_name='xxx' --build-arg aws_bucket_folder='xxx' --build-arg lumu_client_key='xxx' --build-arg lumu_collector_id='xxx' --tag python-lumu-cato .
Do not forget the dot "." at the end of the line
2. To run the container, run the following command:
docker run -d --restart unless-stopped --name lumu-cato python-lumu-cato
To log in to your container using an interactive shell:
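For example (the shell available inside the image is an assumption; use /bin/sh if /bin/bash is not present):

docker exec -it lumu-cato /bin/bash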
To collect integration logs:
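For example, to stream the container's standard output:

docker logs -f lumu-cato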