Cisco Secure Access Custom Data Collection Integration

In this article, you will find out how to configure your Cisco Secure Access tenant and the Lumu Custom Data Collection integration to pull, transform, and inject the DNS and Web logs recorded by Cisco Secure Access into Lumu to enhance your organization's detection and response capabilities.

Requirements

  • An active Cisco Secure Access subscription.
    • A Cisco Secure Access administrator user.
  • Lumu Custom Collector API configuration for DNS/Web Logs.
    • A Lumu custom collector ID and client key are required to set up the collection process. Information on how to create a custom collector in your Lumu portal can be found in Manage Custom Collectors.
    • To collect Cisco DNS logs, set up a Lumu DNS Query Collector; to collect Cisco Web logs, set up a Lumu Proxy Entry Collector.
  • Script host.
    • A Docker-enabled host is required to deploy the integration. This host must have Internet visibility over the Lumu Custom Collector API and the Cisco Secure Access API endpoints.
  • Script package.
    • Contact the Lumu support team to request the package we created to deploy the required files.

Contacted hosts

Ensure your script host can communicate with the following hosts. These are required for the operation of this integration.

  • Your Cloud Provider Storage
  • api.lumu.io
  • docker.io
  • ghcr.io
  • *.ubuntu.com
  • *.launchpad.net
  • canonical.com
  • debian.org
  • *.debian.org
  • debian-security.org
  • pypi.python.org
  • pypi.org
  • pythonhosted.org
  • files.pythonhosted.org
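As a quick sanity check before deploying, you can verify that the listed hosts resolve from the script host. The snippet below is an illustrative helper (the function name `check_hosts` is hypothetical, not part of the integration package):

```python
import socket

def check_hosts(hosts):
    """Return a dict mapping each host to True if its name resolves via DNS."""
    results = {}
    for host in hosts:
        # Wildcard entries such as "*.ubuntu.com" cannot be resolved directly;
        # strip the wildcard prefix and test the base domain instead.
        name = host.lstrip("*.")
        try:
            socket.gethostbyname(name)
            results[host] = True
        except OSError:
            results[host] = False
    return results

if __name__ == "__main__":
    required = ["api.lumu.io", "docker.io", "ghcr.io", "pypi.org", "*.debian.org"]
    for host, ok in check_hosts(required).items():
        print(f"{host}: {'resolvable' if ok else 'NOT resolvable'}")
```

Name resolution alone does not guarantee connectivity through a proxy or firewall, but it catches the most common DNS misconfiguration early.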

Integration overview

Lumu's Custom Data Collection integration with Cisco Secure Access reads the logs that Cisco Secure Access pushes to AWS S3, transforms them into Lumu events, and sends them to the Lumu Cloud.
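Conceptually, the integration keeps a local marker for the last processed S3 object and only collects objects written after it (the `make validate` troubleshooting command later in this article inspects exactly this state). The sketch below illustrates the idea with a hypothetical `pending_objects` helper; it is not the integration's actual code:

```python
def pending_objects(object_keys, marker):
    """Return the object keys that sort after the stored marker.

    Cisco Secure Access writes log objects under time-ordered key names,
    so a lexicographic comparison approximates "newer than last processed".
    Illustrative sketch only.
    """
    return sorted(key for key in object_keys if key > marker)

# Hypothetical key layout: <parent_folder>/<child_folder>/<timestamped object>
keys = [
    "parent-folder/dnslogs/2024-06-01-10-00.csv.gz",
    "parent-folder/dnslogs/2024-06-01-10-10.csv.gz",
    "parent-folder/dnslogs/2024-06-01-10-20.csv.gz",
]
marker = "parent-folder/dnslogs/2024-06-01-10-00.csv.gz"
print(pending_objects(keys, marker))  # the two objects written after the marker
```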

Preliminary Setup - Cisco Secure Access

To set up the integration, you must prepare your Cisco Secure Access instance to communicate with the Lumu integration. To do this, you need to complete the following tasks:

  • Activate logs storage with Cisco-managed S3
  • Enable logging for access policy rules

The following sections will guide you through these tasks.

Activate the Cisco-managed S3 log storage

Perform the following steps to activate the Cisco-managed S3 log storage:

1. Navigate to Admin using the menu on the left. Then, go to Management and select Log Management.

2. In the Amazon S3 section, select Use Cisco-managed Amazon S3 storage. Then, select the appropriate region, and define the retention period that aligns with your organization's security policies. When finished, click Save.

3. Once you save the changes, a window with the credentials will pop-up. Save the Data path, Access Key, and Secret Key values. Keep them at hand for later use during the Set up the configuration files step. When finished, click CONTINUE.

Notes The Data Path is an S3 URI that includes the Bucket Name and an identifier, which will serve as the Parent folder. Extract the Bucket name and Parent folder as follows:
  • Bucket name: cisco-managed-us-west-1
  • Parent folder: 8370364_935aa3f0******
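The extraction described in the note above amounts to splitting the S3 URI on its first path separator. As an illustration (the Data path below is a made-up example; use the value from your own credentials pop-up):

```python
def parse_data_path(data_path):
    """Split a Cisco-managed S3 Data path URI into (bucket, parent_folder)."""
    without_scheme = data_path.removeprefix("s3://")
    bucket, _, parent = without_scheme.partition("/")
    return bucket, parent.rstrip("/")

# Illustrative Data path, not a real one.
bucket, parent = parse_data_path("s3://cisco-managed-us-west-1/1234567_abcdef")
print(bucket)  # cisco-managed-us-west-1
print(parent)  # 1234567_abcdef
```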

4. In the confirmation window, activate the Log Https Queries and Include Headers options.

Notes Please note that there may be a delay in generating and exporting the logs to the S3 storage; allow some time for them to show up in the Cloud storage.

Enable logging for access policy rules

After enabling log storage, it is necessary to enable logging for the desired access policy rule.

1. Navigate to Secure in the left menu. Then, go to Policy > Access Policy to access the list of rules.

2. Open the configuration menu for the rule you want to enable logging for, then click Edit.

Notes It is highly recommended to enable logging for rules that match your outgoing network traffic.
Alert To successfully log HTTPS traffic, please ensure your security profile is configured to permit traffic decryption.

3. Turn on the Log Request toggle. Then, click Save.

Preliminary setup - Lumu portal

The integration setup process requires you to collect the following information from your Lumu portal:

  • Lumu Collector key
  • Lumu Collector ID
  • Lumu Company UUID

Log in to your Lumu portal and run the following procedures to collect this data.

Collect your Lumu Collector Key

To collect the Lumu Collector key, please refer to the Collector key document.

Collect your Lumu Collector ID

To collect the Lumu Collector ID, please refer to the Collector ID document.

Collect your Lumu Company UUID

To collect your Lumu company UUID, log in to your Lumu portal. Once you are in the main window, copy the string below your company name.

Preliminary Setup - Prepare your integration environment

Notes Before starting, ensure your integration environment can communicate with the hosts listed in the Contacted Hosts section.

The integration is deployed in a Docker environment; therefore, adhere to the subsequent guidance to prepare the hosting environment.

  • Run it as a Docker container.
    • By using the Makefile model (Unix-based systems).
    • By using Docker commands (Unix-based systems and Docker Desktop for Windows).

Whichever alternative you select, you must unpack the integration package shared by our Support team.

Unpack the deployment package provided by Lumu in your preferred path/folder. Keep in mind this location, as it will be required for further configurations. From now on, we will refer to this folder as <app_lumu_root>.

Prepare Docker in your environment

You must follow the Docker installation documentation that corresponds to your OS. Ensure you follow the Post-installation steps for Linux before deploying the integration.

Notes For Windows users, follow the Install Docker Desktop for Windows documentation to install the Docker Engine.

Set up the configuration files

You need to add and edit the integrations.toml configuration file to set up the integration.

Notes You will find the integrations_template.toml sample file inside the integrations package. Use it to build your configuration file.
Notes All the placeholder parameters should be replaced with the real data needed for your integration deployment. For example, the "COMPANY_UUID_HERE" placeholder should end up as something similar to "aa11bb22bb33-123a-456b-789c-11aa22bb33cc". Follow these indications for all similar parameters.

The integrations.toml file contains the information required by the integration to collect the network activity data from your Cisco Secure Access console, transform it, and send it to the Lumu Cloud.

[[integration]]
[integration.lumu]
uuid = "COMPANY_UUID_HERE"
collector_id = "COLLECTOR_ID_HERE"
collector_key = "COLLECTOR_KEY_HERE"

[integration.app]
name = "UNIQUE_APP_NAME_HERE"

[integration.app.storage]
name = "S3_BUCKET_NAME_HERE" # e.g., "cisco-managed-us-west-1"
parent_folder = "S3_CISCO_PARENT_FOLDER_ID_HERE" # for Cisco-managed S3, use the Parent folder ID extracted from the Data path
child_folder = "CISCO_LOG_TYPE_HERE" # e.g., "dnslogs", "proxylogs"

[integration.app.api]
aws_access_key_id = "AWS_ACCESS_KEY_ID_HERE"
aws_secret_access_key = "AWS_SECRET_ACCESS_KEY_HERE"
aws_region = "AWS_REGION_HERE" # e.g., "us-west-1"

Replace the placeholders with the values you collected during the preliminary setup steps. The inline comments indicate the expected format of each value.

Alert You must fill in the configuration data carefully. If there are any mistakes or missing data, you’ll receive errors. Please refer to the Troubleshooting and known issues section at the end of this article for further reference.
Notes To collect additional dnslogs or proxylogs, you must configure them as separate [[integration]] instances within the configuration file, ensuring that the child_folder parameter is updated to reflect the desired log type.
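For instance, a configuration file that collects both DNS and Web logs could look like the following sketch. All values are placeholders, and the app names ("cisco-sa-dns", "cisco-sa-proxy") are hypothetical examples; each instance uses its own Lumu collector:

```toml
[[integration]]
[integration.lumu]
uuid = "COMPANY_UUID_HERE"
collector_id = "DNS_COLLECTOR_ID_HERE"
collector_key = "DNS_COLLECTOR_KEY_HERE"

[integration.app]
name = "cisco-sa-dns"

[integration.app.storage]
name = "S3_BUCKET_NAME_HERE"
parent_folder = "S3_CISCO_PARENT_FOLDER_ID_HERE"
child_folder = "dnslogs"

[integration.app.api]
aws_access_key_id = "AWS_ACCESS_KEY_ID_HERE"
aws_secret_access_key = "AWS_SECRET_ACCESS_KEY_HERE"
aws_region = "AWS_REGION_HERE"

[[integration]]
[integration.lumu]
uuid = "COMPANY_UUID_HERE"
collector_id = "PROXY_COLLECTOR_ID_HERE"
collector_key = "PROXY_COLLECTOR_KEY_HERE"

[integration.app]
name = "cisco-sa-proxy"

[integration.app.storage]
name = "S3_BUCKET_NAME_HERE"
parent_folder = "S3_CISCO_PARENT_FOLDER_ID_HERE"
child_folder = "proxylogs"

[integration.app.api]
aws_access_key_id = "AWS_ACCESS_KEY_ID_HERE"
aws_secret_access_key = "AWS_SECRET_ACCESS_KEY_HERE"
aws_region = "AWS_REGION_HERE"
```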

Deploy using the Makefile model

Lumu introduced the Makefile model to assist customers in deploying the integration as a Docker container. To deploy the integration, locate yourself in the <app_lumu_root> folder and run the following command:

make run-build
Notes Please monitor the console output for any unexpected errors. Fix them based on the command output and run the command again.

Deploy as a Docker container (Optional)

If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a Docker container, locate yourself in the <app_lumu_root> folder, and follow these instructions:

1. To build the container image, run the following command. You can adjust the --tag value if you prefer a different image name.

docker build --tag img-cisco-secure-access-collection --file Dockerfile .
Notes Do not forget the dot "." at the end of the line.

2. To run the container, run the following command:

docker run -v ${PWD}/integrations.toml:/app/integrations.toml -v ${PWD}/data:/app/data -d --restart unless-stopped --log-driver json-file --log-opt max-size=30m --log-opt max-file=3 --name lumu-cisco-secure-access-collection img-cisco-secure-access-collection

Expected results

After you configure the integration, you will see the processed events in the custom collector created in your Lumu portal. The integration will process events starting from 10 minutes before the integration activation time.

Troubleshooting

The commands defined in this section will allow you to troubleshoot the operation of your integration. Keep in mind that you must locate yourself in the <app_lumu_root> folder before running any of them.

Deployment via Makefile as a Docker container

The following are the troubleshooting commands for this deployment option:

  • Checking integration logs
    Run the following command to check your integration logs.
    make logs
  • Checking integration errors
    Run the following command to check errors in your integration.
    make errors
  • Check the current state of the integration against the state of the Cloud Storage objects
    Run the following command to see and validate the state of local markers, objects missing for processing, and the root-level path of the Cloud Provider Storage.
    make validate
  • Check the status of the integration
    Run the following command to check the status of the integration.
    make stats

  • Stopping the integration
    Run the following command if you need to stop the integration.
    make stop
  • Starting the integration
    Run the following command to start the integration.
    make start
  • Fixing issues with sudo for Docker
    If you cannot run Docker commands with your current user, run the following command.
    make docker-fix-sudo
Notes After running this command, the host machine should be rebooted.
  • Reinstalling integration from scratch
    Run the following command to reinstall the integration from scratch:
    make reset-force
  • Collecting and packaging logs for Lumu support
    Run the following command to collect and package the integration logs to share them with the Lumu support team. This command will create the support.tar package file that contains relevant information for the Lumu support team.
    make support

Deployment as a Docker container

For troubleshooting purposes, you can run the following commands:

  • Logging in to the container using an interactive shell
    docker exec -it lumu-cisco-secure-access-collection bash
  • Collecting integration logs
    docker logs -f lumu-cisco-secure-access-collection

Known issues

In this section, we cover the potential issues you may find after running the troubleshooting commands from the section above.

Building errors

Most issues when building the component are caused by network problems, such as a missing Docker network connection or temporarily unavailable repositories. Ensure your Docker host has working DNS resolution and a stable Internet connection.

You might receive errors such as failed download or connection reset.



Docker permission execution

If you get an error while building the integration related to docker: permission denied while trying to connect to the Docker daemon socket, run the make docker-fix-sudo command to fix this issue.



Input Validation

If you receive an input validation error, it means you are using wrong key parameters or values. Review the parameters entered during the Set up the configuration files step and run the integration again.

Authentication Failed

You will get an authentication error in the logs when the credentials are invalid. Ensure you are using the right credentials and try again.


Wrong Bucket Name

You will get an error in the logs when the bucket name is incorrect. Make sure you entered the correct storage name parameter and try again.


Bucket Path Not Found

You will get an error in the logs when the bucket path cannot be found. Make sure you entered the correct parent_folder and child_folder parameters and try again.


Network Connection Problems

You may encounter issues such as connection resets, read timeouts, and other errors associated with general network connection failures. Make sure you have a stable Internet connection and try again.




Permission Errors

An error of this nature, such as a 403 Forbidden response, often indicates that while the credentials are valid, the associated permissions are misconfigured. Please verify the assigned role and permissions.


Another instance is running

If you receive an error indicating another instance is running, stop the other instance and try again.

