This article shows how to leverage the Lumu Defender API and Elastic Security API to mitigate security risks.
Follow these steps to set up your Elastic Security console to work with Lumu integration.
For security reasons, it’s recommended to create a specific role with the required permissions. Follow these steps to create the lumu-integration role.
1. Expand the left hamburger menu, then click on the Stack management menu under the Management section.
2. Click on the Roles menu under the Security section.
3. Click on the Create role button.
4. Under the Create role window, fill in the required data using these guidelines:
a. Role name: lumu-integration
b. Add manage_own_api_key as a Cluster privilege
c. Add a Kibana privilege.
i. Spaces: All spaces
ii. Expand the Security section. Customize the Blocklist privilege to All
5. Finish by clicking the Create global privilege button.
After creating the limited role, you need to create a user for the integration and assign the lumu-integration role to it.
1. Expand the left hamburger menu, then click on the Stack management menu under the Management section.
2. Click on the Users menu under the Security section.
3. Click on the Create user button.
4. Under the Create user window, fill in the required data. You can leave the Full name and Email address fields blank.
5. Finish by clicking the Create user button.
To configure the integration, you need to generate a Personal API key for the user the integration is being set up for. To create this Personal API key, follow these steps while logged in to your Elastic console with the user you will use for the integration:
1. Expand the left hamburger menu, then click on the Stack management menu under the Management section.
2. Click on the API Keys menu under the Security section on the left navigation bar.
3. Click on the Create API key button.
4. Under the Create API key window, fill in the required data.
5. Finish by clicking the Create API key button.
The Elastic base URL is required for setting up the integration in later steps. You can extract this piece of data while logged into your Elastic console. Check your navigation bar and copy the part of the URL shown in the image.
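Once you have the base URL and the API key, every call to Elastic carries the key in an `ApiKey` Authorization header (Kibana endpoints also expect a `kbn-xsrf` header). The sketch below shows how the two pieces fit together; the deployment URL and the key value are placeholders, and the helper name is illustrative, not part of the shipped integration.

```python
from urllib.request import Request

def build_elastic_request(base_url: str, path: str, api_key: str) -> Request:
    """Build an authenticated request against a Kibana/Elastic endpoint.

    Elastic API keys are sent in an "ApiKey" Authorization header; Kibana
    APIs additionally expect a kbn-xsrf header on state-changing calls.
    """
    return Request(
        base_url.rstrip("/") + path,
        headers={"Authorization": f"ApiKey {api_key}", "kbn-xsrf": "true"},
    )

# Hypothetical values for illustration only; use your own base URL and key
req = build_elastic_request(
    "https://my-deployment.kb.us-east-1.aws.found.io:9243/",
    "/api/status",
    "<KIBANA_APIKEY>",
)
print(req.full_url)
```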
If you want to attach the pushed IOCs to a particular set of policies, you need to identify them and collect their UUIDs using your Elastic console.
1. Expand the left hamburger menu, then click on the Manage menu under the Security section.
2. In the Manage window, click on the Policies menu.
3. In the Policies window, click on the policy you want to attach the IOCs to.
4. From the navigation bar, extract the policy UUID as shown in the next image.
5. Repeat steps 3 and 4 for each policy.
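The UUID is the 8-4-4-4-12 hexadecimal segment embedded in the policy page URL. If you are collecting several of them, a small helper like this sketch can pull the UUID out of a copied URL; the URL shape shown is an assumed example, so copy the real one from your browser's address bar.

```python
import re

# Standard 8-4-4-4-12 hex UUID as it appears inside the policy URL
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def extract_policy_uuid(policy_url: str):
    """Return the first UUID found in a policy page URL, or None."""
    match = UUID_RE.search(policy_url)
    return match.group(0) if match else None

# Hypothetical URL shape for illustration
url = "https://example.kb.io/app/security/administration/policy/3f2504e0-4f89-41d3-9a0c-0305e82c3301/settings"
print(extract_policy_uuid(url))  # → 3f2504e0-4f89-41d3-9a0c-0305e82c3301
```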
The integration setup process requires you to collect the following information from your Lumu portal:
Log in to your Lumu portal and follow these procedures to collect the data.
To collect the Lumu Defender API key, please refer to the Defender API document.
To collect your Lumu company UUID, log in to your Lumu portal. Once you are in the main window, copy the string below your company name.
There are two environment options for deploying the script; select the one that best fits your current infrastructure.
- Creates a Python virtual environment and installs its dependencies for you
- Installs the crontab line on the host
Whichever alternative you select, you need to first unpack the integration package shared by our Support team.
Unpack the deployment package provided by Lumu in your preferred path/folder. Keep in mind this location, as it will be required for further configurations. From now on, we will refer to this folder as <app_lumu_root>.
To set up the integration, you need to add and edit two configuration files:
The companies file defines how the integration connects to Lumu and extracts incident information and the related indicators of compromise (IOCs).
```yaml
---
lumu:
  uuid: "<COMPANY-UUID>"
  [name: "<COMPANY-NAME>"]
  [contact_name: "<CONTACT_NAME>"]
  [contact_email: "<CONTACT_EMAIL>"]
  defender_key: "<DEFENDER_API_KEY>"
  hash_type: "<HASH_ALG>" # sha256 | sha1 | md5
  ioc_types: # list of IOC types; choose one, many, or all
    - ip
    - domain
    - url
    - hash
  adversary: # list of adversary types; choose one, many, or all
    - C2C
    - Malware
    - Mining
    - Spam
    - Phishing
  days: 30 # MIN 1, MAX 30
```
Within this file, the COMPANY-UUID and DEFENDER_API_KEY fields are mandatory; use the values captured in the previous steps. The ioc_types values must include the IOC types required by the integration, in this case, hash.
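Before running the integration, it can be useful to sanity-check a companies entry against these rules. The validator below is an illustrative sketch (not part of the shipped integration); field names mirror the template above, and the input is the already-parsed YAML as a Python dict.

```python
def validate_companies_entry(cfg: dict) -> None:
    """Raise ValueError if mandatory companies-file fields are missing.

    Sketch only: checks the two mandatory fields (uuid, defender_key)
    and that ioc_types includes "hash", as this integration requires.
    """
    lumu = cfg.get("lumu", {})
    missing = [field for field in ("uuid", "defender_key") if not lumu.get(field)]
    if missing:
        raise ValueError(f"missing mandatory field(s): {', '.join(missing)}")
    if "hash" not in lumu.get("ioc_types", []):
        raise ValueError("this integration pushes hashes; ioc_types must include 'hash'")

good = {"lumu": {"uuid": "abc", "defender_key": "xyz", "ioc_types": ["hash"]}}
validate_companies_entry(good)  # passes silently
```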
The integration file contains the information required for the integration to connect and interact with your Elastic deployment:
```yaml
---
lumu:
  uuid: "<COMPANY_UUID>"
  days: 30 # int; read incidents from the last X days of the IOC manager local db
app:
  clean: false # true | false
  ioc:
    - hash
  hash_type: sha256 # sha256 | sha1 | md5
api:
  KibanaUrl: "<KIBANA_FULL_URL_PORT>"
  ApiKey: "<KIBANA_APIKEY>"
  Policies: # multiple policies allowed; the global policy takes precedence; the default [] means the global policy
    - "policy:<POLICY_UUID1>"
    - "policy:<POLICY_UUID2>"
    - "policy:<POLICY_UUID3>"
    - "policy:all"
```
Within this file, the COMPANY_UUID, KIBANA_FULL_URL_PORT, and KIBANA_APIKEY fields are mandatory. If you want to attach the hashes to a particular set of Elastic policies, fill in the UUID information according to the procedure described in Identify and collect UUID from policies.
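The Policies list semantics can be sketched as follows, based on the comments in the template: an empty list means the global policy, and `policy:all` overrides any individual policy UUIDs. This interpretation is an assumption drawn from the config comments, not from the integration's source.

```python
def effective_policies(policies):
    """Resolve the Policies list from integrations.yml (illustrative sketch).

    Per the template comments: an empty list means the global policy,
    and "policy:all" takes precedence over individual policy UUIDs.
    """
    if not policies or "policy:all" in policies:
        return ["policy:all"]
    return list(policies)

print(effective_policies([]))                              # → ['policy:all']
print(effective_policies(["policy:1234", "policy:all"]))   # → ['policy:all']
print(effective_policies(["policy:1234"]))                 # → ['policy:1234']
```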
To deploy the integration as a script, you need to run the install.sh script inside the integration package.
To run the installation script, change to the <app_lumu_root> folder, then execute this line through the CLI.
The installation script will set up the Python environment and two different cron jobs.
To use the script, you must locate yourself on the path selected for deployment (<app_lumu_root>). Use the following command to show all options available for the package:
```
Usage: elastic_lumu [-h] [--config CONFIG] [--ioc-manager-db-path IOC_MANAGER_DB_PATH] [-v] [-l {screen,file}] [--hours HOURS]
```
| Option | Description |
|---|---|
| `-h, --help` | Show this help message and exit. |
| `--config CONFIG` | Config file path of the companies (default: `integrations.yml`); follow the YAML template above. |
| `--ioc-manager-db-path IOC_MANAGER_DB_PATH` | Path where the integration reads the Lumu incidents (default: `./db.sqlite`). |
| `--logging {screen,file}` | Logging option (default: `screen`). |
| `--verbose, -v` | Verbosity level. |
| `--hours HOURS` | Keep DB log records from the last X hours, for automatic maintenance of the local DB. |
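The options above map onto an argument parser roughly like this sketch. The flag names and defaults are taken from the usage line and table, but the parser itself is illustrative, not the real script's code.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of the documented elastic_lumu CLI surface; defaults taken
    # from the options table above, not from the shipped script.
    parser = argparse.ArgumentParser(prog="elastic_lumu")
    parser.add_argument("--config", default="integrations.yml",
                        help="config file path of the companies")
    parser.add_argument("--ioc-manager-db-path", default="./db.sqlite",
                        help="path where the integration reads the Lumu incidents")
    parser.add_argument("-l", "--logging", choices=["screen", "file"],
                        default="screen", help="logging option")
    parser.add_argument("-v", "--verbose", action="count", default=0,
                        help="verbosity level")
    parser.add_argument("--hours", type=int,
                        help="keep db log records from the last X hours")
    return parser

args = build_parser().parse_args(["--config", "companies.yml", "-v", "-l", "file"])
print(args.config, args.logging, args.verbose)  # → companies.yml file 1
```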
To query all the hashes related to Lumu incidents triggered in the last 30 days, run the following command.
To clean the existing records in Elastic, set the clean flag in the integrations.yml file to true.
Then, run the integration script as follows:
Depending on your needs, you can combine the examples shown. Adding the --logging {file,screen} and --verbose arguments can also help you understand what might be going wrong.
If you have a Docker environment, you can select this option to run the integration as a Docker process. To deploy and run your integration as a Docker container, change to the <app_lumu_root> folder and follow these instructions:
1. Build the container by running the following command.
2. Run the container by using the following command.
With this mode, your integration will run every 30 minutes.
For troubleshooting purposes, you can run the following commands:
To log in to your container using an interactive shell:
To collect integration logs:
After running the integration, if there were any detections with hash records, you will see new entries in the Blocklist section, like this:
To identify failures in the script execution, use the -v flag. The script execution log will show more detailed information.
The application logs will be redirected to lumu.log file. The file errors.log stores only the errors to make them easier to find and aid the troubleshooting process.
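The two-file layout described above can be sketched with Python's standard logging module: one handler captures everything into lumu.log, while a second handler with a higher threshold duplicates only errors into errors.log. This is an illustrative reconstruction, not the integration's actual logging code.

```python
import logging
import os
import tempfile

def setup_logging(log_dir: str) -> logging.Logger:
    """Sketch of the dual-file layout: lumu.log gets every record,
    errors.log only ERROR and above, to ease troubleshooting."""
    logger = logging.getLogger("lumu_integration")
    logger.setLevel(logging.DEBUG)
    logger.propagate = False  # keep records out of the root logger
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")

    all_handler = logging.FileHandler(os.path.join(log_dir, "lumu.log"))
    all_handler.setLevel(logging.DEBUG)
    all_handler.setFormatter(fmt)

    err_handler = logging.FileHandler(os.path.join(log_dir, "errors.log"))
    err_handler.setLevel(logging.ERROR)
    err_handler.setFormatter(fmt)

    logger.addHandler(all_handler)
    logger.addHandler(err_handler)
    return logger

log_dir = tempfile.mkdtemp()
log = setup_logging(log_dir)
log.info("pushed 10 hashes")            # lands only in lumu.log
log.error("Elastic API returned 401")   # lands in both files
```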
If you receive the following error, another instance may already be running. To check this, open the pid.pid file in the integration folder; this file stores the process ID while the integration is running.
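The manual check above can be automated with a small POSIX sketch: read the PID from the file and probe it with signal 0, which tests for process existence without sending anything. The function name and behavior are illustrative, not part of the shipped integration.

```python
import os

def another_instance_running(pid_file: str = "pid.pid") -> bool:
    """Return True if the PID stored in pid_file belongs to a live process.

    POSIX sketch: os.kill(pid, 0) raises ProcessLookupError when the
    process is gone, and PermissionError when it exists but is owned
    by another user.
    """
    try:
        with open(pid_file) as handle:
            pid = int(handle.read().strip())
    except (FileNotFoundError, ValueError):
        return False  # no pid file, or unreadable contents
    try:
        os.kill(pid, 0)  # signal 0: existence check only
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True
```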