Requirements

Perform the following actions to deploy the necessary resources.

Get UNX-OBP Artifacts

Download and extract the contents of the UNX-OBP package.

Update Terraform Variables

  1. Create an EC2 key pair in the AWS region where you will deploy, and store the private key locally:
export AWS_REGION="us-east-1"
aws ec2 create-key-pair --key-name aws --query 'KeyMaterial' --output text > aws.pem
  2. Update the "key_pair_name" variable to match your "--key-name" value, excluding the ".pem" or ".ppk" file extension.

  3. Restrict possible connections to the EC2 instance by updating the "my_client_pubnets" variable to include the client IP address(es) or CIDR block(s) from which you will connect (see the IP lookup snippet after this list).

  4. Update the "environment", "project", and "owner" variables as appropriate for your deployment.
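If you're not sure of your client's public IP address, one quick way to look it up is AWS's check service; append /32 to the returned address to allow just that single host:

curl -s https://checkip.amazonaws.com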

You can modify the terraform/variables.tf file directly, or you can set all these values at once by creating a terraform.tfvars file at the root of the terraform folder:

touch terraform/terraform.tfvars

and passing in your variable assignments. An example terraform.tfvars is:

key_pair_name = "aws"
my_client_pubnets = [
  "1.2.3.0/24",
  "4.5.6.0/24",
]
environment = "test"
project = "unx-obp"
owner = "CHANGEME"

By default, this will deploy to the us-east-1 AWS region. To change that, adjust the "region" variable in variables.tf, or override it in terraform.tfvars as shown below.
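For example, to override it in the terraform.tfvars created above (us-west-2 is just an illustration):

echo 'region = "us-west-2"' >> terraform/terraform.tfvars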

If you need to deploy to AWS GovCloud (US), you must also update the "region" variable accordingly (e.g., us-gov-west-1) and the Assume Role STS Endpoint noted in the Prepare NiFi section below.

Create and Store API Credentials in Secrets Manager

We are aware that you can no longer sign up for a Community Edition RiskIQ PassiveTotal account: RiskIQ was acquired by Microsoft, and the only option for new accounts is now Microsoft Defender for Threat Intelligence. If you have an existing RiskIQ PassiveTotal account, you can follow along below; if you don't, simply disregard that part of the guide.

You will create two secrets: one for the Censys.io API and one for the RiskIQ PassiveTotal API. You will need at least a free community edition account on each service.

See the Create an AWS Secrets Manager secret guide for the full, up-to-date process of interacting with AWS Secrets Manager. For our process, note the following:

Regarding API quotas: an enterprise-level subscription to these services is recommended, but community/free accounts should work fine if you're just trying this out.

Importantly, the pipeline will continue to function even if one or both API quotas are reached, but the output will be less useful. See Error Handling and Monitoring for more information about discovering when enrichment errors occur.
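As a sketch of the CLI approach (the secret names and JSON key names below are assumptions, not values the deployment is known to expect; match whatever the enrichment Lambda functions actually look up):

# Sketch only: secret names and key names are assumptions
aws secretsmanager create-secret \
    --name "unx-obp/censys-api" \
    --secret-string '{"api_id": "YOUR_CENSYS_API_ID", "api_secret": "YOUR_CENSYS_API_SECRET"}'

aws secretsmanager create-secret \
    --name "unx-obp/passivetotal-api" \
    --secret-string '{"username": "YOUR_PT_USERNAME", "api_key": "YOUR_PT_API_KEY"}'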

Build Lambda Deployment Packages

In the "scripts" folder, run:

./update_lambda_deployment_packages.sh

This will build fresh Lambda deployment packages with the latest required Python libraries and move them into the appropriate terraform directory.

You can re-run this script any time you update the Lambda function code.

Note: if you add any other non-standard Python libraries, you will need to modify this bash script to include them for the relevant function(s).
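For reference, the general pattern such a build script follows looks like the sketch below; the function name and the requests library are placeholders, not the script's actual contents:

# Sketch of the packaging pattern (placeholder names throughout)
FUNC="example_enrichment"                     # hypothetical function name
BUILD_DIR="$(mktemp -d)"

pip3 install --target "$BUILD_DIR" requests   # vendor each third-party library
cp "${FUNC}.py" "$BUILD_DIR"/                 # include the function code itself

(cd "$BUILD_DIR" && zip -r9 "${OLDPWD}/${FUNC}.zip" .)   # zip the package contents
mv "${FUNC}.zip" ../terraform/                # move it where Terraform expects it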

Deploy Infrastructure with Terraform

Make sure appropriate AWS credentials are available to the shell; consult Configure the AWS provider for the supported ways to authenticate. I recommend a screen session with the appropriate environment variables set.
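For example, with environment variables (values here are placeholders):

export AWS_ACCESS_KEY_ID="AKIA..."        # placeholder
export AWS_SECRET_ACCESS_KEY="..."        # placeholder
export AWS_SESSION_TOKEN="..."            # only when using temporary credentials
export AWS_REGION="us-east-1"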

In the "terraform" folder, run:

terraform init
terraform validate
terraform apply

Review the plan, type 'yes' and hit Enter.

This will deploy the solution's AWS resources, including the NiFi EC2 instance, the incoming and outgoing SQS queues, the main S3 bucket, the enrichment Lambda functions, and the Elasticsearch domain with Kibana (see the Terraform outputs for their identifiers).

Wait for it to complete; it can take about 20-30 minutes. Once the Terraform apply is complete, give it a few extra minutes for the EC2 instance user data (startup) script to finish.

Make note of the Terraform outputs. You can always run "terraform output" to get these values. Do not include the quotes when you copy/paste them elsewhere.
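For example, to print a single output value without the surrounding quotes:

terraform output -raw es_kibana_endpoint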

SSH to EC2 Instance with SOCKS Configured

You'll access the NiFi Web UI and the Kibana Web UI via a dynamic port forward through the SSH session.

PuTTY

Command Line

ssh -i key_pair_name.pem -D28080 ec2-user@nifi_instance_public_dns

Set up and enable a SOCKS5 proxy for your browser (e.g., using FoxyProxy in Firefox), pointing it at localhost port 28080 to match the -D flag above.

Now in your browser you should be able to hit the NiFi Web UI via https://localhost:8443/nifi.

And you should be able to hit the Kibana Web UI via https://<es_kibana_endpoint>; see the Terraform outputs for the "es_kibana_endpoint" value.
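If you want to sanity-check the tunnel before configuring the browser, you can test through the SOCKS proxy with curl (-k skips certificate validation, on the assumption that NiFi is serving a self-signed certificate):

curl -sk --socks5-hostname localhost:28080 https://localhost:8443/nifi/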

Prepare Kibana

In the "artifacts" folder you will find a file, unx-obp-public-kibana-saved-objects-export-<date>-<revision>.ndjson, of exported Kibana Saved Objects including index patterns, saved searches, visualizations, and dashboards.

Import this file in the Kibana interface from Menu (top left) via Stack Management > Saved Objects > Import.
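Alternatively, if you prefer the API to the UI, the same file can be loaded through Kibana's Saved Objects import endpoint. This is only a sketch: the endpoint path and any authentication depend on your Kibana deployment, so adjust accordingly.

# Sketch: endpoint path and auth depend on your Kibana deployment
curl -sk --socks5-hostname localhost:28080 \
    -X POST "https://<es_kibana_endpoint>/api/saved_objects/_import" \
    -H "kbn-xsrf: true" \
    -F "file=@unx-obp-public-kibana-saved-objects-export-<date>-<revision>.ndjson"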

Reload/refresh the page.

In Stack Management > Index Patterns, open the unx-obp-alert-* index pattern, then search for and edit the "anchor.full.uid" field. Update the "URL Template" with the value of the "anchor_full_uid_url_template" Terraform output, and save the field.

Navigate to the Dashboards page.

Prepare NiFi

Log in to the NiFi Web UI (credentials are generated automatically and should have been displayed when you connected to the EC2 instance). If you don't see them, run cat /etc/motd.

Upload the provided NiFi Template file, unx-obp-public-nifi-dataflow-smb-getfile-<date>-<revision>.xml, from the "artifacts" folder. Right click the canvas > Upload template.

Load the template by dragging the Template icon from the top navigation bar onto the canvas.

In the UNX-OBP INPUT Processor Group...

  1. Adjust the Queue URL and Region properties of the "Send to Analytic Incoming Queue" processor to the appropriate values for this deployment. Queue URL should be set to the URL of the Incoming queue. See the Terraform output value of "sqs_incoming_queue_url".

In the UNX-OBP OUTPUT Processor Group...

  1. Adjust the Queue URL and Region properties of the "Receive from Analytic Outgoing Queue" processor to the appropriate values for this deployment. Queue URL should be set to the URL of the Outgoing queue. See the Terraform output value of "sqs_outgoing_queue_url".

  2. Update the Bucket and Region properties of the PutS3Object processor to the appropriate values for this deployment. See the Terraform output value of "unx_obp_main_bucket".

If you're using AWS GovCloud, you will need to update the Assume Role STS Endpoint property in the AWSCredentialsProviderControllerService Controller Service (mouse-over the question mark next to the property for more information).

Enable all associated Controller Services from the main canvas (NiFi Flow view). Right click the canvas > Enable all controller services.

Wait for a refresh, or right click the canvas and Refresh.

Confirm no processors are in an invalid state (see NiFi User Interface for more information).

Start the entire imported dataflow via the Operate Palette (on the left side).

Generate Traffic

From the EC2 instance, sudo su - to root; you can then use the generate_todays_protocol_traffic.py script to generate pseudo-protocol traffic from the provided PSV file.

You can generate traffic in bulk to a local SiLK binary file, which you can then move into the /root/input directory. NiFi will automatically pick up and process files in that directory and send records on to the SQS queue.

python3 generate_todays_protocol_traffic.py -p smb -f smb-base-traffic-adj.psv --silk

Remember to move the generated SiLK file to the /root/input directory.
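For example (the filename is a placeholder; substitute whatever the script actually wrote):

mv ./<generated-silk-file> /root/input/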

Alternatively, you can generate traffic in real time directly to SQS, bypassing NiFi altogether.

python3 generate_todays_protocol_traffic.py -p smb -f smb-base-traffic-adj.psv \
    -q <sqs_incoming_queue_url>

Refresh and review NiFi dataflow for basic stats and any errors.

Review alert outputs in Kibana via the "UNX-OBP Alerts Overview" dashboard. Review allowlisted hits via the "UNX-OBP Allowlisted Overview" dashboard.

If you are not seeing anything populate the dashboards, see Error Handling and Monitoring.

Run the script with -h to see further options.