IMPORTANT NOTE:
This article will be deprecated by February 28th, 2025. For all API Co-Signer documentation, visit this overview article, which also links to updated Co-Signer content in the Developer Portal.
Introduction
The Fireblocks customer API Co-Signer is a server hosted on your company's premises that holds one or more key shares for your workspaces. This setup is part of the Fireblocks MPC architecture.
Transaction requests and configuration changes are approved using signatures from all three Co-Signers: Fireblocks hosts two on our cloud premises, and your company hosts one on yours. Because the Co-Signer is a key component of transaction signing, you must monitor the health and status of its server and software components for any issues.
Splunk is a monitoring and alerting platform that you can integrate with many kinds of servers. You can configure it to read server files (specifically log files), index them on a Splunk machine in a unified, searchable log database, and configure alerts based on those logs.
By integrating Splunk with your API Co-Signer, you can easily index the API Co-Signer's logs. This lets you:
- Monitor the API Co-Signer’s status.
- Generate alerts based on the logs.
- Add additional health indicators that the cloud provider’s tool suite might not provide.
You can also process the logs and derive different fields from the log messages based on your specific needs and standards.
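For example, once the Co-Signer log described later in this article is being indexed, a saved search along the following lines could drive an alert on error entries. The index, source path, and the "ERROR" keyword are assumptions to adapt to your own setup:
index=* source="/databases/cosigner/log/customer_cosigner.log" "ERROR" | stats count by host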
Prerequisites
To index API Co-Signer logs in Splunk, you must first:
- Create an account on Splunk.
- Set up a machine to house a Splunk instance. It can be cloud-based or on-premises.
- Set up access to the Splunk machine through port 9997 (via the internet or a local network).
- You can use a different port if you prefer.
- Set up an Azure Co-Signer machine.
Note
There are no special requirements or implementation demands. You can configure the Splunk instance and the Splunk Universal Forwarder (described later) however you like.
Implementation: Splunk machine
For the Splunk machine implementation, you must add a listening port. Below we use port 9997:
- Via command line interface (CLI):
Splunk> ./splunk enable listen 9997
- Via browser: In Splunk Web, go to Settings > Forwarding and receiving > Configure receiving, then add a new receiving port (9997 in our example).
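If you manage the Splunk machine through configuration files instead, the CLI command above is roughly equivalent to the following receiving stanza. The file path assumes a default installation; treat this as a sketch rather than a canonical reference:
# $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0
Restart Splunk after editing the file so the new listening port takes effect.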
Implementation: API Co-Signer machine
- Download and install Splunk Universal Forwarder (do not install it inside the Co-Signer's Docker container; install it on the host machine).
- Configure the indexing server (only possible via CLI; a sketch of the equivalent configuration files follows these steps):
cosigner> ./splunk add forward-server <ip>:<port>
- Add a monitor for the Co-Signer log:
cosigner> ./splunk add monitor /databases/cosigner/log/customer_cosigner.log -auth username:password
- The machine now forwards events from the log file to the Splunk instance. Run the command below to confirm it is sending events; you should see output like the following:
cosigner> ./splunk list forward-server
Active forwards:
    <ip>:<port>
Configured but inactive forwards:
    None
If you do not see the <ip>:<port> pair under Active forwards, review Appendix 2.
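For reference, the forward-server and monitor commands above typically result in configuration similar to the following on the universal forwarder. Exact file locations, group names, and stanza contents can differ between versions, so treat this as an illustrative sketch:
# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <ip>:<port>

# $SPLUNK_HOME/etc/system/local/inputs.conf
[monitor:///databases/cosigner/log/customer_cosigner.log]
disabled = 0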
Appendix 1: Breaking the logs into fields
Once your entries start arriving at the Splunk instance, you may want to break them into fields to simplify generating reports or alerts. To do this:
- Find any entry from the Co-Signer and select expand on the top left.
- Select Event Actions, then select Extract Fields in the dropdown.
- Select Regex as the field extraction method, then select Next.
- Mark the parts that correspond to fields. In our example, we use fields such as:
- The message level (marked INFO)
- The message timestamp (You can skip this; it is the timestamp of the message generated on the Co-Signer, not when it arrived at the Splunk instance.)
- The message itself (the entire string)
When you are done, select Next.
- Verify that the fields match what you expected in the provided messages, then select Next.
- Configure the permissions and the extraction name and select Finish.
To view the configured fields, go to Search in Splunk and search for some entries from the Co-Signer. Above the list of entries, select List, then select Table in the dropdown.
On the left side, select the newly created fields so the table includes columns for them.
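If you prefer to define the extraction in configuration rather than through the UI, a search-time extraction in props.conf along the following lines could produce similar fields. The sourcetype name, log-line layout, and regular expression here are assumptions for illustration; adjust them to match your actual Co-Signer log format:
# $SPLUNK_HOME/etc/system/local/props.conf (on the search head)
[cosigner_log]
EXTRACT-cosigner = ^(?<log_timestamp>\S+\s+\S+)\s+(?<level>[A-Z]+)\s+(?<message>.*)$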
Appendix 2: Troubleshooting the universal forwarder's forward server connection
If you do not see the configured forward server as active in the universal forwarder’s CLI output:
- Restart the forwarder via the CLI:
cosigner> ./splunk restart
- Verify that there is a connection to the port you defined when configuring the listening port on the Splunk machine. In our example it is 9997, so we check for that:
cosigner> telnet <ip> 9997
- Verify that the Splunk machine is listening on the relevant port (in our example it’s 9997):
splunk> netstat -nap | grep 9997
tcp        0      0 0.0.0.0:9997       0.0.0.0:*       LISTEN      238821/splunkd
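- You can also check the universal forwarder's internal log for connection errors. The path below assumes a default Linux installation of the forwarder:
cosigner> tail -n 50 /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -i tcpout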
If you still have issues, contact Splunk support for further assistance.