Developer Portal
Welcome to the Developer Portal! Here you will find resources to help you integrate with the Transaction Monitoring API, including quickstarts, workflows, use-cases, tutorials, and many other resources.
Register a transfer with cURL
Learn how to register a received transfer using the POST /v2/users/{userId}/transfers
endpoint.
Before you begin
Ensure you create an API key before you continue to the steps below.
Add your headers
For the POST /v2/users/{userId}/transfers
endpoint, use the Token
header for your API key and a Content-type: application/json
header to indicate you're sending JSON content. The following is an example:
curl -X POST 'https://api.chainalysis.com/api/kyt/v2/users/{userId}/transfers' \
--header 'Token: {YOUR_API_KEY}' \
--header 'Content-type: application/json' \
Create a user id
Use the userId
path parameter to create a unique identifier for this user, such as user0001
. For more information, see User IDs.
Create your request body
The properties you need to send in your request body depend on the asset's tier. Transfers on mature and emerging networks require at a minimum the following properties:
network
asset
transferReference
direction
In this example, you register a received transfer of the asset ETH
on the Ethereum
network. The transferReference
is a combination of the transaction hash (0xdd6364536f5f05cc1ea75709b676e2b1b37fad2792d3a71fb537db13100fc6b8
) and receiving address, separated by a :
. Since your platform received this transfer, the output address is an address you control (0x5cc17d0fa620FE99dAEAa87365C63b453BC47664
) and the direction is received
.
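For example, in Python you could assemble the reference by joining the two values with a colon (the variable names here are illustrative):
# Build the transferReference from the transaction hash and the receiving address
tx_hash = "0xdd6364536f5f05cc1ea75709b676e2b1b37fad2792d3a71fb537db13100fc6b8"
output_address = "0x5cc17d0fa620FE99dAEAa87365C63b453BC47664"
transfer_reference = f"{tx_hash}:{output_address}"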
Using the information above, the following is an example of a properly formatted registration call:
curl -X POST 'https://api.chainalysis.com/api/kyt/v2/users/user0001/transfers' \
-H 'Token: {YOUR-API-KEY}' \
-H 'Content-Type: application/json' \
--data-raw '{
"network": "Ethereum",
"asset": "ETH",
"transferReference": "0xdd6364536f5f05cc1ea75709b676e2b1b37fad2792d3a71fb537db13100fc6b8:0x5cc17d0fa620FE99dAEAa87365C63b453BC47664",
"direction": "received"
}'
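If you prefer Python over cURL, the following is a minimal sketch of the same registration call using the requests library (the API key and user ID are placeholders):
import requests

# Register the received transfer; expect a 202 response on success
url = "https://api.chainalysis.com/api/kyt/v2/users/user0001/transfers"
headers = {"Token": "YOUR_API_KEY", "Content-Type": "application/json"}
body = {
    "network": "Ethereum",
    "asset": "ETH",
    "transferReference": "0xdd6364536f5f05cc1ea75709b676e2b1b37fad2792d3a71fb537db13100fc6b8:0x5cc17d0fa620FE99dAEAa87365C63b453BC47664",
    "direction": "received",
}

response = requests.post(url, headers=headers, json=body)
print(response.status_code)  # 202 indicates the transfer was accepted for processing
print(response.json())       # contains the externalId to store for later use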
Assess your response
If properly formatted, you should receive a 202
code with a response similar to the following:
{
"updatedAt": null,
"asset": "ETH",
"network": "ETHEREUM",
"transferReference": "0xdd6364536f5f05cc1ea75709b676e2b1b37fad2792d3a71fb537db13100fc6b8:0x5cc17d0fa620FE99dAEAa87365C63b453BC47664",
"tx": null,
"idx": null,
"usdAmount": null,
"assetAmount": null,
"timestamp": null,
"outputAddress": null,
"externalId": "2774e6d5-aafe-3a26-ad0b-7812c295cb48"
}
Save the externalId
for future use. You will need it to identify the transfer when retrieving alerts, exposure, or identifications.
Verify Transaction Monitoring processed your transfer
Transaction Monitoring indicates it has processed your transfer by returning a non-null
value for updatedAt
. Since it takes a number of seconds to process, updatedAt
typically returns null
immediately after registering the transfer. To check whether the transfer was processed, call the GET /v2/transfers/{externalId}
summary endpoint with your stored externalId
until updatedAt
returns a timestamp. After your transfer processes, the other properties (assetAmount
, timestamp
, tx
, and others) should also populate.
Below is an example response body from the summary endpoint for the above transfer:
{
"updatedAt": "2022-06-30T16:43:53.172475",
"asset": "ETH",
"network": "ETHEREUM",
"transferReference": "0xdd6364536f5f05cc1ea75709b676e2b1b37fad2792d3a71fb537db13100fc6b8:0x5cc17d0fa620FE99dAEAa87365C63b453BC47664",
"tx": "dd6364536f5f05cc1ea75709b676e2b1b37fad2792d3a71fb537db13100fc6b8",
"idx": 0,
"usdAmount": 2744.02,
"assetAmount": 2.6773581135091926,
"timestamp": "2018-01-18T07:26:28.000+00:00",
"outputAddress": "5cc17d0fa620fe99daeaa87365c63b453bc47664",
"externalId": "2774e6d5-aafe-3a26-ad0b-7812c295cb48"
}
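As an illustration, the following Python sketch polls the summary endpoint until updatedAt is populated (the API key, externalId, and polling interval are placeholders; adjust them to your needs):
import time
import requests

external_id = "2774e6d5-aafe-3a26-ad0b-7812c295cb48"  # from the registration response
url = f"https://api.chainalysis.com/api/kyt/v2/transfers/{external_id}"
headers = {"Token": "YOUR_API_KEY", "accept": "application/json"}

# Poll the summary endpoint until Transaction Monitoring has processed the transfer
while True:
    summary = requests.get(url, headers=headers).json()
    if summary.get("updatedAt") is not None:
        print("Transfer processed:", summary)
        break
    time.sleep(10)  # wait before polling again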
What's next
To learn more, check out the following resources:
Register a withdrawal attempt with cURL
Learn how to register a withdrawal attempt using the POST /v2/users/{userId}/withdrawal-attempts
endpoint.
Before you begin
Ensure you create an API key before you continue to the steps below.
Format your headers
For the POST /v2/users/{userId}/withdrawal-attempts
endpoint, use the Token
header for your API key and a Content-type: application/json
header to indicate you're sending JSON content. The following is an example:
curl -X POST 'https://api.chainalysis.com/api/kyt/v2/users/{userId}/withdrawal-attempts' \
--header 'Token: {YOUR_API_KEY}' \
--header 'Content-type: application/json' \
Create a user id
Use the userId
path parameter to create a unique identifier for this user, such as user0002
. For more information, see User IDs.
Specify the address and hash format
Use the formatType
query parameter and specify either to_display
or normalized
to determine the format in which crypto addresses and transaction hashes are returned in the response JSON. For example, append formatType=normalized
to specify that all addresses and hashes return in a normalized format:
https://api.chainalysis.com/api/kyt/v2/users/{userId}/withdrawal-attempts?formatType=normalized
To learn more about the differences between these formats, see Address and hash formats.
Create your request body
In your request body, supply at least the following request body properties:
network
asset
address
attemptIdentifier
assetAmount
attemptTimestamp
In this example, you register a withdrawal attempt for five BTC
on the Bitcoin
network. The attempt is made to the address 1EM4e8eu2S2RQrbS8C6aYnunWpkAwQ8GtG
Identify the attempt with a unique attemptIdentifier value, such as attempt1.
Using the information above, the following is an example of a properly formatted registration request:
curl -X POST 'https://api.chainalysis.com/api/kyt/v2/users/user0002/withdrawal-attempts?formatType=normalized' \
--header 'Token: {YOUR_API_KEY}' \
--header 'Content-type: application/json' \
--data '{
"network": "Bitcoin",
"asset": "BTC",
"address": "1EM4e8eu2S2RQrbS8C6aYnunWpkAwQ8GtG",
"attemptIdentifier": "attempt1",
"assetAmount": 5.0,
"attemptTimestamp": "2020-12-09T17:25:40.008307"
}'
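If you prefer Python, the following sketch makes the same call with the requests library, passing formatType as a query parameter (all values are placeholders):
import requests

# Register the withdrawal attempt; expect a 202 response on success
url = "https://api.chainalysis.com/api/kyt/v2/users/user0002/withdrawal-attempts"
headers = {"Token": "YOUR_API_KEY", "Content-Type": "application/json"}
params = {"formatType": "normalized"}
body = {
    "network": "Bitcoin",
    "asset": "BTC",
    "address": "1EM4e8eu2S2RQrbS8C6aYnunWpkAwQ8GtG",
    "attemptIdentifier": "attempt1",
    "assetAmount": 5.0,
    "attemptTimestamp": "2020-12-09T17:25:40.008307",
}

response = requests.post(url, headers=headers, params=params, json=body)
print(response.status_code)  # 202 indicates the attempt was accepted for processing
print(response.json())       # contains the externalId to store for later use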
Assess your response
If the request was successful, you should receive a 202
code with a response similar to the following:
{
"asset": "BTC",
"network": "BITCOIN",
"address": "1EM4e8eu2S2RQrbS8C6aYnunWpkAwQ8GtG",
"attemptIdentifier": "attempt1",
"assetAmount": 5.0,
"usdAmount": null,
"updatedAt": null,
"externalId": "e27adb25-7344-3ae3-9c80-6b4879a85826"
}
Save the externalId
for future use. You will need it to identify the withdrawal attempt when retrieving alerts, exposure, or identifications.
Verify Transaction Monitoring processed your withdrawal
Transaction Monitoring indicates it has processed your withdrawal by returning a non-null
value for updatedAt
. Since it takes a number of seconds to process, updatedAt
typically returns null
immediately after registration. To check whether the withdrawal was processed, call the GET /v2/withdrawal-attempts/{externalId}
summary endpoint with your stored externalId
until updatedAt
returns a timestamp. After your withdrawal processes, usdAmount
should also populate.
Below is an example response body from the summary endpoint for the above withdrawal:
{
"asset": "BTC",
"network": "BITCOIN",
"address": "1EM4e8eu2S2RQrbS8C6aYnunWpkAwQ8GtG",
"attemptIdentifier": "attempt1",
"assetAmount": 5.0,
"usdAmount": 5000.00,
"updatedAt": "2022-04-27T20:21:43.803+00:00",
"externalId": "e27adb25-7344-3ae3-9c80-6b4879a85826"
}
Register for continuous monitoring
You can also register the withdrawal attempt as a sent transfer. Doing so will configure Transaction Monitoring to continuously monitor the transfer for updates in exposure and identifications, generating alerts according to your risk settings. To register the transfer as sent, follow the steps in Quickstart: Register a transfer, but this time indicate the direction as sent
in the request body.
What's next
To learn more, check out the following resources:
Workflows
The API implementation guide details three common use cases:
- Crediting a user's funds upon deposit
- Processing a user's withdrawal attempt
- Retrieving alerts for transaction monitoring
This diagram shows a general overview of common workflows. For information on which endpoints to use, and in what order, see the procedures below.
Crediting a user’s funds
Typically, services do not have control over received transfers. This guide details how to use Transaction Monitoring data to take programmatic action when receiving a user deposit. Outlined below is an example describing how to use the API to navigate received deposits.
When you receive a user deposit:
1. Call the POST /v2/users/{userId}/transfers endpoint. Be sure to indicate the "direction" as "RECEIVED" in the request body. If successful, you will:
   - Receive a 202 response.
   - Receive an external identifier (externalId). Store this external identifier in your system for later use.
2. Call the GET /v2/transfers/{externalId} summary endpoint using the externalId received in step 1. You will need to poll this endpoint until updatedAt is no longer null. Once populated, Transaction Monitoring generates alerts according to your organization's alert rules.
   Note: How quickly the updatedAt field populates depends on how many confirmations Chainalysis requires before processing transactions for a given asset. Some require fewer confirmations or are quicker than others. Learn more about polling the summary endpoints here.
3. Once the updatedAt field populates, determine whether the asset is part of a mature or emerging network or a pre-growth network and follow the corresponding procedure below.
User deposits for mature and emerging assets
With mature and emerging assets, you can retrieve the following additional information about the transfer, if available:
- Direct exposure information
- Alerts specific to the transfer
To retrieve additional information about the received transfer:
- Call the GET /v2/transfers/{externalId}/exposures endpoint to retrieve any available direct exposure information.
- Call the GET /v2/transfers/{externalId}/alerts endpoint to retrieve any generated alerts specific to this transfer.
Learn more about retrieving alert data for ongoing transaction monitoring here.
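As a rough sketch, the two calls above might look like the following in Python (assuming the requests library and placeholders for the API key and stored externalId):
import requests

base = "https://api.chainalysis.com/api/kyt/v2"
headers = {"Token": "YOUR_API_KEY", "accept": "application/json"}
external_id = "2774e6d5-aafe-3a26-ad0b-7812c295cb48"  # stored from registration

# Direct exposure information for the transfer
exposures = requests.get(f"{base}/transfers/{external_id}/exposures", headers=headers).json()
# Alerts generated for this specific transfer
alerts = requests.get(f"{base}/transfers/{external_id}/alerts", headers=headers).json()

print("Direct exposure:", exposures)
print("Transfer alerts:", alerts)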
User deposits for pre-growth assets
For pre-growth assets, you can retrieve the following additional information about the transfer, if available:
- Alerts specific to the transfer
- Network Identifications - learn more about Network Identifications here.
To retrieve additional information about the received transfer:
- Call the GET /v2/transfers/{externalId}/alerts endpoint to retrieve any generated alerts specific to this transfer.
- Call the GET /v2/transfers/{externalId}/network-identifications endpoint to retrieve the counterparty name for any asset and transaction hash matches.
Learn more about retrieving alert data for ongoing transaction monitoring here.
Note: Depending on the transfer, direct exposure information may be available. You can check by calling the GET /v2/transfers/{externalId}/exposures
endpoint.
Processing withdrawals
Sometimes services do not have information about the counterparty where their users attempt to make a withdrawal. This guide details how to use Transaction Monitoring data to take programmatic action when users attempt withdrawals. Outlined below is an example describing how to use the API to navigate withdrawal attempts.
When a user attempts a withdrawal:
1. Call the POST /v2/users/{userId}/withdrawal-attempts endpoint. If successful, you will:
   - Receive a 202 response.
   - Receive an external identifier (externalId). Store the external identifier in your system for later use.
2. Call the GET /v2/withdrawal-attempts/{externalId} summary endpoint using the externalId received in step 1. You will need to poll this endpoint until updatedAt is no longer null. Once populated, Transaction Monitoring generates alerts according to your organization's alert rules.
3. Once the updatedAt field populates, determine whether the asset is part of a mature or emerging network or a pre-growth network and follow the corresponding procedure below.
Withdrawal attempts for mature and emerging assets
With mature and emerging assets, you can retrieve the following additional information, if available:
- Direct exposure information
- Alerts specific to the withdrawal attempt
- Chainalysis Address Identifications
To retrieve additional information about the withdrawal attempt:
- Call the GET /v2/withdrawal-attempts/{externalId}/exposures endpoint to retrieve any counterparty exposure information.
- Call the GET /v2/withdrawal-attempts/{externalId}/alerts endpoint to retrieve any available alerts specific to this counterparty.
- Call the GET /v2/withdrawal-attempts/{externalId}/high-risk-addresses endpoint to check if the counterparty has any Chainalysis Address Identifications.
- After successfully processing a user's withdrawal, call the POST /v2/users/{userId}/transfers endpoint and indicate the "direction" as "SENT" to register the transfer for ongoing monitoring.
Withdrawal attempts for pre-growth assets
With pre-growth assets, you can retrieve the following additional information, if available:
- Alerts specific to the withdrawal attempt
- Chainalysis Address Identifications
- Network Identifications - learn more about Network Identifications here.
To retrieve additional information about the withdrawal attempt:
- Call the GET /v2/withdrawal-attempts/{externalId}/alerts endpoint to retrieve any alerts specific to the counterparty.
- Call the GET /v2/withdrawal-attempts/{externalId}/high-risk-addresses endpoint to check if the counterparty has any Chainalysis Address Identifications.
- Call the GET /v2/withdrawal-attempts/{externalId}/network-identifications endpoint to retrieve the counterparty name of any asset and transaction hash matches.
- After successfully processing a user's withdrawal, call the POST /v2/users/{userId}/transfers endpoint and indicate the "direction" as "SENT" to register the transfer for ongoing monitoring.
Retrieving alerts for transaction monitoring
After you've decided whether to credit a user's funds or process a user's withdrawal attempt, Transaction Monitoring automatically monitors the transaction and generates alerts according to your Alert Rules. You can retrieve those alerts with the GET /v1/alerts
endpoint.
If you call the endpoint without any query parameters, you will retrieve all of the alerts within your organization. To retrieve specific alerts, you can filter or sort with various query parameters.
Use the following query parameters to filter the alerts you wish to retrieve:
- asset - the asset used in the transaction.
- userId - the user's unique identifier as defined in the transfer registration.
- level - the severity of the alert, for example, SEVERE, HIGH, MEDIUM, or LOW.
- createdAt_lte - the timestamp less than or equal to when the alert generated.
- createdAt_gte - the timestamp greater than or equal to when the alert generated.
- alertStatusCreatedAt_lte - the timestamp less than or equal to when the most recent alert status was created.
- alertStatusCreatedAt_gte - the timestamp greater than or equal to when the most recent alert status was created.
Use the sort
parameter with one of the following items to sort the order in which you retrieve alerts:
- timestamp - the blockchain date of the transfer that caused the alert.
- createdAt - the date the alert generated.
- alertStatusCreatedAt - the date the alert status was last updated.
- level - the severity of the alert, for example, SEVERE, HIGH, MEDIUM, or LOW.
- alertAmountUsd - the amount of the transfer that triggered the alert.
After choosing the item you wish to sort by, you must add a URL encoded space character and indicate the order as either ascending (asc
) or descending (desc
). For example, sort=createdAt%20asc
.
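For example, the following Python sketch retrieves SEVERE alerts created on or after a given date, sorted by creation date (all values are placeholders; the requests library handles URL encoding of the query string for you):
import requests

url = "https://api.chainalysis.com/api/kyt/v1/alerts"
headers = {"token": "YOUR_API_KEY", "accept": "application/json"}
params = {
    "level": "SEVERE",
    "createdAt_gte": "2021-05-01",
    "sort": "createdAt asc",  # the space is URL-encoded when the request is sent
}

alerts = requests.get(url, headers=headers, params=params).json()
print("Retrieved alerts:", len(alerts["data"]))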
Retrieving high severity alerts for a specified user
You can specify a combination of these query parameters to retrieve a specific result. As an example, to retrieve only HIGH
severity alerts for a specific userId
generated after a particular timestamp (createdAt_gte
), call GET /v1/alerts?level=HIGH&userId=user001&createdAt_gte=2020-01-06
This will return all high severity alerts for user001
after January 6th, 2020. Notice the timestamp includes only the date. You can filter even further with the inclusion of time to retrieve alerts down to the microsecond: &createdAt_gte=2020-01-06T12:42:14.124Z
If you want to sort the alerts by their creation date, add sort=createdAt%20asc
as a query parameter.
Retrieving severe alerts for a specified date range
You can use both createdAt_gte and createdAt_lte to retrieve alerts for a determined time period. As an example, to retrieve only SEVERE alerts within your organization between May 1st and May 7th, 2021, call GET /v1/alerts?level=SEVERE&createdAt_gte=2021-05-01T00:00:00.00Z&createdAt_lte=2021-05-07T00:00:00.00Z
You can adjust these parameters to retrieve all SEVERE alerts for the following week, and so on. Alternatively, you can discard the time information (T00:00:00.00Z) from the query parameters altogether to retrieve alerts for certain days, weeks, or even months. How frequently you call these endpoints depends on your organizational needs.
Legacy implementation
Suggested implementations of the Transaction Monitoring API are detailed below, increasing in complexity and the amount of functionality you will receive. You can choose to work entirely in the Transaction Monitoring user interface (UI), or build a response within your own internal systems based on feedback and analysis from Chainalysis.
Building the analysis and data from the API into your internal system provides robust functionality and gives you a more comprehensive picture of your risk. We suggest reviewing example compliance workflows to help assess which implementation best meets your needs.
At minimum, the two major endpoints required for integration are: /transfers/sent
and /transfers/received
. For basic functionality, you must execute at least those two requests.
Overview
The following graphic serves as a preview and comparison for the various Transaction Monitoring implementations. The implementations are described in detail in the sections below, including benefits and drawbacks as well as required endpoints for each.
To help you visualize a compliance workflow using data from the Transaction Monitoring API, here is a basic implementation.
Prepare for launch 1
Transaction Monitoring UI only
This is the minimum implementation and keeps all data and functionality entirely within Chainalysis’s environment. It does not pull data into your internal systems.
You will receive the full functionality of the UI, but you will not have the ability to automate actions on transfers within your internal systems based on the information from Transaction Monitoring.
This implementation requires registering /transfers/sent
and /transfers/received
. /depositaddresses
is optional.
- Register a received transfer: Registers a received transfer to a user and deposit address at your organization. The API response will contain a risk rating (high/low/unknown) of the counterparty that sent the transfer. If the cluster has been identified, the entity name will also be provided.
POST /users/{userId}/transfers/received
- Register a sent transfer
Registers a completed outgoing transfer from a user at your organization to another entity. Usually this is called after the
/withdrawaladdresses
endpoint that pre-screens a counterparty before allowing the transfer to proceed.
POST /users/{userId}/transfers/sent
- Optional: Register deposit addresses. In the future, Transaction Monitoring will be able to detect deposits (but not withdrawals) based on the deposit address. While this endpoint is not currently required for monitoring or integration, you can implement it now for use with upcoming functionality.
POST /users/{userId}/depositaddresses
Transactions must be associated with a User ID for user risk score calculation.
Prepare for launch 2
Internal systems only
While you can work entirely in the Chainalysis environment (see above), this implementation pulls Chainalysis’s data into your own system where you can incorporate Transaction Monitoring into your automated payments workflow.
Integrating Chainalysis’s data into your internal system helps you to get a more comprehensive picture of your risk activity and automate actions on transfers based on Chainalysis data. However, you will be missing out on functionality and convenience by not working within the UI.
This implementation requires registering /transfers/sent
and /transfers/received
(described above), as well as /withdrawaladdresses
.
- Pre-screen a withdrawal address: Before allowing a transfer out of your organization, you can check the risk rating of the entity (cluster) associated with the address where the intended funds are going. After receiving a risk rating and information on the potential counterparty, you can determine whether to allow the withdrawal transfer to proceed.
For example, if you want to stop the withdrawal of funds to a sanctioned or terrorist financing entity, you would use this request.
POST /users/{userId}/withdrawaladdresses
Lift off
Transaction Monitoring UI & Internal systems
This setup is more robust than the previous implementations, as it encompasses both the Transaction Monitoring UI and your internal systems. You will be able to take action on transfers within your system and also have the full functionality of the UI, helping to provide continuity in the data you review between systems.
However, it does not take advantage of all features, such as alerts.
This implementation requires registering /transfers/sent
, /transfers/received
, and /withdrawaladdresses
(described above).
With this implementation you can:
- Create a URL link from your internal systems to the Transaction Monitoring UI.
- Set flags based on Chainalysis data. For example, a 'high risk' response from transfers/received puts risky deposits up for review by your compliance team.
- Pull Chainalysis data into your payment review system. For example, you can include a transfer's risk rating in your payments review queue.
Cruising altitude
Transaction Monitoring UI & Internal systems
This implementation pulls Chainalysis data into both the UI and your internal systems with an additional API endpoint, /alerts
.
Cruising Altitude brings the most powerful data that Transaction Monitoring offers - alerts - into your internal system. You can perform the actions mentioned above (setting flags or pulling the risk score into payments queues) with the benefit that the data you are using is the most robust. This implementation can also help you match the alerts workflows that your compliance team is likely doing in the UI.
This implementation requires registering /transfers/sent
, /transfers/received
, /withdrawaladdresses
(described above), as well as /alerts
.
- Get alerts
Retrieves the details of all of your alerts and pulls those alerts into your own system. Alerts allow you to identify risky transfers on your platform.
GET /alerts
To the moon
Transaction Monitoring UI & Internal systems
This implementation pulls Chainalysis data into both the UI and your internal systems. It builds on the implementations above with an additional endpoint, /users
. GET /users
provides you with a user risk score for each of your users.
Alerts and user risk score are two powerful analytic metrics that are provided by the API for assessing your risk activity. User risk scores allow you to perform user-level automated review in your internal system on top of the transfer-level review from above.
This implementation requires registering /transfers/sent
, /transfers/received
, /withdrawaladdresses
, /alerts
(described above), as well as /users
.
- Get users: Retrieves details, including the user risk score (as LOW, MEDIUM, HIGH, or SEVERE), on all registered users in your system. You can use the user risk score to manually review or hold transfers made by high-risk users.
GET /users
Compliance workflows
Below are example compliance workflows using Transaction Monitoring's most powerful risk assessment features to help you formulate policies and procedures based on the information provided by the Transaction Monitoring and Investigations UIs, and the Transaction Monitoring API.
The workflows below focus on three Transaction Monitoring features that help you prioritize risk activity: alerts, user risk score, and counterparty screen:
- Alerts are generated whenever a transfer involves a risky counterparty and/or crosses a value threshold. A single transfer can trigger multiple alerts.
- The user risk score helps you to identify high-risk users in your organization.
- Deposit and withdrawal requests registered with Chainalysis return a counterparty risk rating of highRisk, unknown, or lowRisk. If the counterparty is known (highRisk or lowRisk), the category (for example, darknet market or exchange) and name (for example, Hydra Marketplace, Kraken.com) are also returned.
We suggest using alerts as a notification and starting point for review, then interacting with user risk profiles. The Transaction Monitoring UI gives you a high-level overview of your risk for managerial and organizational reporting/monitoring, while alerts are where an analyst will spend most of their time.
Deposit workflow
This is a suggested workflow for funds that are received by a user at your organization. Note that in most cases, you cannot stop an incoming transfer from occurring. However, Transaction Monitoring helps you to detect the risk associated with the transfer and take appropriate compliance action. For example, if a user receives funds directly from a cluster categorized as child abuse material, you can decide how you want to react (e.g. freeze funds, investigate, and likely file a SAR).
1. Register the transfer
When funds arrive at a user address, POST the received transfer.
2. Check alerts and/or user risk score
You can check alerts:
- Via the UI: search for the transaction ID in the search bar at the top of the UI. If an alert was raised on the transfer, it will appear in the transfer details panel.
- Via the API: by using the GET /alerts endpoint. Alert levels are returned as SEVERE, HIGH, MEDIUM, and LOW.
We suggest checking alerts first to assess risky transfer activity and then moving to review all alerts at the user level by clicking the unique URL for the user. You can also check the user’s risk score to see if the user is high risk.
You can check user risk score:
- Via the UI: search for the user’s ID in the search bar at the top of the UI. The risk score is found on the user information page as Severe, High, Medium, or Low.
- Via the API: by using the riskScore property in GET /users/{userid} (see the sketch below).
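The following is a minimal sketch of that API check in Python. The v1 base path shown here is an assumption based on the alerts endpoints used later in this guide; confirm the exact path against the API reference.
import requests

# Sketch: look up a user's risk score. The base path is an assumption; confirm it
# against the API reference before relying on it.
user_id = "user0001"  # placeholder user ID
url = f"https://api.chainalysis.com/api/kyt/v1/users/{user_id}"
headers = {"token": "YOUR_API_KEY", "accept": "application/json"}

user = requests.get(url, headers=headers).json()
print(user.get("riskScore"))  # e.g. LOW, MEDIUM, HIGH, or SEVERE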
3. Take action
Take action on the received funds. For example, you can hold the transfer and submit it to be manually reviewed, immediately freeze the user’s funds, or deem the transfer non-risky.
Chainalysis continually monitors for risk on each registered transfer.
Withdrawal workflow
This is a suggested workflow for funds that are being sent by a user at your organization. Unlike deposits, you can stop a risky withdrawal from occurring if at the time of the withdrawal, the withdrawal address has been identified as risky. For example, if a user requests a withdrawal towards a sanctioned address, you can decide to block this request.
1. Check Withdrawal Address
The withdrawal prescreen is used in real-time to pre-screen for counterparty risk. When a user requests a withdrawal, register the withdrawal address by making a POST request to the /withdrawaladdresses
endpoint. For known counterparties, a risk rating of high risk or low risk will be provided, as well as the counterparty’s name.
Note that it is common to withdraw to a previously unknown address, so the majority of addresses you check may have an unknown rating.
2. Take action
After performing the withdrawal address counterparty screen, take action on the pending transfer. You may choose to block a transfer that is high risk, or flag it for further review.
3. Register the transfer
Always be sure to register an approved withdrawal as a sent transfer upon completion so that your team can access it in the compliance dashboard and Transaction Monitoring can monitor the transfer. Note that Transaction Monitoring updates risk for transfers on an ongoing basis, but does not update withdrawal addresses. The latter will always return the rating from when the address was first screened.
As with deposits, Chainalysis monitors sent transfers for you over time.
4. Check alerts
Look at the user information page in the UI for the user's associated withdrawal addresses and alerts. Alerts that have a sent direction mean that the withdrawal was approved.
Example compliance actions
Here are some examples of compliance actions taken by our customers when following the workflows above:
“If we notice a withdrawal request towards terrorism financing, we will show the user the withdrawal request is processing and call our Financial Intelligence Unit hotline to ask if they want us to block the transaction (and likely tip off the user) or allow it to occur (for ongoing monitoring).”
“If we detect a direct deposit from a darknet market, we will silently freeze the account. Typically we will file a SAR report, offboard the user, and allow the user to withdraw their funds via a fiat conversion to their bank account.”
“If we detect a pattern of risky (in)direct darknet market transactions, we will freeze the account. Typically we will file a SAR report, offboard the user, and allow the user to withdraw their cryptocurrency.”
“Transfers to mixing services are prohibited on our platform. If we see a withdrawal request to a mixer, we flag the transfer for review and ask the user to explain the purpose of the withdrawal.”
Internal systems
Note that instead of performing these checks manually, you can automate the process by building a response within your own internal systems. See the suggested implementations above for more information. The benefits of automating compliance include:
- Prevent successful money laundering.
- Proactively reduce risky activity from occurring.
- Nudge users toward correct behavior.
- Make more efficient and meaningful use of your compliance team's time.
Scripts
Dismiss indirect alerts with custom criteria
This article guides you through the creation of a Python script that dismisses indirect transfer alerts based on a specified categoryId and percentage of transfer threshold. For instance, you can dismiss all indirect sanctioned entity alerts where the percentage of the transfer is less than 1%.
After following the tutorial, you'll have a script that can be customized to fit your organization's needs for alert dismissal processes. The script does the following:
- Ingests your chosen category (categoryId) and dismissal thresholds.
- Retrieves alert identifiers by calling the GET /v1/alerts/ endpoint.
- Dismisses any alerts that meet your dismissal criteria by calling the POST /v1/alerts/{alertIdentifier}/statuses endpoint with the stored alert identifiers.
Before you start
Before beginning this tutorial, make sure you have:
- Installed Python 3.6 or a newer version.
- A valid Transaction Monitoring API key.
Build the script
Import necessary libraries
To build this script, you will need the following libraries:
- argparse: to get variables from the command-line tool.
- requests: to make API requests from Python.
- json: to handle sent and received JSON data.
Add the following code to import these libraries:
import argparse
import requests
import json
Set command-line variables
By using command-line arguments, you can easily set variables that can be changed depending on how you run the script. The script has arguments for the following variables:
- API key.
- Percentage of transfer dismissal threshold.
- Entity categoryId for alert dismissal.
- userId to narrow the scope of alerts to a specific user (optional).
Add the following code to define these arguments in your script:
# Define command-line arguments for the API key, percentage, categoryId, and userId values
parser = argparse.ArgumentParser()
parser.add_argument("-k", "--key", required=True, help="The API key to use for authentication.")
parser.add_argument("-p", "--percentage", required=True, help="The dismissal threshold percentage.")
parser.add_argument("-c", "--categoryId", required=True, help="The categoryID of the category you want to dismiss alerts for.")
parser.add_argument("-u", "--userId", required=False, help="The userId you want to dismiss alerts for.")
# Parse the command-line arguments for use later on
args = parser.parse_args()
Define the alerts endpoint
Both API endpoints in this script have the same base URL and require similar headers. Additionally, you will need to set the query parameters for the GET request and the request body for the POST request. For dismissing alerts, include values for the status
and comment
properties.
Add the following code to your script to define the API URL, headers, query parameters, and request body:
# Set the URL, query parameters, and API headers
url = "https://api.chainalysis.com/api/kyt/v1/alerts"
params = {
"limit": 20000, # Adjust this property to retrieve more alerts per GET request.
"offset": 0, # Change the offset to retrieve another batch of alerts.
# "createdAt_gte": "" # Enable this property to set a timestamp to only retrieve alerts that were generated after the specified timestamp.
# "userId": "" # Enable this property to retrieve alerts for a given userId.
}
headers = {
"token": args.key,
"Content-Type": "application/json",
"accept": "application/json",
}
# Set the request body for the POST request
dismissed_status = json.dumps({
"status": "Dismissed",
"comment": f"Dismissed because the alert was indirect with an alerted amount less than {args.percentage}% of the transfer.",
})
Retrieve alerts
Now that the API has been defined, you can call the GET endpoint to retrieve alerts. Store the alerts for later use and count the number of alerts retrieved.
Add the following code to your script to retrieve, store, and count alerts:
# Make the request, convert the JSON data into a Python dictionary, print the amount of retrieved alerts
response = requests.get(url, params=params, headers=headers)
if response.status_code != 200:
print(f"Error: An error occurred. HTTP status code {response.status_code} returned")
exit(1)
alerts_json = response.json()
print("Number of retrieved transfer alerts:", len(alerts_json["data"]))
Define the dismissal criteria
Now that alerts have been retrieved, filter through them so that only those that meet your criteria are processed. The script uses the arguments you defined earlier to provide flexibility.
Also, include a print statement to provide feedback in your command-line tool while dismissing alerts.
Add the following code to your script:
num_dismissed_alerts = 0
print('Dismissing alerts...')
# Iterate over each alert you retrieved (from the alerts_json["data"] array)
for item in alerts_json["data"]:
# Check if the alert meets the specified criteria
if (
item["alertStatus"] != "Dismissed"
and item["exposureType"] == "INDIRECT"
and item["alertAmountUsd"] > 0
and item["transferredValuePercentage"] < float(args.percentage)
and item["categoryId"] == args.categoryId
):
Action filtered alerts
Now that the alerts have been filtered to only those that meet your criteria, use the POST endpoint and the alertIdentifier
property to take action on them.
Add the following code to your script, making sure to maintain proper indentation:
# Set the path_param variable to the value of the alertIdentifier
path_param = item["alertIdentifier"]
# Make the POST request using the path_param variable as the path parameter
response2 = requests.post(f"{url}/{path_param}/statuses", headers=headers, data=dismissed_status)
if response2.status_code == 200:
print(f"Successfully dismissed alert with alertIdentifier: {path_param}")
num_dismissed_alerts += 1 # Counts how many alerts meet the above criteria and are dismissed
else:
print(f"Error: An error occured. HTTP status code {response2.status_code} returned.")
# Final print statement for total dismissed alerts
print(f"Number of alerts with a categoryId of \"{args.categoryId}\" dismissed: {num_dismissed_alerts}")
Note that the code above includes print statements to give you feedback and help you track progress in your command-line tool.
Put the script together
Below is a completed script created from all the parts above:
import argparse
import requests
import json
# Define command-line arguments for the API key, percentage, categoryId, and userId values
parser = argparse.ArgumentParser()
parser.add_argument("-k", "--key", required=True, help="The API key to use for authentication.")
parser.add_argument("-p", "--percentage", required=True, help="The dismissal threshold percentage.")
parser.add_argument("-c", "--categoryId", required=True, help="The categoryID of the category you want to dismiss alerts for.")
parser.add_argument("-u", "--userId", required=False, help="The userId you want to dismiss alerts for.")
# Parse the command-line arguments for use later on
args = parser.parse_args()
# Set the URL, query parameters, and API headers
url = "https://api.chainalysis.com/api/kyt/v1/alerts"
params = {
"limit": 20000, # Adjust this property to retrieve more alerts per GET request.
"offset": 0, # Change the offset to retrieve another batch of alerts.
# "createdAt_gte": "" # Enable this property to set a timestamp to only retrieve alerts that were generated after the specified timestamp.
# "userId": "" # Enable this property to retrieve alerts for a given userId.
}
headers = {
"token": args.key,
"Content-Type": "application/json",
"accept": "application/json",
}
# Set the request body for the POST request
dismissed_status = json.dumps({
"status": "Dismissed",
"comment": f"Dismissed because the alert was indirect with an alerted amount less than {args.percentage}% of the transfer.",
})
# Make the request, convert the JSON data into a Python dictionary, print the amount of retrieved alerts
response = requests.get(url, params=params, headers=headers)
if response.status_code != 200:
print(f"Error: An error occurred. HTTP status code {response.status_code} returned")
exit(1)
alerts_json = response.json()
print("Number of retrieved transfer alerts:", len(alerts_json["data"]))
num_dismissed_alerts = 0
print('Dismissing alerts...')
# Iterate over each alert you retrieved (from the alerts_json["data"] array)
for item in alerts_json["data"]:
# Check if the alert meets the specified criteria
if (
item["alertStatus"] != "Dismissed"
and item["exposureType"] == "INDIRECT"
and item["alertAmountUsd"] > 0
and item["transferredValuePercentage"] < float(args.percentage)
and item["categoryId"] == args.categoryId
):
# Set the path_param variable to the value of the alertIdentifier
path_param = item["alertIdentifier"]
# Make the POST request using the path_param variable as the path parameter
response2 = requests.post(f"{url}/{path_param}/statuses", headers=headers, data=dismissed_status)
if response2.status_code == 200:
print(f"Successfully dismissed alert with alertIdentifier: {path_param}")
num_dismissed_alerts += 1 # Counts how many alerts meet the above criteria and are dismissed
else:
print(f"Error: An error occured. HTTP status code {response2.status_code} returned.")
# Final print statement for total dismissed alerts
print(f"Number of alerts with a categoryId of \"{args.categoryId}\" dismissed: {num_dismissed_alerts}")
Use the script
Once you've finished your script, run it in your command-line tool with the following command:
python3 dismiss.py -k API_KEY -p DISMISSAL_PERCENTAGE -c ENTITY_CATEGORY_ID
For example, you can dismiss all indirect sanctioned entity alerts whose percentage of transfer is less than one percent (<1%) with the following command:
python3 dismiss.py -k 123abc -p 1 -c "sanctioned entity"
If successful, you should see output in your command-line tool similar to the following:
Number of retrieved transfer alerts: 83
Dismissing alerts...
Successfully dismissed sanctioned entity alert with alertIdentifier: 0f1824fa-7ca8-11ed-a464-a77051ec428e
Successfully dismissed sanctioned entity alert with alertIdentifier: 0f180628-7ca8-11ed-a464-27d19535f166
Successfully dismissed sanctioned entity alert with alertIdentifier: 0f17a0de-7ca8-11ed-a464-f7dca967b16b
Successfully dismissed sanctioned entity alert with alertIdentifier: 0f175e26-7ca8-11ed-a464-bbb6707239e0
Successfully dismissed sanctioned entity alert with alertIdentifier: 0ef823da-7ca8-11ed-a464-b7f5b6483c01
Successfully dismissed sanctioned entity alert with alertIdentifier: 0ef8029c-7ca8-11ed-a464-7bf5ea6bd720
Successfully dismissed sanctioned entity alert with alertIdentifier: 0dd079ee-7ca8-11ed-a464-c3f361e0ebe3
Successfully dismissed sanctioned entity alert with alertIdentifier: 0dd020c0-7ca8-11ed-a464-17995b2d3f2c
Successfully dismissed sanctioned entity alert with alertIdentifier: 0c17221a-7ca8-11ed-a464-8b5786357333
Successfully dismissed sanctioned entity alert with alertIdentifier: 0bfdcdd8-7ca8-11ed-a464-877dcb602d21
Number of sanctioned entity alerts dismissed: 10
Customizations
This script can be tailored to meet the needs of your organization. Here are a few examples of how you could customize it:
- Filter alerts based on additional variables by using additional query parameters.
- Programmatically dismiss alerts for users in specific jurisdictions, such as those where gambling is legal.
- Programmatically dismiss alerts for offboarded users.
- Programmatically dismiss alerts for transfers older than a year.
To implement these customizations, you may need to join with your internal data to determine the status, jurisdiction, or other characteristics of a user.
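As one illustration, the sketch below restricts the GET request to alerts created more than a year ago by adding the createdAt_lte filter described earlier to the params dictionary (note this filters on alert creation time rather than the transfer's blockchain timestamp, and the cutoff calculation is illustrative):
from datetime import datetime, timedelta

# Only retrieve (and therefore only dismiss) alerts created more than a year ago
one_year_ago = (datetime.utcnow() - timedelta(days=365)).isoformat()

params = {
    "limit": 20000,
    "offset": 0,
    "createdAt_lte": one_year_ago,
}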
Retrieve behavioral alerts by date
In this tutorial, you will create a Python script to retrieve and filter behavioral alerts from the GET /v1/alerts/
endpoint. With the script, you can retrieve all behavioral alerts based on a specified date (day, week, or month) and an optional userId
. Since behavioral alerts are not tied to an individual transfer, using dates can be a helpful way to retrieve and organize them.
The script does the following:
- Ingests your chosen date, which can be an individual day, week, or month.
- Translates your inputted date into variables that can filter alerts.
- Retrieves and filters behavioral alerts according to your input.
- Saves your alerts to a CSV and prints them to your command-line interface.
Prerequisites
Before starting, please ensure you have the following:
- Python and pip installed on your system (preferably Python 3.6 or later).
- A code editor or IDE of your choice.
- Basic knowledge of Python and web scraping.
Install required libraries
To begin, install the necessary libraries for your script. Open your command-line interface and run the following command:
pip install requests python-dateutil
Note that the other libraries used in the script below are included in Python's standard library and don't need installation.
Import the required libraries
In your code editor, create a new Python file called behavioral_alerts.py
. Start by importing the required libraries:
import argparse
import csv
from datetime import datetime, timedelta, timezone
from dateutil.parser import isoparse
from requests import get
import json
Define helper functions
Your script will contain five helper functions that organize different actions. This section of the tutorial walks you through creating them.
Parse and format date range
These functions are responsible for handling date inputs and converting them into suitable formats for querying the API:
- get_date_range(date_str): takes a date string in the ISO 8601 format as an argument and returns a tuple containing the start and end dates for the query. It handles day, week, and month inputs, which are useful for specifying the date range when making API requests.
- get_requested_window_size(date_str): takes a date string in the ISO 8601 format as an argument and returns the requested window size as a string (1 day, 7 days, or 1 mon). This value is used to filter the response by the windowSize property.
Add the following code to your file:
# Function to parse the date string and return the start and end dates for the query
def get_date_range(date_str):
date = isoparse(date_str)
if "W" in date_str:
year, week = map(int, date_str.split("-W"))
start_date = datetime.strptime(f"{year}-W{week:02d}-1", "%Y-W%W-%w")
end_date = start_date + timedelta(days=6)
elif date_str.count('-') == 1:
year, month = date.year, date.month
start_date = datetime(year, month, 1)
end_date = (start_date + timedelta(days=32)).replace(day=1) - timedelta(days=1)
else:
start_date = date
end_date = start_date + timedelta(days=1)
return start_date, end_date
# Function to determine the requested window size based on the date string
def get_requested_window_size(date_str):
date = isoparse(date_str)
if "W" in date_str:
return "7 days"
elif date_str.count('-') == 1:
return "1 mon"
else:
return "1 day"
Filter alerts
These functions help filter the retrieved alerts based on the requested window size and date range, ensuring that only relevant data is returned:
- window_size_matches(alert, requested_window_size, start_date, end_date): checks if the window size of a given alert matches the requested windowSize and if the alert's period_start is within the requested date range. It returns True if both conditions are met and False otherwise.
- filter_alerts(alerts, requested_window_size, start_date, end_date): takes a list of alerts and filters them based on the requested window size (windowSize) and date range (period). It returns a new list containing only the alerts that meet the filtering criteria.
Add the following code to your file:
# Function to check if the window size of the alert matches the requested window size
def window_size_matches(alert, requested_window_size, start_date, end_date):
# Parse the period_start and period_end from the alert
period_start, period_end = [datetime.strptime(x.replace('+00', '+0000'), '%Y-%m-%d %H:%M:%S%z') for x in alert['period'][2:-2].split('","')]
# Set timezone for start_date and end_date
start_date = start_date.replace(tzinfo=timezone(timedelta(0)))
end_date = end_date.replace(tzinfo=timezone(timedelta(0)))
# Check if the period_start of the alert is within the requested date range
if not (start_date <= period_start < end_date):
return False
# Check if the window size of the alert matches the requested window size
if alert['windowSize'] != requested_window_size:
return False
return True
# Function to filter alerts based on the requested window size and date range
def filter_alerts(alerts, requested_window_size, start_date, end_date):
filtered_alerts = []
for alert in alerts:
if window_size_matches(alert, requested_window_size, start_date, end_date):
filtered_alerts.append(alert)
return filtered_alerts
Interact with the API
The final get_alerts(api_key, start_date, end_date, userId=None)
function sends a GET request to the API with the specified query parameters (including optional userId
) and returns the retrieved alerts as a list of dictionaries. The function raises an exception if the API response status code is not 200
.
Add the following code to your file:
# Function to GET alerts from the API
def get_alerts(api_key, start_date, end_date, userId=None):
# Define the API endpoint and headers
url = 'https://api.chainalysis.com/api/kyt/v1/alerts/'
headers = {
"token": api_key,
"Content-Type": "application/json",
"accept": "application/json"
}
# Set timezone for start_date and end_date
start_date = start_date.replace(tzinfo=timezone.utc)
end_date = end_date.replace(tzinfo=timezone.utc)
# Define the query parameters for the API request
params = {
"createdAt_gte": start_date.isoformat(),
"createdAt_lte": end_date.isoformat(),
"alertType": "BEHAVIORAL",
"limit": 20000,
}
# Adds userId query param if you submit as an argument
if userId:
params["userId"] = userId
# Send the API request and check for any errors
response = get(url, headers=headers, params=params)
if response.status_code != 200:
raise Exception(f"Error: {response.status_code} - {response.text}")
# Return the 'data' field from the API response
return response.json()['data']
Define the main function
The main function is responsible for coordinating the helper functions, handling command-line arguments, retrieving and filtering alerts, and exporting the results to a CSV file. The rest of this tutorial guides you through the creation of the main function.
Handle command-line arguments
In this part of the script, use the argparse
library to define and parse the command-line arguments. This allows users to specify the required API key, date, and an optional userId when running the script. Add the following code to your file:
if __name__ == "__main__":
# Parse command-line arguments
parser = argparse.ArgumentParser(description='Retrieve behavioral alerts')
parser.add_argument('-k', '--key', required=True, help='API key for authentication')
parser.add_argument('-d', '--date', required=True, help='Date for which to retrieve alerts (day, week, or month) in ISO 8601 format')
parser.add_argument('-u', '--userId', required=False, help='Optional userID query param to filter alerts by')
args = parser.parse_args()
Calculate date range and requested window size
Using the parsed command-line arguments, call the helper functions get_date_range()
and get_requested_window_size()
to calculate the date range and the requested window size for the query.
Add the following code to your main function, being sure to maintain indentation:
start_date, end_date = get_date_range(args.date)
requested_window_size = get_requested_window_size(args.date)
Retrieve and filter alerts
Next, call the get_alerts()
function to retrieve alerts from the API for the specified date range and the optional userId
. Then, use the filter_alerts()
function to filter the alerts based on the requested window size and date range.
Add the following code to your main function, being sure to maintain indentation:
alerts = get_alerts(args.key, start_date, end_date, args.userId)
filtered_alerts = filter_alerts(alerts, requested_window_size, start_date, end_date)
Export results to a CSV file
Finally, loop through the filtered alerts and write them to a CSV file using the csv
library. Also, you can print some relevant information about each alert, as well as the total number of retrieved alerts.
Add the following code to your main function, being sure to maintain indentation:
alert_count = 0
with open('filtered_alerts.csv', mode='w', newline='', encoding='utf-8') as csv_file:
if filtered_alerts:
fieldnames = list(filtered_alerts[0].keys())
writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
writer.writeheader()
for alert in filtered_alerts:
# Write the alert to the CSV file
writer.writerow(alert)
# Print select properties
print(f"Alert ID: {alert['alertIdentifier']}")
print(f"Alert Type: {alert['alertType']}")
print(f"Level: {alert['level']}")
print(f"Window Size: {alert['windowSize']}")
print(f"Period: {alert['period']}")
print(f"User ID: {alert['userId']}")
print(f"Created at: {alert['createdAt']}")
print()
# Print entire JSON
#print(json.dumps(filtered_alerts, indent=2)) # Commented
alert_count += 1
print(f"Total retrieved alerts: {alert_count}")
Execute the script
To execute the script:
- Save the provided Python script as behavioral_alerts.py in your working directory.
- Open your command-line tool and navigate to the directory containing the script.
- Run the script with the required arguments:
- -k or --key: your API key for authentication.
- -d or --date: the date for which to retrieve alerts in ISO 8601 format. Use the format:
  - YYYY-MM-DD for a specific day, like 2023-01-01.
  - YYYY-MM for an entire month, like 2022-12.
  - YYYY-Www for a given week, where ww is the week number, like 2022-W42.
- -u or --userId (optional): a user ID to filter alerts by.
Example usage:
python3 behavioral_alerts.py -k API_KEY -d 2023-W01 -u USER_ID
The script will send a request to the GET /v1/alerts/
endpoint, retrieve alerts for the specified date range, and filter them based on the requested window size and date range. The filtered alerts will be saved as a CSV file named filtered_alerts.csv
in the same directory as the script, and the alert information will also be printed to the console. The total number of retrieved alerts will be displayed at the end of the script's execution.
Usage
You can implement the logic in this script to run daily for the previous day, weekly for the previous week, and monthly for the previous month to get newly generated behavioral alerts. What sets this logic apart from just using createdAt is the included client-side filtering. In other words, depending on the type of date you enter, the script filters according to the corresponding windowSize and period JSON response properties. Once new alerts are retrieved, you can then take action on the userId accordingly.
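If you schedule the script, a small helper like the following can produce the -d values for the previous day, week, and month (an illustration only; note the script parses weeks with %W, so verify that the week numbering matches your expectations):
from datetime import date, timedelta

today = date.today()
previous_day = (today - timedelta(days=1)).isoformat()                         # e.g. 2023-01-01
previous_week = (today - timedelta(weeks=1)).strftime("%G-W%V")                # e.g. 2022-W52
previous_month = (today.replace(day=1) - timedelta(days=1)).strftime("%Y-%m")  # e.g. 2022-12

print(previous_day, previous_week, previous_month)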
Retrieve alerts as Slack notifications
This article guides you through a Python script that retrieves alerts as Slack notifications. You can use the script as-is or customize it (for example, to create notifications for all alerts or only alerts that meet a target criteria).
The script is especially beneficial for team members in your organization who may not be familiar with the API or have time to monitor the Transaction Monitoring dashboard.
The Python script outlined here automates the following things:
- Checks for alerts generated since the last time you ran the script.
- Sends a Slack notification and link for any newly generated alerts.
- Saves the timestamp when you last ran the script for future reference.
Before you start
Before you continue this tutorial, please ensure you meet the following prerequisites:
- Install Python 3.6 or greater.
- Have a valid Transaction Monitoring API Key.
- Have a Slack workspace that can receive notifications.
Build the script
Import the necessary libraries
At its core, the Transaction Monitoring API is a client that makes HTTP requests and returns JSON. To interact with the API successfully, import the following libraries:
- datetime: provides the time in the appropriate format.
- requests: allows Python to make API requests (to call the Transaction Monitoring API).
- argparse: allows you to input variables on the command line.
- json: parses the returned JSON responses.
Import each of these libraries:
from datetime import datetime
import requests
import argparse
import json
Enable multiple users to access the script
argparse
allows you to input arguments that the script can use as variables. In this case, the argument is your API key, which identifies your organization and authenticates your requests. If multiple team members use the script, each can input their own API key in the command line.
The following code creates an argument that asks for an API key upon running:
parser = argparse.ArgumentParser(description="Check for new alerts")
parser.add_argument("-k", "--key", required=True,
help="Chainalysis API Key",metavar="key")
args = parser.parse_args()
Retrieve alerts
This function checks for alerts generated since the script was last run. It calls the GET /v1/alerts/
endpoint with the query parameter createdAt_gte
to avoid creating notifications for old alerts.
As an example, GET https://api.chainalysis.com/api/kyt/v1/alerts/?createdAt_gte=2021-06-01T12:30:00
retrieves all alerts generated after 12:30PM UTC on June 1st, 2021.
The following code creates the function get_alerts
, which only retrieves alerts generated since the last time the script was run.
def get_alerts():
"""Retrieve alerts that were generated since the last time the script was run."""
base = "https://api.chainalysis.com"
h = {
'token': args.key,
'Content-Type': 'application/json',
'accept': 'application/json'
}
url = f"{base}/api/kyt/v1/alerts/?createdAt_get={last_runtime}"
response = requests.get(url,headers=h)
return response.json()
Send Slack notifications
Once you've retrieved your newly created alerts, you can use Slack's webhook to deliver them in a notification to your Slack workspace. To learn more about creating a webhook URL for your Slack workspace, see Slack's documentation Sending messages using Incoming Webhooks. Once you have this URL, replace it in the code below as the value for url
.
The following code creates the function send_slack_message
, which sends you a Slack notification with the following information:
- a notification title (for example, "You have new alerts!").
- the total number of new alerts.
- the timestamp of the last runtime.
- a link to view the alerts in the Transaction Monitoring dashboard.
def send_slack_message():
    """Trigger a Slack notification via webhook."""
    url = ""  # Insert your webhook URL here.
    h = {'Content-Type': 'application/json'}
    payload = {
        "blocks": [
            {
                "type": "header",
                "text": {
                    "type": "plain_text",
                    "text": "You have new alerts!"
                }
            },
            {
                "type": "section",
                "fields": [
                    {
                        "type": "mrkdwn",
                        "text": f"*Total new alerts:*\n{total_alerts}"
                    },
                    {
                        "type": "mrkdwn",
                        "text": f"*Last runtime:*\n{last_runtime[0:19]}"
                    }
                ]
            },
            {
                "type": "section",
                "fields": [
                    {
                        "type": "mrkdwn",
                        "text": f"<https://kyt.chainalysis.com/alerts?alertStatus=Unreviewed&createdAt_gte={last_runtime}|*View new alerts*>"
                    }
                ]
            }
        ]
    }
    response = requests.post(url, headers=h, data=json.dumps(payload))
    # print(response.text)  # If your webhook is working as expected, this will return 'ok'.
You can customize the payload
variable to present different information. This variable was constructed using Slack's Block Kit Builder.
The code and payload above generate the Slack notification that is sent to your workspace.
Update your latest run time
After you have retrieved any new alerts, you want to ensure you don't re-retrieve old alerts. To retrieve only newly generated alerts, the code below creates the function update_last_runtime
to rewrite the script and set the last_runtime
variable as the current time in UTC. The get_alerts
function uses the last_runtime
variable when checking for alerts.
def update_last_runtime():
    """Modify script to set last_runtime variable to the current time in UTC."""
    with open(__file__, 'r') as f:
        lines = f.read().split('\n')
    # Find which line the last_runtime variable resides on and modify it to the current time.
    variable_index = [i for i, e in enumerate(lines) if "last_runtime = " in e]
    lines[variable_index[0]] = f'last_runtime = "{(str(datetime.utcnow())).replace(" ", "T")}"'
    # Update the script to include the last_runtime variable as the time now in UTC.
    with open(__file__, 'w') as f:
        f.write('\n'.join(lines))
Putting it together
If you combine the sections above, you can write a template script to send a notification whenever there is at least one newly generated alert:
from datetime import datetime
import requests
import argparse
import sys
import json

parser = argparse.ArgumentParser(description="Check for new alerts")
parser.add_argument(
    "-k", "--key", required=True, help="Chainalysis API Key", metavar="key"
)
args = parser.parse_args()

last_runtime = (str(datetime.utcnow())).replace(" ", "T")


def get_alerts():
    """Get any alerts that were generated since the last time the script was run."""
    base = "https://api.chainalysis.com"
    h = {
        "token": args.key,
        "Content-Type": "application/json",
        "accept": "application/json",
    }
    url = f"{base}/api/kyt/v1/alerts/?createdAt_gte={last_runtime}"
    response = requests.get(url, headers=h)
    if response.status_code != 200:
        print("Failed to get alerts. Try again. Exiting...")
        sys.exit()
    return response.json()


def send_slack_message():
    """Trigger a Slack notification via webhook."""
    url = ""  # keep this secret
    h = {"Content-Type": "application/json"}
    payload = {
        "blocks": [
            {"type": "header", "text": {"type": "plain_text", "text": "You have new alerts!"}},
            {
                "type": "section",
                "fields": [
                    {"type": "mrkdwn", "text": f"*Total new alerts:*\n{total_alerts}"},
                    {
                        "type": "mrkdwn",
                        "text": f"*Last runtime:*\n{last_runtime[0:19]}",
                    },
                ],
            },
            {
                "type": "section",
                "fields": [
                    {
                        "type": "mrkdwn",
                        "text": f"<https://kyt.chainalysis.com/alerts?alertStatus=Unreviewed&createdAt_gte={last_runtime}|*View new alerts*>",
                    }
                ],
            },
        ]
    }
    response = requests.post(url, headers=h, data=json.dumps(payload))
    # print(response.text)  # If your webhook is working as expected, this will return 'ok'.


def update_last_runtime():
    """Modify script to set last_runtime variable to the current time in UTC."""
    with open(__file__, "r") as f:
        lines = f.read().split("\n")
    # Find which line the last_runtime variable resides on.
    variable_index = [i for i, e in enumerate(lines) if "last_runtime = " in e]
    lines[
        variable_index[0]
    ] = f'last_runtime = "{(str(datetime.utcnow())).replace(" ", "T")}"'
    # Update the script to include the last_runtime variable as the time now in UTC.
    with open(__file__, "w") as f:
        f.write("\n".join(lines))


if __name__ == "__main__":
    alerts = get_alerts()
    total_alerts = alerts["total"]
    if total_alerts > 0:
        send_slack_message()
    update_last_runtime()
Use the script
After completing your script, follow the steps below to use it:
- Generate an API key here. (If you are unable to, contact support to check your permissions.)
- Using the sections above, build your script (we named ours alerter.py).
- In your CLI, navigate to the directory where your script lives.
- Run the following command, being sure to replace $KEY with your API key: python3 alerter.py -k $KEY
Congratulations! If successful, you should have received a Slack notification in your workspace detailing new alerts.
Frequency
Run the script at your desired frequency with a utility of your choice. For example, cron is a job scheduler that runs scripts (and other tasks) at fixed times, dates, or intervals. You can use cron to run the script only Monday through Friday to avoid weekend notifications, once a month, or on any other cadence. For some examples of how to write various scheduling expressions, see crontab guru.
Customizations
There are many ways to customize the type of alerts you retrieve. Depending on your volume of alerts and compliance workflows, you may want to restrict notifications to specific categories, alert severities, or user groups. The examples below detail some of these possibilities:
Standard
The following code sends a notification whenever there is at least one new alert:
if total_alerts > 0:
send_slack_message()
Severity
The following code only sends notifications when a target alert severity of HIGH
or SEVERE
is found:
if total_alerts > 0:
    severity_count = [
        alert for alert in alerts["data"]
        if alert["level"] in ("HIGH", "SEVERE")
    ]
    if len(severity_count) > 0:
        send_slack_message()
You could filter the above to target only SEVERE
severities.
Category
The following code only sends notifications when the target alert category of sanctioned entity (categoryId
of 3
) is found:
if total_alerts > 0:
    category_count = [
        alert for alert in alerts["data"]
        if alert["categoryId"] == 3
    ]
    if len(category_count) > 0:
        send_slack_message()
You could filter the above to target any desired category.
User group
The following code only sends notifications for user IDs (userId
) specified in a list (for example, userid_list
):
if total_alerts > 0:
    target_user_count = [
        alert for alert in alerts["data"]
        if alert["userId"] in userid_list
    ]
    if len(target_user_count) > 0:
        send_slack_message()
You could specify multiple lists for different groups of user IDs.
Resources
Asset tiers
Chainalysis categorizes assets into three tiers, which determine what data you need to provide to receive immediate alerts on transfers and withdrawal attempts:
- Mature
- Emerging
- Pre-growth
For mature and emerging tier assets, the Transfers API populates your requests with blockchain and pricing data, reducing the data you need to provide.
Unlike in the higher tiers, pre-growth tier assets require you to send additional information if you want to receive immediate alerts. While the Transfers API will still accept and return a success code (2xx
) for transfers of pre-growth tier assets without the additional properties, you will not receive immediate alerts. If you choose not to send additional data at the time of registration, you may still receive future alerts if the asset graduates to a higher tier.
Very rarely, pricing data may be missing for mature and emerging tier assets. Often, this data will become available as the asset becomes widely traded. In these cases, transfers for mature tier assets will have the price data backfilled. For information about this, see Pricing data.
Required properties for immediate alerts
The following tables show which properties are required to receive immediate alerts at each tier. Note that you can register transfers of pre-growth tier assets with only the properties required for mature and emerging tier assets and wait to receive alerts when the asset graduates; however, graduation may never occur if the asset never meets our criteria.
Transfers
Property name | Mature tier | Emerging tier | Pre-growth tier |
---|---|---|---|
network | ✓ | ✓ | ✓ |
asset | ✓ | ✓ | ✓ |
transferReference | ✓ | ✓ | ✓ |
direction | ✓ | ✓ | ✓ |
transferTimestamp | | | ✓ |
assetAmount | | | ✓ |
outputAddress | | | ✓ |
inputAddresses | | | ✓ |
assetPrice | If an asset is missing pricing data, wait for Chainalysis to obtain pricing data. | If an asset is missing pricing data, supply your own for immediate alerts. | ✓ |
assetDenomination | If an asset is missing pricing data, wait for Chainalysis to obtain pricing data. | If an asset is missing pricing data, supply your own for immediate alerts. | ✓ |
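To make the table above concrete, the following is a minimal Python sketch of registering a received transfer of a hypothetical pre-growth tier asset with the additional properties included. The network, asset, addresses, and amounts are placeholders, and the exact value formats for properties such as inputAddresses, assetPrice, and assetDenomination are assumptions to confirm against the API reference.
import requests

# Hypothetical example: registering a received transfer of a pre-growth tier asset.
# All values (user ID, network, asset, reference, addresses, amounts) are placeholders.
url = "https://api.chainalysis.com/api/kyt/v2/users/user0001/transfers"
headers = {"Token": "{YOUR_API_KEY}", "Content-Type": "application/json"}
body = {
    # Properties required at every tier.
    "network": "ExampleNetwork",
    "asset": "EXM",
    "transferReference": "{transaction_hash}:{output_address}",
    "direction": "received",
    # Additional properties required for immediate alerts on pre-growth tier assets.
    "transferTimestamp": "2023-01-15T12:00:00.000000",
    "assetAmount": "10.5",
    "outputAddress": "{output_address}",
    "inputAddresses": ["{input_address}"],  # assumed to be a list of addresses
    "assetPrice": "1.25",                   # assumed USD price per unit
    "assetDenomination": "USD",             # assumed denominating currency
}
response = requests.post(url, headers=headers, json=body)
print(response.status_code, response.json())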
Withdrawal attempts
Property name | Mature tier | Emerging tier | Pre-growth tier |
---|---|---|---|
network | ✓ | ✓ | ✓ |
asset | ✓ | ✓ | ✓ |
address | ✓ | ✓ | ✓ |
attemptIdentifier | ✓ | ✓ | ✓ |
attemptTimestamp | ✓ | ✓ | ✓ |
assetPrice | If an asset is missing pricing data, wait for Chainalysis to obtain pricing data. The API will return a No pricing info for withdrawal attempt error message. | If an asset is missing pricing data, supply your own for immediate alerts. | ✓ |
assetDenomination | If an asset is missing pricing data, wait for Chainalysis to obtain pricing data. The API will return a No pricing info for withdrawal attempt error message. | If an asset is missing pricing data, supply your own for immediate alerts. | ✓ |
Pricing data
Occasionally, less popular or narrowly adopted long-tail assets may lack pricing data even if they are categorized within the mature or emerging tiers. Though rare, this can occur for various reasons. To learn more about the reasons and for some solutions, see the knowledge base article Missing pricing data.
- For transfers, you can determine an asset lacks pricing data when you receive a null value for updatedAt and the request is:
  - Formatted correctly.
  - For a mature or emerging tier network.
- For withdrawal attempts, you’ll get a 4xx error with the message "No pricing info for withdrawal attempt".
For general recommendations about when to provide additional pricing data, see the tables in the Required properties for immediate alerts section above.
When assets graduate to higher tiers
If an asset graduates to a higher tier or if pricing data becomes available, the Transfers API will automatically update previously registered transfers with our data:
- If you have not supplied additional properties for pre-growth tier assets: you may now begin receiving alerts for these transfers as we backfill our sourced data.
- If you have supplied additional properties: we’ll begin using our sourced blockchain and pricing data and store your supplied data for audit purposes. Depending on the accuracy of the data you provided, alerts may generate, invalidate, or revalidate.
Previously registered withdrawal attempts will not be updated with pricing data. However, if you subsequently register the withdrawal attempt as a sent transfer, that sent transfer will be updated.
Additionally, if you have supplied the additional properties for mature and emerging assets (as if they were a pre-growth tier asset), the Transfers API will prioritize our own sourced data and store your data for audit purposes.
The network and asset properties
The network
property indicates which blockchain network the transaction occurred on. For example, values could be Ethereum
, Avalanche
, Bitcoin
, Lightning
, or other network names.
The asset
property indicates which asset or token was transferred. For example, if Ether was transacted, you would supply ETH
as the value. If the ERC-20 Aave was transacted, you would supply AAVE
as the value.
The combination of network
and asset
allows the Transfers API to distinguish between assets that operate on multiple networks, such as the stablecoin USDT. Another example is the asset bitcoin, which can be transferred on both the Bitcoin and Lightning networks.
Examples
To register a transfer of USDT on the Avalanche network, you would supply the following pair in your request body:
{
"network": "Avalanche",
"asset": "USDT"
}
And to register a transfer of USDT on the Ethereum network, you would supply this pair:
{
"network": "Ethereum",
"asset": "USDT"
}
For lists of the networks Chainalysis currently supports, see Mature and emerging networks and Pre-growth networks.
Tokens with nonunique symbols
In rare instances, two tokens on a single network may share the same asset symbol. If you’re actively registering transfers of assets that you know share the same symbol, use assetId
during registration. To learn more, see Asset IDs.
If you’re not using assetId
during registration, generally nonunique asset symbols will still not be an issue since you identify the output address in the transferReference
property. The output address will effectively determine the correct asset in most instances.
In the extremely rare scenario that two tokens sharing an asset symbol send funds to the same output address in the same transaction, the Transfers API will process the first transfer seen. If this is the incorrect transaction, please contact Customer Support to correct this issue, then integrate with the assetId
property going forward.
The transferReference property
The transferReference
property enables the Transfers API to locate the transfer on a blockchain and typically comprises a transaction hash and output address.
The transaction hash is a unique identifier that a blockchain generates for each transaction. The output address is the destination address for funds within the transaction; it is used to determine the particular transfer within a transaction (which may contain multiple transfers). For received transfers, the output address is an address you control. For sent transfers, the output address is an external address.
Because blockchains use different record-keeping models (for example, UTXO or account-based), you must supply different values for the transferReference
property depending on the blockchain. In our list of networks, we indicate the proper format for the property.
UTXO model
For networks using a UTXO model, you must reference a transaction hash and either the corresponding output address or the corresponding output index:
{transaction_hash}:{output_address}
{transaction_hash}:{output_index}
As an example, let's say you want to register a transfer with the transaction hash 9022c2d59d1bed792b6449e956b2fe26b44b1043bbc1662d92ecd29526d7f34e and an output address of 18SuMh4AFgTSQRvwFzdYGieHtgKDveHtc, which is in the 6th place in the transaction.
To register the above transfer using an output address, the property should look like this:
{
"transferReference": "9022c2d59d1bed792b6449e956b2fe26b44b1043bbc1662d92ecd29526d7f34e:18SuMh4AFgTSQRvwFzdYGieHtgKDveHtc"
}
To register the transfer using the output index, the property should look like this:
{
"transferReference": "9022c2d59d1bed792b6449e956b2fe26b44b1043bbc1662d92ecd29526d7f34e:5"
}
The output index value above is 5
because the index starts at 0.
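To illustrate, the following is a small Python helper (not part of the API) that builds a transferReference string from a transaction hash and either an output address or an output index, using the example values above.
from typing import Union

def build_transfer_reference(transaction_hash: str, output: Union[str, int]) -> str:
    """Combine a transaction hash with an output address or output index,
    separated by a colon, as expected by the transferReference property."""
    return f"{transaction_hash}:{output}"

# Using the output address:
print(build_transfer_reference(
    "9022c2d59d1bed792b6449e956b2fe26b44b1043bbc1662d92ecd29526d7f34e",
    "18SuMh4AFgTSQRvwFzdYGieHtgKDveHtc",
))

# Using the output index (0-based, so the 6th output is index 5):
print(build_transfer_reference(
    "9022c2d59d1bed792b6449e956b2fe26b44b1043bbc1662d92ecd29526d7f34e",
    5,
))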
Account/balance model
For networks using an account model (for example, Ethereum and EVM-compatible networks), you must reference a transaction hash and corresponding output address: {transaction_hash}:{output_address}
.
As an example, let's say you want to register an Ether transfer with the transaction hash 0xe823c9b7895f9c47985c80e4611272f8194403e885c9cc603422cd609d738098 and output address 0x3d21a92285bf17cbdde5f77531b8b58ac400288a.
To register this transfer, the property should look like this:
{
"transferReference": "0xe823c9b7895f9c47985c80e4611272f8194403e885c9cc603422cd609d738098:0x3d21a92285bf17cbdde5f77531b8b58ac400288a"
}
If registering a transfer that sent funds to a smart contract, use the smart contract's contract address as the output address. If registering a transfer that received funds from a smart contract, use the end user's destination address as the output address.
In transactions where an output address is used multiple times (for example, some interactions with smart contracts), Transaction Monitoring will register the first output address where the transferred amount exceeds 0.
Solana
When registering transfers of SOL, the transferReference
property only accepts the System Account address. The property should be formatted like the following:
{transaction_hash}:{system_account_address}
When registering transfers of SPL tokens, the transferReference
property accepts either the System Account address (the wallet address) or the Token Account address (the ATA address). You can format the transferReference
property in either of the following ways:
{transaction_hash}:{system_account_address}
{transaction_hash}:{token_account_address}
When registering historical transfers of SPL tokens with a System Account address, use the System Account address that owned the Token Account at the time of transfer, which may not be the current address.
Other models
Some assets require a unique transferReference
value that does not fit into the above schemas.
Monero
For deposits, you should format transferReference
like the following:
{transaction_hash}:{output_index}:{receiving_address}:{payment_ID}
For withdrawals, you should format transferReference
like the following:
{transaction_hash}:{output_index}:{withdrawal_address}:{payment_ID}
Lightning Network
For the Lightning Network, you must supply a combination of the payment hash and recipient node key. You should format transferReference
like the following:
{payment_hash}:{node_key}
To learn more, see Registering Lightning Network transactions and withdrawals.
Just a transaction hash
Some blockchain protocols require just a transaction hash and no output address or output index. We indicate these networks in the transferReference
column of the network tables.
Polling the summary endpoints
Generally, Transaction Monitoring processes withdrawal attempts and transfers with blockchain confirmations in near real-time, thereby providing a quick risk assessment to make synchronous decisions. To verify Transaction Monitoring processed your request, you can call the summary endpoints (summary for withdrawal attempts or summary for transfers) and check whether updatedAt
returns a non-null
value. Once updatedAt
returns a non-null
value, you can attempt to retrieve alerts, exposure, or identifications.
Withdrawal attempts
Transaction Monitoring usually processes withdrawal attempts in less than a minute. You can begin polling the summary endpoint immediately after registering the withdrawal until updatedAt
returns a non-null
value, at which point you can begin checking for any generated alerts, exposure, or identifications.
While Transaction Monitoring often processes withdrawal attempts quickly, we still recommend setting a policy for how long to wait in case of latency or in case updatedAt returns null for an extended amount of time.
Transfers with confirmation
Assuming you have made the initial POST request after a block has been confirmed on the network, most transfers process in less than a minute (see the table below for more information). However, we do suggest building a buffer for instances of increased latency and setting a policy for how long to wait before crediting a user's account (for received transfers).
Many services require a few confirmations before crediting a user's funds, which usually takes several minutes. It is during this time that you can begin polling the GET /transfers/{externalId}
endpoint.
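The following is a minimal Python polling sketch, assuming a stored externalId and an API key; the polling interval and maximum number of attempts are arbitrary placeholders you should set according to your own wait policy.
import time
import requests

def wait_for_processing(external_id: str, api_key: str,
                        interval_seconds: int = 10, max_attempts: int = 30):
    """Poll the transfer summary endpoint until updatedAt is non-null, then
    return the summary. Returns None if the polling window is exhausted."""
    url = f"https://api.chainalysis.com/api/kyt/v2/transfers/{external_id}"
    headers = {"Token": api_key, "Accept": "application/json"}
    for _ in range(max_attempts):
        summary = requests.get(url, headers=headers).json()
        if summary.get("updatedAt") is not None:
            # Processed: you can now retrieve alerts, exposure, or identifications.
            return summary
        time.sleep(interval_seconds)
    return None  # Not processed within the window; apply your own wait policy.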
Transfers without confirmation
If you register a transfer before it has any confirmations, processing times may increase as the system first needs to validate the transaction exists on a blockchain, and updatedAt
will remain null
for longer.
If you register a transfer well before it is confirmed (for example, during high network congestion), it may take longer to process the transfer. We recommend registering the transfer either after it is confirmed on the blockchain or shortly before.
Processing times per network
Transaction Monitoring begins processing block data after a transaction achieves a predetermined number of confirmations. Often this threshold is a single confirmation, but some networks require more (for example, EOS). Additionally, the speed at which networks acquire confirmations varies from network to network (for example, Bitcoin’s block time is ~10 minutes while Ethereum’s block time is ~12-14 seconds).
See the table below for the confirmation threshold and approximate processing time per network. Note that our team is continually working to improve the speed at which we ingest and process block data.
Caution: These times are estimates, and sometimes latency can fluctuate. After the threshold, it may take additional time to process depending on various factors, such as how long before confirmation you submitted a transfer. We suggest building a buffer in your internal system to account for potential irregularities.
Network | Confirmations required to ingest block data | Approximate processing time after the threshold is met |
---|---|---|
Bitcoin | 1 confirmation | Approx. 30 seconds |
Bitcoin Cash | 1 confirmation | Approx. 10 seconds |
Bitcoin SV | 1 confirmation | Approx. 1 minute |
Ethereum | 1 confirmation | Approx. 5 seconds |
Ethereum Classic | 1 confirmation | Approx. 5 minutes |
Litecoin | 1 confirmation | Approx. 10 seconds |
EOS | 360 confirmations | Approx. 3 minutes |
XRP | 1 confirmation | Approx. 5 seconds |
Zcash | 1 confirmation | Approx. 5 seconds |
Dogecoin | 1 confirmation | Approx. 5 seconds |
Dash | 1 confirmation | Approx. 5 seconds |
TRON | 20 confirmations | Approx. 5 seconds |
Polygon | 1 confirmation | Approx. 5 minutes |
Register Lightning Network transfers and withdrawals
You can use the Transaction Monitoring API to register Lightning Network (LN) transfers and withdrawal attempts. Registering LN transfers is similar to registering any other transfer, but with a few modifications to the values of a handful of request body properties.
This article describes how to register transfers using the POST /v2/users/{userId}/transfers
and POST /v2/users/{userId}/withdrawal-attempts
endpoints, but you can also use the v1 received transfers, sent transfers, and withdrawal pre-screening endpoints by applying the principles outlined in this article to the values of the v1 request body properties.
Updated request properties
Since Lightning Network transfers occur “off-chain”, Chainalysis requires you to treat certain request properties slightly differently than in typical “on-chain” transfers. The following properties are required and use slightly different values:
network
asset
transferReference
transferTimestamp
assetAmount
network
and asset
Be sure to use Lightning
as the value for the network
property and BTC
as the value for the asset
property. Failing to specify Lightning
may result in a failed request.
For example, the property value pair should look like the following: "network": "Lightning", "asset": "BTC"
.
transferReference
To uniquely identify a Lightning Network transfer, you must supply a combination of the payment hash and node key the funds are being sent to as the value for the transferReference
property. Separate the payment hash and node key with a colon (payment-hash:node-key
) and supply that string as the value for the property.
As an example, the property value pair should look similar to the following but supplied with your own payment hash and the recipient node key: "transferReference": "2db4d5579a0e198bd5ce8ffe83ecd0de94acde98cde94125337e2419ebb4cc50:02a0c9089ace681ef4e6ae5310b028d9c2a09187bfbc616da6251e3d08801851b8"
.
transferTimestamp
For Lightning Network transfers, transferTimestamp
is a required property. Chainalysis uses the timestamp to (1) identify the USD value for any transferred Bitcoin and (2) generate time-based behavioral alerts.
Be sure to send your timestamp in the UTC ISO 8601 format. For example, the property value pair should look similar to the following: "transferTimestamp": "2021-11-25T11:48:21.000000"
.
assetAmount
assetAmount
is also a required property and should be denominated in Bitcoin.
The following is an example of the property value pair: "assetAmount": ".00588"
.
Unrequired request properties
You aren't required to send the remaining request properties such as outputAddress
, assetPrice
, and assetDenomination
, but you won't receive an error response if you do send them. However, any values you send for these properties in the POST request won't be returned in subsequent GET requests.
Since the Lightning Network handles transfers off-chain, there is no standard wallet address to use as a value for the outputAddress
property. Instead, use the Node Key as the value for this property, similarly to the transferReference
property.
Putting it all together
You can register transfers or withdrawal attempts with either the v1 or v2 endpoints; both provide the same capability. The following are request examples for each endpoint.
The following is an example to register a sent LN transfer using the v2/users/{userId}/transfers
endpoint:
curl -X POST 'https://api.chainalysis.com/api/kyt/v2/users/new_user_01/transfers' \
--header 'Token: {YOUR_API_KEY}' \
--header 'Content-type: application/json' \
--data '{
"network": "Lightning",
"asset": "BTC",
"transferReference": "2db4d5579a0e198bd5ce8ffe83ecd0de94acde98cde94125337e2419ebb4cc50:02a0c9089ace681ef4e6ae5310b028d9c2a09187bfbc616da6251e3d08801851b8",
"direction": "sent",
"transferTimestamp": "2021-11-25T11:48:21.000000",
"assetAmount": ".00588"
}'
The following is an example to register the same transfer as above, but with the v1/users/{userId}/transfers/sent
endpoint:
curl -X POST 'https://api.chainalysis.com/api/kyt/v1/users/new_user_01/transfers/sent' \
--header 'Token: {YOUR_API_KEY}' \
--header 'Content-type: application/json' \
--data-raw '[
{
"network": "Lightning",
"asset": "BTC",
"transferReference": "2db4d5579a0e198bd5ce8ffe83ecd0de94acde98cde94125337e2419ebb4cc50:02a0c9089ace681ef4e6ae5310b028d9c2a09187bfbc616da6251e3d08801851b8",
"transferTimestamp": "2021-11-25T11:48:21.000000",
"assetAmount": ".00588"
}
]'
Withdrawal attempts and withdrawal pre-screening
You can use either the POST /v2/users/{userId}/withdrawal-attempts
or POST /v1/users/{userId}/withdrawaladdresses
endpoint to assess the risk of a Node Key a user is attempting to withdraw funds to.
When registering with either endpoint, be sure to use Lightning
as the value for the network property, BTC
as the value for the asset property, and the Node Key as the value for the address
property. To learn more about using these endpoints, see Register a withdrawal attempt (v2) and Register withdrawal addresses (v1).
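For illustration, the following is a minimal Python sketch of such a request against the POST /v2/users/{userId}/withdrawal-attempts endpoint, using only the properties listed in the withdrawal attempts table above; the user ID, attempt identifier, and timestamp are placeholder values.
import requests

# Hypothetical example: assessing a Lightning Network withdrawal attempt.
# The user ID, attempt identifier, and timestamp below are placeholders.
url = "https://api.chainalysis.com/api/kyt/v2/users/new_user_01/withdrawal-attempts"
headers = {"Token": "{YOUR_API_KEY}", "Content-Type": "application/json"}
body = {
    "network": "Lightning",  # required value for LN
    "asset": "BTC",          # required value for LN
    # The recipient Node Key is used as the address value.
    "address": "02a0c9089ace681ef4e6ae5310b028d9c2a09187bfbc616da6251e3d08801851b8",
    "attemptIdentifier": "attempt_001",
    "attemptTimestamp": "2021-11-25T11:48:21.000000",
}
response = requests.post(url, headers=headers, json=body)
print(response.status_code, response.json())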
User IDs
User IDs are crucial for keeping a clean Transaction Monitoring instance. The API creates them from the userId
path parameter whenever you register a transfer or withdrawal attempt. If the userId
already exists in your instance, Transaction Monitoring will automatically link the new transfer or withdrawal attempt to it. If the userId
is new, Transaction Monitoring will create a corresponding user in your instance.
When creating a userId
:
- Keep the length between 1 and 200 characters.
- Use any combination of letters (a-z, A-Z), numbers (0-9), and special characters including hyphen (-), underscore (_), and colon (:).
The following are some suggestions when creating a userId
:
- Use the same userId for a user's activity across all network and asset types. This consistency will give you a more accurate analysis of the user's overall activity and risk profile.
- Ensure the userId corresponds to your internal user identification system. This will help your compliance team map transfers, withdrawals, and alerts to users in your own systems.
- Ensure the userId does not include any PII (personally identifiable information).
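As a quick illustration of the rules above, the following is a small Python sketch (not part of the API) that checks a candidate userId against the documented length and character constraints before you use it in a registration call.
import re

# Allowed: letters, numbers, hyphen, underscore, and colon; length 1-200 characters.
USER_ID_PATTERN = re.compile(r"^[A-Za-z0-9_:-]{1,200}$")

def is_valid_user_id(user_id: str) -> bool:
    """Return True if the userId meets the documented length and character rules."""
    return bool(USER_ID_PATTERN.match(user_id))

print(is_valid_user_id("user0001"))         # True
print(is_valid_user_id("user:0001-alpha"))  # True
print(is_valid_user_id("user 0001"))        # False: spaces are not allowed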
Asset IDs
The assetId
request body property is a unique asset identifier that helps the Transfers API process transfers and withdrawal attempts. Since tokens may have non-unique symbols or rebrand symbols, assetId
provides a reliable way to identify assets when registering transfers and withdrawal attempts.
For EVM blockchains, assetId
should be the smart contract address. For non-EVM blockchains like Solana, assetId
should be the asset's unique address or ID as defined by the blockchain protocol. For example, Solana uses a Program ID and Algorand uses an ASC1.
Note that native assets like ETH
don’t have a smart contract address.
Implementation
Currently, assetId
is available only on networks with Dynamic Token Support and emerging networks (except for Lightning).
The assetId
property is optional and you must still send network
and asset
when registering transfers or withdrawal attempts. The following are some key scenarios:
- Submitting network and asset without assetId processes successfully.
- When providing all of network, asset, and assetId, the API prioritizes assetId over asset.
- Submitting only assetId without asset will result in an error.
The Transfers API validates assetId
against the asset and network and will reject the request if you submit an incorrect assetId
. For EVM blockchains, the API accepts both normalized and display formats.
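The following is a minimal Python sketch of a v2 transfer registration that includes assetId alongside network and asset; the user ID, transfer reference, and token contract address are placeholders.
import requests

# Hypothetical example: registering a received ERC-20 transfer with assetId.
# The user ID, transferReference, and contract address below are placeholders.
url = "https://api.chainalysis.com/api/kyt/v2/users/user0001/transfers"
headers = {"Token": "{YOUR_API_KEY}", "Content-Type": "application/json"}
body = {
    "network": "Ethereum",                  # still required alongside assetId
    "asset": "AAVE",                        # still required alongside assetId
    "assetId": "{token_contract_address}",  # for EVM chains, the token's smart contract address
    "transferReference": "{transaction_hash}:{output_address}",
    "direction": "received",
}
response = requests.post(url, headers=headers, json=body)
print(response.status_code, response.json())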
Endpoint support
The assetId
property is currently available for the V2 registration endpoints:
POST /v2/users/{userId}/transfers
POST /v2/users/{userId}/withdrawal-attempts
It is also returned as a response property in the summary endpoints if used during registration:
GET /v2/transfers/{externalId}
GET /v2/withdrawal-attempts/{externalId}
The V1 transfer endpoints do not support assetId
, and attempts to use it will result in an error.
Address and hash formats
Crypto addresses and transaction hashes are cryptographic identifiers, and can be represented in various formats. Each has different use cases. Using the formatType
query parameter, you can choose which format type returns in your JSON response:
- Normalized: this format is often a more basic or simplified version of the address or hash. It's typically used in computer systems or machine-to-machine communications. Often, this format strips out prefixes or other human-readable elements included in the display format, leaving just the essential information needed. An example of a normalized address for EVM blockchains is:
7cb57b5a97eabe94205c07890be4c1ad31e486a8
- Display: this format usually refers to how an address or transaction hash appears when it's shown to end-users. For example, Bitcoin addresses typically start with a 1, 3, or bc1 when displayed to users, while Ethereum addresses typically start with 0x. The display format can include checksums or error-detection codes to prevent typos. An example of a display address for EVM blockchains is: 0x7cB57B5A97eAbe94205C07890BE4c1aD31E486A8
The formatType
query parameter is only available for the following endpoints and response properties:
- GET /v2/transfers/{externalId}: tx, outputAddress
- POST /v2/users/{userId}/withdrawal-attempts: address
- GET /v2/withdrawal-attempts/{externalId}: address
Note that transferReference
, which consists of a transaction hash and output address, doesn't change according to the specified formatType
, and always reflects what you sent during transfer or withdrawal attempt registration.
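For illustration, the following is a minimal Python sketch of requesting a transfer summary with the formatType query parameter; the externalId is a placeholder, and the exact accepted values (for example, display or normalized, and their casing) are assumptions to confirm against the API reference.
import requests

# Hypothetical example: requesting a transfer summary in a chosen format.
# The externalId is a placeholder and "display" is an assumed parameter value.
external_id = "{externalId}"
url = f"https://api.chainalysis.com/api/kyt/v2/transfers/{external_id}"
headers = {"Token": "{YOUR_API_KEY}", "Accept": "application/json"}
params = {"formatType": "display"}  # assumed value; "normalized" would request the normalized form

response = requests.get(url, headers=headers, params=params)
summary = response.json()
print(summary.get("tx"), summary.get("outputAddress"))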
Default display
If you don’t specify a format type in requests, the API defaults to different formats depending on the network tier of the transfer:
- With mature networks and assets, the API stores and sends data in a normalized format.
- With emerging networks and assets, the API stores and sends data in the display format.
Pre-growth networks and assets don’t support the formatType
query parameter. Using this parameter for pre-growth networks will result in a 4xx
error.
Troubleshooting
Below are some common troubleshooting issues:
- The POST /v1/users/{userId}/withdrawaladdresses endpoint does not generate alerts or reference your customized alert rules. Instead, it references the Default Risky Categories when sending. You can customize these by contacting customer support.
- In the response of the POST /v1/users/{userId}/transfers/received endpoint, the rating property will always return lowRisk regardless of the counterparty if you made the API request while the transaction was still in the mempool. When using this endpoint, ensure that the transaction has been confirmed on the blockchain network before registering.
- When registering transfers on EVM networks (for example, Ethereum, Polygon, or Binance Smart Chain), Transaction Monitoring will return a 2xx for syntactically valid transferReference properties, even if the network and asset pair do not match. Transaction Monitoring will be unable to process transfers where the network and asset don't match, and will eventually discard them. Ensure you specify the correct network and asset pair.
Glossary
Account
An Account, sometimes known as a Balance, is the method that some cryptocurrencies and tokens use to keep track of transactions in their ledger on the blockchain. The Account method tracks each Account's total currency as part of a global state. This also allows Smart Contracts to be created that track different states and perform different tasks based on those states.
Examples of Account transaction based cryptocurrencies include:
- Ethereum
- Paxos Standard Token
- TrueUSD
- Tether
Address
An Address is a cryptographic hash of a Public and Private key pair that holds value for a given cryptocurrency or token Asset. Bitcoin and other cryptocurrencies that use UTXO based transactions use what is called an "Address". Ethereum and other currencies that use Account based transactions use what is known as an "Account".
Output Address
Output addresses are the destination addresses to which cryptocurrency is sent. A correct output address depends on whether the transfer is RECEIVED
or SENT
.
For RECEIVED
transfers, the output address is internal to your service (the address where your service received funds).
For SENT
transfers, the output address is external to your service (the address where your service sent funds).
Deposit Address
Deposit addresses are addresses that you manage on behalf of your users, where they can deposit value to your service. A deposit address is always associated with exactly one user, and should never be reused for another user, but a user can have multiple deposit addresses. Deposit addresses can be registered even before they have received value.
Withdrawal Address
Withdrawal addresses are foreign addresses outside your service, to which the user intends to send value. Multiple users might send value to the same withdrawal address.
Withdrawal addresses should be registered as early as possible for best results, for instance right when the address is pasted into a withdrawal form. When you register a withdrawal address, the API returns a real-time rating of the address as a recipient of value. This allows you to take action on suspicious behavior immediately.
Asset
An Asset is the cryptocurrency or token being tracked (such as Bitcoin or Tether). For a list of currently supported networks and assets, see Supported networks and assets.
Cluster
A cluster is a collection of cryptocurrency addresses that Chainalysis has identified to be controlled by one entity.
Interval notation
The use of parentheses and brackets to indicate whether endpoints are inclusive. Parentheses indicate the endpoints are not inclusive, brackets indicate the endpoints are inclusive, and a mixed set indicates one of the endpoints is inclusive. For example:
- (20, 40) means greater than 20 and less than 40.
- [20, 40] means greater than or equal to 20 and less than or equal to 40.
- (20, 40] means greater than 20 and less than or equal to 40.
- [20, 40) means greater than or equal to 20 and less than 40.
To learn even more, see Wikipedia.
KYT
KYT is the abbreviation for Know Your Transaction. KYT has rebranded to Transaction Monitoring. To learn more, see Ten Years of Innovating with our Customers.
Transfer
A Transfer is the part of a transaction that transfers value from one address to another address. For some asset types like Ethereum each transaction is one transfer, but for asset types like Bitcoin a transaction can contain multiple transfers.
Received Transfer
Received transfers are the value transfers that your service receives on behalf of a user into their deposit address. Received transfers are registered and processed according to the same rules as sent transfers.
Sent Transfer
Sent transfers are the value that your service sends on behalf of a user when the user makes a withdrawal from your service. Regardless of asset type, the transfer will be part of a transaction. A transfer can be registered as soon as its transaction has been created, and even before it has been broadcast to a blockchain. Once a transfer has been registered, Transaction Monitoring will track it.
For some asset types, the transfer will have to “settle” before it is processed; in Bitcoin we will wait until the transaction is 5 blocks deep to make sure the risk score reflects stable data. Registered transfers that remain unsettled for too long will be discarded automatically after a timeout of several days.
User IDs
All user activity is recorded in Transaction Monitoring under a userId
. To learn more, see User IDs.
UTXO
UTXO is the abbreviation for Unspent Transaction Output. UTXO is the method that some cryptocurrencies use to keep track of transactions in their transaction ledger in the Blockchain. UTXO is the amount of unspent cryptocurrency that can be spent in new transactions.
Examples of UTXO transaction based cryptocurrencies include:
- Bitcoin
- Bitcoin Cash
- Litecoin
Supported networks and assets
This section contains network lists for each asset tier. For an overview of functionality for each tier, see Asset coverage.
To use these lists:
1. Identify the blockchain network on which the asset you want to register transfers for is operating.
2. Check whether the blockchain network is listed at the mature and emerging tier.
   - If the network is not listed at this tier, proceed to step 3.
   - If the network is listed as mature and emerging:
     - Verify whether we support all or only certain assets/tokens on this network. If your asset is supported, register transfers of the asset as mature and emerging.
     - If only specific assets are supported and your asset is not listed, register transfers of the asset as pre-growth.
3. Check whether the blockchain network is listed at the pre-growth tier.
   - If the network is not listed as pre-growth, contact Customer Support to request enabling that network as pre-growth.
   - If the network is listed as pre-growth, register any assets on that network as pre-growth.
Mature and emerging networks
The following table contains our mature and emerging networks. We provide the network's name, the native asset symbol, resources for token symbols, and the transferReference
property. Note that networks and any asset symbols are case insensitive.
For token symbols (ERC-20s, TRC-20s, and other token standards), we suggest the resources in the table below, as well as the following external lists:
- tokenlists.org (for ERC-20s)
- For best results, open one of the lists (for example, the Compound list) and then search for a token.
- coinmarketcap.com (provides some API functionality to retrieve asset symbols)
- coingecko.com
Network value | Native asset symbol | Supported tokens | Format for transferReference |
---|---|---|---|
Algorand | ALGO | Metatokens on the Algorand network are supported as pre-growth assets: algoexplorer.io/assets | {transaction_hash}:{output_address} |
Arbitrum | ARB | We offer dynamic token support for all ERC-20 tokens: arbiscan.io/tokens | {transaction_hash}:{output_address} |
Avalanche (Avalanche C-Chain) | AVAX | We offer dynamic token support for all ERC-20 tokens: avascan.info/blockchain/all/tokens/erc20 | {transaction_hash}:{output_address} |
Base | | We offer dynamic token support for all ERC-20 tokens: https://basescan.org/tokens | {transaction_hash}:{output_address} |
Binance_Smart_Chain | BNB | We offer dynamic token support for all BEP-20 tokens: bscscan.com | {transaction_hash}:{output_address} |
Bitcoin | BTC | All BRC-20s are supported as pre-growth assets. | {transaction_hash}:{output_address} OR {transaction_hash}:{output_index} |
Bitcoin_Cash | BCH | | {transaction_hash}:{output_address} OR {transaction_hash}:{output_index} |
Celo | CELO | All tokens: explorer.celo.org/tokens | {transaction_hash}:{output_address} |
Cronos | CRO | All tokens: cronoscan.com/tokens | {transaction_hash}:{output_address} |
Dash | DASH | | {transaction_hash}:{output_address} OR {transaction_hash}:{output_index} |
Dogecoin | DOGE | | {transaction_hash}:{output_address} |
EOS | EOS | | {transaction_hash}:{output_address} OR {transaction_hash}:{output_index} |
Ethereum | ETH | We offer dynamic token support for all ERC-20 tokens: etherscan.io/tokens | {transaction_hash}:{output_address} |
Ethereum_Classic | ETC | | {transaction_hash}:{output_address} |
Fantom | FTM | All tokens: ftmscan.com/tokens | {transaction_hash}:{output_address} |
Lightning | | | {payment_hash}:{recipient_node_key} |
Litecoin | LTC | | {transaction_hash}:{output_address} OR {transaction_hash}:{output_index} |
Omni | OMNI | USDT | {transaction_hash} |
Optimism | | We offer dynamic token support for all ERC-20 tokens: optimistic.etherscan.io/tokens | {transaction_hash}:{output_address} |
Palm | PALM | All non-NFT tokens: explorer.palm.io/tokens | {transaction_hash}:{output_address} |
Polygon (Polygon PoS) | POL | We offer dynamic token support for all ERC-20 tokens: polygonscan.com/tokens | {transaction_hash}:{output_address} |
Solana | SOL | SOL is supported as mature. SPL tokens in the CSV file at the bottom of this article are supported as emerging. SPL tokens not in the CSV file are supported as pre-growth. See solscan.io/leaderboard/token for token symbols. | For transfers of SOL: {transaction_hash}:{system_account_address} For transfers of SPL tokens: {transaction_hash}:{system_account_address} OR {transaction_hash}:{token_account_address} |
Tron | TRX | All TRC-20 tokens: tronscan.org/#/tokens/list | {transaction_hash}:{output_address} |
XRP | XRP | | {transaction_hash}:{output_address} |
Zcash | ZEC | | {transaction_hash}:{output_address} OR {transaction_hash}:{output_index} |
Pre-growth networks
The table below lists our currently enabled pre-growth networks. Typically, a network's name matches its network value. However, if a project rebrands, we note this in the Notes column and provide links to the crypto project when possible. Always use the network value specified in the Network value column, even after a rebrand. We also provide the transferReference
format for each network.
We support the tokens of any network listed below. If you want us to enable a network that is not listed, please contact Customer Support. We aim to add a set of pre-growth networks each month, but the exact time will vary due to the team’s capacity. Typically, new networks will be added closer to the middle or end of the month.
For asset and token symbols, we suggest the following resources:
- coinmarketcap.com (provides some API functionality to retrieve asset symbols)
- coingecko.com
Network value (case insensitive) | Format for transferReference |
Notes | Project link |
---|---|---|---|
ABBC |
{transaction_hash}:{output_address} |
https://abbccoin.com/ | |
Acala |
{transaction_hash}:{output_address} |
https://acala.network/karura | |
Achain |
{transaction_hash}:{output_address} |
https://www.achain.com/ | |
Aelf |
{transaction_hash}:{output_address} |
https://aelf.com/ | |
Aeternity |
{transaction_hash}:{output_address} |
https://aeternity.com/ | |
Agoric |
{transaction_hash}:{output_address} |
https://agoric.com/ | |
Akash |
{transaction_hash}:{output_address} |
https://akash.network/token/ | |
Aleph_Zero |
{transaction_hash}:{output_address} |
https://alephzero.org/ | |
Alephium |
{transaction_hash}:{output_address} |
https://alephium.org/ | |
Algorand |
{transaction_hash}:{output_address} |
Algorand's native asset, ALGO, is supported as a mature asset. Metatokens on the Algorand network are supported as pre-growth assets. | https://algorand.com/ |
Altair |
{transaction_hash}:{output_address} |
https://altair.subscan.io/ | |
Aptos |
{transaction_hash}:{output_address} |
https://aptoslabs.com/ | |
Arbitrum_Nova |
{transaction_hash}:{output_address} |
https://arbitrum.io/anytrust | |
Archway |
{transaction_hash}:{output_address} |
https://archway.io/ | |
Ardor |
{transaction_hash}:{output_address} |
https://www.jelurida.com/ardor | |
Ark |
{transaction_hash}:{output_address} |
https://ark.io/ | |
Arweave |
{transaction_hash}:{output_address} |
https://www.arweave.org/ | |
Astar |
{transaction_hash}:{output_address} |
https://astar.network/ | |
Aurora |
{transaction_hash}:{output_address} |
https://www.charliedefi.com/chains/aurora | |
Avalanche_X_Chain |
{transaction_hash}:{output_address} |
https://subnets.avax.network/x-chain | |
Aventus |
{transaction_hash}:{output_address} |
https://aventus.io/ | |
Axelar |
{transaction_hash}:{output_address} |
https://axelar.network/ | |
B3 |
{transaction_hash}:{output_address} |
https://www.b3.fun/ | |
Bahamut |
{transaction_hash}:{output_address} |
https://www.bahamut.io/ | |
Bajun_Network |
{transaction_hash}:{output_address} |
https://ajuna.io/baju/ | |
Bandchain |
{transaction_hash}:{output_address} |
https://www.bandprotocol.com/ | |
Basilisk |
{transaction_hash}:{output_address} |
https://bsx.fi/ | |
Beam |
{transaction_hash}:{output_address} |
https://beam.mw/en/ | |
Bifrost |
{transaction_hash}:{output_address} |
Bifrost refers to Bifrost Polkadot and not Bifrost Kusama. |
https://bifrost.finance/ |
Binance_Chain |
{transaction_hash}:{output_address} |
Binance rebranded Binance Chain to BNB Beacon Chain. To register transfers that occurred on BNB Beacon Chain, use Binance_Chain for your network property. The blockchain data is the same. |
https://www.bnbchain.org/en/smartChain |
Bitcoin_Diamond |
{transaction_hash}:{output_address} |
https://www.bitcoindiamond.org/ | |
Bitcoin_Gold |
{transaction_hash}:{output_address} |
https://bitcoingold.org/ | |
Bitcoin_Satoshi_Vision |
{transaction_hash}:{output_address} |
BSV was downgraded to pre-growth on August 7, 2023. As a result, you can only register BSV transfers using the v2 API endpoints, and should include additional properties in your request. To learn more about which properties to send, see Asset tiers. | |
Bitkub |
{transaction_hash}:{output_address} |
https://www.bitkubchain.com/ | |
bitsCrunch |
{transaction_hash}:{output_address} |
https://docs.bitscrunch.com/docs/overview/about | |
Bitshares |
{transaction_hash}:{output_address} |
https://bitshares.org/ | |
Bittensor |
{transaction_hash}:{output_address} |
https://bittensor.com/ | |
Blackcoin |
{transaction_hash}:{output_address} |
https://blackcoin.org/ | |
Blast |
{transaction_hash}:{output_address} |
https://blast.io/en | |
Bob |
{transaction_hash}:{output_address} |
https://www.gobob.xyz/ | |
Boba |
{transaction_hash}:{output_address} |
https://boba.network/ | |
Bytom |
{transaction_hash}:{output_address} |
https://bytom.io/ | |
Canto |
{transaction_hash}:{output_address} |
https://canto.io/ | |
Cardano |
{transaction_hash}:{output_address} |
https://cardano.org/ | |
Casper |
{transaction_hash}:{output_address} |
https://casper.network/ | |
Celestia |
{transaction_hash}:{output_address} |
https://celestia.org/ | |
Centrifuge |
{transaction_hash}:{output_address} |
https://centrifuge.io/ | |
Chia |
{transaction_hash}:{output_address} |
https://www.chia.net/ | |
Chiliz |
{transaction_hash}:{output_address} |
In May 2023, Chiliz launched a successor project. The original project is now referred to as the "Chiliz Legacy Chain." For transfers on the Chiliz Legacy Chain, please use chiliz as the value for the network property. |
https://www.chiliz.com/ |
Chiliz2 |
{transaction_hash}:{output_address} |
In May 2023, Chiliz launched a successor project, Chiliz Chain, as an independent L1 blockchain. For transfers on the L1 Chiliz Chain platform, please use chiliz2 as the value for the network property. |
https://www.chiliz.com/ |
CLV |
{transaction_hash}:{output_address} |
https://clv.org/ | |
Codex |
{transaction_hash}:{output_address} |
https://www.codex.is/ | |
Concordium |
{transaction_hash}:{output_address} |
https://concordium.com/ | |
Conflux |
{transaction_hash}:{output_address} |
https://confluxnetwork.org/ | |
Consensus |
{transaction_hash}:{output_address} |
||
Constellation |
{transaction_hash}:{output_address} |
https://constellationnetwork.io/ | |
Core |
{transaction_hash}:{output_address} |
https://coremultichain.com/ | |
Core_Chain |
{transaction_hash}:{output_address} |
https://coredao.org/build/build-on-core | |
Coreum |
{transaction_hash}:{output_address} |
https://www.coreum.com/ | |
Cortex |
{transaction_hash}:{output_address} |
https://cortexlabs.ai/ | |
Cosmos |
{transaction_hash} |
https://cosmos.network/ | |
Counterparty |
{transaction_hash}:{output_address} |
https://counterparty.io/ | |
Cronos_POS |
{transaction_hash}:{output_address} |
https://docs.cronos-pos.org/for-users/new-brand-and-domains | |
Cronos_zkEVM |
{transaction_hash}:{output_address} |
https://cronos.org/zkevm | |
Crust |
{transaction_hash}:{output_address} |
https://www.crust.network/ | |
Cudos |
{transaction_hash}:{output_address} |
https://www.cudos.org/# | |
CyberMiles |
{transaction_hash}:{output_address} |
https://www.cybermiles.io/en-us/ | |
Decred |
{transaction_hash}:{output_address} |
https://decred.org/ | |
DeFiChain |
{transaction_hash}:{output_address} |
https://defichain.com/ | |
Deso |
{transaction_hash}:{output_address} |
https://www.deso.com/ | |
Diamond |
{transaction_hash}:{output_address} |
https://bit.diamonds/ | |
Diem |
{transaction_hash}:{output_address} |
https://www.diem.com/ | |
Digibyte |
{transaction_hash}:{output_address} |
https://digibyte.org/ | |
DigitalNote |
{transaction_hash}:{output_address} |
https://digitalnote.org/ | |
Dingocoin |
{transaction_hash}:{output_address} |
https://www.dingocoin.com/ | |
Divi |
{transaction_hash}:{output_address} OR {transaction_hash}:{output_index} |
https://diviproject.org/ | |
DOA |
{transaction_hash}:{output_address} |
https://doacrypto.com/ | |
Double_A_Chain |
{transaction_hash}:{output_address} |
https://www.acuteangle.com/ | |
Dreamcoin |
{transaction_hash}:{output_address} |
http://dreamcoin.fi/ | |
DYDX |
{transaction_hash}:{output_address} |
https://dydx.community/dashboard | |
Dymension |
{transaction_hash}:{output_address} |
https://dymension.xyz/ | |
Ecash |
{transaction_hash}:{output_address} |
https://e.cash/ | |
Edgeware |
{transaction_hash}:{output_address} |
https://www.edgeware.io/ | |
Efinity |
{transaction_hash}:{output_address} |
https://efinity.io/ | |
Einsteinium |
{transaction_hash}:{output_address} |
https://www.emc2.foundation/ | |
Electra_Protocol |
{transaction_hash}:{output_address} |
https://www.electraprotocol.com/ | |
Elrond |
{transaction_hash}:{output_address} |
On November 4, 2022 Elrond rebranded to MultiversX. Use Elrond to register MultiversX transfers. |
https://multiversx.com/ |
Eminer |
{transaction_hash}:{output_address} |
https://eminer.pro/#/ | |
Endurance |
{transaction_hash}:{output_address} |
https://ace.fusionist.io/ | |
Energy_Web |
{transaction_hash}:{output_address} |
https://www.energyweb.org/ | |
Enjin_Relaychain |
{transaction_hash}:{output_address} |
https://enjin.subscan.io/ | |
ENULS |
{transaction_hash}:{output_address} |
https://nuls.io/enuls/ | |
Enumium |
{transaction_hash}:{output_address} |
https://www.enumium.com/ | |
Ergo |
{transaction_hash}:{output_address} |
https://ergoplatform.org/en/ | |
Ethereum_PoW |
{transaction_hash}:{output_address} |
https://ethereumpow.org/ | |
EthereumFair |
{transaction_hash}:{output_address} |
https://etherfair.org/ | |
EUNO |
{transaction_hash}:{output_address} |
https://www.euno.co/ | |
Europa |
{transaction_hash}:{output_address} |
https://mainnet.skalenodes.com/fs/elated-tan-skat/27d20f14b495c5c0831eaaeb9263cad8d56f4b73/europa/index.html | |
Evmos |
{transaction_hash}:{output_address} |
https://evmos.org/ | |
ExclusiveCoin |
{transaction_hash}:{output_address} |
https://exclusivecoin.pw/ | |
Expanse |
{transaction_hash}:{output_address} |
https://expanse.tech/ | |
Factom |
{transaction_hash}:{output_address} |
https://www.factomprotocol.org/ | |
Feathercoin |
{transaction_hash}:{output_address} |
https://feathercoin.com/ | |
Fetch_AI |
{transaction_hash}:{output_address} |
https://fetch.ai/ | |
Filecoin |
{transaction_hash}:{output_address} |
https://filecoin.io/ | |
Firo |
{transaction_hash}:{output_address} | | https://firo.org/
Flare | {transaction_hash}:{output_address} | | https://flare.network/
Flo | {transaction_hash}:{output_address} | | https://flo.cash/
Flow | {transaction_hash}:{output_address} | | https://flow.com/
FNCY | {transaction_hash}:{output_address} | | https://fncy.world/
Folmcoin | {transaction_hash}:{output_address} | | https://folmcoin.com/
Force | {transaction_hash}:{output_address} | |
Fuse | {transaction_hash}:{output_address} | | https://www.fuse.io/
Fusion | {transaction_hash}:{output_address} | | https://www.fusion.org/en
Genshiro | {transaction_hash}:{output_address} | | https://genshiro.io/
Gleec | {transaction_hash}:{output_address} | | https://gleec.com/
Gnosis | {transaction_hash}:{output_address} | | https://www.gnosis.io/
Gochain | {transaction_hash}:{output_address} | | https://gochain.io/
Grin | {transaction_hash}:{output_address} | | https://grin.mw/
Gulden | {transaction_hash}:{output_address} | | https://gulden.io/
Gxchain | {transaction_hash}:{output_address} | |
HAQQ | {transaction_hash}:{output_address} | | https://haqq.network/
Harmony | {transaction_hash}:{output_address} | | https://www.harmony.one/
Hathor | {transaction_hash}:{output_address} | | https://hathor.network/
Haven | {transaction_hash}:{output_address} | | https://havenprotocol.org/
Hcash | {transaction_hash}:{output_address} | | https://h.cash/
Hdac | {transaction_hash}:{output_address} | | https://hdactech.com/
HECO | {transaction_hash}:{output_address} | | https://www.hecochain.com/en-us/
Hedera | {transaction_hash}:{output_address} | | https://hedera.com/
Helium | {transaction_hash}:{output_address} | | https://www.helium.com/
Hive | {transaction_hash}:{output_address} | | https://www.hiveblockchain.com/
Horizen | {transaction_hash}:{output_address} | | https://www.horizen.io/
HPB | {transaction_hash}:{output_address} | | https://www.hpb.io/
HydraDX | {transaction_hash}:{output_address} | | https://hydradx.io/
Icon | {transaction_hash}:{output_address} | | https://icon.community/
Iconic | {transaction_hash}:{output_address} | | https://iconic-blockchain.com/
ImmutableX | {transaction_hash}:{output_address} | | https://www.immutable.com/
Immutable_zkEVM | {transaction_hash}:{output_address} | | https://www.immutable.com/products/immutable-zkevm
Injective | {transaction_hash}:{output_address} | | https://injective.com/
INTChain | {transaction_hash}:{output_address} | | https://intchain.io/
Integritee | {transaction_hash}:{output_address} | | https://www.integritee.network/
Interlay | {transaction_hash}:{output_address} | | https://www.interlay.io/
Internet_Computer | {transaction_hash}:{output_address} | | https://internetcomputer.org/
Iost | {transaction_hash}:{output_address} | | https://iost.io/
Iota | {transaction_hash}:{output_address} | | https://www.iota.org/
Iotex | {transaction_hash}:{output_address} | | https://iotex.io/
IRISnet | {transaction_hash}:{output_address} | | https://www.irisnet.org/en/
Juno | {transaction_hash}:{output_address} | | https://junonetwork.io/
Kadena | {transaction_hash}:{output_address} | | https://kadena.io/
Kardiachain | {transaction_hash}:{output_address} | | https://kardiachain.io/
Karura | {transaction_hash}:{output_address} | | https://acala.network/karura
Kaspa | {transaction_hash}:{output_address} | | https://kaspa.org/
Kava | {transaction_hash}:{output_address} | | https://www.kava.io/
KCC | {transaction_hash}:{output_address} | | https://www.kcc.io/
Khala | {transaction_hash}:{output_address} | | https://parachains.info/details/khala
Kilt | {transaction_hash}:{output_address} | | https://docs.kilt.io/
Kintsugi | {transaction_hash}:{output_address} | | https://kintsugi.interlay.io/
Klaytn | {transaction_hash}:{output_address} | In April 2024, Klaytn rebranded to Kaia. To register transfers on Kaia, use Klaytn for the network property. | https://klaytn.foundation/
Komodo | {transaction_hash}:{output_address} | | https://komodoplatform.com/en/
Kon | {transaction_hash}:{output_address} | | https://konpay.io/
Kroma | {transaction_hash}:{output_address} | | https://kroma.network/
Kujira | {transaction_hash}:{output_address} | | https://kujira.network/
Kusama | {transaction_hash}:{output_address} | | https://kusama.network/
Kyve | {transaction_hash}:{output_address} | | https://www.kyve.network/
Linea | {transaction_hash}:{output_address} | | https://linea.build/
Link | {transaction_hash}:{output_address} | |
LinkEye | {transaction_hash}:{output_address} | | https://www.linkeye.com/
Liquid | {transaction_hash}:{output_address} | | https://blockstream.com/liquid/
Lisk | {transaction_hash}:{output_address} | | https://lisk.com/
Litentry | {transaction_hash}:{output_address} | | https://www.litentry.com/
Loopring | {transaction_hash}:{output_address} | | https://loopring.org/#/
LTO_Network | {transaction_hash}:{output_address} | | https://www.ltonetwork.com/
Lukso | {transaction_hash}:{output_address} | | https://lukso.network/
Luniverse | {transaction_hash}:{output_address} | | https://www.luniverse.io/
Manta_Atlantic | {transaction_hash}:{output_address} | | https://docs.manta.network/docs/Introduction
Manta_Pacific | {transaction_hash}:{output_address} | | https://docs.manta.network/docs/Introduction
Mantle | {transaction_hash}:{output_address} | | https://www.mantle.xyz/
Marmara | {transaction_hash}:{output_address} | | https://marmara.io/
MAYAChain | {transaction_hash}:{output_address} | | https://docs.mayaprotocol.com/introduction/readme
Medibloc | {transaction_hash}:{output_address} | | https://medibloc.com/
Merlin_Chain | {transaction_hash}:{output_address} | | https://merlinchain.io/
Metadium | {transaction_hash}:{output_address} | | https://metadium.com/
Metal | {transaction_hash}:{output_address} | | https://metall2.com/
Metaverse_DNA | {transaction_hash}:{output_address} | | https://www.mvsdna.com/#/
Metis | {transaction_hash}:{output_address} | | https://www.metis.io/
Milk | {transaction_hash}:{output_address} | | https://milkalliance.io/
Mina | {transaction_hash}:{output_address} | | https://minaprotocol.com/
MobileCoin | {transaction_hash}:{output_address} | | https://mobilecoin.com/
Mode | {transaction_hash}:{output_address} | | https://www.mode.network/
Monacoin | {transaction_hash}:{output_address} | | https://monacoin.org/
Monero | For deposits use {transaction_hash}:{output_index}:{receiving_address}:{payment_id}. For withdrawals use {transaction_hash}:{output_index}:{withdrawal_address}:{payment_id}. | | https://www.getmonero.org/
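Because Monero's transfer reference includes an output index and payment ID in addition to the transaction hash and address, it is easy to get the component order wrong. The following is a minimal sketch of a registration body for a received Monero deposit; the asset value and the bracketed placeholders are illustrative only and should be replaced with your own values:
{
    "network": "Monero",
    "asset": "XMR",
    "transferReference": "{transaction_hash}:{output_index}:{receiving_address}:{payment_id}",
    "direction": "received"
}
For a withdrawal, use the withdrawal address in place of the receiving address, as described in the table row above.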
Moonbeam | {transaction_hash}:{output_address} | | https://moonbeam.network/
Moonriver | {transaction_hash}:{output_address} | | https://moonbeam.network/networks/moonriver/
Myriadcoin | {transaction_hash}:{output_address} | | https://myriadcoin.org/
MXC | {transaction_hash}:{output_address} | We support the MXC zkEVM. | https://www.mxc.org/
Nano | {transaction_hash}:{output_address} | | https://nano.org/en
Navcoin | {transaction_hash}:{output_address} | | https://navcoin.org/
Ndau | {transaction_hash}:{output_address} | | https://ndau.io/
Near | {transaction_hash}:{output_address} | | https://near.org/
Neblio | {transaction_hash}:{output_address} | | https://nebl.io/
Nebulas | {transaction_hash}:{output_address} | | https://www.nebulas.io/
Nem | {transaction_hash}:{output_address} | | https://docs.nem.io/pages/
Neo | {transaction_hash}:{output_address} | | https://neo.org/
Nervos | {transaction_hash}:{output_address} | | https://www.nervos.org/
NeuroWeb | {transaction_hash}:{output_address} | | https://neuroweb.subscan.io/
Neutron | {transaction_hash}:{output_address} | | https://www.neutron.org/
Nexus | {transaction_hash}:{output_address} | | https://nexus.io/
Nimiq | {transaction_hash}:{output_address} | | https://www.nimiq.com/
Nodle | {transaction_hash}:{output_address} | | https://www.nodle.com/
Numbers | {transaction_hash}:{output_address} | | https://www.numbersprotocol.io/
Nym | {transaction_hash}:{output_address} | | https://nymtech.net/
Oasis | {transaction_hash}:{output_address} | | https://oasisprotocol.org/
Oasys | {transaction_hash}:{output_address} | | https://www.oasys.games/
Okexchain | {transaction_hash}:{output_address} | Okexchain rebranded to OKT Chain. Use Okexchain as the value for the network property to register transfers of OKT Chain. | https://www.okx.com/oktc
Omega_Network | {transaction_hash}:{output_address} | | https://omtch.com/
Ontology | {transaction_hash}:{output_address} | | https://ont.io/
ONUS | {transaction_hash}:{output_address} | | https://onuschain.io/
Oraichain | {transaction_hash}:{output_address} | | https://orai.io/
Osmosis | {transaction_hash}:{output_address} | | https://osmosis.zone/
Palette | {transaction_hash}:{output_address} | | https://hashpalette.com/
Parallel_Finance_Polkadot | {transaction_hash}:{output_address} | Note: This is not the EVM. | https://parallel.subscan.io/
Passage | {transaction_hash}:{output_address} | | https://www.passage.io/
Payprotocol | {transaction_hash}:{output_address} | | https://payprotocol.io/
Peercoin | {transaction_hash}:{output_address} | | https://www.peercoin.net/
Pendulum | {transaction_hash}:{output_address} | | https://pendulumchain.org/
Persistence | {transaction_hash}:{output_address} | | https://persistence.one/
Phala | {transaction_hash}:{output_address} | | https://phala.subscan.io/
Picasso | {transaction_hash}:{output_address} | | https://picasso.xyz/
Pinkcoin | {transaction_hash}:{output_address} | | https://pinkcoin.com/
Pivx | {transaction_hash}:{output_address} | | https://pivx.org/?hl=en
PlatON | {transaction_hash}:{output_address} | | https://www.platon.network/
PlayBlock | {transaction_hash}:{output_address} | | https://www.playnance.com/playblock.html
Pocket_Network | {transaction_hash}:{output_address} | | https://www.pokt.network/
Polkadot | {transaction_hash}:{output_address} | | https://polkadot.network/
Polkadot_Asset_Hub | {transaction_hash}:{output_address} | | https://wiki.polkadot.network/docs/learn-assets
Polygon_zkevm | {transaction_hash}:{output_address} | | https://polygon.technology/polygon-zkevm
Polymesh | {transaction_hash}:{output_address} | | https://polymesh.network/
Proton | {transaction_hash}:{output_address} | | https://www.proton.org/
Provenance | {transaction_hash}:{output_address} | | https://provenance.io/
PulseChain | {transaction_hash}:{output_address} | | https://pulsechain.com/
QRL | {transaction_hash}:{output_address} | | https://www.theqrl.org/
Qtum | {transaction_hash}:{output_address} | | https://qtum.org/
Radix | {transaction_hash}:{output_address} | | https://www.radixdlt.com/
Ravencoin | {transaction_hash}:{output_address} | | https://ravencoin.org/
Reddcoin | {transaction_hash}:{output_address} | | https://www.reddcoin.com/
Reef | {transaction_hash}:{output_address} | | https://reef.io/
ReserveBlock | {transaction_hash}:{output_address} | | https://www.reserveblock.io/
Robonomics | {transaction_hash}:{output_address} | | https://robonomics.network/
Ronin | {transaction_hash}:{output_address} | | https://roninchain.com/
RSK | {transaction_hash}:{output_address} | | https://rootstock.io/
Saakuru | {transaction_hash}:{output_address} | | https://saakuru.com/
Saga | {transaction_hash}:{output_address} | | https://www.saga.xyz/
Salus | {transaction_hash}:{output_address} | | https://saluscoin.info/
Santa | {transaction_hash}:{output_address} | | http://www.santavision.org/index.php
Scroll | {transaction_hash}:{output_address} | | https://scroll.io/
Secret | {transaction_hash}:{output_address} | | https://scrt.network/
Sei | {transaction_hash}:{output_address} | | https://www.sei.io/
Shiden | {transaction_hash}:{output_address} | | https://shiden.astar.network/
Shimmer | {transaction_hash}:{output_address} | Use Shimmer as the value for the network property to register transfers on the L1 blockchain, not the EVM testnet. | https://shimmer.network/
Shimmer_EVM | {transaction_hash}:{output_address} | | https://shimmer.network/evm
Sia | {transaction_hash}:{output_address} | | https://sia.tech/
Signum | {transaction_hash}:{output_address} | | https://signum.network/
Simple_Ledger_Protocol | {transaction_hash}:{output_address} | |
smartBCH | {transaction_hash}:{output_address} | | https://smartbch.org/
Social_Send | {transaction_hash}:{output_address} | | https://socialsend.io/
Solar | {transaction_hash}:{output_address} | | https://solar.org/
Sologenic | {transaction_hash}:{output_address} | | https://sologenic.org/
Songbird | {transaction_hash}:{output_address} | |
Stacks | {transaction_hash}:{output_address} | | https://www.stacks.co/
Starknet | {transaction_hash}:{output_address} | | https://starkware.co/starknet/
Stellar | {transaction_hash} | | https://www.stellar.org/
Step | {transaction_hash}:{output_address} | | https://www.step.finance/
Stratis | {transaction_hash}:{output_address} | | https://www.stratisplatform.com/
Stride | {transaction_hash}:{output_address} | | https://www.stride.zone/
Sui | {transaction_hash}:{output_address} | | https://sui.io/
Super_Zero_Protocol | {transaction_hash}:{output_address} | | https://sero.cash/en/
Supercoin | {transaction_hash}:{output_address} | |
SX_Network | {transaction_hash}:{output_address} | | https://docs.sx.technology/developers/mainnet-details/block-explorer/verifying-a-smart-contract
Symbol | {transaction_hash}:{output_address} | | https://symbol-community.com/
Syscoin | {transaction_hash}:{output_address} | | https://syscoin.org/
Tachyon | {transaction_hash}:{output_address} | | https://tachyon.eco/
Telos_EVM | {transaction_hash}:{output_address} | | https://www.telos.net/evm
Telos_Zero | {transaction_hash}:{output_address} | | https://www.telos.net/
TemTum | {transaction_hash}:{output_address} | | https://temtum.com/
Ternoa | {transaction_hash}:{output_address} | | https://www.ternoa.network/
Terra | {transaction_hash}:{output_address} | Terra rebranded to Terra Classic on May 28, 2022. To register transfers of the older Terra Classic blockchain, use Terra for the network property. | https://www.terra.money/
Terra2 | {transaction_hash}:{output_address} | Terra2 refers to Terra 2.0, the newer Terra project. To register transfers of Terra 2.0, use Terra2 for the network property. | https://www.terra.money/
Tezos | {transaction_hash}:{output_address} | | https://tezos.com/
Theta | {transaction_hash}:{output_address} | | https://www.thetatoken.org/
Thorchain | {transaction_hash}:{output_address} | | https://thorchain.org/
Titan_Chain | {transaction_hash}:{output_address} | | https://titanlab.io/home
TKX_Chain | {transaction_hash}:{output_address} | | https://tokenx.finance/tkx-chain
TNC | {transaction_hash}:{output_address} | | https://tnccoin.com/
TON | {transaction_hash}:{output_address} | | https://ton.org/
TrueChain | {transaction_hash}:{output_address} | | https://www.truechain.network/
Ubiq | {transaction_hash}:{output_address} | | https://ubiqsmart.com/
UMEE | {transaction_hash}:{output_address} | | https://umee.cc/
V_Systems | {transaction_hash}:{output_address} | | https://v.systems/
Vanar | {transaction_hash}:{output_address} | | https://vanarchain.com/en/
Vara | {transaction_hash}:{output_address} | | https://vara.network/
VeChain | {transaction_hash}:{output_address} | | https://www.vechain.org/
Velas | {transaction_hash}:{output_address} | | https://velas.com/en
Venom | {transaction_hash}:{output_address} | | https://venom.foundation/
Verge | {transaction_hash}:{output_address} | | https://vergecurrency.com/
Vertcoin | {transaction_hash}:{output_address} | | https://vertcoin.org/
Voucher_Coin | {transaction_hash}:{output_address} | |
Viction | {transaction_hash}:{output_address} | | https://viction.xyz/
Vite | {transaction_hash}:{output_address} | | https://www.vite.org/
Vyvo | {transaction_hash}:{output_address} | | https://www.vyvo.com/
Waltonchain | {transaction_hash}:{output_address} | | https://www.waltonchain.org/
Wanchain | {transaction_hash}:{output_address} | | https://www.wanchain.org/
Waves | {transaction_hash} | | https://waves.tech/
Wax | {transaction_hash}:{output_address} | | https://www.wax.io/
Waykichain | {transaction_hash}:{output_address} | | https://www.waykichain.com/
Wemix | {transaction_hash}:{output_address} | | https://www.wemix.com/
Xana | {transaction_hash}:{output_address} | | https://xana.net/
XDB_Chain | {transaction_hash}:{output_address} | | https://xdbchain.com/
XinFin | {transaction_hash}:{output_address} | | https://xinfin.org/
XPLA | {transaction_hash}:{output_address} | | https://www.xpla.io/
X_Layer | {transaction_hash}:{output_address} | | https://www.okx.com/xlayer
Yoyow | {transaction_hash}:{output_address} | | https://www.yoyow.org/index_en.html
Zilliqa | {transaction_hash}:{output_address} | | https://www.zilliqa.com/
Zeitgeist | {transaction_hash}:{output_address} | | https://zeitgeist.stg.subscan.io/extrinsic/5832154-3
Zetachain_ZEVM | {transaction_hash}:{output_address} | | https://explorer.zetachain.com/evm/txs
Zetrix | {transaction_hash}:{output_address} | | https://www.zetrix.com/
ZKSpace | {transaction_hash}:{output_address} | | https://zks.org/
zkSync | {transaction_hash}:{output_address} | In February 2023, zkSync rebranded its platforms. For transfers on the zkSync Lite platform, use zkSync as the value for the network property. | https://zksync.io/
zkSync2 | {transaction_hash}:{output_address} | In February 2023, zkSync rebranded its platforms. For transfers on the zkSync Era platform, use zkSync2 as the value for the network property. | https://zksync.io/
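Several networks in this table have rebranded (for example Klaytn, Okexchain, Terra, and zkSync). In each case, the network property keeps the value listed in this table rather than the new brand name. As an illustration, a registration body for a received transfer on the zkSync Era platform would use zkSync2; the asset value and the bracketed placeholders below are examples only and should be replaced with your own values:
{
    "network": "zkSync2",
    "asset": "ETH",
    "transferReference": "{transaction_hash}:{output_address}",
    "direction": "received"
}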