How to use the BladeBit Simulation Tool to Estimate Compression Size

Getting Started with Running the BladeBit CUDA Simulation Tool and Gathering Data

Science of Mining Guide v0.92

September 2, 2023

Version History

Version 0.9: Initial version
Version 0.91: Fix typos
Version 0.92: Add info on how to use the PCM tool to get RAPL power data for CPU/DRAM

1. Overview

On August 28, 2023, we released a report analyzing the performance and power efficiency of BladeBit CUDA using the BladeBit CUDA simulator. The report provided data points for the test systems in our lab and serves as a set of baseline measurements for comparison.

This short guide gives an overview of how to use the simulator to estimate the compression level your CPU/GPU can support for BladeBit CUDA, and how to gather some helpful performance diagnostics along the way.

We use Ubuntu for this overview.




The number of BladeBit software threads and the compression level determine the total memory required. For example, testing C7 farms with 32 threads needs 10.4 GiB of RAM.

Table 1: System Memory Used (GiB), CPU Farming



2. Sample script to run the simulator

Our method of testing was to use several Bash scripts to automate data collection and parsing across a wide variety of runs. There are many ways to automate data extraction and parsing; we provide a simplified example in this write-up as an introductory guide. Additional automation and command parameters can be added to go further.

First, it's necessary to make a plot for your desired compression level. The command below creates a C7 plot:

./bladebit_cuda -f $FARMER -c $POOL --compress 7 cudaplot /path/to/plot_storage

(Use your farmer public key for $FARMER and your pool contract address for $POOL, and replace the default /path/to/plot_storage above with your plot storage location.)

At a minimum, all that is required to run the simulator is the ./bladebit_cuda command with your chosen options and a compressed plot file.

However, if you would also like to gather CPU, GPU, memory, and disk utilization data, the following simple script performs basic data collection alongside the run.


Sample simulator run script with basic data collection for sar and nvidia-smi:
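A minimal sketch of such a wrapper is below. The `simulate` flags, the 32-thread count, and the file names are assumptions, not the exact script from our runs; check ./bladebit_cuda simulate --help for the options your build supports.

```shell
#!/usr/bin/env bash
# Hypothetical run wrapper: starts sar and nvidia-smi collectors in the
# background, runs the simulator, then stops the collectors.
PLOT="${1:-/path/to/plot_storage/plot-c7.plot}"

# Record system activity every 5 seconds into a sar binary file (needs sysstat)
if command -v sar >/dev/null; then
    sar -o sar-data.bin 5 >/dev/null 2>&1 &
    SAR_PID=$!
fi

# Log GPU utilization, power, and memory every 5 seconds (needs NVIDIA driver)
if command -v nvidia-smi >/dev/null; then
    nvidia-smi --query-gpu=timestamp,utilization.gpu,power.draw,memory.used \
               --format=csv -l 5 > nvidia-log.txt &
    SMI_PID=$!
fi

# Run the simulator, mirroring its output to a file
if [ -x ./bladebit_cuda ]; then
    ./bladebit_cuda simulate -p 32 "$PLOT" | tee simulate-output.txt
else
    echo "bladebit_cuda not found in current directory" | tee simulate-output.txt
fi

# Stop the background collectors
[ -n "${SAR_PID:-}" ] && kill "$SAR_PID"
[ -n "${SMI_PID:-}" ] && kill "$SMI_PID"
echo "Done; see simulate-output.txt, sar-data.bin, nvidia-log.txt"
```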

To easily log the run output to a file, one option is to use the script utility, such as:
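For example (a sketch using the util-linux script command; the placeholder echo stands in for the real simulator invocation):

```shell
# Interactive form: `script run-log.txt` starts a recorded shell session and
# writes everything shown on the terminal to run-log.txt until you type `exit`.
# The -c form below records a single command instead.
if command -v script >/dev/null; then
    script -c "echo 'simulator output would appear here'" run-log.txt
else
    # Fallback so the example still produces a log file without util-linux
    echo 'simulator output would appear here' > run-log.txt
fi
```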


If you SSH to the system under test, you may also consider using screen (sudo apt install screen) to avoid losing the run when the SSH session is closed due to network inactivity or connection loss.

Thus, a full run could look like:


Full guides on how to use screen are available online; however, a few basic commands to get started are: screen -S <name> to start a named session, Ctrl-a d to detach, screen -r <name> to reattach, and screen -ls to list sessions.

Script to get Nvidia GPU data:
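A sketch of such a GPU logger is below. The query fields and the 1-second interval are choices, not requirements; see nvidia-smi --help-query-gpu for the full field list.

```shell
#!/usr/bin/env bash
# Hypothetical GPU logger: write one CSV sample per second to nvidia-log.txt.
LOG=nvidia-log.txt
if command -v nvidia-smi >/dev/null; then
    # -l 1 repeats the query every second until the process is killed
    nvidia-smi --query-gpu=timestamp,utilization.gpu,utilization.memory,power.draw,memory.used \
               --format=csv -l 1 > "$LOG" &
    echo "GPU logging started (PID $!); kill it when the run finishes."
else
    echo "nvidia-smi not found; GPU logging skipped." | tee "$LOG"
fi
```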


Sample nvidia-log.txt

RAPL

To get more insight into CPU and DRAM power consumption, the Intel Running Average Power Limit ("RAPL") CPU counters can be queried over time to monitor the power consumption of the CPU and memory. One easy way to get this information is to install the PCM tools from Intel, available from:
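As a sketch, assuming the pcm binary from Intel's PCM repository (github.com/intel/pcm) is installed, package and DRAM energy can be sampled once per second and logged to CSV. The -i and -csv flags are assumptions based on recent PCM builds; verify with pcm --help.

```shell
# Sample CPU package / DRAM energy via RAPL once per second for 60 samples.
# Flags are assumptions -- check `pcm --help` for your build. Needs root.
if command -v pcm >/dev/null; then
    sudo pcm 1 -i=60 -csv=pcm-log.csv
else
    echo "pcm not installed; see https://github.com/intel/pcm" | tee pcm-log.csv
fi
```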


3. Extracting data from sar

The sar utility can provide very detailed information about system performance. This section gives a short overview of how to work with the sar data that has been collected.

At a high level, the method is to log all sar data to one .bin file (e.g. "sar-data.bin" in this example) and then selectively extract portions of interest into separate text files for easier processing.
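For instance, collection into the binary file can be started in the background before the run and killed afterwards (requires the sysstat package; the 5-second interval and file names are arbitrary choices):

```shell
# Record system activity every 5 seconds into sar-data.bin until killed.
if command -v sar >/dev/null; then
    sar -o sar-data.bin 5 >/dev/null 2>&1 &
    echo "sar collection started (PID $!); kill it after the run" | tee sar-collect.log
else
    echo "sar not found; install the sysstat package" | tee sar-collect.log
fi
```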

CPU utilization data:

sar -u -f sar-data.bin > cpu-utilization.txt

For easier import into tools like Excel, it can be useful to make these fields comma separated (CSV). This can be done with the line below; the tr command replaces each run of spaces with a single comma.

sar -u -f sar-data.bin | tr -s ' ' ',' > cpu-utilization-csv.txt
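To see what the tr stage does, run it on a single sar-style line:

```shell
# tr translates spaces to commas; -s squeezes repeated spaces into one comma
echo "12:00:01 AM     all      3.21      0.00      1.05" | tr -s ' ' ','
# -> 12:00:01,AM,all,3.21,0.00,1.05
```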

Many other performance metrics can be extracted from the same sar file, such as memory utilization (sar -r), disk activity (sar -d), network traffic (sar -n DEV), and load averages (sar -q).

The sar data can be readily imported into other tools such as Excel for graphing and additional analysis.

4. Interpreting BladeBit CUDA Results

After the run completes, the BladeBit CUDA simulator prints a summary of results such as the sample below.

Several key fields of interest are the worst plot lookup time and the average full proof lookup time.

The current Chia guide suggests keeping the maximum lookup time around 5 seconds; if times exceed 10 seconds, move to a more powerful CPU/GPU or use a lower compression level.

The sample results below are representative of attempting to use a compression level and farm size that are too high for the hardware.

The estimated largest farm sizes reported by the tool seem to be larger than the response time data would suggest is reasonable, so more investigation may be needed before relying on that portion of the results.


Sample BladeBit CUDA simulator results:


5. Conclusions

This write-up provided an example of how to run the BladeBit CUDA simulator tool to estimate how well a Chia farmer's CPU or GPU can handle a given farm size and compression level. While the simulator itself provides useful output, it is often helpful to dig deeper into the data, e.g. from sar or nvidia-smi, for additional insights. The sample scripts here provide a basic starting point for anyone who would like to evaluate BladeBit CUDA performance on their own CPU/GPU hardware.


Contact info: tech@ [this website address]