RPC Inspector Pro Measures Global Performance

RPC providers run infrastructure in different data centers and route traffic through different networks, load balancers, and CDN paths. Your users do not all reach the same backend in the same way.

That means the “best” endpoint can depend on:

  • where your users are located
  • how the provider routes requests
  • how fresh the provider’s node data is
  • how the endpoint handles repeated requests
  • whether WebSocket or gRPC streams stay connected and deliver updates quickly
  • whether rate limits or timeouts appear under normal usage

RPC Inspector Pro measures RPC endpoint performance from multiple probe regions so you can see these differences instead of guessing based on one machine or one cloud region.

What a Run Measures

A run starts with one endpoint, or with two endpoints you want to compare.

For one endpoint, the report helps answer questions such as:

  • Is this endpoint fast from the regions I care about?
  • Does it return fresh and consistent data?
  • Does it rate limit or time out from some places?
  • Do live stream updates arrive evenly across regions?

For two endpoints, the report compares Endpoint A and Endpoint B across the same selected probe regions. This is useful when comparing providers such as Alchemy vs. QuickNode, Chainstack vs. GetBlock, or a private paid endpoint against a public fallback.

Two-endpoint comparisons require both endpoints to be on the same network. Comparing Base mainnet to Solana mainnet would not produce a meaningful result because the endpoints are serving different chains. Comparing two Base mainnet endpoints can be meaningful because the report can evaluate the same network behavior across both providers.

How to Use the Form

The form asks for three main inputs.

First, choose the probe regions. These are the geographic AWS regions that will run the measurements. More regions give you a broader view of global behavior. A smaller region set is useful when you care about a specific market or want a faster, more focused check.

Second, enter one or two endpoints. Endpoint A is required. Endpoint B is optional and is used for comparisons.

Private endpoints are supported. Many private RPC endpoints include an API key in the URL, and RPC Inspector Pro accepts that format. For reporting, it stores and displays only the endpoint host or domain, not the full URL. Auth header values are not stored.

Third, run the inspector and keep the page open while the run completes. Most standard runs take about 35 seconds. When report generation finishes, the page shows a dedicated results URL that you can share.

Endpoint Types

The endpoint format tells RPC Inspector Pro which kind of measurement is possible.

Use https:// or http:// for request/response tests. These are standard JSON-RPC or REST-style endpoint calls.

Use wss:// or ws:// for WebSocket subscription tests. These measure live stream behavior, such as new block headers or Solana slot and block subscriptions.

Use grpcs://, grpc://, or a host like example.com:443 for gRPC subscription tests. Do not submit gRPC endpoints as https://..., even if the endpoint uses port 443.

RPC Inspector Pro detects the network automatically and checks that the endpoint type is compatible with the available tests.
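The scheme rules above can be sketched as a small classifier. This is an illustrative helper, not the tool's internal API; the function name and category labels are assumptions.

```python
def classify_endpoint(url: str) -> str:
    """Map an endpoint URL to the kind of test it can run.

    Illustrative sketch of the scheme rules described above,
    not RPC Inspector Pro's internal implementation.
    """
    lowered = url.lower()
    if lowered.startswith(("https://", "http://")):
        return "request-response"        # standard JSON-RPC / REST-style calls
    if lowered.startswith(("wss://", "ws://")):
        return "websocket-subscription"  # live stream tests
    if lowered.startswith(("grpcs://", "grpc://")):
        return "grpc-subscription"
    # A bare host:port such as example.com:443 is treated as gRPC,
    # mirroring the rule that gRPC endpoints must not be submitted
    # as https:// even when they use port 443.
    host, sep, port = url.rpartition(":")
    if sep and host and port.isdigit():
        return "grpc-subscription"
    raise ValueError(f"unrecognized endpoint format: {url!r}")
```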

Request/Response Test

The request/response test measures how HTTP or HTTPS endpoints behave when asked for current chain data and follow-up data.

It can measure:

  • latency: how long successful requests take
  • consistency: whether block, slot, or height numbers move forward as expected
  • availability: whether the endpoint can serve the data it just reported
  • freshness: how old the returned block or slot appears to be when timestamp data is available
  • comparison: when two endpoints are submitted, which endpoint performs better by region and metric

This test supports EVM, SVM, Polkadot/Substrate, Cosmos/Tendermint, and UTXO networks.

For a business reader, this is often the easiest test to interpret. It answers a direct question: “If my application asks this endpoint for current data, how fast and reliable is the answer from different regions?”

For a technical reader, the measurement is more specific than a generic ping. It uses real RPC methods for each network category, records successful responses, and counts failures such as timeouts and rate limits.

WebSocket Subscription Test

Many applications do more than request data once: they subscribe to live updates.

The WebSocket subscription test measures how live stream messages arrive across selected regions. Depending on the network, those messages may represent new block headers, finalized heads, block notifications, or slot notifications.

This test can help answer:

  • Which regions receive stream updates first?
  • Does one endpoint consistently deliver updates ahead of another?
  • Are there gaps in the stream?
  • Does the stream connect successfully and stay useful long enough?
  • Does an endpoint accept richer block subscriptions, or only lighter slot subscriptions?

The WebSocket subscription test supports EVM, SVM, Polkadot/Substrate, and Cosmos/Tendermint networks.

For Solana and other SVM networks, RPC Inspector Pro can use blockSubscribe when the selected endpoint set supports it, and slotSubscribe when that is the commonly supported method. The report uses the matching vocabulary: block subscriptions are reported in blocks, and slot subscriptions in slots.
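Gap detection of the kind mentioned above can be sketched as a pure function over the slot or block numbers a stream actually delivered. This is an assumption-level illustration, not how RPC Inspector Pro computes gaps internally.

```python
def find_gaps(observed):
    """Return (start, end) ranges of slot or block numbers missing from
    a stream of observed heights. Illustrative sketch only."""
    gaps = []
    ordered = sorted(set(observed))
    for prev, nxt in zip(ordered, ordered[1:]):
        if nxt - prev > 1:
            # everything strictly between prev and nxt was never delivered
            gaps.append((prev + 1, nxt - 1))
    return gaps
```

For example, a stream that delivered slots 100, 101, 104, 105, and 108 has two gaps: 102-103 and 106-107.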

gRPC Subscription Test

Some networks and providers expose live data through gRPC. RPC Inspector Pro supports gRPC subscription tests for Sui and Solana-style Yellowstone/Geyser endpoints.

For Sui, the report uses checkpoint vocabulary: checkpoint, checkpoint sequence number, and digest. It does not describe Sui checkpoints as blocks.

For Solana Yellowstone/Geyser, the report can measure block or slot stream behavior, depending on what the selected endpoints support.

This test is useful when your application or infrastructure uses gRPC streams directly, or when you want to compare gRPC stream behavior against other endpoint options.

Mixed gRPC and WebSocket Solana Test

Solana teams often face a practical question: should they use a Yellowstone/Geyser gRPC stream, a Solana WebSocket stream, or both?

The mixed gRPC and WebSocket Solana test compares those two transport styles in one run:

  • Endpoint A is a Yellowstone/Geyser gRPC endpoint.
  • Endpoint B is a Solana WebSocket endpoint.
  • Both endpoints must be on the same SVM network.
  • Both sides are measured as block subscriptions.

This is a specialized test. It is useful when you want to compare block propagation between a gRPC stream and a WebSocket stream without mixing different networks or different units.

If Solana network validation cannot finish because reference checks were incomplete, the run may be rejected as retryable. In that case, try again. This kind of rejection means the tool could not safely confirm the required network identity for the comparison; it does not automatically mean the endpoint is bad.

Reading the Report

A report is built from datasets. One dataset is one endpoint measured from one selected probe region.

A one-endpoint run with ten selected regions expects ten datasets. A two-endpoint run with ten selected regions expects twenty datasets.
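That expectation is just the endpoint-region grid, which makes it easy to see which datasets a report pass did not receive. The helper below is illustrative, not part of the tool:

```python
from itertools import product

def missing_datasets(endpoints, regions, received):
    """Compare the expected endpoint-region dataset grid against the
    datasets a report pass actually received. Illustrative sketch."""
    expected = set(product(endpoints, regions))
    return sorted(expected - set(received))
```

For a two-endpoint run over two regions, three received datasets leave exactly one (endpoint, region) pair unaccounted for.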

The report is organized to answer different levels of questions.

Findings are the fastest place to start. They summarize the most important takeaways and point to notable differences, missing data, or endpoint behavior worth investigating.

Summary tables give a broad metric view across regions and endpoints. Use them to review overall latency, response quality, availability, propagation, or leadership patterns.

Details tables show the underlying region-level or endpoint-region-level rows. Technical users can use these tables to inspect exactly where a provider performed well or poorly.

Errors show retained evidence for failures, setup problems, rate limits, timeouts, and other issues that may explain why a dataset or measurement is incomplete.

The report may show that some selected datasets were missing or arrived too late for a report pass. Read that carefully. A missing dataset is not always proof that the endpoint failed; it can also reflect timing, infrastructure, or report-generation boundaries. The report separates dataset availability from later inclusion in specific tables or calculations.

Reading Propagation Results

For subscription tests, propagation is not a single number. The report provides several views because each one answers a different question.

The fairest headline is block-region, slot-region, or checkpoint-region leadership in Findings. This compares Endpoint A and Endpoint B for the same block, slot, or checkpoint in the same probe region. That matters because it avoids mixing geography into the endpoint comparison. If Endpoint A wins a block-region comparison, it means Endpoint A delivered that same unit earlier than Endpoint B in that same region.

For one-endpoint reports, the related concept is block, slot, or checkpoint leadership. It shows which probe regions saw each unit first for that endpoint. This helps you understand regional delivery patterns, but it is not an endpoint-vs-endpoint contest.

Summary by dataset answers a different question: “How did each endpoint-region dataset behave overall?” It is useful for spotting regions or endpoints with consistently slower arrivals, zero observations, stream gaps, setup failures, or poor response quality. This view is endpoint-region centered, so it is good for diagnosing where behavior differs.

Summary by block, slot, or checkpoint is unit centered. It shows what happened for each observed unit across the available datasets. This is useful when you want to inspect a particular block, slot, or checkpoint and see where it arrived first, where it arrived later, and where evidence was missing.

In practice:

  • Use Findings leadership to understand the fairest endpoint comparison.
  • Use Summary by Dataset to diagnose endpoint-region behavior.
  • Use Summary by Block, Slot, or Checkpoint to inspect propagation of individual units.
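The block-region leadership idea can be sketched as a comparison of arrival times for the same unit in the same region, which is what keeps geography out of the endpoint contest. Everything here, including the data shape, is an illustrative assumption:

```python
def region_leadership(arrivals_a, arrivals_b):
    """Count, per region, how often each endpoint delivered the same unit
    (block, slot, or checkpoint) first. Inputs map (unit, region) to an
    arrival time; only pairs observed by both endpoints are compared.
    Illustrative sketch, not the report's internal calculation."""
    wins = {}
    for key in arrivals_a.keys() & arrivals_b.keys():
        _unit, region = key
        tally = wins.setdefault(region, {"A": 0, "B": 0, "tie": 0})
        if arrivals_a[key] < arrivals_b[key]:
            tally["A"] += 1
        elif arrivals_b[key] < arrivals_a[key]:
            tally["B"] += 1
        else:
            tally["tie"] += 1
    return wins
```

Because only shared (unit, region) observations are compared, a unit one endpoint never delivered in a region counts as missing evidence rather than a win.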

Practical Ways to Use the Results

If you operate an application with users concentrated in a few regions, focus first on those probe regions. A global average may be less useful than knowing which endpoint behaves best near your actual users.

If you are choosing between two providers, run them together as Endpoint A and Endpoint B on the same network. That gives the report a fairer comparison surface because both endpoints are measured from the same selected regions during the same run.

If you rely on live updates, use the WebSocket or gRPC subscription tests instead of relying only on request/response latency. A provider can answer ordinary requests quickly while still delivering live stream updates later than another provider.

If you see rate limits, timeouts, or setup failures, inspect the Errors section before drawing conclusions from summary numbers alone. Failures are part of endpoint behavior, and they often explain why a provider looks strong in one region and weak in another.

If you use private endpoints with API keys, you can submit the provider URL as you normally use it. The report is designed to show host or domain labels rather than full secret-bearing URLs.

What the Results Do Not Prove

RPC Inspector Pro gives you evidence from a specific run, with specific endpoints, selected regions, and a specific time window.

It does not prove:

  • which provider is permanently best
  • which endpoint will be best for every application
  • the maximum load an endpoint can sustain
  • how an endpoint behaves under sustained stress benchmarking
  • that a missing dataset is automatically an endpoint outage
  • that stream arrival timestamps are first-byte network timings
  • that a one-endpoint inspection can answer every two-endpoint comparison question
  • that endpoints on different networks can be compared meaningfully

The strongest use of RPC Inspector Pro is repeated, practical measurement: compare the endpoints you are actually considering, from the regions you actually care about, using the endpoint type your application actually uses.

Technical Reference

RPC Inspector Pro currently supports these broad measurement families.

Request/Response Tests over HTTP or HTTPS

  • EVM: eth_getBlockByNumber, then eth_getLogs(hash)
  • SVM: getSlot(finalized), then getBlock(slot)
  • Polkadot/Substrate: chain_getFinalizedHead, then chain_getBlock(hash)
  • Cosmos/Tendermint: /status, then /block?height=
  • UTXO: getblockchaininfo, then getblock(hash, 1)

Subscription Tests

  • EVM: newHeads over WebSocket
  • SVM: blockSubscribe(finalized) or slotSubscribe over WebSocket
  • SVM: block or slot subscriptions over Yellowstone/Geyser gRPC
  • Mixed SVM: Yellowstone/Geyser gRPC as Endpoint A with WebSocket as Endpoint B for block subscriptions
  • Polkadot/Substrate: subscribeFinalizedHeads over WebSocket
  • Cosmos/Tendermint: NewBlockHeader over WebSocket
  • Sui: SubscribeCheckpoints over gRPC
