When we talk about blockchain adoption, most conversations revolve around decentralisation, security, and governance. But there’s a harder truth that no one wants to admit at crypto conferences:

Most blockchains are still stuck in the slow lane.

Ethereum's base layer processes ~15 transactions per second. Bitcoin manages ~7. Meanwhile, Visa's network can handle around 65,000 transactions per second. We're not even close.
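Those headline figures fall out of a simple back-of-envelope calculation. The sketch below shows where Ethereum's ~15 TPS roughly comes from; the gas target, slot time, and average gas per transaction are rough public estimates I'm assuming, not exact protocol constants.

```python
# Back-of-envelope: where Ethereum L1's ~15 TPS comes from.
# All three figures are rough public estimates, not exact constants.
GAS_TARGET_PER_BLOCK = 15_000_000  # approximate post-EIP-1559 gas target
BLOCK_TIME_SECONDS = 12            # post-merge slot time
AVG_GAS_PER_TX = 80_000            # assumed average over the real tx mix

gas_per_second = GAS_TARGET_PER_BLOCK / BLOCK_TIME_SECONDS
tps = gas_per_second / AVG_GAS_PER_TX
print(f"~{tps:.0f} TPS")  # lands in the same ballpark as the figure above
```

Change the average gas per transaction (simple transfers cost 21,000 gas; complex DeFi interactions far more) and the number moves, but it stays two to three orders of magnitude below Visa.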

This is as much a business problem as a technical one. It's a user experience problem, and frankly, a bottleneck that's strangling real-world adoption. As someone working in QA and testing for blockchain systems, I've watched teams struggle with this question daily: how do you test a system whose performance is fundamentally limited by the protocol itself?

The Throughput Problem is Real

Blockchain throughput, which is the number of transactions a network can process per unit of time, directly impacts:

  • User experience: Slow networks = high fees + long wait times
  • Cost efficiency: Network congestion = exponential fee increases
  • Competitive viability: Enterprise clients won’t use a chain that can’t handle their transaction volume
  • Testing complexity: You can’t properly test an application if the underlying chain is a bottleneck

I remember working on a DeFi project that had fundamentally sound logic. The smart contracts were audited. The economic model was solid. But once we went to mainnet, we hit a wall: under peak trading conditions, transaction confirmation times hit 45-60 seconds, and users were paying $50+ in gas fees for a $200 trade.

The chain wasn’t broken. It was performing exactly as designed, but it was designed for a throughput that fell short of real-world demand.

Why is Blockchain Throughput so Constrained?

It comes down to a classic trilemma: Decentralisation vs. Security vs. Scalability.

Blockchain networks have to make tradeoffs:

  1. Full node requirements - If every node needs to validate every transaction, you’re limited to what the weakest hardware can handle. This preserves decentralisation and security but tanks throughput.
  2. Consensus mechanisms - Proof of Work is secure but slow (Bitcoin, Ethereum pre-merge). Proof of Stake is faster but more complex to secure against attacks.
  3. Block size & block time - Increasing block size increases throughput but makes it harder to run full nodes. Decreasing block time (faster blocks) increases the risk of forks and orphaned blocks.
  4. Data availability - Every transaction needs to be stored, verified, and propagated across the network. This is a massive bottleneck.
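The block size and block time levers in point 3 can be captured in a toy model. The numbers below are illustrative assumptions (the first set is Bitcoin-like: ~1 MB blocks, 10-minute block time, ~250-byte transactions):

```python
# Toy throughput model: transactions per block divided by block time.
# All parameters are illustrative assumptions.
def tps(block_size_bytes: int, avg_tx_bytes: int, block_time_s: float) -> float:
    return (block_size_bytes / avg_tx_bytes) / block_time_s

baseline      = tps(1_000_000, 250, 600)  # Bitcoin-like: ~6.7 TPS
bigger_blocks = tps(8_000_000, 250, 600)  # 8x block size
faster_blocks = tps(1_000_000, 250, 75)   # 8x faster blocks

print(baseline, bigger_blocks, faster_blocks)
```

Both levers buy the same throughput on paper, but they fail differently: bigger blocks raise the hardware bar for full nodes, while faster blocks raise the fork and orphan rate.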

These constraints are, in fact, features. They exist because a blockchain without them wouldn’t be truly decentralised, and decentralisation is the whole point.

But they also mean that on-chain throughput is fundamentally limited.

The Solutions (And Their Tradeoffs)

The industry has learned that you can’t solve throughput while maintaining full decentralisation on Layer 1. So we’ve moved to multiple approaches:

Layer 2 Solutions

Rollups (Optimistic and Zero-Knowledge) batch transactions off-chain and submit proofs to Layer 1. They can hit 1,000-4,000 transactions per second while still maintaining Ethereum’s security.

Tradeoff: Slightly longer settlement times. Added complexity. Cross-chain bridging risks.
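The economics of batching are easy to sketch: a rollup amortises the fixed L1 cost of a submission across every transaction in the batch. The gas figures below are illustrative assumptions, not real rollup constants:

```python
# Amortising a fixed L1 overhead across N rolled-up transactions.
# Gas figures are illustrative assumptions, not rollup constants.
def per_tx_l1_gas(batch_overhead_gas: int, calldata_gas_per_tx: int, batch_size: int) -> float:
    """Effective L1 gas each transaction pays when posted in a batch."""
    return batch_overhead_gas / batch_size + calldata_gas_per_tx

CALLDATA_GAS = 16 * 112  # assumed ~112 bytes of calldata per compressed tx

solo    = per_tx_l1_gas(21_000, CALLDATA_GAS, 1)    # standalone submission
batched = per_tx_l1_gas(21_000, CALLDATA_GAS, 500)  # inside a 500-tx batch

print(f"solo: {solo:.0f} gas/tx, batched: {batched:.0f} gas/tx")
```

The per-transaction calldata cost still dominates at large batch sizes, which is why rollup teams invest so heavily in compression and why blob-carrying transactions matter for their cost structure.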

Sharding & Modular Blockchains

Splitting the network into shards, each processing transactions in parallel, or separating consensus from execution.

Tradeoff: Dramatically increased complexity. Harder to enforce global consistency.

Higher Block Throughput (Increased Block Size/Time)

Some chains (Solana, Polygon) have simply increased block sizes or decreased block times.

Tradeoff: Higher hardware requirements for validators. Increased network bandwidth needs. Less decentralisation over time.

Application-Specific Chains

Build your own blockchain for your specific use case, optimised for your throughput needs.

Tradeoff: You lose network effects and security guarantees. You’re now responsible for your own validator set.

The Testing Nightmare

From a QA perspective, throughput constraints create a cascading testing problem.

You need to test under realistic load, but the environment you’re targeting (mainnet) imposes constraints that no test setup fully reproduces. You can:

  1. Test on testnet - But testnet isn’t a realistic representation of mainnet. Different validator sets, different load patterns, different security assumptions.
  2. Run a local fork - You can simulate mainnet state locally, but you can’t simulate real transaction volume competing for block space.
  3. Load test on a private network - You can hit your target throughput, but you’re not testing against actual network constraints.
  4. Stress test on mainnet - This is expensive and can affect other users. We’ve seen teams do this before going live, which is… not great for everyone else.
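The core dynamic you're trying to observe in options 2 and 3 can be sketched with a toy model: feed transactions into a chain with fixed per-block capacity and watch the backlog grow once demand exceeds throughput. The "chain" here is an in-memory stand-in, not a real client:

```python
# Toy congestion model: a mempool drained at a fixed per-block capacity.
from collections import deque

def simulate(arrivals_per_block: int, capacity_per_block: int, blocks: int) -> list:
    """Return the number of transactions still waiting after each block."""
    mempool = deque()
    backlog = []
    for _ in range(blocks):
        mempool.extend(range(arrivals_per_block))        # new demand arrives
        for _ in range(min(capacity_per_block, len(mempool))):
            mempool.popleft()                            # chain confirms up to capacity
        backlog.append(len(mempool))
    return backlog

print(simulate(arrivals_per_block=120, capacity_per_block=100, blocks=5))
# → [20, 40, 60, 80, 100]: the backlog grows linearly once saturated
```

What this toy model can't show you is exactly what real test environments can't either: fee market dynamics, other users' transactions competing for the same block space, and validator behaviour under load.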

The reality is: you can never fully replicate the performance characteristics of a live blockchain in a test environment.

This means production testing is critical. And it means your team needs robust monitoring, circuit breakers, and degradation strategies for when the network gets congested.
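One shape such a circuit breaker can take is a gas-price gate: pause submissions when observed prices cross a threshold and resume only once they fall back below a lower one (the gap between the two avoids flapping). The thresholds below are hypothetical:

```python
# A minimal gas-price circuit breaker with hysteresis.
# Thresholds are hypothetical; tune them to your chain and risk appetite.
class GasCircuitBreaker:
    def __init__(self, max_gwei: float, resume_gwei: float):
        self.max_gwei = max_gwei        # trip above this price
        self.resume_gwei = resume_gwei  # close again below this price
        self.open = False               # open circuit = submissions paused

    def allow(self, current_gwei: float) -> bool:
        if self.open and current_gwei <= self.resume_gwei:
            self.open = False           # prices recovered: resume
        elif not self.open and current_gwei > self.max_gwei:
            self.open = True            # congestion spike: pause
        return not self.open

breaker = GasCircuitBreaker(max_gwei=150, resume_gwei=80)
print([breaker.allow(g) for g in (60, 200, 120, 70, 90)])
# → [True, False, False, True, True]
```

Note that 120 gwei is still rejected after the spike: the breaker stays open until prices drop below the resume threshold, which prevents rapid pause/resume cycling around a single cutoff.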

What This Means for Your Product

If you’re building on blockchain:

1. Accept that throughput is a constraint, not a “nice to have.”

Design your application around blockchain throughput limits. Don’t treat them as an obstacle to overcome; treat them as a design spec.

  • Use batch processing where possible
  • Implement request queuing strategies
  • Design your UX to accommodate multi-second transaction times
  • Consider off-chain components for non-critical operations
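The first two bullets can be combined into one component: queue user operations and flush them as a single on-chain call once the batch fills (a production version would also flush on a deadline timer). `submit_batch` here is a hypothetical stand-in for whatever your contract exposes:

```python
# Sketch of a size-triggered batching queue. `submit_batch` is a
# hypothetical callback standing in for your on-chain batch call.
class BatchQueue:
    def __init__(self, max_size: int, submit_batch):
        self.max_size = max_size
        self.submit_batch = submit_batch
        self.pending = []

    def add(self, op):
        self.pending.append(op)
        if len(self.pending) >= self.max_size:
            self.submit_batch(self.pending)  # one on-chain call for N operations
            self.pending = []

sent = []
q = BatchQueue(max_size=3, submit_batch=sent.append)
for op in ("mint", "transfer", "burn"):
    q.add(op)
print(sent)  # → [['mint', 'transfer', 'burn']]
```

The deadline timer matters in practice: without it, a half-full batch strands users during quiet periods, which is exactly the multi-second UX problem you were trying to smooth over.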

2. Choose your chain based on throughput requirements, not hype.

If you need 1,000+ TPS reliably, Layer 1 Ethereum isn’t your answer. Solana, Polygon, or a rollup might be. If you need 100K+ TPS, you probably need a hybrid model (on-chain for critical operations, off-chain for everything else).

3. Build in performance monitoring from day one.

Don’t wait until launch to discover that your product breaks under real usage. Instrument everything: transaction times, gas prices, contract execution time, state bloat.

4. Plan for congestion.

What happens when the network gets busy? Do your contracts still work? Do your users have a graceful degradation path? Can you revert to a manual process?

5. Remember: throughput improvements are coming, but they’re not here yet.

Ethereum’s roadmap includes Danksharding. New rollups are launching constantly. New chains with different tradeoffs keep appearing. But we’re still years away from a blockchain that can handle “global scale” throughput while maintaining decentralisation and security.

The Uncomfortable Truth

The blockchain throughput problem isn’t going away soon. It’s a fundamental tradeoff baked into how blockchains work.

The uncomfortable truth is that most blockchain applications will never achieve the throughput of traditional systems, and that’s okay. Blockchains aren’t meant to replace every transaction type; they’re useful precisely because they maintain security and decentralisation characteristics that traditional systems don’t. The throughput limitation is the cost of those characteristics.

The question for product teams has evolved to:

“What transactions actually need to be on-chain, and are they worth the throughput tradeoffs?”

Once you answer that honestly, throughput stops being a problem. It becomes a design parameter, which is just another constraint to architect around, like every other system limitation.

What’s your experience? Are throughput constraints affecting your product roadmap? What’s your strategy for working around them?

Key Takeaways
  • Blockchain throughput is fundamentally constrained by decentralisation and security tradeoffs
  • The average Layer 1 blockchain processes 7-20 TPS, nowhere near traditional payment systems
  • Layer 2 solutions, sharding, and modular designs offer throughput improvements with different tradeoffs
  • QA teams face unique challenges testing performance against network constraints
  • Product teams need to design around throughput limitations, not against them
  • Throughput is a design parameter to architect around
