Hoskinson could be wrong about the future of decentralized computing

The blockchain trilemma reared its head again at Consensus Hong Kong in February, putting Charles Hoskinson, the founder of Cardano, somewhat on the defensive: he found himself reassuring attendees that hyperscalers like Google Cloud and Microsoft Azure are not a risk to decentralization.

His point was that large blockchain projects need hyperscalers, and that a single point of failure is not a concern because:

  • Advanced cryptography neutralizes the risk
  • Multi-party computation distributes key material
  • Confidential computing protects data in use

The argument rested on the idea that “if the cloud can’t see the data, it can’t control the system”, and the discussion was left there due to time constraints.

But there is a counterargument to Hoskinson’s case for hyperscalers that deserves more attention.

MPC and confidential computing reduce exposure

This was the cornerstone of Charles’s argument: that technologies such as multi-party computation (MPC) and confidential computing ensure that hardware vendors never have access to the underlying data.

These are powerful tools. But they do not dissolve the underlying risk.

MPC distributes key material across multiple parties so that no single participant can reconstruct the secret. This significantly reduces the risk of a single compromised node. However, the security surface extends in other directions. The coordination layer, communication channels, and governance of participating nodes all become essential.

Instead of trusting a single key holder, the system now depends on a distributed set of actors behaving correctly and on the protocol being implemented properly. The single point of failure doesn’t go away; it simply becomes a distributed trust surface.
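To make that concrete, here is a minimal sketch of additive secret sharing, the simplest building block behind MPC key management. The field size and helper names are illustrative assumptions, not taken from any production library: no single share reveals anything about the key, but reconstruction, and therefore availability, depends on every share holder and the channels connecting them.

```python
# Illustrative sketch of additive secret sharing over a prime field; the field
# size and function names are assumptions for this example, not a real MPC stack.
import secrets

PRIME = 2**127 - 1  # toy field modulus


def split_secret(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Every share is required; any strict subset is statistically independent of the secret."""
    return sum(shares) % PRIME


key = secrets.randbelow(PRIME)
shares = split_secret(key, n_parties=5)

assert reconstruct(shares) == key        # full cooperation recovers the key
assert reconstruct(shares[:-1]) != key   # missing even one share yields garbage (with overwhelming probability)
```

The failure mode shifts rather than disappears: compromising one share is harmless, but the loss or withholding of any single share, or a flaw in the coordination protocol, stalls the whole system.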

Confidential computing, and trusted execution environments in particular, introduces a different tradeoff. Data stays encrypted in memory while in use, limiting what the host can observe.

But trusted execution environments (TEEs) rest on hardware assumptions. They depend on microarchitectural isolation, firmware integrity, and correct implementation. Academic literature has repeatedly demonstrated that side-channel and architectural vulnerabilities continue to emerge in enclave technologies. The security boundary is narrower than that of the traditional cloud, but it is not absolute.

More importantly, MPC and TEE deployments often run on hyperscaler infrastructure. The physical hardware, virtualization layer, and supply chain remain concentrated. If an infrastructure provider controls access to machines, bandwidth, or geographic regions, it retains operational leverage. Cryptography can prevent data inspection, but it does not prevent rate limiting, shutdowns, or political intervention.

Advanced cryptographic tools make specific attacks more difficult, but they still do not remove the risk of infrastructure failure. They simply replace a visible concentration with a more complex one.

The argument that “no L1 can handle global computing”

Hoskinson emphasized that hyperscalers are necessary because no single layer 1 can handle the computing demands of the world’s systems, pointing to the billions of dollars that went into building those data centers.

Of course, Layer 1 networks were not designed to run AI training loops, high-frequency trading engines, or enterprise analytics pipelines. They exist to maintain consensus, verify state transitions, and provide durable data availability.

He is right about what a layer 1 is for. But the more important point is that global systems need results everyone can verify, even if the computation happens elsewhere.

In modern crypto infrastructure, heavy computation increasingly occurs off-chain. What matters is that the results can be proven and verified on-chain. This is the basis of rollups, zero-knowledge systems, and verifiable compute networks.
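To show the shape of that pattern without the machinery of a real proving system, here is a toy sketch using a plain Merkle tree; the helper names are hypothetical and this is not how any specific rollup or ZK stack works. The full data lives off-chain, and the verifier only needs a 32-byte commitment plus a logarithmic-size proof to check that a record is part of it.

```python
# Toy illustration of the off-chain/on-chain split: the prover keeps the full
# data off-chain; the "on-chain" side only stores a root and checks short proofs.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool marks whether the sibling sits on the right."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof


def verify(root: bytes, leaf: bytes, proof: list[tuple[bytes, bool]]) -> bool:
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root


records = [f"record-{i}".encode() for i in range(8)]   # held off-chain
root = merkle_root(records)                            # the only thing posted "on-chain"
proof = merkle_proof(records, 5)
assert verify(root, records[5], proof)                 # cheap verification, no full dataset needed
```

A real rollup or zero-knowledge system proves far richer statements than inclusion, but the economics are the same: heavy work happens wherever it is cheapest, while verification stays small enough to run anywhere.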

Focusing on whether an L1 can run a global computation misses the central question of who controls the execution and storage infrastructure behind the verification.

If computation occurs off-chain but relies on centralized infrastructure, the system inherits centralized failure modes. Settlement remains decentralized in theory, but the path to producing valid state transitions is concentrated in practice.

The real issue is dependency at the infrastructure layer, not computational capacity within the layer 1.

Crypto neutrality is not the same as participatory neutrality

Cryptographic neutrality is a powerful idea, and one Hoskinson leaned on in his argument. It means that the rules cannot be changed arbitrarily, hidden backdoors cannot be introduced, and the protocol remains fair.

But cryptography still runs on physical hardware.

This physical layer determines who can participate, who can afford to, and who ends up excluded, because throughput and latency are ultimately limited by the actual machines and the infrastructure they run on. If the production, distribution, and hosting of hardware remain centralized, participation becomes economically limited even when the protocol itself is mathematically neutral.

In compute-heavy systems, hardware shapes everything else. It determines the cost structure, who can scale, and how resilient the system is to censorship pressure. A neutral protocol running on concentrated infrastructure is neutral in theory but constrained in practice.

The priority should shift to cryptography combined with diversified ownership of the hardware.

Without infrastructure diversity, neutrality becomes fragile under pressure. If a small group of providers can limit workloads, restrict regions, or impose compliance barriers, the system inherits their influence. Fairness of rules alone does not guarantee fairness of participation.

Specialization beats generalization in compute markets

Competing with AWS is often presented as a question of scale, but that too is misleading.

Hyperscalers optimize for flexibility. Their infrastructure is designed to serve thousands of workloads simultaneously. Virtualization layers, orchestration systems, enterprise compliance tools, and elasticity guarantees are assets for general-purpose computing, but they are also layers of cost.

Zero-knowledge proving and verifiable computation are deterministic, computationally dense, memory-bandwidth-bound, and pipeline-sensitive. In other words, they reward specialization.

A purpose-built proving network competes on proofs per dollar, proofs per watt, and proof latency. When hardware, prover software, circuit design, and aggregation logic are vertically integrated, efficiency compounds. Removing unnecessary abstraction layers reduces overhead. Sustained throughput on persistent clusters outperforms elastic scaling for narrow, constant workloads.
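As a back-of-the-envelope illustration of the metric (every number below is a made-up assumption, not a benchmark of AWS or of any real proving network), proofs per dollar is simply sustained throughput divided by all-in hourly cost, which is where vertical integration and high utilization show up directly.

```python
# Hypothetical numbers chosen only to illustrate the metric; they are not
# measurements of any cloud provider or proving network.
def proofs_per_dollar(proofs_per_hour: float, cost_per_hour: float) -> float:
    return proofs_per_hour / cost_per_hour


# Generalist cloud instance: rented elastically, with virtualization and margin in the price.
general = proofs_per_dollar(proofs_per_hour=1_000, cost_per_hour=12.0)

# Purpose-built cluster: amortized hardware run at sustained utilization.
specialized = proofs_per_dollar(proofs_per_hour=1_400, cost_per_hour=5.0)

print(f"general-purpose: {general:.0f} proofs/$, specialized: {specialized:.0f} proofs/$")
# Under these assumptions the specialized cluster is roughly 3.4x more cost-efficient.
```

The point is not the specific ratio but the structure: the generalist pays for flexibility it never uses on this workload, while the specialized operator amortizes hardware against a single, predictable job.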

In compute markets, specialization consistently outperforms generalization for regular, high-volume tasks. AWS optimizes for optionality. A dedicated proving network optimizes for one workload class.

The economic structure also differs. Hyperscaler pricing carries enterprise margins and is built around high demand variability. A network aligned with protocol incentives can amortize hardware differently and tune performance around sustained usage rather than short-term rental models.

The competition is about structural efficiency for a defined workload.

Use hyperscalers, but don’t depend on them

Hyperscalers are not the enemy. They are efficient, reliable, and globally distributed infrastructure providers. The problem is dependency.

A resilient architecture uses major vendors for burst capacity, geographic redundancy, and edge distribution, but it does not anchor core functions to a single vendor or a small group of vendors.

Settlement, final verification, and the availability of critical artifacts must remain intact even if a cloud region fails, a vendor leaves a market, or political constraints tighten.

This is where decentralized storage and compute infrastructure becomes a viable alternative. Proof artifacts, historical records, and audit trails should not be removable at a provider’s discretion. Instead, they should live on infrastructure that is economically aligned with the protocol and structurally difficult to disable.
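One way to express “use the cloud, but don’t depend on it” is a storage policy that writes every proof artifact to several independent backends and treats each provider as optional. The sketch below is illustrative only; the backend names and the quorum rule are assumptions, not a real API or deployment.

```python
# Minimal sketch of a provider-agnostic artifact store. Backend names and the
# quorum rule are illustrative assumptions, not a real library.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ArtifactStore:
    # Each backend is simply "a function that stores bytes and may fail".
    backends: dict[str, Callable[[str, bytes], None]]
    min_successes: int = 2
    failures: list[str] = field(default_factory=list)

    def put(self, artifact_id: str, data: bytes) -> None:
        successes = 0
        for name, write in self.backends.items():
            try:
                write(artifact_id, data)
                successes += 1
            except Exception:
                self.failures.append(name)   # a vendor outage degrades the write, it doesn't block it
        if successes < self.min_successes:
            raise RuntimeError("artifact not durably stored on enough independent backends")


# Usage: the hyperscaler is one backend among several, never the only one.
store = ArtifactStore(backends={
    "hyperscaler_bucket": lambda k, v: None,     # stand-in for a cloud object store
    "decentralized_storage": lambda k, v: None,  # stand-in for an incentive-aligned network
    "self_hosted_node": lambda k, v: None,       # stand-in for operator-run storage
})
store.put("proof-artifact-001", b"...")
```

The design choice is the quorum: no single backend can block a write, and no single backend is sufficient on its own, so losing any one provider degrades redundancy rather than availability.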

Hyperscalers should be treated as an optional accelerator rather than something fundamental to the product. The cloud can still be useful for reach and burst capacity, but the system’s ability to produce proofs and preserve what verification depends on should not be controlled by a single vendor.

In such a system, if a hyperscaler disappears tomorrow, the network merely slows down, because the most important components are owned and operated by a broader network rather than rented from a big-brand choke point.

This is how we reinforce the decentralization philosophy at the heart of crypto.
