Tuesday, June 1, 2021

Gavin Andresen - why do we need a block size limit?


Since Satoshi introduced the block size limit in July 2010, a lot has changed online. Do we even need this limit? This article is based on an article by Gavin Andresen, a Bitcoin Classic developer.

Satoshi introduced the 1 MB limit to protect the network against DoS attacks (Denial of Service: preventing or significantly slowing down the normal operation of the system). In 2010 the number of transactions was small and the price of BTC oscillated around a few cents. Buying BTC on an exchange just to produce masses of small transactions and build large blocks cost very little: the 50 BTC reward from mining a block was worth just $1.50, and with that many coins you could generate a huge number of transactions and effectively clog the network. Without an upper limit on block size, an attacker could severely overload all nodes and waste large amounts of users' disk space.

The introduction of the cap helped limit the scope of such an attack. Compared to 2010, the current price is about $400, and with a 25 BTC reward per block an attacker would need over $10,000 to pull off a similar operation. While sacrificing a couple of dollars "for fun" was realistic, spending $10,000 is definitely out of reach for the average prankster.

The second avenue of attack on the network is transactions that require a very large amount of computing power to verify. Such an attack has never happened (although it was described in detail back in 2013), and the recently introduced BIP109 patch makes it virtually impossible.
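To make the danger concrete, here is a minimal back-of-envelope sketch. With legacy SIGHASH_ALL signatures, checking each input requires hashing a serialized copy of (roughly) the whole transaction, so the total hashing work grows with the square of the input count. The byte sizes below are rough illustrative assumptions, not exact serialization values:

```python
# Back-of-envelope model of the "quadratic sighash" problem:
# with legacy SIGHASH_ALL, verifying each input's signature hashes
# (roughly) the whole serialized transaction, so total hashing work
# grows quadratically with the number of inputs.
# INPUT_SIZE and OVERHEAD are rough assumptions, not consensus values.

INPUT_SIZE = 150   # assumed bytes per input (outpoint + scriptSig)
OVERHEAD = 100     # assumed bytes for version, counts, outputs, locktime

def sighash_bytes(n_inputs: int) -> int:
    """Approximate total bytes hashed to verify all input signatures."""
    tx_size = OVERHEAD + n_inputs * INPUT_SIZE
    return n_inputs * tx_size  # one near-full-transaction hash per input

for n in (10, 100, 1_000, 6_000):
    print(f"{n:>5} inputs -> ~{sighash_bytes(n) / 1e6:8.1f} MB hashed")

# A single transaction approaching 1 MB (around 6,000 inputs here) forces
# gigabytes of hashing, which is why BIP109 caps the total bytes hashed
# for signature checks per block.
```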
The disk capacities available to the average user are growing very fast, and it does not look like this constraint would hinder the further development of the project or the ability of home users to run full Core nodes.

So why do developers stand so stubbornly by a limit that has failed to serve its primary purpose for quite some time? Based on the 2010 data, the maximum block size could be several orders of magnitude larger than 1 MB (back then the network confirmed a few hundred transactions per day; now it is hundreds of thousands). We know how to change this limit and we know safe methods of introducing a hard fork, so what is stopping us?

From the point of view of network users, the current situation is unacceptable: you wait longer and longer for a transaction confirmation despite paying much higher fees. Increasing network capacity would, in the short term, relieve the existing "traffic jam" of unconfirmed transactions and bring the network back up to speed.

The only counter-indication seems to be the limitations of Internet connections. Despite the significant increase in the number of transactions, the protocol for relaying transactions and blocks has not changed substantially over the years. Without changes to the P2P protocol (e.g., Xtreme Thinblocks), there is little hope that the mining pools in China would support a project to significantly increase the block size. However, the proposed hard fork to 2 MB blocks is accepted by all and should be implemented as soon as possible. There is also nothing standing in the way of introducing Xtreme Thinblocks, which is already a solution ready for deployment.

There has already been enough protocol development and network traffic analysis that rolling out automatic block sizing would not be a problem either. All that remains is to convince the largest mining pools to adopt and support a single project, so that the changes come into effect as soon as possible.
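As a closing aside, here is a minimal sketch of what such automatic block sizing could look like: the limit floats at a multiple of the median size of recent blocks. The window, multiplier, and floor below are arbitrary illustrative assumptions, not parameters of any concrete proposal:

```python
from statistics import median

WINDOW = 2016       # assumed: look back one difficulty period of blocks
MULTIPLIER = 2      # assumed: allow blocks up to 2x the recent median
FLOOR = 1_000_000   # assumed: never drop below the historical 1 MB cap

def block_size_limit(recent_sizes: list[int]) -> int:
    """Next block's size limit (bytes), derived from recent block sizes."""
    if not recent_sizes:
        return FLOOR
    return max(FLOOR, MULTIPLIER * int(median(recent_sizes[-WINDOW:])))

# If recent blocks hover around 900 kB, the limit grows to 1.8 MB...
print(block_size_limit([900_000] * 2016))  # -> 1800000
# ...but a lull in traffic can never squeeze it below the 1 MB floor.
print(block_size_limit([200_000] * 2016))  # -> 1000000
```

Using a median rather than a mean over a long window makes the limit hard for a single miner to drag upward with a few outsized blocks.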
