Prysm and Teku upgrades released to tackle Ethereum network finality problems
Two Ethereum consensus clients, Prysm and Teku, have released new upgrades to address the recent finality issues on the Beacon Chain.
The Ethereum network experienced finality problems twice in 24 hours, the first incident lasting about 25 minutes and the second over an hour.
According to an announcement on the Ethereum Foundation blog, on May 11 and May 12 there were two separate events in which the Proof-of-Stake (PoS) consensus mechanism of the Ethereum network’s Beacon Chain failed to reach finality for 3 and 8 epochs, respectively.
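For context, an epoch on the Beacon Chain mainnet spans 32 slots of 12 seconds each, so the wall-clock length of a non-finality window can be estimated directly from the epoch count. A minimal sketch (the function name is illustrative, not from any client codebase):

```python
# Beacon Chain timing constants (Ethereum mainnet values)
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12

def epochs_to_minutes(epochs: int) -> float:
    """Approximate wall-clock duration of a span of epochs, in minutes."""
    return epochs * SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 60

# The two incidents failed to finalize for 3 and 8 epochs:
print(epochs_to_minutes(3))  # 19.2 minutes
print(epochs_to_minutes(8))  # 51.2 minutes
```

Note that these figures count only the non-finalized epochs themselves; the disruption observed on the network can run somewhat longer, which is consistent with the reported 25-minute and hour-plus outages.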
However, end-user transactions were unaffected thanks to client diversity, as not all client implementations were hit by the incident.
The exact cause of the problem is still being investigated; however, heavy load on some consensus layer clients, triggered by an “exceptional scenario,” appears to be the likely culprit.
Prysm and Teku announce updates
Meanwhile, Prysm and Teku have released new updates that include tweaks to prevent beacon nodes from consuming large amounts of resources during such exceptional scenarios.
The Prysm update, v4.0.3-hotfix, contains optimizations to prevent beacon nodes from consuming excessive resources during periods of disruption.
The consensus client’s team urges Ethereum node operators to upgrade their nodes if they observe heavy resource usage.
On the other hand, the Teku v23.5.0 update dropped outdated and invalid attestations on the Ethereum mainnet. The update also includes several changes and improvements designed to prevent similar attestation-flooding issues in the future.
Other Ethereum client teams, such as Nimbus, have said that their clients do not need urgent upgrades. However, they have indicated that they will continue to monitor the problem and ship fixes if it worsens.
For their part, Lighthouse and Lodestar handled the load well thanks to their architectures. While other consensus clients consumed significant resources, the two kept the network alive by validating 40-50% of the blocks until the other clients recovered and resumed validating.