Merkle Root (Cryptocurrency) Definition

When mining, does one also include the Merkle root data in the block header when testing hashes? /r/Bitcoin

submitted by BitcoinAllBot to BitcoinAll

How do pools coordinate their miners so that no one wastes work?

Hello, I am trying to understand how a pool operator makes sure that miners do not check the same nonce and hence waste time and work.

In my head I have the following analogy: I have a bookshelf with 30 books. One of the books has a $10 bill (the reward). A friend and I are checking each book. If we just check the books at random, my friend could check a book I have already opened. So it would be best to coordinate: I start from the left end, he starts from the right end. And we save time.

Is there any such coordination by pool operators, or does every miner check nonces at random? Is there latency in communicating the coordination, or is it something that is set once and forever? If so, what happens when new miners join the pool or old ones leave the pool?

Thanks a lot in advance!
submitted by whatdoyounotknow2 to Bitcoin

DFINITY Research Report

DFINITY Research Report
Author: Gamals Ahmed, CoinEx Business Ambassador
ABSTRACT
The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique-deterministic, non-interactive, DKG-friendly threshold signature scheme. The only known examples of such a scheme are pairing-based and derived from BLS.
The DFINITY blockchain is layered on top of the DFINITY beacon and uses the beacon as its source of randomness for leader selection and leader ranking. A “weight” is attributed to a chain based on the ranks of the leaders who propose the blocks in the chain, and that weight is used to select between competing chains. The blockchain is further hardened by a notarization process which dramatically improves the time to finality and eliminates the nothing-at-stake and selfish mining attacks.
The DFINITY consensus algorithm scales through continuous quorum selections driven by the random beacon. In practice, DFINITY achieves block times of a few seconds and transaction finality after only two confirmations. The system gracefully handles temporary losses of network synchrony, including network splits, and is provably secure under synchrony.

1. INTRODUCTION

DFINITY is building a new kind of public decentralized cloud computing resource. The platform uses blockchain technology to provide unlimited capacity, high performance, and algorithmic governance shared by the world, with the capability to power autonomous self-updating software systems. This enables organizations to design and deploy custom-tailored cloud computing projects, thereby reducing enterprise IT system costs by 90%.
DFINITY aims to explore new territory and prove that the blockchain opportunity is far broader and deeper than anyone has hitherto realized, unlocking the opportunity with powerful new crypto.
Although a standalone project, DFINITY is not maximalist minded and is a great supporter of Ethereum.
DFINITY’s consensus mechanism has four layers: notary (provides fast finality guarantees to clients and external observers), blockchain (builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon), random beacon (provides the source of randomness for all higher layers like smart contract applications), and identity (provides a registry of all clients).

Figure 1: DFINITY’s consensus mechanism layers
1. Identity layer:
Active participants in the DFINITY network are called clients. Clients are registered with permanent identities under a pseudonym. Moreover, DFINITY supports open membership by providing a protocol for registering new clients, who deposit a stake with an insurance period. This is the responsibility of the first layer.
2. Random Beacon layer:
Provides the source of randomness (VRF) for all higher layers including applications (smart contracts). The random beacon in the second layer is an unbiasable, verifiable random function (VRF) that is produced jointly by registered clients. Each random output of the VRF is unpredictable by anyone until just before it becomes available to everyone. This is a key technology of the DFINITY system, which relies on a threshold signature scheme with the properties of uniqueness and non-interactivity.

3. Blockchain layer:
The third layer deploys the “probabilistic slot protocol” (PSP). This protocol ranks the clients for each height of the chain, in an order that is derived deterministically from the unbiased output of the random beacon for that height. A weight is then assigned to block proposals based on the proposer’s rank such that blocks from clients at the top of the list receive a higher weight. Forks are resolved by giving favor to the “heaviest” chain in terms of accumulated block weight — quite similar to how traditional proof-of-work consensus is based on the highest accumulated amount of work.
The first advantage of the PSP protocol is that the ranking is available instantaneously, which allows for a predictable, constant block time. The second advantage is that there is always a single highest-ranked client, which allows for homogeneous network bandwidth utilization; a race between clients, by contrast, would favor bursty usage.
4. Notarization layer:
Provides fast finality guarantees to clients and external observers. DFINITY deploys the novel technique of block notarization in its fourth layer to speed up finality. A notarization is a threshold signature under a block, created jointly by registered clients. Only notarized blocks can be included in a chain. RSA-based alternatives exist but suffer from the impracticality of setting up the threshold keys without a trusted dealer.
DFINITY achieves its high speed and short block times exactly because notarization is not full consensus.
DFINITY does not suffer from selfish mining attacks or the nothing-at-stake problem, because the notarization step makes it impossible for an adversary to build and maintain a chain of linked, trusted blocks in secret.
DFINITY’s consensus is designed to operate on a network of millions of clients. To enable scalability to this extent, the random beacon and notarization protocols are designed such that they can be safely and efficiently delegated to a committee.

1.1 OVERVIEW ABOUT DFINITY

DFINITY is a blockchain-based cloud-computing project that aims to develop an open, public network, referred to as the “internet computer,” to host the next generation of software and data: a decentralized, non-proprietary network to run the next generation of mega-applications. It dubbed this public network “Cloud 3.0”.
DFINITY is a third-generation virtual blockchain network that sets out to function as an “intelligent decentralised cloud,” strongly focused on delivering a viable corporate cloud solution. The DFINITY project is overseen, supported and promoted by DFINITY Stiftung, a not-for-profit foundation based in Zug, Switzerland.
DFINITY is a decentralized network design whose protocols generate a reliable “virtual blockchain computer” running on top of a peer-to-peer network upon which software can be installed and can operate in the tamperproof mode of smart contracts.
DFINITY introduces algorithmic governance in the form of a “Blockchain Nervous System” that can protect users from attacks and help restart broken systems, dynamically optimize network security and efficiency, upgrade the protocol and mitigate misuse of the platform, for example by those wishing to run illegal or immoral systems.
DFINITY is an Ethereum-compatible smart contract platform that is implementing some revolutionary ideas to address blockchain performance, scaling, and governance. Although DFINITY could pose a credible existential threat to Ethereum, the project is pursuing a coevolutionary strategy by contributing funding and effort to Ethereum projects and freely offering its technology to Ethereum for adoption. DFINITY has labeled itself Ethereum’s “crazy sister” to express its close genetic resemblance to Ethereum, differentiated by its obsession with performance and its neuron-inspired governance model.
Dfinity raised $61 million from Andreessen Horowitz and Polychain Capital in a February 2018 funding round. At the time, Dfinity said it wanted to create an “internet computer” to cut the costs of running cloud-based business applications. A further $102 million funding round in August 2018 brought the project’s total funding to $195 million.
In May 2018, Dfinity announced plans to distribute around $35 million worth of Dfinity tokens in an airdrop. It was part of the company’s plan to create a “Cloud 3.0.” Because of regulatory concerns, none of the tokens went to US residents.
DFINITY is broadening and strengthening the EVM ecosystem by giving applications a choice of platforms with different characteristics. If DFINITY succeeds in delivering a fully EVM-compatible smart contract platform with higher transaction throughput, faster confirmation times, and governance mechanisms that can resolve public disputes without causing community splits, then it will represent a clearly superior choice for deploying new applications and, as its network effects grow, an attractive place to bring existing ones. Of course, the challenge for DFINITY will be to deliver on these promises while meeting the security demands of a public chain with significant value at risk.

1.1.1 DFINITY FUTURE

  • DFINITY aims to explore new blockchain territory related to the original goals of the Ethereum project and is sometimes considered “Ethereum’s crazy sister.”
  • DFINITY is developing blockchain-based infrastructure to support a new style of the internet (akin to Ethereum’s “World Computer”), one in which the internet itself will support software applications and data rather than various cloud hosting providers.
  • The project suggests this reinvented software platform can simplify the development of new software systems, reduce the human capital needed to maintain and secure data, and preserve user data privacy.
  • Dfinity aims to reduce the costs of cloud services by creating a decentralized “internet computer”, which may launch in 2020.
  • Dfinity claims transactions on its network are finalized in 3–5 seconds, compared to 1 hour for Bitcoin and 10 minutes for Ethereum.

1.1.2 DFINITY’S VISION

DFINITY’s vision is that its new internet infrastructure can support a wide variety of end-user and enterprise applications. Social media, messaging, search, storage, and peer-to-peer Internet interactions are all examples of functionalities that DFINITY plans to host atop its public Web 3.0 cloud-like computing resource. In order to provide the transaction and data capacity necessary to support this ambitious vision, DFINITY features a unique consensus model (dubbed Threshold Relay) and algorithmic governance via its Blockchain Nervous System (BNS) — sometimes also referred to as the Network Nervous System or NNS.

1.2 DFINITY COMMUNITY

The DFINITY community brings people and organizations together to learn and collaborate on products that help steward the next generation of internet software and services. The Internet Computer allows developers to take on the monopolization of the internet and return the internet to its free and open roots. We’re committed to connecting those who believe the same through our events, content, and discussions.


1.3 DFINITY ROADMAP (TIMELINE)

February 15, 2017
Ethereum-based community seed round raises 4M Swiss francs (CHF)
The DFINITY Stiftung, a not-for-profit foundation entity based in Zug, Switzerland, raised the round. The foundation held $10M of assets as of April 2017.
February 8, 2018
Dfinity announces a $61M fundraising round led by Polychain Capital and Andreessen Horowitz
The $61M round, led by Polychain Capital and Andreessen Horowitz, along with a DFINITY Ecosystem Venture Fund (to be used to support projects developing on the DFINITY platform) and an Ethereum-based raise in 2017, brings the total funding for the project to over $100 million. This is the first cryptocurrency token that Andreessen Horowitz has invested in, in a deal led by Chris Dixon.
August 2018
Dfinity raises a $102,000,000 venture round from Multicoin Capital, Village Global, Aspect Ventures, Andreessen Horowitz, Polychain Capital, Scalar Capital, Amino Capital and SV Angel.
January 23, 2020
Dfinity launches an open source platform aimed at the social networking giants

2. DFINITY TECHNOLOGY

Dfinity is building what it calls the internet computer, a decentralized technology spread across a network of independent data centers that allows software to run anywhere on the internet rather than in server farms that are increasingly controlled by large firms, such as Amazon Web Services or Google Cloud. This week Dfinity is releasing its software to third-party developers, who it hopes will start making the internet computer’s killer apps. It is planning a public release later this year.
At its core, the DFINITY consensus mechanism is a variation of the Proof of Stake (PoS) model, but offers an alternative to traditional Proof of Work (PoW) and delegated PoS (dPoS) networks. Threshold Relay intends to strike a balance between inefficiencies of decentralized PoW blockchains (generally characterized by slow block times) and the less robust game theory involved in vote delegation (as seen in dPoS blockchains). In DFINITY, a committee of “miners” is randomly selected to add a new block to the chain. An individual miner’s probability of being elected to the committee proposing and computing the next block (or blocks) is proportional to the number of dfinities the miner has staked on the network. Further, a “weight” is attributed to a DFINITY chain based on the ranks of the miners who propose blocks in the chain, and that weight is used to choose between competing chains (i.e. resolve chain forks).
A decentralized random beacon manages the random selection process of temporary block producers. This beacon is a Verifiable Random Function (VRF), which is a pseudo-random function that provides publicly verifiable proofs of its outputs’ correctness. A core component of the random beacon is the use of Boneh-Lynn-Shacham (BLS) signatures. By leveraging the BLS signature scheme, the DFINITY protocol ensures no actor in the network can determine the outcome of the next random assignment.
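As a rough Python sketch of the idea that selection probability is proportional to stake and is seeded by a beacon output; all names, stake values, and the sampling routine here are illustrative assumptions, not DFINITY's specified algorithm:

```python
import hashlib
import random

stakes = {"alice": 500, "bob": 300, "carol": 200}   # dfinities staked (invented)

beacon_output = hashlib.sha256(b"round-42 beacon value").digest()
rng = random.Random(beacon_output)   # every node derives the same RNG seed

# Probability of selection is proportional to stake; because the seed comes
# from the beacon, every honest node computes the identical committee.
committee = rng.choices(list(stakes), weights=list(stakes.values()), k=5)
print(committee)
```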
Dfinity is introducing a new standard, which it calls the internet computer protocol (ICP). These new rules let developers move software around the internet as well as data. All software needs computers to run on, but with ICP the computers could be anywhere. Instead of running on a dedicated server in Google Cloud, for example, the software would have no fixed physical address, moving between servers owned by independent data centers around the world. “Conceptually, it’s kind of running everywhere,” says Dfinity engineering manager Stanley Jones.
DFINITY also features a native programming language, called ActorScript (name may be subject to change), and a virtual machine for smart contract creation and execution. The new smart contract language is intended to simplify the management of application state for programmers via an orthogonal persistence environment (which means active programs are
not required to retrieve or save their state). All ActorScript contracts are eventually compiled down to WebAssembly instructions so the DFINITY virtual machine layer can execute the logic of applications running on the network. The advantage of using the WebAssembly standard is that all major browsers support it and a variety of programming languages can compile down to Wasm (not just ActorScript).
Dfinity is moving fast. Recently, Dfinity showed off a TikTok clone called CanCan. In January it demoed a LinkedIn-alike called LinkedUp. Neither app is being made public, but they make a convincing case that apps made for the internet computer can rival the real things.

2.1 DFINITY CORE APPLICATIONS

The DFINITY cloud has two core applications:
  1. Enabling the re-engineering of business: DFINITY ambitiously aims to facilitate the re-engineering of mass-market services (such as Web Search, Ridesharing Services, Messaging Services, Social Media, Supply Chain, etc) into open source businesses that leverage autonomous software and decentralised governance systems to operate and update themselves more efficiently.
  2. Enable the re-engineering of enterprise IT systems to reduce costs: DFINITY seeks to re-engineer enterprise IT systems to take advantage of the unique properties that blockchain computer networks provide.
At present, computation on blockchain-based computer networks is far more expensive than traditional, centralised solutions (Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc). Despite this higher computational cost, DFINITY intends to lower net costs “by 90% or more” by reducing the human capital cost associated with sustaining and supporting these services.
Whilst conceptually similar to Ethereum, DFINITY employs original and new cryptography methods and protocols (crypto:3) at the network level, in concert with AI and network-fuelled systemic governance (Blockchain Nervous System — BNS) to facilitate corporate adoption.
DFINITY recognises that different users value different properties and sees itself as more of a fully compatible extension of the Ethereum ecosystem rather than a competitor of the Ethereum network.
In the future, DFINITY hopes that much of their “new crypto might be used within the Ethereum network and are also working hard on shared technology components.”
As the DFINITY project develops, the DFINITY Stiftung foundation intends to steadily increase the BNS’s decision-making responsibilities, eventually dissolving its own involvement entirely once the BNS is sufficiently sophisticated.
The DFINITY consensus mechanism is a heavily optimized proof-of-stake (PoS) model. It places a strong emphasis on transaction finality by implementing a Threshold Relay technique in conjunction with the BLS signature scheme and a notarization method, addressing many of the problems associated with PoS consensus.

2.2 THRESHOLD RELAY

As a public cloud computing resource, DFINITY targets business applications by substantially reducing cloud computing costs for IT systems. They aim to achieve this with a highly scalable and powerful network with potentially unlimited capacity. The DFINITY platform is chock-full of innovative designs and features, like its Blockchain Nervous System (BNS) for algorithmic governance.
One of the primary components of the platform is its novel Threshold Relay Consensus model from which randomness is produced, driving the other systems that the network depends on to operate effectively. The consensus system was first designed for a permissioned participation model but can be paired with any method of Sybil resistance for an open participation model.
“The mechanism by which Dfinity randomly samples replicas into groups, sets the groups (committees) up for threshold operation, chooses the current committee, and relays from one committee to the next is called the threshold relay.”
Threshold Relay consists of four layers (as mentioned previously):
  1. Notary layer, which provides fast finality guarantees to clients and external observers and eliminates nothing-at-stake and selfish mining attacks, providing Sybil attack resistance.
  2. Blockchain layer that builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon.
  3. Random beacon, which, as previously covered, provides the source of randomness for all higher layers, such as the blockchain layer and smart contract applications.
  4. Identity layer that provides a registry of all clients.

2.2.1 HOW DOES THRESHOLD RELAY WORK?

Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try and form into a “threshold group”. The composition of each group is entirely random such that they can intersect and clients can be present in multiple groups. In DFINITY, each group is comprised of 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they then register the public key (“identity”) created for their group on the global blockchain using a special transaction, such that it will become part of the set of active groups in a following “epoch”. The network begins at “genesis” with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values — if they were not then the group’s signatures on messages would be predictable and the threshold signature system insecure — and each random value produced thus is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
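A minimal Python sketch of the relay pattern described above; real DFINITY uses unique-deterministic BLS threshold signatures, which are replaced here by a keyed SHA-256 stand-in, and the group count and secrets are invented for illustration:

```python
import hashlib

NUM_GROUPS = 8  # illustrative; real DFINITY groups have ~400 members each
group_secrets = [f"group-{i}-secret".encode() for i in range(NUM_GROUPS)]

def group_sign(group_id: int, message: bytes) -> bytes:
    """Stand-in for a unique, deterministic BLS threshold signature."""
    return hashlib.sha256(group_secrets[group_id] + message).digest()

random_value = hashlib.sha256(b"genesis default value").digest()
current_group = 0  # one group is nominated at genesis

for round_no in range(5):
    # The current group signs the previous random value; the signature IS
    # the new random value, since it is unpredictable yet deterministic.
    random_value = group_sign(current_group, random_value)
    # The new random value selects the next group, relaying ad infinitum.
    current_group = int.from_bytes(random_value, "big") % NUM_GROUPS
    print(round_no, "->", current_group, random_value.hex()[:16])
```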
In a cryptographic threshold signature system a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message
individually (here the preceding group’s threshold signature), creating individual “signature shares” that are then broadcast to other group members. The group threshold signature can be constructed upon combination of a sufficient threshold of signature shares. So, for example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares will be able to construct the group’s signature on the message. Other group members can validate each signature share, and any client using the group’s public key can validate the single group threshold signature produced by combining them. The magic of the BLS scheme is that it is “unique and deterministic,” meaning that from whatever subset of group members the required number of signature shares is collected, the single threshold signature created is always the same and only a single correct value is possible.
Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and signatures generated by relaying between groups produces a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, in order for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400, and the threshold is 201, 200 or more of the processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability this will occur is less than 10^-17.
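The stated bound can be checked numerically. This is a quick sketch assuming each group is a uniformly random sample of the 10,000 processes; the hypergeometric tail gives the chance that 200 or more of the 400 members are faulty (enough to leave fewer than 201 correct signers):

```python
from scipy.stats import hypergeom

# Population of 10,000 processes, 3,000 faulty, sample (group) of 400.
N, K, n = 10_000, 3_000, 400
p_stall = hypergeom.sf(199, N, K, n)   # P(X >= 200 faulty members in the group)
print(p_stall)                         # prints a value below 1e-17, as claimed
```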

2.3 DFINITY TOKEN

The DFINITY blockchain also supports a native token, called dfinities (DFN), which performs multiple roles within the network, including:
  1. Fuel for deploying and running smart contracts.
  2. Security deposits (i.e. staking) that enable participation in the BNS governance system.
  3. Security deposits that allow client software or private DFINITY cloud networks to connect to the public network.
Although dfinities will end up being assigned a value by the market, the DFINITY team does not intend for DFN to act as a currency. Instead, the project has envisioned PHI, a “next-generation” crypto-fiat scheme, to act as a stable medium of exchange within the DFINITY ecosystem.
Neuron operators can earn Dfinities by participating in network-wide votes, which could be concerning protocol upgrades, a new economic policy, etc. DFN rewards for participating in the governance system are proportional to the number of tokens staked inside a neuron.

2.4 SCALABILITY

DFINITY is constantly developing a structure that separates consensus, validation, and storage into separate layers. The storage layer is divided into multiple shards, each of which is responsible for processing the transactions that occur in that shard’s state. The validation layer is responsible for combining the hashes of all shards into a Merkle-like structure, resulting in a global state summary that is stored in blocks in the top-level chain.

2.5 DFINITY CONSENSUS ALGORITHM

The single most important aspect of the user experience is certainly the time required before a transaction becomes final. This is not solved by a short block time alone — Dfinity’s team also had to reduce the number of confirmations required to a small constant. DFINITY moreover had to provide a provably secure proof-of-stake algorithm that scales to millions of active participants without compromising on decentralization.
Dfinity soon realized that the key to scalability lay in having an unmanipulable source of randomness available. Hence they built a scalable decentralized random beacon, based on what they call the Threshold Relay technique, right into the foundation of the protocol. This strong foundation drives a scalable and fast consensus layer: On top of the beacon runs a blockchain which utilizes notarization by threshold groups to achieve near-instant finality. Details can be found in the overview paper that we are releasing today.
The roots of the DFINITY consensus mechanism date back to 2014 when their Chief Scientist, Dominic Williams, started to look for more efficient ways to drive large consensus networks. Since then, much research has gone into the protocol and it took several iterations to reach its current design.
For any practical consensus system the difficulty lies in navigating the tight terrain that one is given between the boundaries imposed by theoretical impossibility-results and practical performance limitations.
The first key milestone was the novel Threshold Relay technique for decentralized, deterministic randomness, which is made possible by certain unique characteristics of the BLS signature system. The next breakthrough was the notarization technique, which allows DFINITY consensus to solve the traditional problems that come with proof-of-stake systems. Getting the security proofs sound was the final step before publication.
DFINITY consensus has made the proper trade-offs between the practical side (realistic threat models and security assumptions) and the theoretical side (provable security). Out came a flexible, tunable algorithm, which we expect will establish itself as the best performing proof-of-stake algorithm. In particular, having the built-in random beacon will prove to be indispensable when building out sharding and scalable validation techniques.

2.6 LINKEDUP

The startup has rather cheekily called this “an open version of LinkedIn,” the Microsoft-owned social network for professionals. Unlike LinkedIn, LinkedUp, which runs on any browser, is not owned or controlled by a corporate entity.
LinkedUp is built on Dfinity’s so-called Internet Computer, its name for the platform it is building to distribute the next generation of software and open internet services.
The software is hosted directly on the internet in a Switzerland-based independent data center, but in the concept of the Internet Computer, it could be hosted at your house or mine. The compute power to run the application — LinkedUp, in this case — comes not from Amazon AWS, Google Cloud or Microsoft Azure, but from the distributed architecture that Dfinity is building.
Specifically, Dfinity notes that when enterprises and developers run their web apps and enterprise systems on the Internet Computer, the content is decentralized across a minimum of four or a maximum of an unlimited number of nodes in Dfinity’s global network of independent data centers.
Dfinity has open-sourced LinkedUp so that developers can create other types of open internet services on the architecture it has built.
“Open Social Network for Professional Profiles” suggests that on Dfinity’s model one can create an “Open WhatsApp”, “Open eBay”, “Open Salesforce” or “Open Facebook”.
The tools include a Canister Software Developer Kit and a simple programming language called Motoko that is optimized for Dfinity’s Internet Computer.
“The Internet Computer is conceived as an alternative to the $3.8 trillion legacy IT stack, and empowers the next generation of developers to build a new breed of tamper-proof enterprise software systems and open internet services. We are democratizing software development,” Williams said. “The Bronze release of the Internet Computer provides developers and enterprises a glimpse into the infinite possibilities of building on the Internet Computer — which also reflects the strength of the Dfinity team we have built so far.”
Dfinity says its “Internet Computer Protocol” allows for a new type of software called autonomous software, which can guarantee permanent APIs that cannot be revoked. When all these open internet services (e.g. open versions of WhatsApp, Facebook, eBay, Salesforce, etc.) are combined with other open software and services it creates “mutual network effects” where everyone benefits.
Since 1 November, DFINITY has released 13 new public versions of the SDK, leading to its second major milestone [at WEF Davos]: demoing a decentralized web app called LinkedUp on the Internet Computer. Subsequent milestones towards the public launch of the Internet Computer will involve:
  1. Onboarding a global network of independent data centers.
  2. A fully tested economic system.
  3. A fully tested Network Nervous System for configuration and upgrades.

2.7 WHAT IS MOTOKO?

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, that is designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language that is highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.
submitted by CoinEx_Institution to u/CoinEx_Institution

Looking for Technical Information about Mining Pools

I'm doing research on how exactly bitcoins are mined, and I'm looking for detailed information about how mining pools work - i.e. what exactly is the pool server telling each participating miner to do.
It's so far my understanding that, when Bitcoins are mined, the following steps take place:
  1. Transactions from the mempool are selected for a new block; this may or may not be all the transactions in said mempool. A coinbase transaction - which consists of the miner's wallet address and other arbitrary data, and which creates new bitcoin - will also be added to the new block.
  2. All of said transactions are hashed together into a Merkle Root. The hashing algorithm is Double SHA-256.
  3. A block header is formed for the new block. Said block header consists of a Version, the Block Hash of the Previous Block in the Blockchain, said Merkle Root from earlier, a timestamp in UTC, the target, and a nonce - which is 32 bits long and can be any value from 0x00000000 to 0xFFFFFFFF (4,294,967,296 nonce values in total).
  4. The nonce value is set to 0x00000000, and said block header is double hashed to get the Block Hash of the current block; and if said Block Hash starts with a certain number of zeroes (depending on the difficulty), the miner sends the block to the Bitcoin Network, the block is successfully added to the blockchain, and the miner is awarded with newly created bitcoin.
  5. But if said Block Hash does not start with the required number of zeroes, said block will not be accepted by the network, and the miner double hashes the block again with a different nonce value; but if none of the 4,294,967,296 nonce values yields a Block Hash with the required number of zeroes, it will be impossible to add the block to the network - and in that case, the miner will either need to change the timestamp and try all 4,294,967,296 nonce values again, or compose a new block with a different set of transactions (either a different coinbase transaction, a different set of transactions from the mempool, or both). (A sketch of this hashing loop follows below.)
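For illustration, here is a minimal Python sketch of the nonce loop described in steps 3-5. The field values (previous hash, Merkle root, bits) are placeholders rather than real network data, and a pure-Python loop is far too slow for real mining:

```python
import hashlib
import struct
import time

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Placeholder header fields; real values come from the network and the
# transactions selected for the block.
version     = struct.pack("<I", 0x20000000)
prev_block  = bytes.fromhex("00" * 32)       # Block Hash of the previous block
merkle_root = bytes.fromhex("ab" * 32)       # from step 2
timestamp   = struct.pack("<I", int(time.time()))
bits        = struct.pack("<I", 0x1d00ffff)  # compact encoding of the target
target      = 0x00ffff * 2 ** (8 * (0x1d - 3))

for nonce in range(2**32):                   # 0x00000000 .. 0xFFFFFFFF
    header = (version + prev_block + merkle_root + timestamp + bits
              + struct.pack("<I", nonce))    # the 80-byte block header
    block_hash = double_sha256(header)
    # The hash is compared to the target as a little-endian 256-bit integer
    # ("starts with enough zeroes" in big-endian hex display).
    if int.from_bytes(block_hash, "little") <= target:
        print("valid nonce:", nonce, block_hash[::-1].hex())
        break
else:
    print("nonce space exhausted: change timestamp or extra nonce and retry")
```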
Now, what I'm trying to figure out is what exactly each miner is doing differently in a mining pool, and if it is different depending on the pool.
One thing I've read is that a mining pool gives each participating miner a different set of transactions from the mempool.
I've also read that, because the most sophisticated miners can try all 4,294,967,296 nonce values in a fraction of a second, and since the timestamp can only be updated every second, the coinbase transaction is used as a "second nonce" (although, it is my understanding that, being part of a transaction, if this "extra nonce" is changed, all the transactions need to be re-hashed into a new Merkle Root). I may have also read someplace that miners could be given the same set of transactions from the mempool, but are each told to use a different range of "extra nonce" values for the coinbase transaction.
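As a rough sketch of that idea, the following Python snippet shows why changing the extra nonce forces a Merkle-root recomputation, and how a pool could hand each miner a disjoint extranonce range; the transaction IDs, coinbase format, and range values are all illustrative assumptions:

```python
import hashlib

def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    txids = list(txids)
    while len(txids) > 1:
        if len(txids) % 2:                  # duplicate last hash on odd counts
            txids.append(txids[-1])
        txids = [dsha(txids[i] + txids[i + 1]) for i in range(0, len(txids), 2)]
    return txids[0]

pool_txids = [dsha(f"tx-{i}".encode()) for i in range(7)]   # placeholder txids

# Suppose the pool assigns this miner extranonces 100..102 (invented values).
# Every extranonce changes the coinbase txid, hence the Merkle root, hence
# the header search space -- so miners never duplicate each other's work.
for extranonce in range(100, 103):
    coinbase_txid = dsha(b"coinbase|pool-address|" + extranonce.to_bytes(8, "little"))
    root = merkle_root([coinbase_txid] + pool_txids)
    print(extranonce, root.hex()[:16])
```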
Is there anything else that pools tell miners to do differently? Is each pool different in the instructions it gives to the participating miners? Did I get anything wrong?
I want to make sure I have a full technical understanding of what mining pools are doing to mine bitcoin.
submitted by sparky77734 to Bitcoin

Merkle tree

Hi, I understand that there is a merkle root hash in every block of the blockchain.
I did not really understand what it is used for.
if miners are doing the validation by solving a puzzle, why do we need the merkle root?
submitted by jcoder42 to BitcoinBeginners

Proof Of Work Explained

Proof Of Work Explained
A proof-of-work (PoW) system (or protocol, or function) is a consensus mechanism that was invented by Cynthia Dwork and Moni Naor, as presented in a 1993 journal article. In 1999, it was formalized in a paper by Markus Jakobsson and Ari Juels, who coined the term "proof of work".
It was developed as a way to prevent denial of service attacks and other service abuse (such as spam on a network). This is the most widely used consensus algorithm being used by many cryptocurrencies such as Bitcoin and Ethereum.
How does it work?
In this method, a group of users compete against one another to find the solution to a complex mathematical puzzle. Any user who successfully finds the solution broadcasts the block to the network for verification. Once the other users have verified the solution, the block moves to the confirmed state.
The blockchain network consists of numerous decentralized nodes. These nodes act as miners, which are responsible for adding new blocks to the blockchain. A miner repeatedly and randomly selects a number (a nonce) which is combined with the data present in the block; to find a correct solution, the miner needs to find a valid random number so that the newly generated block can be added to the main chain. The network pays a reward to the miner node for finding the solution.
The block is then passed through a hash function to generate an output which matches all input/output criteria. Once the result is found, other nodes in the network verify and validate the outcome. Every new block holds the hash of the preceding block; this forms a chain of blocks. Together, they store information within the network. Changing a block would require a new block containing the same predecessor, and it is almost impossible to regenerate all successors and change their data. This protects the blockchain from tampering.
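A toy Python sketch of this chaining property, with simplified "headers" that embed only the predecessor hash and a payload (real Bitcoin headers carry more fields):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Toy chain: each "header" is just (hash of previous header) + payload.
headers = []
prev_hash = b"\x00" * 32                       # genesis predecessor
for payload in [b"block-1", b"block-2", b"block-3"]:
    header = prev_hash + payload
    headers.append(header)
    prev_hash = sha256(header)

def chain_is_consistent(headers) -> bool:
    prev_hash = b"\x00" * 32
    for header in headers:
        if header[:32] != prev_hash:           # link to predecessor broken
            return False
        prev_hash = sha256(header)
    return True

print(chain_is_consistent(headers))            # True
headers[0] = headers[0][:32] + b"tampered!"    # alter the first block's data
print(chain_is_consistent(headers))            # False: every later link breaks
```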
What is Hash Function?
A hash function is a function that is used to map data of any length to some fixed-size values. The result or outcome of a hash function is known as hash values, hash codes, digests, or simply hashes.
The hash method is quite secure; any slight change in input will result in a completely different output, which results in the block being discarded by network participants. The hash function generates output of the same fixed length regardless of the size of the input data. It is a one-way function, i.e. the function cannot be reversed to get the original data back. One can only perform checks to validate the output data against the original data.
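A quick demonstration of these properties with SHA-256 (the inputs are arbitrary examples):

```python
import hashlib

# One changed character produces an unrelated digest of the same fixed length.
for msg in [b"proof of work", b"proof of work!"]:
    print(hashlib.sha256(msg).hexdigest())     # both 64 hex chars (256 bits)
```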
Implementations
Nowadays, Proof-of-Work is used in a lot of cryptocurrencies. It was first implemented in Bitcoin, after which it became so popular that it was adopted by several other cryptocurrencies. Bitcoin uses the Hashcash puzzle, where the complexity of the puzzle is based upon the total power of the network. On average, block formation takes approximately 10 minutes. Litecoin, a Bitcoin-based cryptocurrency, has a similar system. Ethereum also implemented this same protocol.
Types of PoW
Proof-of-work protocols can be categorized into two types:
· Challenge-response
This protocol creates a direct link between the requester (client) and the provider (server).
In this method, the requester needs to find the solution to a challenge that the server has given. The solution is then validated by the provider for authentication.
The provider chooses the challenge on the spot; hence, its difficulty can be adapted to its current load. The work on the requester side may be bounded if the challenge-response protocol has a known solution or is known to exist within a bounded search space.
· Solution–verification
These protocols do not have any such prior link between the sender and the receiver. The client self-imposes a problem and solves it; it then sends the solution to the server, which checks both the problem choice and the outcome. Like Hashcash, these schemes are also based on unbounded probabilistic iterative procedures.
These two methods are generally based on the following three techniques:
CPU-bound
This technique depends upon the speed of the processor: the higher the processor power, the faster the computation.
Memory-bound
This technique's computation speed is bound by main memory accesses (either latency or bandwidth).
Network-bound
In this technique, the client must perform a few computations and wait to receive some tokens from remote servers.
List of proof-of-work functions
Here is a list of known proof-of-work functions:
o Integer square root modulo a large prime
o Weakened Fiat–Shamir signatures
o Ong–Schnorr–Shamir signature is broken by Pollard
o Partial hash inversion
o Hash sequences
o Puzzles
o Diffie–Hellman–based puzzle
o Moderate
o Mbound
o Hokkaido
o Cuckoo Cycle
o Merkle tree-based
o Guided tour puzzle protocol
A successful attack on a blockchain network requires a lot of computational power and a lot of time to do the calculations. Proof of Work makes hacks inefficient since the cost incurred would be greater than the potential rewards for attacking the network. Miners are also incentivized not to cheat.
It is still considered one of the most popular methods of reaching consensus in blockchains. Though it may not be the most efficient solution due to its extensive energy usage, this is also why it guarantees the security of the network.
Due to Proof of Work, it is practically impossible to alter any aspect of the blockchain, since any such change would require re-mining all subsequent blocks. It is also difficult for a user to take control of the network's computing power, since the process requires enormous energy, making these hash computations expensive.
submitted by RumaDas to u/RumaDas

Bitcoin (BTC): A Peer-to-Peer Electronic Cash System

Bitcoin (BTC): A Peer-to-Peer Electronic Cash System
  • Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
  • Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin Protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
  • The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
  • With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.


1. What is Bitcoin (BTC)?

  • Bitcoin is a peer-to-peer cryptocurrency that aims to function as a means of exchange and is independent of any central authority. Bitcoins are transferred electronically in a secure, verifiable, and immutable way.
  • Network validators, who are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
  • The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
  • The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
  • Unlike public blockchain infrastructures supporting the development of decentralized applications (Ethereum), the Bitcoin protocol is primarily used only for payments, and has only very limited support for smart contract-like functionalities (Bitcoin “Script” is mostly used to set conditions that must be met before bitcoins can be spent).

2. Bitcoin’s core features

For a more beginner’s introduction to Bitcoin, please visit Binance Academy’s guide to Bitcoin.

Unspent Transaction Output (UTXO) model

A UTXO transaction works like a cash payment between two parties: Alice gives money to Bob and receives change (i.e., the unspent amount). In comparison, blockchains like Ethereum rely on the account model.
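A minimal Python sketch of the UTXO bookkeeping idea, with illustrative names and amounts and with fees and scripts ignored:

```python
# (txid, output index) -> (owner, amount); fees and scripts are ignored.
utxos = {("tx0", 0): ("alice", 5.0)}

def spend(outpoint, new_txid, outputs):
    owner, amount = utxos.pop(outpoint)          # an input is consumed entirely
    assert sum(v for _, v in outputs) == amount  # outputs must balance the input
    for i, (recipient, value) in enumerate(outputs):
        utxos[(new_txid, i)] = (recipient, value)

# Alice pays Bob 3.0 and sends 2.0 back to herself as change.
spend(("tx0", 0), "tx1", [("bob", 3.0), ("alice", 2.0)])
print(utxos)   # {('tx1', 0): ('bob', 3.0), ('tx1', 1): ('alice', 2.0)}
```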

Nakamoto consensus

In the Bitcoin network, anyone can join the network and become a bookkeeping service provider i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This feature is called Proof of Work (PoW).
The probability of any single validator to finish the task first is equal to the percentage of the total network computation power, or hash power, the validator has. For instance, a validator with 5% of the total network computation power will have a 5% chance of completing the task first, and therefore becoming the next block producer.
Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done by personal computer CPUs.
As of today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuit (“ASIC”).
Proof of Work secures the network as block producers must have spent resources external to the network (i.e., money to pay electricity), and can provide proof to other participants that they did so.
With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network’s hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks indicates another feature of the Nakamoto consensus: the finality of transactions is only probabilistic.
Once a block is produced, it is then propagated by the block producer to all other validators to check on the validity of all transactions in that block. The block producer will receive rewards in the network’s native currency (i.e., bitcoin) as all validators approve the block and update their ledgers.

The blockchain

Block production

The Bitcoin protocol utilizes the Merkle tree data structure in order to organize hashes of numerous individual transactions into each block. This concept is named after Ralph Merkle, who patented it in 1979.
With the use of a Merkle tree, though each block might contain thousands of transactions, it will have the ability to combine all of their hashes and condense them into one, allowing efficient and secure verification of this group of transactions. This single hash is called the Merkle root, and it is stored in the Block Header of a block. The Block Header also stores other meta information of a block, such as a hash of the previous Block Header, which enables blocks to be associated in a chain-like structure (hence the name “blockchain”).
Figure: Block production in the Bitcoin Protocol.

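To make the efficiency claim concrete, here is a small Python sketch of verifying a transaction's inclusion against a Merkle root; the four placeholder transactions and the proof layout are illustrative, though the double-SHA-256 pairing mirrors Bitcoin's construction:

```python
import hashlib

def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_inclusion(txid, proof, root) -> bool:
    """Recompute the root from a txid and its Merkle branch.

    `proof` lists (sibling_hash, side) pairs from leaf to root; with n
    transactions only ~log2(n) hashes are needed, which is what makes
    verification efficient.
    """
    h = txid
    for sibling, side in proof:
        h = dsha(sibling + h) if side == "L" else dsha(h + sibling)
    return h == root

# Illustrative four-transaction block: root = H(H(t0+t1) + H(t2+t3)).
t = [dsha(f"tx-{i}".encode()) for i in range(4)]
root = dsha(dsha(t[0] + t[1]) + dsha(t[2] + t[3]))
proof_for_t2 = [(t[3], "R"), (dsha(t[0] + t[1]), "L")]
print(verify_inclusion(t[2], proof_for_t2, root))   # True
```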

Block time and mining difficulty

Block time is the period required to create the next block in a network. As mentioned above, the node who solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty.
Mining difficulty refers to how difficult it is for the node to solve the computationally intensive task. If the network sets a high difficulty for the task, while miners have low computational power, which is often referred to as “hashrate”, it would statistically take longer for the nodes to get an answer for the task. If the difficulty is low, but miners have rather strong computational power, statistically, some nodes will be able to solve the task quickly.
Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.
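A sketch of this retargeting rule in Python, using Bitcoin's actual constants (2,016 blocks per adjustment period and a 600-second target; Bitcoin also clamps the adjustment to a factor of 4, as shown), up to the protocol's exact integer-precision details:

```python
TARGET_TIMESPAN = 2016 * 600            # seconds: two weeks at 10 min/block

def retarget(old_target: int, actual_timespan: int) -> int:
    # Bitcoin clamps the adjustment to a factor of 4 in either direction.
    actual_timespan = max(TARGET_TIMESPAN // 4,
                          min(actual_timespan, TARGET_TIMESPAN * 4))
    # Blocks too fast -> short timespan -> smaller target -> higher difficulty.
    return old_target * actual_timespan // TARGET_TIMESPAN

difficulty_1 = 0x00ffff * 2 ** (8 * (0x1d - 3))     # the easiest-ever target
print(hex(retarget(difficulty_1, 2016 * 540)))      # 9-min blocks: target drops 10%
```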

What are orphan blocks?

In a PoW blockchain network, if the block time is too low, it would increase the likelihood of nodes producing orphan blocks, for which they would receive no reward. Orphan blocks are produced by nodes who solved the task but did not broadcast their results to the whole network the quickest due to network latency.
It takes time for a message to travel through a network, and it is entirely possible for 2 nodes to complete the task and start to broadcast their results to the network at roughly the same time, while one’s messages are received by all other nodes earlier as the node has low latency.
Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute but his message would take 1 minute to reach the rest of the nodes that are still working on the solution. While his message travels through the network, all the work done by all other nodes during that 1 minute, even if these nodes also complete the task, would go to waste. In this case, 50% of the computational power contributed to the network is wasted.
The percentage of wasted computational power would proportionally decrease if the mining difficulty were higher, as it would statistically take longer for miners to complete the task. In other words, if the mining difficulty, and therefore targeted block time is low, miners with powerful and often centralized mining facilities would get a higher chance of becoming the block producer, while the participation of weaker miners would become in vain. This introduces possible centralization and weakens the overall security of the network.
However, given a limited amount of transactions that can be stored in a block, making the block time too long would decrease the number of transactions the network can process per second, negatively affecting network scalability.

3. Bitcoin’s additional features

Segregated Witness (SegWit)

Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017.
SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin.
SegWit also makes transactions cheaper. Since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become.
The legacy Bitcoin block has a block size limit of 1 megabyte, and any change on the block size would require a network hard-fork. On August 1st 2017, the first hard-fork occurred, leading to the creation of Bitcoin Cash (“BCH”), which introduced an 8 megabyte block size limit.
Conversely, Segregated Witness was a soft-fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, which contains solely witness signatures, to the 1 megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade.
Furthermore, the separation of witness signatures from transaction data solves the malleability issue with the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners. Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values.
For instance, if a witness signature states “6,” it has a mathematical value of 6, and would create a hash value of 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a (faulty) hash value of 67890.
Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid.
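The made-up hash values above can be replaced with a concrete demonstration: both encodings denote the same number, yet any hash over the raw bytes differs completely:

```python
import hashlib

a, b = b"6", b"06"
print(int(a) == int(b))                     # True: identical mathematical value
print(hashlib.sha256(a).hexdigest()[:16])   # yet the digests over the raw
print(hashlib.sha256(b).hexdigest()[:16])   # bytes are completely different
```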
This can create many issues, as illustrated in the below example:
  1. Alice sends Bob 1 BTC, and Bob sends Merchant Carol this 1 BTC for some goods.
  2. Bob sends Carol this 1 BTC, while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC to her, and immediately ships goods to Bob.
  3. At the moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, therefore changing this transaction ID from 12345 to 67890.
  4. Now Carol will not receive her 1 BTC, as the network looks for transaction 12345 to ensure that Bob’s wallet balance is valid.
  5. As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will get his goods while still holding his BTC.
With the Segregated Witness upgrade, such instances can not happen again. This is because the witness signatures are moved outside of the transaction block into an extended block, and altering the witness signature won’t affect the transaction ID.
Since the transaction malleability issue is fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.

Lightning Network

Lightning Network is a second-layer micropayment solution for scalability.
Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins.
Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies. The most prominent of them include Blockstream, Lightning Labs, and ACINQ.
A list of curated resources relevant to Lightning Network can be found here.
In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both party’s wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions.
Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved in transactions do not need to wait for network confirmation on transactions. Instead, transacting parties would pay transaction fees to Bitcoin miners only when they decide to close the channel.
One limitation of the Lightning Network is that it requires a person to be online to receive transactions addressed to them. Another limitation in user experience is that one needs to lock up funds every time a payment channel is opened, and those funds can only be used within that channel.
However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network. If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.
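A toy sketch of the channel accounting this describes (real Lightning uses HTLCs and commitment transactions; the names and balances here are made up):

```python
# Off-chain channel balances and a two-hop payment (Alice -> Bob -> Carol).
channels = {
    ("alice", "bob"):  {"alice": 0.5, "bob": 0.5},   # funded with 1 BTC total
    ("bob", "carol"):  {"bob": 0.7, "carol": 0.3},
}

def pay(channel_key, sender, receiver, amount):
    ch = channels[channel_key]
    assert ch[sender] >= amount, "sender lacks channel balance"
    ch[sender] -= amount
    ch[receiver] += amount

# Alice pays Carol 0.1 BTC through Bob, who forwards the payment.
pay(("alice", "bob"), "alice", "bob", 0.1)
pay(("bob", "carol"), "bob", "carol", 0.1)
# Only when a channel closes would its final balances settle on-chain.
```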

Schnorr Signature upgrade proposal

Elliptic Curve Digital Signature Algorithm (“ECDSA”) signatures are used to sign transactions on the Bitcoin blockchain.
However, many developers now advocate for replacing ECDSA with Schnorr Signature. Once Schnorr Signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys.
This would primarily be beneficial for network scalability. When multiple addresses were to conduct transactions to a single address, each transaction would require their own signature. With Schnorr Signature, all these signatures would be combined into one. As a result, the network would be able to store more transactions in a single block.
The reduced size in signatures implies a reduced cost on transaction fees. The group of senders can split the transaction fees for that one group signature, instead of paying for one personal signature individually.
Schnorr Signature also improves network privacy and token fungibility. A third-party observer will not be able to detect if a user is sending a multi-signature transaction, since the signature will be in the same format as a single-signature transaction.
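A minimal sketch of the aggregation idea, using a toy Schnorr group rather than Bitcoin's elliptic curve (real proposals such as MuSig add protections against rogue-key attacks that are omitted here):

```python
import hashlib

# Toy Schnorr over a small multiplicative group (p = 2q + 1), purely to show
# that two signatures over one message add up to a single signature that
# verifies against the combined public key.
p, q, g = 23, 11, 4  # g generates the order-q subgroup of Z_p*

def H(R: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(R).encode() + msg).digest(), "big") % q

def sign(x, k, e):
    return (k + e * x) % q            # s = k + e*x

msg = b"pay to address"
x1, k1 = 3, 5                         # Alice's key and nonce (fixed for the demo)
x2, k2 = 7, 2                         # Bob's key and nonce
y1, y2 = pow(g, x1, p), pow(g, x2, p)

R = (pow(g, k1, p) * pow(g, k2, p)) % p          # combined nonce commitment
e = H(R, msg)
s = (sign(x1, k1, e) + sign(x2, k2, e)) % q      # signatures simply add

# One verification against the aggregated public key y1*y2:
assert pow(g, s, p) == (R * pow(y1 * y2, e, p)) % p
```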

4. Economics and supply distribution

The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial reward was 50 BTC per block, and block mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, halving events take place approximately every 4 years.
As of May 12th 2020, the block mining rewards are 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
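A quick sanity check of the 21 million cap, assuming the standard 50 BTC initial subsidy and 210,000-block halving interval:

```python
# Geometric-series check that halvings cap supply near 21M BTC.
subsidy, interval, total = 50.0, 210_000, 0.0
while subsidy >= 1e-8:        # 1 satoshi = 1e-8 BTC; smaller rewards round to zero
    total += subsidy * interval
    subsidy /= 2
print(f"{total:,.0f} BTC")    # 21,000,000 (integer-satoshi rounding in the real
                              # protocol tops out just below this)
```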
submitted by D-platform to u/D-platform [link] [comments]

80.241.217.46 mining 18 blocks today containing mostly 1 -> 64 -> 128 -> 256 -> 512 transactions

Who is 80.241.217.46? This IP is mainly producing blocks with a power-of-two transaction-count pattern... even blocks with 1 transaction. It seems like a waste not to include more transactions, and it looks rather suspicious to me. They currently got 3 of the past 4 blocks, so they seem strong. The server is located in Germany.
Is this the reason why I've been waiting almost an hour for confirmation of my 0.0001-fee transaction?
UPDATE: Only 192 transactions have been confirmed in over 1.5 hours because of this pool.
submitted by nostr to Bitcoin [link] [comments]

A radical new way to mine Bitcoin?

I have this crazy idea that I feel could Optimize Solo Mining and make it profitable again, and I'm trying to figure out how to build and program a rig so that I could try this:
It is my understanding that, when a new block is created, the miner first generates a Merkle Root from a Merkle Tree consisting of all the transactions that will be listed in the Block (including the Coinbase Transaction), adds said Merkle Root to the new Block's Header, and then tries to find a number between 0 and 4,294,967,295 (the nonce) that, when combined with the rest of the Block's Header and Hashed will result in a Hash with a certain amount of Zeros. If such a nonce is found, then the block is accepted by the Bitcoin Network, added to the Blockchain, and newly created Bitcoin appears in the Miner's Wallet; but, if none of the nonces work, then a new Merkle Root is generated by changing the Coinbase Transaction (but leaving the other transactions the same as before), and the miner again tests all of the possible nonces to see if any of them will result in the proper hash. And all this goes on and on until the miner finds the right combination of Merkle Root and Nonce that will work with the rest of the Block's Header.
Now, because there are 4,294,967,296 possible nonces, the Miner may hash a bad Merkle Root up to 4,294,967,296 times; which, if you think about it, seems like such a waste. However, because there are only 612,772 blocks in the blockchain (as of the typing of this post), it is also quite unlikely that any 2 or more Blocks share the same Nonce (not impossible, but unlikely).
Hence, my idea is to configure a miner so that, instead of checking all 4,294,967,296 possible nonces, an Artificial Intelligence analyses the Blockchain and guesses the best nonce to try, and the Miner then keeps changing the Merkle Root (leaving the nonce the same) until it hopefully finds a Merkle Root that, when Hashed with the previously chosen nonce and the rest of the Block's Header, will produce the appropriate Hash in which the Block will be accepted. This can also be scaled up to work with multiple miners: For example, if you have 8 miners, you can have the AI choose the 8 best nonces to try, assign each of those nonces to a single miner, and each miner keeps trying its assigned nonce with different Merkle Roots until one of the miners finds a combination that works.
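For reference, a toy sketch of the header-hashing loop being discussed; the field values and the easy target are placeholders:

```python
import hashlib, struct

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def header_hash(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # 4 + 32 + 32 + 4 + 4 + 4 = 80-byte header, little-endian integers
    hdr = struct.pack("<L32s32sLLL", version, prev_hash, merkle_root,
                      timestamp, bits, nonce)
    return int.from_bytes(dsha256(hdr), "little")

target = 1 << 240   # toy target, vastly easier than real difficulty

# Conventional mining: fix the Merkle root, sweep the nonce.
# The post's proposal inverts this: fix the nonce, regrind the coinbase
# to sweep Merkle roots instead.
merkle_root = dsha256(b"coinbase + txs")
for nonce in range(2**32):
    if header_hash(2, b"\x00" * 32, merkle_root, 1579000000,
                   0x1d00ffff, nonce) < target:
        print("solved with nonce", nonce)
        break
```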
submitted by sparky77734 to Bitcoin [link] [comments]

How many nodes would we lose if BTC used 32MB blocks?

Can someone help me with some estimates/quantify the security implications of such a change? (Worst case/expected case/best case)
What approach do you take to such an estimate?
(constructive posts only please, I honestly want to learn)
submitted by 50thMonkey to Bitcoin [link] [comments]

Continuous Proof of Bitcoin Burn: trust minimized sidechains and bitcoin-pegs w/o oracles/federations today

Original design presented for discussion and criticism
originally posted here: https://bitcointalk.org/index.php?topic=5212814.0
TLDR: Proposing the following that's possible today to use for any existing or new altcoins:
_______________________________________

Disclaimer:

This is not an altcoin thread. I'm not making anything. The design discusses options for existing altcoins and new ways to build on top of Bitcoin, inheriting some of its security guarantees. 2 parts: First, the design allows any altcoin to switch to securing itself via Bitcoin instead of its own PoW or PoS, with significant benefits to both altcoins and Bitcoin (and the environment lol). Second, I explain how to create Bitcoin-pegged assets to turn altcoins into a Bitcoin sidechain equivalent. Let me know if this is of interest or if it exists, feel free to use or do anything with this, hopefully I can help.

Issue:

Solution to first few points:

PoW altcoin switching to CPoBB would trade:

PoS altcoin switching to CPoBB would trade:

We already have a permissionless, compact, public, high-cost-backed finality base layer to build on top of - Bitcoin! It will handle sorting, data availability, and finality, and it has something of value that lies outside the sidechain to use instead of capital or energy - the Bitcoin coins. The sunk costs of PoW can be simulated by burning Bitcoin, similar to the concept known as Proof of Burn where Bitcoin are sent to an unspendable address. Unlike ICO's, no contributors can take out the Bitcoins and get rewards for free. Unlike PoS, entry into supply lies outside the alt-chain and thus doesn't depend on the permission of alt-chain stake-coin holders. It's also hard to find a blockchain more protective of bandwidth or state size than Bitcoin, so altcoins can be Bitcoin-aware at little marginal difficulty - 10 years of history fully validates in under a day.

What are typical issues with Proof of Burn?

Solution:

This should be required for any design for it to stay permissionless. A constant fixed emission rate is an option for altcoins not trying to be money, if the goal is to maximize accessibility. Since the design doesn't depend on brand-new PoW for security, it doesn't have to hand out massive early rewards that give a disproportionate fraction of supply at the earliest stage either. If 10 coins are created every block, then after n blocks the emission per block is (100/n)% of supply, an always-decreasing number. A sidechain coin doesn't need to be scarce money, and could maximize distribution of control by encouraging further distribution. If no burners exist in a block, the altcoin block reward is simply added to the next block's reward, making emission predictable.
Sidechain block content should be committed in the burn transaction via the root of the Merkle tree of its transactions. Sidechain state will depend on Bitcoin for finality and for block time between commitment broadcasts. However, the throughput can be of any size per block, and an unlimited number of such sidechains can exist with their own rules; validation costs are handled only by nodes that choose to be aware of a specific sidechain by running its consensus-compatible software.
An important design decision is how the protocol can determine the "true" side-block and how to distribute incentives. The simplest solution is to always:
  1. Agree on the valid sidechain block matching the merkle root commitment for the largest amount of Bitcoin burnt, earliest inclusion in the bitcoin block as the tie breaker
  2. Distribute block reward during the next side-block proportional to current amounts burnt
  3. Bitcoin fee market serves as deterrent for spam submissions of blocks to validate
e.g.
```
sidechain block reward is set always at 10 altcoins per block

Bitcoin block contains the following content embedded and part of its transactions:
  tx11:  burns 0.01 BTC & OP_RETURN <...root of valid sidechain block version 1>
  tx56:  burns 0.05 BTC & OP_RETURN <...root of valid sidechain block version 1>
  tx78:  burns 1 BTC    & OP_RETURN <...root of valid sidechain block version 2>
  tx124: burns 0.2 BTC  & OP_RETURN <...root of INVALID sidechain block version 3>
```
Validity is deterministic by rules in client side node software (e.g. signature validation) so all nodes can independently see version 3 is invalid and thus burner of tx124 gets no reward allocated. The largest valid burn is from tx78 so version 2 is used for the blockchain in sidechain. The total valid burn is 1.06 BTC, so 10 altcoins to be distributed in the next block are 0.094, 0.472, 9.434 to owners of first 3 transactions, respectively.
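A small sketch of that pro-rata split, using the same numbers as the example:

```python
# Valid burns share the fixed per-block sidechain reward pro rata;
# tx124 is excluded because its committed side-block is invalid.
burns = {"tx11": 0.01, "tx56": 0.05, "tx78": 1.00}
reward = 10.0
total = sum(burns.values())          # 1.06 BTC of valid burns
shares = {tx: reward * amt / total for tx, amt in burns.items()}
print(shares)  # {'tx11': 0.094..., 'tx56': 0.471..., 'tx78': 9.433...}
```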
A censorship attack would require continuous costs in Bitcoin on the attacker and can be waited out. Censorship would also be limited to on-sidechain-specific transactions, as emission distribution to other CPoB contributors wouldn't be affected, since blocks without matching coin distributions on the sidechain wouldn't be valid. Additionally, sidechains can allow a limited number of sidechain transactions to happen via embedding transaction data inside Bitcoin transactions (e.g. OP_RETURN) as a way to use Bitcoin as a data availability layer in case sidechain transactions are being censored on their network. Since all sidechain nodes are Bitcoin-aware, it would be trivial to include.
Sidechain blocks cannot be reverted without reverting Bitcoin blocks or hard forking the protocol used to derive sidechain state. If protocol is forked, the value of sidechain coins on each fork of sidechain state becomes important but Proof of Burn natively guarantees trust minimized and permissionless distribution of the coins, something inferior methods like obscure early distributions, trusted pre-mines, and trusted ICO's cannot do.
More bitcoins being burnt is parallel to more hash rate entering PoW, with each miner or burner getting smaller amount of altcoins on average making it unprofitable to burn or mine and forcing some to exit. At equilibrium costs of equipment and electricity approaches value gained from selling coins just as at equilibrium costs of burnt coins approaches value of altcoins rewarded. In both cases it incentivizes further distribution to markets to cover the costs making burners and miners dependent on users via markets. In both cases it's also possible to mine without permission and mine at a loss temporarily to gain some altcoins without permission if you want to.
Altcoins benefit by inheriting many of bitcoin security guarantees, bitcoin parties have to do nothing if they don't want to, but will see their coins grow more scarce through burning. The contributions to the fee market will contribute to higher Bitcoin miner rewards even after block reward is gone.

Sidechain Bitcoin-pegs:

What is the ideal goal of the sidechains? Ideally, a token bi-directionally pegged to Bitcoin's value and tradeable ~1:1 for Bitcoin, giving Bitcoin users the option of a different rule set without compromising the base chain or forcing base-chain participants to do anything different.
Issues with value pegs:
Let's get rid of the idea of needing Bitcoin collateral to back pegged coins 1:1, as that's never secure, independent, or scalable at the same security level. As the drive-chain design suggested, the peg doesn't have to be fast - it can take months - it just needs to exist, so other methods can be used to speed it up, like atomic swaps by volunteers taking on the risk for a fee.
In continuous proof of burn we have another source of Bitcoins: the burnt Bitcoins. Sidechain protocols can require some minor percentage (e.g. 20%) of each burner tx's value to go, via another output, to reimburse those withdrawing side-Bitcoins to the Bitcoin chain until their requests are filled. If the withdrawal queue is empty, that % is burnt instead. Selection of who receives reimbursement is deterministic per burner. The percentage must be kept small, as it's assumed it's possible to get up to that much discount on altcoin emissions.
Let's use a really simple example where each burner pays 20% of the burn tx amount to cover withdrawals in the exact order requested, with no attempt at other matching, capped at half the requested amount per payout. Example:
```
withdrawal queue:
  request1: 0.2 sBTC
  request2: 1.0 sBTC
  request3: 0.5 sBTC

same block burners:
  tx burns 0.8 BTC  -> 0.1 BTC is sent to request1, 0.1 BTC is sent to request2
  tx burns 0.4 BTC  -> 0.1 BTC is sent to request1
  tx burns 0.08 BTC -> 0.02 BTC is sent to request1
  tx burns 1.2 BTC  -> 0.1 BTC is sent to request1, 0.2 BTC is sent to request2

withdrawal queue afterwards:
  request1: filled with 0.32 BTC instead of 0.2 sBTC, removed from queue
  request2: partially filled with 0.3 BTC out of 1.0 sBTC, 0.7 sBTC remaining for the next queue
  request3: still 0.5 sBTC
```
Withdrawal requests can either take long time to get to filled due to cap per burn or get overfilled as seen in "request1" example, hard to predict. Overfilling is not a big deal since we're not dealing with a finite source. The risk a user that chooses to use the sidechain pegged coin takes on is based on the rate at which they can expect to get paid based on value of altcoin emission that generally matches Bitcoin burn rate. If sidechain loses interest and nobody is burning enough bitcoin, the funds might be lost so the scale of risk has to be measured. If Bitcoins burnt per day is 0.5 BTC total and you hope to deposit or withdraw 5000 BTC, it might take a long time or never happen to withdraw it. But for amounts comparable or under 0.5 BTC/day average burnt with 5 side-BTC on sidechain outstanding total the risks are more reasonable.
Deposits onto the sidechain are far easier - by burning Bitcoin in a separate known unspendable deposit address for that sidechain and sidechain protocol issuing matching amount of side-Bitcoin. Withdrawn bitcoins are treated as burnt bitcoins for sake of dividing block rewards as long as they followed the deterministic rules for their burn to count as valid and percentage used for withdrawals is kept small to avoid approaching free altcoin emissions by paying for your own withdrawals and ensuring significant unforgeable losses.
Ideally more matching is used so large withdrawals don't completely block everyone else and small withdrawals don't completely block large withdrawals. Better methods should deterministically randomize assigned withdrawals via previous Bitcoin block hash, prioritized by request time (earliest arrivals should get paid earlier), and amount of peg outstanding vs burn amount (smaller burns should prioritize smaller outstanding balances). Fee market on bitcoin discourages doing withdrawals of too small amounts and encourages batching by burners.
The second method is less reliable but already known: it uses over-collateralized loans to create an oracle-pegged token that can be pegged to the bitcoin value. It was already used by its inventors in 2014 on bitshares (e.g. bitCNY, bitUSD, bitBTC) and similarly by MakerDAO in 2018. The upside is that a trust-minimized distribution of CPoB coins can be used to distribute trust over the selection of price-feed oracles far better than the pre-mined, single-trusted-party distributions used in MakerDAO (100% pre-mined) and, to a lesser degree, on bitshares (~50% mined, ~50% premined before dpos). The downside is 2-fold: first, the supply of a BTC-pegged coin would depend on people opening an equivalent of a leveraged long position on the altcoin/BTC pair, which is hard to convince people to do, as seen by the very poor liquidity of bitBTC in the past. The second downside is that oracles can still collude to mess with price feeds; while their influence might be limited via capped price changes per unit time, and collusion might compromise their continuous revenue stream from fees, the leverage benefits might outweigh the losses. The use of continuous proof of burn to peg withdrawals is the superior method, as it is simply a minor byproduct of "mining" for altcoins and doesn't depend on traders' positions. At the moment I'm not aware of any market-pegged coins on trust-minimized platforms or implemented in a trust-minimized way (e.g. premined mkr on premined eth = 2 sets of trusted third parties, each with full control over the design).
_______________________________________

Brief issues with current altchains options:

  1. PoW: New PoW altcoins suffer a high risk of attacks. Additional PoW chains require high energy and capital costs to create permissionless entry and trust-minimized miners that are forever dependent on markets to hold them accountable. Using the same algorithm or equipment as another chain, or merge-mining, puts you at a disadvantage by allowing some miners to attack and still cover sunk costs on another chain. Using a different algorithm/equipment requires building up the value of sunk costs to protect against attacks, with significant energy and capital costs. Drive-chains also require miners to opt in by being sidechain-aware, and thus incur additional costs on miners and validating nodes if the sidechain rewards are of value and importance.
  2. PoS: PoS is permissioned (requires permission from an internal party to use the network or contribute to consensus on a permitted scale), allows perpetual control without accountability to others, and incentivizes centralization of control over time. Without a continuous source of sunk costs there's no reason to give up control. By having consensus entirely dependent on internal network state, PoS, unlike PoW but like private databases, cannot guarantee independent permissionless entry and thus cannot claim trust minimization. It has no built-in distribution methods, so it depends on a safe start (a snapshot of trust-minimized distributions or a PoW period) followed by losing that on the switch to PoS, or starting off dependent on a single trusted party, as is the case in all significant pre-mines and ICO's.
  3. Proof of Capacity: PoC is just shifting costs further to capital over PoW to achieve same guarantees.
  4. PoW/PoS: Still require additional PoW chain creation. Strong dependence on PoS can render PoW irrelevant and thus inherit the worst properties of both protocols.
  5. Tokens inherit all trust dependencies of parent blockchain and thus depend on the above.
  6. Embedded consensus (counterparty, veriblock?, omni): Lacks a mechanism for distribution, and requires all tx data to be inside scarce Bitcoin block space, so the cost falls on users instead of compensated miners. If you want to build a very expressive scripting language, it might be very hard & expensive to fit into a Bitcoin tx, vs CPoBB's external content of unlimited size behind a committed hash. Like CPoBB it is Bitcoin-aware, so it can respond to Bitcoin being sent, but without a source of Bitcoins like burning it has no way to do any trust-minimized Bitcoin-pegs it can control fully.

Few extra notes from my talks with people:

Main questions to you:

open to working on this further with others
submitted by awasi868 to CryptoTechnology [link] [comments]

BIP proposal: Inhibiting a covert attack on the Bitcoin POW function | Gregory Maxwell | Apr 05 2017

Gregory Maxwell on Apr 05 2017:
A month ago I was explaining the attack on Bitcoin's SHA2 hashcash which
is exploited by ASICBOOST and the various steps which could be used to
block it in the network if it became a problem.
While most discussion of ASICBOOST has focused on the overt method
of implementing it, there also exists a covert method for using it.
As I explained one of the approaches to inhibit covert ASICBOOST I
realized that my words were pretty much also describing the SegWit
commitment structure.
The authors of the SegWit proposal made a specific effort to not be
incompatible with any mining system and, in particular, changed the
design at one point to accommodate mining chips with forced payout
addresses.
Had there been awareness of exploitation of this attack an effort
would have been made to avoid incompatibility-- simply to separate
concerns. But the best methods of implementing the covert attack
are significantly incompatible with virtually any method of
extending Bitcoin's transaction capabilities; with the notable
exception of extension blocks (which have their own problems).
An incompatibility would go a long way to explain some of the
more inexplicable behavior from some parties in the mining
ecosystem so I began looking for supporting evidence.
Reverse engineering of a particular mining chip has demonstrated
conclusively that ASICBOOST has been implemented
in hardware.
On that basis, I offer the following BIP draft for discussion.
This proposal does not prevent the attack in general, but only
inhibits covert forms of it which are incompatible with
improvements to the Bitcoin protocol.
I hope that even those of us who would strongly prefer that
ASICBOOST be blocked completely can come together to support
a protective measure that separates concerns by inhibiting
the covert use of it that potentially blocks protocol improvements.
The specific activation height is something I currently don't have
a strong opinion, so I've left it unspecified for the moment.
BIP: TBD
Layer: Consensus
Title: Inhibiting a covert attack on the Bitcoin POW function
Author: Greg Maxwell
Status: Draft
Type: Standards Track
Created: 2016-04-05
License: PD
==Abstract==
This proposal inhibits the covert exploitation of a known
vulnerability in Bitcoin Proof of Work function.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.
==Motivation==
Due to a design oversight the Bitcoin proof of work function has a potential
attack which can allow an attacking miner to save up to 30% of their energy
costs (though closer to 20% is more likely due to implementation overheads).
Timo Hanke and Sergio Demian Lerner claim to hold a patent on this attack,
which they have so far not licensed for free and open use by the public.
They have been marketing their patent licenses under the trade-name
ASICBOOST. The document takes no position on the validity or enforceability
of the patent.
There are two major ways of exploiting the underlying vulnerability: One
obvious way which is highly detectable and is not in use on the network
today and a covert way which has significant interaction and potential
interference with the Bitcoin protocol. The covert mechanism is not
easily detected except through its interference with the protocol.
In particular, the protocol interactions of the covert method can block the
implementation of virtuous improvements such as segregated witness.
Exploitation of this vulnerability could result in payoff of as much as
$100 million USD per year at the time this was written (assuming a
50% hash-power miner was gaining a 30% power advantage and that mining
was otherwise at profit equilibrium). This could have a phenomenal
centralizing effect by pushing mining out of profitability for all
other participants, and the income from secretly using this
optimization could be abused to significantly distort the Bitcoin
ecosystem in order to preserve the advantage.
Reverse engineering of a mining ASIC from a major manufacturer has
revealed that it contains an undocumented, undisclosed ability
to make use of this attack. (The parties claiming to hold a
patent on this technique were completely unaware of this use.)
On the above basis the potential for covert exploitation of this
vulnerability and the resulting inequality in the mining process
and interference with useful improvements presents a clear and
present danger to the Bitcoin system which requires a response.
==Background==
The general idea of this attack is that SHA2-256 is a Merkle–Damgård hash
function which consumes 64 bytes of data at a time.
The Bitcoin mining process repeatedly hashes an 80-byte 'block header' while
incrementing a 32-bit nonce which is at the end of this header data. This
means that the processing of the header involves two runs of the compression
function: one that consumes the first 64 bytes of the header and a
second which processes the remaining 16 bytes and padding.
The initial 'message expansion' operations in each step of the SHA2-256
function operate exclusively on that step's 64-bytes of input with no
influence from prior data that entered the hash.
Because of this if a miner is able to prepare a block header with
multiple distinct first 64-byte chunks but identical 16-byte
second chunks they can reuse the computation of the initial
expansion for multiple trials. This reduces power consumption.
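For illustration, a minimal Python sketch of this 64/16-byte split; the byte offsets follow the standard header layout, and hashlib is only a stand-in since it does not expose midstates:

```python
# The 80-byte header as two SHA-256 message chunks.
header = bytes(80)       # placeholder header
chunk1 = header[:64]     # version, prev-block hash, Merkle root bytes 0-27
chunk2 = header[64:]     # Merkle root bytes 28-31, time, nBits, nonce
                         # (SHA-256 padding extends this to a full 64-byte block)
# Covert ASICBOOST: collect Merkle roots whose last 4 bytes collide, so many
# distinct chunk1 values share one chunk2, and the chunk2 message-expansion
# work can be computed once and reused across all of them.
```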
There are two broad ways of making use of this attack. The obvious
way is to try candidates with different version numbers. Beyond
upsetting the soft-fork detection logic in Bitcoin nodes this has
little negative effect but it is highly conspicuous and easily
blocked.
The other method is based on the fact that the merkle root
committing to the transactions is contained in the first 64-bytes
except for the last 4 bytes of it. If the miner finds multiple
candidate root values which have the same final 32 bits then they
can use the attack.
To find multiple roots with the same trailing 32-bits the miner can
use an efficient collision finding mechanism which will find a match
with as few as 2^16 candidate roots expected, 2^24 operations to
find a 4-way hit, though low memory approaches require more
computation.
An obvious way to generate different candidates is to grind the
coinbase extra-nonce but for non-empty blocks each attempt will
require 13 or so additional sha2 runs which is very inefficient.
This inefficiency can be avoided by computing a sqrt number of
candidates of the left side of the hash tree (e.g. using extra
nonce grinding) then an additional sqrt number of candidates of
the right side of the tree using transaction permutation or
substitution of a small number of transactions. All combinations
of the left and right side are then combined with only a single
hashing operation virtually eliminating all tree related
overhead.
With this final optimization finding a 4-way collision with a
moderate amount of memory requires ~2^24 hashing operations
instead of the >2^28 operations that would be required for
extra-nonce grinding, which would substantially erode the
benefit of the attack.
It is this final optimization which this proposal blocks.
==New consensus rule==
Beginning block X and until block Y the coinbase transaction of
each block MUST either contain a BIP-141 segwit commitment or a
correct WTXID commitment with ID 0xaa21a9ef.
(See BIP-141 "Commitment structure" for details)
Existing segwit using miners are automatically compatible with
this proposal. Non-segwit miners can become compatible by simply
including an additional output matching a default commitment
value returned as part of getblocktemplate.
Miners SHOULD NOT automatically discontinue the commitment
at the expiration height.
==Discussion==
The commitment in the left side of the tree to all transactions
in the right side completely prevents the final sqrt speedup.
A stronger inhibition of the covert attack could take the form of
requiring the least significant bits of the block timestamp
to be equal to a hash of the first 64-bytes of the header. This
would increase the collision space from 32 to 40 or more bits.
The root value could be required to meet a specific hash prefix
requirement in order to increase the computational work required
to try candidate roots. These changes would be more disruptive and
there is no reason to believe that it is currently necessary.
The proposed rule automatically sunsets. If it is no longer needed
due to the introduction of stronger rules or the acceptance of the
version-grinding form then there would be no reason to continue
with this requirement. If it is still useful at the expiration
time the rule can simply be extended with a new softfork that
sets longer date ranges.
This sun-setting avoids the accumulation of technical debt due
to retaining enforcement of this rule when it is no longer needed
without requiring a hard fork to remove it.
== Overt attack ==
The non-covert form can be trivially blocked by requiring that
the header version match the coinbase transaction version.
This proposal does not include this block because this method
may become generally available without restriction in the future,
does not generally interfere with improvements in the protocol,
and because it is so easily detected that it could be blocked if
it becomes an issue in the future.
==Ba...[message truncated here by reddit bot]...
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/013996.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

Xthinner/Blocktorrent development status update -- Jan 12, 2018

Edit: Jan 12, 2019, not 2018.
Xthinner is a new block propagation protocol which I have been working on. It takes advantage of LTOR to give about 99.6% compression for blocks, as long as all of the transactions in the block were previously transmitted. That's about 13 bits (1.6 bytes) per transaction. Xthinner is designed to be fault-tolerant, and to handle situations in which the sender and receiver's mempools are not well synchronized with gracefully degrading performance -- missing transactions or other decoding errors can be detected and corrected with one or (rarely) two additional round trips of communication. My expectation is that when it is finished, it will perform about 4x to 6x better than Compact Blocks and Xthin for block propagation. Relative to Graphene, I expect Xthinner to perform similarly under ideal circumstances (better than Graphene v1, slightly worse than Graphene v2), but much better under strenuous conditions (i.e. mempool desynchrony).
The current development status of Xthinner is as follows:
  1. Python proof-of-concept encodedecoder -- done 2018-09-15
  2. Detailed informal writeup of the encoding scheme -- done 2018-09-29
  3. Modify TxMemPool to allow iterating on a view sorted by TxId -- done 2018-11-26
  4. Basic C++ segment encoder -- done 2018-11-26
  5. Basic c++ segment decoder -- done 2018-11-26
  6. Checksums for error detection -- done 2018-12-09
  7. Serialization/deserialization -- done 2018-12-09
  8. Prefilled transactions, coinbase handling, and non-mempool transactions -- done 2018-12-25
  9. Missing/extra transactions, re-requests, and handling mempool desynchrony for segment decoding -- done 2019-01-12
  10. Block transmission coupling the block header with one or more Xthinner segments -- 50% done 2019-01-12
  11. Missing/extra transactions, re-requests, and handling mempool desynchrony for block decoding -- done 2019-01-12
  12. Integration with Bitcoin ABC networking code
  13. Networking testing on regtest/testnet/mainnet with real blocks
  14. Write BIP/BUIP and formal spec
  15. Bitcoin ABC pull request and begin of code review
  16. Unit tests, performance tests, benchmarks -- started
  17. Bitcoin Unlimited pull request and begin of code review
  18. Alpha release of binaries for testing or low-security block relay networks
  19. Merging code into ABC/BU, disabled-by-default
  20. Complete security review
  21. Enable by default in ABC and/or BU
  22. (Optional) parallelize encoding/decoding of blocks
Following is the debugging output from a test run done with coherent senderecipient mempools with a 1.25 million tx block, edited for readability:
```
Testing Xthinner on a block with 1250003 transactions with sender mempool size 2500000 and recipient mempool size 2500000
Tx/Block creation took 262 sec, 104853 ns/tx (mempool)
CTOR block sorting took 2467 ms, 987 ns/tx (mempool)
Encoding is 1444761 pushBytes, 2889520 1-bit commands, 103770 checksum bytes
total 1910345 bytes, 12.23 bits/tx
Single-threaded encoding took 2924 ms, 1169 ns/tx (mempool)
Serialization/deserialization took 1089 ms, 435 ns/tx (mempool)
Single-threaded decoding took 1912314 usec, 764 ns/tx (mempool)
Filling missing slots and handling checksum errors took 0 rounds and 12 usec, 0 ns/tx (mempool)
Blocks match!
*** No errors detected
```
If each transaction were 400 bytes on average, this block would be 500 MB, and it was encoded in 1.9 MB of data, a 99.618% reduction in size. Real-world performance is likely to be somewhat worse than this, as it's not likely that 100% of the block's transactions will always be in the recipient's mempool, but the performance reduction from mempool desychrony is smooth and predictable. If the recipient is missing 10% of the sender's transactions, and has another 10% that the sender does not have, the transaction list is still able to be successfully transmitted and decoded, although in that case it usually takes 2.5 round trips to do so, and the overall compression ratio ends up being around 71% instead of 99.6%.
Anybody who wishes can view the WIP Xthinner code here.
Once Xthinner is finished, I intend to start working on Blocktorrent. Blocktorrent is a method for breaking a block into small independently verifiable chunks for transmission, where each chunk is about one IP packet (a bit less than 1500 bytes) in size. In the same way that Bittorrent was faster than Napster, Blocktorrent should be faster than Xthinner.
Currently, one of the big limitations on block propagation performance is that a node cannot forward the first byte of a block until the last byte of the block has been received and completely validated. Blocktorrent will change that, and allow nodes to forward each IP packet shortly after that packet was received, regardless of whether any other packets have also been received and regardless of the order in which the packets are received. This should dramatically improve the bandwidth utilization efficiency of nodes during block propagation, and should reduce the block propagation latency for reaching the full network quite a lot -- my current estimate is about 10x improvement over Xthinner.
Blocktorrent achieves this partial validation of small chunks by taking advantage of Bitcoin blocks' Merkle tree structure. Chunks of transactions are transmitted in a packet along with enough data from the rest of the Merkle tree's internal nodes to allow for that chunk of transactions to be validated back to the Merkle root, the block header, and the mining PoW, thereby ensuring that the packet being forwarded is not invalid spam data used solely for a DoS attack. (Forwarding DoS attacks to other nodes is bad.) Each chunk will contain an Xthinner segment to encode TXIDs.
My performance target with Blocktorrent is to be able to propagate a 1 GB block in about 5-10 seconds to all nodes in the network that have 100 Mbps connectivity and quad core CPUs. Blocktorrent will probably perform a bit worse than FIBRE at small block sizes, but better at very large blocksizes, all without the trust and centralized infrastructure that FIBRE uses.
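A minimal sketch of the Merkle-path check underlying such chunk validation (standard Merkle-proof logic, not Blocktorrent's actual packet format):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# A packet carries some leaf txids plus the sibling hashes needed to climb
# from those leaves to the Merkle root committed in the PoW-validated header.
def verify_leaf(leaf: bytes, index: int, siblings: list, root: bytes) -> bool:
    h = leaf
    for sib in siblings:
        # Odd index: we are the right child; even index: the left child.
        h = dsha256(sib + h) if index & 1 else dsha256(h + sib)
        index >>= 1
    return h == root
```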
submitted by jtoomim to btc [link] [comments]

A few questions about bitcoin mining

Newbie here.
I'm not yet familiar with bitcoin mining, just a bit interested. The bitcoin block header, as we all know, consists of the version, the hash of the previous block, the Merkle root, the timestamp, nBits, and the nonce. Miners usually increment the nonce by 1 until they find a solution or exhaust all 2^32 possibilities.
However, I have read that it is very common for miners to exhaust all 2^32 combinations and not find a solution at all. As a result, they have to make slight changes to the timestamp and/or the Merkle root to calculate even more combinations.
Therefore, what is the probability of a miner exhausting all 2^32 combinations without finding a valid nonce for a specific block? Does it have something to do with the bitcoin mining "difficulty" thingy? I'm so confused right now......
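A rough way to quantify it, assuming each hash succeeds with probability 1/(difficulty * 2^32) and using an illustrative difficulty value:

```python
import math

# A full nonce sweep of 2^32 tries fails with probability
# (1 - p)^(2^32) ~ exp(-1/difficulty), where p = 1/(difficulty * 2^32).
difficulty = 15e12                    # illustrative value, not a live figure
p_no_solution = math.exp(-1 / difficulty)
print(p_no_solution)                  # ~ 1 - 6.7e-14: at realistic difficulty,
                                      # exhausting the nonce space is the norm
```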
submitted by Palpatine88888 to Bitcoin [link] [comments]

Making AsicBoost irrelevant | Peter Todd | May 10 2016

Peter Todd on May 10 2016:
As part of the hard-fork proposed in the HK agreement(1) we'd like to make the
patented AsicBoost optimisation useless, and hopefully make further similar
optimizations useless as well.
What's the best way to do this? Ideally this would be SPV compatible, but if it
requires changes from SPV clients that's ok too. Also the fix this should be
compatible with existing mining hardware.
1) https://medium.com/@bitcoinroundtable/bitcoin-roundtable-consensus-266d475a61ff
2) http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-April/012596.html

https://petertodd.org 'peter'[:-1]@petertodd.org
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-May/012652.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

ETH and EOS Will Continue to Face Congestion Problems — This Project Won’t

ETH and EOS Will Continue to Face Congestion Problems — This Project Won’t

Protocol congestion is a perennial problem in the blockchain ecosystem. Various measures have been implemented to avert congestion, but most struggle to offer a long-term solution.

Protocols have tried increasing their block size so each block can hold more transactions, and decreasing block production time so blocks are generated faster. Though these measures worked in the short term, they soon reached their limits. Thus, nearly all existing protocols cannot compare their transaction rates to those of centralized platforms.
The blockchain ecosystem has experienced transaction delays, massive transaction fees, and other inconveniences as a result of congestion within blockchain protocols.
Now, protocols like Aelf are out to change this narrative.
This article explores the congestion issue in the blockchain system, specifically on the Ethereum and EOS protocols. It also explores why Aelf will not be affected by the problem of congestion.

More Users = More Transactions

According to a report by Deloitte, blockchain is changing the business landscape, causing industries to adjust their operations based on the solutions it offers. The same is happening in government. The report also highlights that blockchain has yet to reach its full potential.
Blockchain is growing significantly, and one of the best examples of this is the congestion on Ethereum. Back in 2017, one of the first signs of future congestion was the d’App CryptoKitties, which caused massive congestion on the Ethereum network, at one point resulting in a six-fold increase in total network requests.

These furry kittens were the source of great delays on the Ethereum network upon release | Source
It is also worth noting that, during the peak bull run in 2017, Bitcoin also suffered from a massively congested network and transaction delays. The situation got so bad that some transactions took over two weeks to complete!
The delays occurred because Ethereum could only process 15 transactions per second (tps) at the time. Even without CryptoKitties, the platform was eventually going to suffer massive delays as more people used the protocol.
Ethereum is now living the congested future of its platform, as Tether transactions load its network with numerous requests that often lead to delays. Despite increasing block capacity by about 25%, it is not enough to meet the growing number of transactions on the platform.

Attempts Towards Greater Scalability

Over at EOS, things are not going as planned.
The protocol is among the networks that ushered in blockchain 3.0 promising faster transaction rates. This was achieved as EOS outperformed Ethereum and Bitcoin in transaction rates.
However, because of their network set up, their platform weakness was exposed in 2019 as EOS experienced a massive delay caused by a specialized Denial-of-Service (DoS) attack.
DoS attacks succeed when the targeted platform is flooded with so many transaction requests that legitimate requests cannot be processed in good time. This can be specialized further with a Distributed Denial-of-Service attack, which targets a single network or server and thus renders the platform ineffective faster.
For EOS, the network's weakness was exposed when the attack targeted the blockchain layer. The attacker posted so many deferred transactions that, when the time came to process them (deferred transactions are given priority over new ones), no new transactions could be processed. The attack produced numerous trash transactions that crowded out valid ones. The attack was made via a d’App hosted on EOS.
Because the issue had gone unaddressed since January, another successful attempt to slow down the network was made, likely carried out to determine the limitations of the EOS network.
An airdrop was planned on the EOS network, where users would be rewarded with tokens if they frequently transferred EOS tokens into and out of the EOS network. The airdrop event created congestion because of the number of transactions being generated on the EOS network.
The congestion created on EOS on both instances can be attributed to the function of deferring transactions to a later time. This allows attackers to technically block other transactions for the period it will take to process all their ‘deferred’ transactions.

Aelf’s Simple Brilliance

Ethereum and EOS are both suffering congestion as a result of the growing number of transactions daily. These protocols are also likely to suffer congestion from planned attacks on their network.
Aelf drew lessons from both EOS and Ethereum to develop a platform that solves the issue of scalability.
On the issue of transaction rates, Aelf created a platform that achieves high tps on-chain through separation and specialization. Aelf's protocol separates transactional data from computational dependencies, which significantly improves its tps.
Furthermore, Aelf implements parallel data processing through the separation of transactional data. This helps Aelf achieve even high tps on-chain.
The separation of transactional data is done using side chains. Aelf implements a branched-chain network as opposed to the single-chain system that is in use by both EOS and Ethereum. The branched-chain network allows Aelf to dedicate each side chain to a particular transaction type.
Aelf achieves its side chain specialization by using a “one chain to one type of contract” system. Therefore, one side chain can only process requests from one type of contract only. This makes the Aelf system highly specialized while still maintaining a simple structure.
Moreover, within the dedicated side chain, other side chains can be formed depending on the demand and needs of the network. This system resembles partitioning or sharding in database architecture and is known as “Tree Branch side chain extension” in the Aelf ecosystem.
The “Tree Branch side chain extension” acts as an emergency overflow system that protects Aelf from congestion by creating additional side chains that can process transactions in case transaction requests outweigh Aelf’s capacity at the time.

A visual example of Aelf’s ‘Tree Branch’ | Source
Aelf’s side chains communicate through the main chain in the form of a Merkle tree root. Communication between the side chains is not direct. The information must pass through the filtering system of the mainchain to determine whether the data can be passed from one side chain to the other. The filtering process is based on the protocol’s guidelines.
These implementations deter deferred transactions, which makes it impossible for planned attacks to slow down the network through numerous “fake” transactions.
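A toy sketch of the “one chain to one type of contract” routing idea described above (names and structure are illustrative, not Aelf's actual implementation):

```python
from collections import defaultdict

# Each contract type maps to its own side-chain queue, so load on one
# contract type does not queue behind another.
side_chains = defaultdict(list)

def route(tx):
    side_chains[tx["contract_type"]].append(tx)

for tx in [{"contract_type": "token_transfer", "id": 1},
           {"contract_type": "dex_swap", "id": 2},
           {"contract_type": "token_transfer", "id": 3}]:
    route(tx)

# Each side chain processes its own queue in parallel; per the description
# above, results are committed to the main chain as Merkle tree roots.
print(dict(side_chains))
```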
With Aelf’s set up, they are ahead in terms of scalability and security and, thus, a worthy choice for setting up a d’App.
Having seen the limitations of EOS and Ethereum, it is clear that their congestion problems are inevitable. Aelf remains the only platform that is immune to network congestion. The use of a side chain set up to isolate and categorize transactions is a simple yet brilliant idea implemented by the Aelf team, which assures Aelf of scalability throughout its lifetime. Aelf may have cemented themselves in blockchain history through its platform.
For more information of Aelf's platform, please follow this link.
#Aelf #DPoS #Blockchain #ParallelProcessing $ELF
Disclaimer: Please only take this information as my OWN opinion and should not be regarded as financial advice in any situation. Please remember to DYOR before making any decisions.
Hi, my name’s Sal. If you found this article useful and would like to view my other work please be sure to clap and follow me on medium and LinkedIn!😎
submitted by Floris-Jan to aelfofficial [link] [comments]

r/Bitcoin recap - April 2019

Hi Bitcoiners!
I’m back with the 28th monthly Bitcoin news recap.
For those unfamiliar, each day I pick out the most popular/relevant/interesting stories in Bitcoin and save them. At the end of the month I release them in one batch, to give you a quick (but not necessarily the best) overview of what happened in bitcoin over the past month.
You can see recaps of the previous months on Bitcoinsnippets.com
A recap of Bitcoin in April 2019
Adoption
Development
Security
Mining
Business
Education
Archeology (Financial Incumbents)
Price & Trading
Fun & Other
submitted by SamWouters to Bitcoin [link] [comments]

WitLess Mining - Removing Signatures from Bitcoin Cash

WitLess Mining

A Selfish Miner Variant to Remove Signatures from Bitcoin Cash
WitLess Mining is a hypothetical adversarial hybrid fork leveraging a variant of the selfish miner strategy to remove signatures from Bitcoin Cash. By orphaning blocks produced by miners unwilling to blindly accept WitLess blocks without validation, a miner or cartel of collaborating miners with a substantial, yet less than majority, share of the total Bitcoin Cash network hash power can alter the Nash equilibrium of Bitcoin Cash’s economic incentives, enticing otherwise honest miners to engage in non-validated mining. Once a majority of network hash power has switched to non-validated mining it will be possible to steal arbitrary UTXOs using invalid signatures - even non-existent signatures. As miners would risk losing all of their prior rewards and fees were signatures to be released that prove their malfeasance, it could even be possible to steal coins using non-existent transactions, leaving victims no evidence to prove the theft occurred.
WitLess Mining introduces two new data structures, the WitLess Transaction (wltx) and the WitLess Transaction Input (wltxin). These data structures are modifications of their standard counterpart data structures, Transaction (tx) and Transaction Input (txin), and can be used as drop-in replacements to create a WitLess Block (wlblock). These new structures provide WitLess Miners signature-withheld (WitLess) transaction data sufficient to reliably update their local UTXO sets based on the transactions contained within a WitLess block while preventing validation of the transaction signature scripts.
The specific mechanism by which WitLess Mining transaction and block data will be communicated among WitLess miners is left as an exercise for the reader. The author suggests it may be possible to extend the existing Bitcoin Cash gossip network protocol to handle the new WitLess data structures. Until WitLess Mining becomes well-adopted, it may be preferable to implement an out-of-band mechanism for releasing WitLess transactions and blocks as a service. In order to offset potential revenue reduction due to the selfish mining strategy, the WitLess Mining cartel might provide a distribution service under a subscription model, offering earlier updates for higher tiers. An advanced distribution system could even implement a per-block bidding option, creating a WitLess information market.
Regardless of the distribution mechanism chosen, the risk of having their blocks orphaned will provide strong economic incentive for rational short-term profit-maximizing agents to seek out WitLess transaction and block data. To encourage other segments of the Bitcoin Cash ecosystem to adopt WitLess Mining, the WitLess data structures are designed specifically to facilitate malicious fee-based transaction replacement:
It is expected that fee-based transaction replacement will be particularly popular among malicious users wishing to defraud 0-conf accepting merchants as well as the vulnerable merchants themselves. The feature is likely to encourage higher fees from the users resulting in their WitLess transaction data fetching a premium price under subscription- or market-based distribution. Malicious users may also be interested in subscribing directly to a WitLess Mining distribution service in order to receive notification when the cartel is in a position to reliably orphan non-compliant blocks, during which time their efforts will be most effective.

WitLess Block - wlblock

The wlblock is an alternate serialization of a standard block, containing the set of wltx as a direct replacement of the tx records. The hashMerkleRoot of a wlblock should be identical to the corresponding value in the standard block and can be verified to apply to a set of txid by constructing a Merkle root from the txid_commitments of the wltx set. The same proof of work validation that applies to the standard block header also ensures legitimacy of the wltx set thanks to a WitLess Commitment included as an input to the coinbase tx.

WitLess Transaction - wltx

| Field Size | Description | Data type | Comments |
| --- | --- | --- | --- |
| 4 | version | int32_t | Transaction data format version as it appears in the corresponding tx |
| 2 | flag | uint8_t[2] | Always 0x5052 and indicates that the transaction is WitLess |
| 1+ | wltx_in count | var_int | Number of WitLess transaction inputs (never zero) |
| 41+ | wltx_in | wtx_in[] | A list of 1 or more WitLess transaction inputs or sources for coins |
| 1+ | tx_out count | var_int | Number of transaction outputs as it appears in the corresponding tx |
| 9+ | tx_out | tx_out[] | A list of 1 or more transaction outputs or destinations for coins as it appears in the corresponding tx |
| 4 | lock_time | uint32_t | The block number or timestamp at which this transaction is unlocked. This can vary from the corresponding tx, with the higher of the two taking precedence. |
Each wltx can be referenced by a wltxid generated in a way similar to the standard txid.

WitLess Transaction Input - wltxin

| Field Size | Description | Data type | Comments |
| --- | --- | --- | --- |
| 36 | previous_output | outpoint | The previous output transaction reference as it appears in the corresponding txin |
| 1+ | script length | var_int | The length of the signature script as it appears in the corresponding txin |
| 32 or 0 | txid_commitment | char[32] | Only for the first wltxin of a transaction, the txid of the tx containing the corresponding txin; omitted for all subsequent wltxin entries |
| 4 | sequence | uint32_t | Transaction version as defined by the sender. Intended for replacement of transactions when sender wants to defraud 0-conf merchants. This can vary from the corresponding txin, with the lower of the two taking precedence. |

WitLess Commitment Structure

A new block rule is added which requires a commitment to the wltxid. The wltxid of the coinbase WitLess transaction is assumed to be 0x828ef3b079f9c23829c56fe86e85b4a69d9e06e5b54ea597eef5fb3ffef509fe.
A witless root hash is calculated with all those wltxid as leaves, in a way similar to the hashMerkleRoot in the block header.
The commitment is recorded in a scriptPubKey of the coinbase tx. It must be at least 42 bytes, with the first 10 bytes being 0x6a284353573e3d534e43, that is:
```
1-byte  - OP_RETURN (0x6a)
1-byte  - Push the following 40 bytes (0x28)
8-byte  - WitLess Commitment header (0x4353573e3d534e43)
32-byte - WitLess Commitment hash: Double-SHA256(witless root hash)
43rd byte onwards: Optional data with no consensus meaning
```
If more than one scriptPubKey matches the pattern, the one with the highest output index is assumed to be the WitLess commitment.
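A minimal Python sketch of assembling such a commitment scriptPubKey from the fields above (the root value is a placeholder):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Placeholder for the Merkle root over the block's wltxids.
witless_root = dsha256(b"placeholder witless root")

script_pubkey = (
    b"\x6a"                                  # OP_RETURN
    + b"\x28"                                # push the following 40 bytes
    + bytes.fromhex("4353573e3d534e43")      # WitLess commitment header
    + dsha256(witless_root)                  # commitment hash
)
assert len(script_pubkey) == 42              # minimum commitment size
```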
submitted by tripledogdareya to btc [link] [comments]

Bitcoin mining: how is a block header of 80 bytes processed in SHA-256? Isn't it too big?

The whole process of bitcoin mining was making sense to me until a moment of madness an hour ago: if there are 80 bytes of data to be processed in SHA-256, that's 640 bits of data to be processed in SHA-256.
In this 80 bytes we have: 4 bytes (version), previous block hash (32 bytes), merkle root (32 bytes), time (4 bytes), bits (4 bytes), nonce (4 bytes).
I thought SHA-256 accepted 512 bits of data, so that's 64 bytes of data. And on top of that, I need to add the length of the data in the last 64 bits of this 512-bit input, but 80 bytes is already well over the limit.
What am I missing here? Can someone help clear it up for me?
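For reference, SHA-256 handles longer messages by padding them into multiple 512-bit blocks; the arithmetic for an 80-byte header:

```python
# An 80-byte header doesn't fit one 512-bit block; SHA-256 appends a 0x80
# byte plus a 64-bit length field and pads to a multiple of 64 bytes,
# then runs the compression function once per 64-byte block.
msg_len = 80                      # bytes in the block header
min_padded = msg_len + 1 + 8      # 0x80 byte + 64-bit length field = 89
blocks = -(-min_padded // 64)     # ceiling division
print(blocks)                     # 2 compression-function runs
```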
Thanks
submitted by gradschl to crypto [link] [comments]

Thought: Chinese Bitcoin miners don't need to worry about bandwidth

There's no technical reason why a Chinese mining farm couldn't operate their node outside of China to improve the bandwidth of their connection to the rest of the world.
Basically, we divide a mining operation into 2 parts:
  1. Bitcoin Node
  2. (Custom) software that manages the mining machines
The node serves to keep up with the network (high bandwidth requirement) but sends only the block header, minus nonce, to the software that controls the mining hardware (low bandwidth requirement).
The mining hardware only returns a single nonce once it has found the correct one.

The connection from the node to the mining hardware only has to send 76 bytes every 10 minutes*. (and receive 4 bytes)
All that matters is latency.


NOTE: This also translates really well to the operation of solar farms in remote areas such as the Sahara. All I need is a phone connection to the miners, in theory.

*this would be the bare minimum. I guess you'd normally wanna update the merkle root whenever new TXs arrive, but this is not super critical. You could also limit this to updating once every 10 seconds or so, so that your bandwidth usage remains extremely modest.
submitted by Greamee to btc [link] [comments]
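For illustration, a minimal Python sketch of the split described above: the low-bandwidth side receives a 76-byte header template and only ever sends back a 4-byte nonce. Real pools use richer protocols such as Stratum, so treat this as the bare-minimum model from the post, not actual mining-farm code; the absurdly easy target is chosen so the toy run terminates quickly.

```python
import hashlib
import struct

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def scan_nonces(template: bytes, target: int):
    """The miner side: take a 76-byte header template (version, prev hash,
    merkle root, time, bits) and search for a nonce whose header hash,
    read as a little-endian integer, is at or below the target."""
    assert len(template) == 76
    for nonce in range(2**32):
        header = template + struct.pack("<I", nonce)  # 76 + 4 = 80 bytes
        if int.from_bytes(dsha256(header), "little") <= target:
            return nonce  # the only data that needs to go back upstream
    return None

# Toy run: an all-zero template and a target that ~1 in 8 hashes will meet.
template = bytes(76)
print("found nonce:", scan_nonces(template, target=2**253))
```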

Who invented blockchain?

Who invented blockchain?
Strange as it may seem, the concept of blockchain was invented long before Satoshi Nakamoto created Bitcoin as A Peer to Peer Electronic Cash System.
Let’s take a look at the events preceding Bitcoin’s blockchain appearance.
  • The idea takes its roots in codes and codebreaking. Early in the 1940s, the British mathematician Alan Turing, one of the pioneers of modern cryptanalysis, broke the cipher of the Enigma machine. At around the same time, the Americans broke Purple, a Japanese cipher machine.

  • In the 1970s, Whitfield Diffie and Martin Hellman invented public-key cryptography, in which keys come in pairs: a private key and a public key.

  • Then, in 1992, Stuart Haber and W. Scott Stornetta added Merkle trees to their document-timestamping scheme, boosting its security, performance, and efficiency.

  • However, the technology went unused, and the patent expired in 2004, four years before Bitcoin appeared.

  • In 2004, the computer scientist and cryptographer Hal Finney introduced a system called RPoW (Reusable Proofs of Work). The system accepted a non-exchangeable Hashcash-based PoW token and in return created an RSA-signed token that could then be transferred from person to person.
  • RPoW solved the double-spending problem by keeping the ownership of tokens registered on a trusted server. It also allowed users worldwide to verify the server's correctness and integrity in real time.

  • In 2008, Satoshi Nakamoto published the white paper Bitcoin: A Peer to Peer Electronic Cash System, and the network went live in January 2009. The technology that underpinned Bitcoin was called blockchain. It solved the problem of trust because each time a transaction was made, it was bundled together with other transactions and stored in a block. The block was then appended to the chain, which couldn't be changed.
  • Bitcoin is based on the Hashcash PoW algorithm, but rather than relying on a trusted computing function like RPoW, its double-spending protection is provided by a decentralized peer-to-peer protocol for verifying and tracking transactions. In simple words, bitcoins are "mined" for a reward by miners using the proof-of-work mechanism and then verified by the decentralized nodes of the network.
submitted by y0ujin to NovemGold [link] [comments]

Thoughts on scaling with larger blocks and universal blockchain pruning

While Bitcoin Core is experimenting with payment channels that will use the blockchain as a trusted intermediary to open and close channels, with a throughput theoretically limited only by latency (if it works at all), the scaling solution chosen by the Bitcoin Cash community is increasingly larger blocks to track demand and scale as the network grows. That, however, creates new problems.
The first problem is block propagation. Every time a node wins the proof-of-work race, it has to broadcast its block for validation and acceptance by the other nodes, which in practical terms involves uploading files through the p2p network. Larger blocks, therefore, would result in larger files to upload, and a block size limit is defined by the mean/median network bandwidth.
If I remember correctly, the "compact blocks" implementation used by both Bitcoin Core and Cash can handle up to 98% compression, which is great and allows for a tremendous increase in block size without fear of clogging up the network, but a scalable future-proof payment system must be able to theoretically handle millions of transactions per second, which would require a bandwidth measured in the Gbps.
The solution to this problem, instead of hoping that Gbps connections will be commonplace by then, is improving the compression algorithms. One such improved algorithm is Graphene, which can handle up to 99.8% compression, requiring a bandwidth measured in Mbps for a throughput in the millions tx/s.
The second problem, and perhaps the most discussed one regarding future centralization problems with Bitcoin Cash, is the storage size of the blockchain. While it sits at around 160GB right now, with increased demand and larger blocks comes an exponential increase in the blockchain database size.
One million tx/s (i.e. 600 million transactions per 10-minute block), averaging 250 bytes per transaction, would result in 150GB blocks, and an increase of around 7800TB in required storage per year. This is hardly possible for most nodes, even with the expected Moore's Law increase in consumer-grade SSD capacity, unless we experience some technological breakthrough allowing seemingly endless storage. Once again, this is not something we should expect to happen, but rather something to work around to ensure the continued decentralization of full nodes on the network.
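The post's figures check out as back-of-the-envelope arithmetic; here is the calculation in Python, using the 250-byte average and the 99.8% compression figure quoted above:

```python
# Back-of-the-envelope check of the figures quoted in the post.
tx_per_sec  = 1_000_000
block_secs  = 600
tx_bytes    = 250                          # average size assumed in the post

block_bytes = tx_per_sec * block_secs * tx_bytes
print(block_bytes / 1e9, "GB per block")   # 150.0

blocks_per_year = 6 * 24 * 365             # one block every 10 minutes
print(block_bytes * blocks_per_year / 1e12, "TB per year")  # ~7884

# With Graphene-style 99.8% compression, propagating a block within 600 s:
compressed_bits = block_bytes * 0.002 * 8
print(compressed_bits / block_secs / 1e6, "Mbps sustained")  # ~4.0
```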
One solution to this problem is "pruning" the blockchain, as a means to ditch spent outputs from the blockchain and only keep the unspent ones. However, this is only a solution to lightweight/pruned nodes (which don't relay previous blocks or Merkle paths) since it's impossible to actually prune the blockchain, because any change in past blocks will require a change in the subsequent blocks, and the new pruned blockchain would only be valid if we employed a massive amount of computational power to solve the proof-of-work puzzles of each individual block since Genesis. Hardly an easy solution.
A work-around to mining the pruned blocks would be to hard fork the network to allow for a completely invalid blockchain up to the "pruning block", from where valid blocks would continue to be built on top of. This has numerous problems, but the biggest might just be that the community will never allow for nearly 10 years worth of blocks to suddenly go invalid for the sake of scaling, and thus the hard fork would never gain traction.
EDIT: In reality, nodes running with a pruned blockchain keep track of the UTXO set (i.e. every unspent transaction output and its associated public key) and erase every transaction up to a certain number of blocks, keeping only the block headers (including the merkle root) and the merkle branches. This allows any participant to request the merkle path to a certain transaction and locally verify its authenticity without downloading the actual blocks. However, there are situations where one must validate the blocks themselves instead of trusting the nodes one is connected to, and since each block references a block which you can't expect to be valid without checking, one ends up needing to validate every previous block until all coinbase transactions are accounted for, or even all the way back to the Genesis block, so it's very important that there exist full nodes running the blockchain in its entirety.
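A minimal sketch of the merkle-path check described in the EDIT, assuming the path is supplied as a list of (sibling hash, sibling-is-on-the-right) pairs; real SPV proofs are encoded differently (e.g. in merkleblock messages), so this only illustrates the idea:

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_path(txid: bytes, path, merkle_root: bytes) -> bool:
    """Recompute the root from a txid and its branch of sibling hashes.
    `path` lists (sibling_hash, sibling_is_right) pairs from leaf to top."""
    node = txid
    for sibling, sibling_is_right in path:
        node = dsha256(node + sibling) if sibling_is_right else dsha256(sibling + node)
    return node == merkle_root

# Two-transaction block: root = dsha256(txid_a || txid_b)
txid_a, txid_b = dsha256(b"a"), dsha256(b"b")
root = dsha256(txid_a + txid_b)
assert verify_merkle_path(txid_a, [(txid_b, True)], root)
assert verify_merkle_path(txid_b, [(txid_a, False)], root)
print("merkle paths verified against the header's merkle root")
```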
I propose a solution to this problem with a "net settlement hard fork" that would consolidate the complete UTXO set in the blockchain and render the previous blocks unnecessary to ensure the security of the system.
On a predetermined block, similarly to how the reward halvings are scheduled, a special block would be mined, which I'll refer to as Exodus (actually, Exodus#1, as more of these special blocks would have to be mined from time to time). This block would contain "net settlement" transactions of all unspent outputs back to their rightful public keys. For example, the wallet that received the first coinbase transaction ever, likely belonging to Satoshi Nakamoto (1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa), has 1297 unspent outputs as of now, most of which are donations intended for Satoshi. On the Exodus block, all these outputs would be combined and sent back to that wallet as one net settled transaction of 66.87633136 BTC. Since every valid transaction requires a signature from the private key holder, these transactions would all be considered invalid on any other block except Exodus, the special block where the community has agreed to overlook this lack of signatures.
In order to prevent malicious actors from trying to steal some of the outputs from other people or even create new outputs and send them to themselves, the hard fork would be programmed far in advance, so all miners can create their own [identical] Exodus block independently, incentivized to do so because of the expected regular block reward (block_reward) and the Exodus block reward calculated by adding every individual transaction size (tx_size_i) and multiplying by some agreed upon maximum transaction fee (max_tx_fee). Therefore, the total coinbase reward for mining that block would be block_reward + max_tx_fee * Σtx_size_i.
This block would have an unlimited size, and define the end of the "first blockchain era", from Genesis to Exodus, which can be kept by anyone who chooses to do so. Since Exodus contains all the net settled transactions from the historical blockchain, full nodes aren't required to keep any previous blocks in their entirety to ensure transaction validity, effectively ditching several GB/TB of required storage.
Every new transaction referencing the old blockchain would need to be rejected, until every wallet is fully synced and starts to reference the Exodus transactions instead, which shouldn't take long, but I might be understating how disruptive this could be. Subsequent blocks could have their maximum block sizes automatically adjusted similarly to how mining difficulty is adjusted now, without worrying over the database size. New "net settlement hard forks" would have to be performed periodically to ensure a limit to the blockchain size, similarly to how Monero ensures periodic upgrades to its protocol (it hard-forks every 6 months).
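Since the Exodus block is purely hypothetical, here is an equally hypothetical Python sketch of the net-settlement step and the proposed reward formula from the preceding paragraphs; all names, sizes, and amounts are invented for illustration:

```python
from collections import defaultdict

def exodus_settle(utxos, block_reward, max_tx_fee, settle_tx_size=250):
    """Collapse a UTXO set into one net-settlement output per key and price
    the miner's reward as block_reward + max_tx_fee * sum of tx sizes.
    `utxos` is a list of (public_key_or_address, amount_in_satoshi) pairs;
    settle_tx_size is a made-up flat size per settlement transaction."""
    totals = defaultdict(int)
    for key, amount in utxos:
        totals[key] += amount  # one consolidated output per rightful owner
    reward = block_reward + max_tx_fee * settle_tx_size * len(totals)
    return dict(totals), reward

utxos = [("addr1", 5_000_000_000), ("addr1", 1_687_633_136), ("addr2", 42)]
settled, reward = exodus_settle(utxos, block_reward=625_000_000, max_tx_fee=1)
print(settled)  # {'addr1': 6687633136, 'addr2': 42}
print(reward)   # block_reward + max_tx_fee * Σ tx_size_i = 625000500
```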
This would ensure that throughput can increase as needed, and the database would be regularly pruned to ensure proper decentralization of full nodes.
Feel free to criticize the idea, and perhaps point me to better scaling solutions that I might be overlooking.
submitted by ViaLogica to btc [link] [comments]


Hash trees can be used to verify any kind of data stored, handled, and transferred in and between computers. Currently their main use is to make sure that data blocks received from other peers in a peer-to-peer network arrive undamaged and unaltered, and even to check that the other peers do not lie and send fake blocks.

In the block header, the merkle root hash is a char[32] field: a SHA256(SHA256()) hash in internal byte order. The merkle root is derived from the hashes of all transactions included in the block, ensuring that none of those transactions can be modified without modifying the header. (It is followed in the header by the 4-byte time field, a uint32_t.)

In solo mining, if none of the hashes are below the threshold, the mining hardware gets an updated block header with a new merkle root from the mining software; this new block header is created by adding extra nonce data to the coinbase field of the coinbase transaction. On the other hand, if a hash is found below the target threshold, the mining hardware returns the block header with the successful nonce.

A Merkle root streamlines the process considerably. When you start mining, you line up all of the transactions you want to include and construct a Merkle tree. You put the resulting root hash (32 bytes) in the block header. Then, when you're mining, you only need to hash the block header, instead of the whole block.

In short, a Merkle root is a simple mathematical way to verify the data in a Merkle tree: it lets peers on a peer-to-peer network confirm that the data blocks they pass between each other are whole and unaltered.
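To make the construction concrete, a minimal Python sketch of how the merkle root in the header is computed from txids, including Bitcoin's rule of duplicating the last hash when a level has an odd count (the txids here are dummies):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Hash txids pairwise with double-SHA256 until one 32-byte root remains.
    Bitcoin duplicates the last hash whenever a level has an odd count."""
    level = list(txids)
    if not level:
        raise ValueError("a block always contains at least the coinbase tx")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # odd count: pair the last hash with itself
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txids = [dsha256(bytes([i])) for i in range(3)]  # three dummy txids
print(merkle_root(txids).hex())  # the 32-byte value placed in the header
```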


