Bob The Magic Custodian



Summary: Everyone knows that when you give your assets to someone else, they always keep them safe. If this is true for individuals, it is certainly true for businesses.
Custodians always tell the truth and manage funds properly. They won't have any interest in taking the assets as an exchange operator would. Auditors tell the truth and can't be misled. That's because organizations that are regulated are incapable of lying and don't make mistakes.

First, some background. Here is a summary of how custodians make us more secure:

Previously, we might give Alice our crypto assets to hold. There were risks:

But "no worries", Alice has a custodian named Bob. Bob is dressed in a nice suit. He knows some politicians. And he drives a Porsche. "So you have nothing to worry about!". And look at all the benefits we get:
See - all problems are solved! All we have to worry about now is:
It's pretty simple. Before we had to trust Alice. Now we only have to trust Alice, Bob, and all the ways in which they communicate. Just think of how much more secure we are!

"On top of that", Bob assures us, "we're using a special wallet structure". Bob shows Alice a diagram. "We've broken the balance up and store it in lots of smaller wallets. That way", he assures her, "a thief can't take it all at once". And he points to a historic case where a large sum was taken "because it was stored in a single wallet... how stupid".
"Very early on, we used to have all the crypto in one wallet", he said, "and then one Christmas a hacker came and took it all. We call him the Grinch. Now we individually wrap each crypto and stick it under a binary search tree. The Grinch has never been back since."

"As well", Bob continues, "even if someone were to get in, we've got insurance. It covers all thefts and even coercion, collusion, and misplaced keys - only subject to the policy terms and conditions." And with that, he pulls out a phone-book sized contract and slams it on the desk with a thud. "Yep", he continues, "we're paying top dollar for one of the best policies in the country!"
"Can I read it?' Alice asks. "Sure," Bob says, "just as soon as our legal team is done with it. They're almost through the first chapter." He pauses, then continues. "And can you believe that sales guy Mike? He has the same year Porsche as me. I mean, what are the odds?"

"Do you use multi-sig?", Alice asks. "Absolutely!" Bob replies. "All our engineers are fully trained in multi-sig. Whenever we want to set up a new wallet, we generate 2 separate keys in an air-gapped process and store them in this proprietary system here. Look, it even requires the biometric signature from one of our team members to initiate any withdrawal." He demonstrates by pressing his thumb into the display. "We use a third-party cloud validation API to match the thumbprint and authorize each withdrawal. The keys are also backed up daily to an off-site third-party."
"Wow that's really impressive," Alice says, "but what if we need access for a withdrawal outside of office hours?" "Well that's no issue", Bob says, "just send us an email, call, or text message and we always have someone on staff to help out. Just another part of our strong commitment to all our customers!"

"What about Proof of Reserve?", Alice asks. "Of course", Bob replies, "though rather than publish any blockchain addresses or signed transaction, for privacy we just do a SHA256 refactoring of the inverse hash modulus for each UTXO nonce and combine the smart contract coefficient consensus in our hyperledger lightning node. But it's really simple to use." He pushes a button and a large green checkmark appears on a screen. "See - the algorithm ran through and reserves are proven."
"Wow", Alice says, "you really know your stuff! And that is easy to use! What about fiat balances?" "Yeah, we have an auditor too", Bob replies, "Been using him for a long time so we have quite a strong relationship going! We have special books we give him every year and he's very efficient! Checks the fiat, crypto, and everything all at once!"

"We used to have a nice offline multi-sig setup we've been using without issue for the past 5 years, but I think we'll move all our funds over to your facility," Alice says. "Awesome", Bob replies, "Thanks so much! This is perfect timing too - my Porsche got a dent on it this morning. We have the paperwork right over here." "Great!", Alice replies.
And with that, Alice gets out her pen and Bob gets the contract. "Don't worry", he says, "you can take your crypto-assets back anytime you like - just subject to our cancellation policy. Our annual management fees are also super low and we don't adjust them often".

How many holes have to exist for your funds to get stolen?
Just one.

Why are we taking a powerful offline multi-sig setup, used globally in hundreds of different (and often lacking) regulatory environments with zero breaches to date, and circumventing it with a demonstrably weaker third-party layer? And paying a great expense to do so?
If you go through the list of breaches in the past 2 years at highly credible organizations; the list of major corporate frauds (only the ones we know about); the list of all the times platforms have lost funds; the list of times and ways that people have lost their crypto to identity theft, hot wallet exploits, extortion, and so on - and then you go through this custodian with a fine-tooth comb and truly believe they have value to add far beyond what you could add yourself - even then, sticking your funds in a wallet (or set of wallets) they control exclusively is the absolute worst possible way to take advantage of that security.

The best way to add security for crypto-assets is to make a stronger multi-sig. With one custodian, what you are doing is giving them your cryptocurrency and hoping they're honest, competent, and flawlessly secure. It's no different than storing it on a really secure exchange. Maybe the insurance will cover you. Didn't work for Bitpay in 2015. Didn't work for Yapizon in 2017. Insurance has never paid a claim in the entire history of cryptocurrency. But maybe you'll get lucky. Maybe your exact scenario will buck the trend and be what they're willing to cover. After the large deductible and hopefully without a long and expensive court battle.

And you want to advertise this increase in risk, the lapse of judgement, an accident waiting to happen, as though it's some kind of benefit to customers ("Free institutional-grade storage for your digital assets.")? And then some people are writing to the OSC that custodians should be mandatory for all funds on every exchange platform? That this somehow will make Canadians as a whole more secure or better protected compared with standard air-gapped multi-sig? On what planet?

Most of the problems in Canada stemmed from one thing - a lack of transparency. If Canadians had known what a joke Quadriga was - it wouldn't have grown to lose $400m from hard-working Canadians from coast to coast to coast. And Gerald Cotten would be in jail, not wherever he is now (at best, rotting peacefully). EZ-BTC and mister Dave Smilie would have been a tiny little scam to his friends, not a multi-million dollar fraud. Einstein would have got their act together or been shut down BEFORE losing millions and millions more in people's funds generously donated to criminals. MapleChange wouldn't have even been a thing. And maybe we'd know a little more about CoinTradeNewNote - like how much was lost in there. Almost all of the major losses with cryptocurrency exchanges involve deception with unbacked funds.
So it's great to see transparency reports from BitBuy and ShakePay where someone independently verified the backing. The only thing we don't have is:
It's not complicated to validate cryptocurrency assets. They need to exist, they need to be spendable, and they need to cover the total balances. There are plenty of credible people and firms across the country that have the capacity to reasonably perform this validation. Having more frequent checks by different, independent, parties who publish transparent reports is far more valuable than an annual check by a single "more credible/official" party who does the exact same basic checks and may or may not publish anything. Here's an example set of requirements that could be mandated:
There are ways to structure audits such that neither crypto assets nor customer information are ever put at risk, and both can still be properly validated and publicly verifiable. There are also ways to structure audits such that they are completely reasonable for small platforms and don't inhibit innovation in any way. By making the process as reasonable as possible, we can completely eliminate any reason/excuse that an honest platform would have for not being audited. That is arguably far more important than any incremental improvement we might get from mandating "the best of the best" accountants. Right now we have nothing mandated and tons of Canadians using offshore exchanges with no oversight whatsoever.

Transparency does not prove crypto assets are safe. CoinTradeNewNote, Flexcoin ($600k), and Canadian Bitcoins ($100k) are examples where crypto-assets were breached from platforms in Canada. All of them were online wallets and used no multi-sig as far as any records show. This is consistent with what we see globally - air-gapped multi-sig wallets have an impeccable record, while other schemes tend to suffer breach after breach. We don't actually know how much CoinTrader lost because there was no visibility. Rather than publishing details of what happened, the co-founder of CoinTrader silently moved on to found another platform - the "most trusted way to buy and sell crypto" - a site that has no information whatsoever (that I could find) on the storage practices and a FAQ advising that “[t]rading cryptocurrency is completely safe” and that having your own wallet is “entirely up to you! You can certainly keep cryptocurrency, or fiat, or both, on the app.” Doesn't sound like much was learned here, which is really sad to see.
It's not that complicated or unreasonable to set up a proper hardware wallet. Multi-sig can be learned in a single course. Something the equivalent complexity of a driver's license test could prevent all the cold storage exploits we've seen to date - even globally. Platform operators have a key advantage in detecting and preventing fraud - they know their customers far better than any custodian ever would. The best job that custodians can do is to find high integrity individuals and train them to form even better wallet signatories.
Rather than mandating that all platforms expose themselves to arbitrary third party risks, regulations should center around ensuring that all signatories are background-checked, properly trained, and using proper procedures. We also need to make sure that signatories are empowered with rights and responsibilities to reject and report fraud. They need to know that they can safely challenge and delay a transaction - even if it turns out they made a mistake.
We need to have an environment where mistakes are brought to the surface and dealt with. Not one where firms and people feel the need to hide what happened. In addition to a knowledge-based test, an auditor can privately interview each signatory to make sure they're not in coercive situations, and we should make sure they can freely and anonymously report any issues without threat of retaliation.
A proper multi-sig has each signature held by a separate person and is governed by policies and mutual decisions instead of a hierarchy. It includes at least one redundant signature. For best results, 3of4, 3of5, 3of6, 4of5, 4of6, 4of7, 5of6, or 5of7.
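To make the threshold rule concrete, here is a minimal Python sketch (the signer names and the 3-of-5 parameters are made up for illustration; in practice the rule is enforced by the wallet's script or contract, not by application code):

```python
# Minimal sketch of an m-of-n approval rule (here 3-of-5).
AUTHORIZED_SIGNERS = {"alice", "bob", "carol", "dan", "erin"}  # n = 5
THRESHOLD = 3                                                  # m = 3

def withdrawal_approved(approvals: set) -> bool:
    """Approve only if at least THRESHOLD distinct, authorized signers
    have signed off; unknown or duplicate signers add nothing."""
    return len(approvals & AUTHORIZED_SIGNERS) >= THRESHOLD

print(withdrawal_approved({"alice", "bob"}))             # False: only 2 of 3
print(withdrawal_approved({"alice", "bob", "mallory"}))  # False: mallory isn't a signer
print(withdrawal_approved({"alice", "bob", "carol"}))    # True: threshold met
```

With n greater than m, any single signer can lose a key, be coerced, or go rogue without either freezing or losing the funds - that is the redundancy the ratios above provide.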

History has demonstrated over and over again the risk of hot wallets even to highly credible organizations. Nonetheless, many platforms have hot wallets for convenience. While such losses are generally compensated by platforms without issue (for example Poloniex, Bitstamp, Bitfinex, Gatecoin, Coincheck, Bithumb, Zaif, CoinBene, Binance, Bitrue, Bitpoint, Upbit, VinDAX, and now KuCoin), the public tends to focus more on cases that didn't end well. Regardless of what systems are employed, there is always some level of risk. For that reason, most members of the public would prefer to see third party insurance.
Rather than trying to convince third party profit-seekers to provide comprehensive insurance and then relying on an expensive and slow legal system to enforce against whatever legal loopholes they manage to find each and every time something goes wrong, insurance could be run through multiple exchange operators and regulators, with the shared interest of having a reputable industry, keeping costs down, and taking care of Canadians. For example, a 4 of 7 multi-sig insurance fund held between 5 independent exchange operators and 2 regulatory bodies. All Canadian exchanges could pay premiums at a set rate based on their needed coverage, with a higher price paid for hot wallet coverage (anything not an air-gapped multi-sig cold wallet). Such a model would be much cheaper to manage, offer better coverage, and be much more reliable to payout when needed. The kind of coverage you could have under this model is unheard of. You could even create something like the CDIC to protect Canadians who get their trading accounts hacked if they can sufficiently prove the loss is legitimate. In cases of fraud, gross negligence, or insolvency, the fund can be used to pay affected users directly (utilizing the last transparent balance report in the worst case), something which private insurance would never touch. While it's recommended to have official policies for coverage, a model where members vote would fully cover edge cases. (Could be similar to the Supreme Court where justices vote based on case law.)
Such a model could fully protect all Canadians across all platforms. You can have a fiat coverage governed by legal agreements, and crypto-asset coverage governed by both multi-sig and legal agreements. It could be practical, affordable, and inclusive.

Now, we are at a crossroads. We can happily give up our freedom, our innovation, and our money. We can pay hefty expenses to auditors, lawyers, and regulators year after year (and make no mistake - this cost will grow to many millions or even billions as the industry grows - and it will be borne by all Canadians on every platform because platforms are not going to eat up these costs at a loss). We can make it nearly impossible for any new platform to enter the marketplace, forcing Canadians to use the same stagnant platforms year after year. We can centralize and consolidate the entire industry into 2 or 3 big players and have everyone else fail (possibly to heavy losses of users of those platforms). And when a flawed security model doesn't work and gets breached, we can make it even more complicated with even more people in suits making big money doing the job that blockchain was supposed to do in the first place. We can build a system which is so intertwined and dependent on big government, traditional finance, and central bankers that its future depends entirely on that of the fiat system, of fractional banking, and of government bail-outs. If we choose this path, as history has shown us over and over again, we cannot go back, save for revolution. Our children and grandchildren will still be paying the consequences of what we decided today.
Or, we can find solutions that work. We can maintain an open and innovative environment while making the adjustments we need to make to fully protect Canadian investors and cryptocurrency users, giving easy and affordable access to cryptocurrency for all Canadians on the platform of their choice, and creating an environment in which entrepreneurs and problem solvers can bring those solutions forward easily. None of the above precludes innovation in any way, or adds any unreasonable cost - and these three policies would demonstrably eliminate or resolve all 109 historic cases as studied here. That's every single case researched so far going back to 2011, not just in Canada but globally as well.
Unfortunately, finding answers is the least challenging part. Far more challenging is to get platform operators and regulators to agree on anything. My last post got no response whatsoever, and while the OSC has told me they're happy for industry feedback, I believe my opinion alone is fairly meaningless. This takes the whole community working together to solve. So please let me know your thoughts. Please take the time to upvote and share this with people. Please - let's get this solved and not leave it up to other people to do.

Facts/background/sources (skip if you like):



Thoughts?
submitted by azoundria2 to QuadrigaInitiative

A breakdown of the aelf blockchain whitepaper — Part 2



Breaking down the aelf side-chain

Cloud computing, parallel processing, and AEDPoS have greatly improved the execution performance of any kind of smart contract, but when they are applied to enterprise-level scenarios, new problems crop up. To begin with, in software design it is a rather bad idea to program all the methods in the same class. We instead write a series of classes that inherit from a base class, in order to decouple functionalities and keep each class extensible. The same applies to blockchain design. Second, since all data and transactions are accessible to anyone through a blockchain explorer, if we put the smart contracts and data of different enterprises or government sectors on a single blockchain, everyone can see them, which means there is no data privacy. Although there are encryption techniques that can mask data, such as zero-knowledge proofs, it is always better to put the data of different enterprises on different blockchains.
Based on these considerations, long before other projects even realized it, aelf proposed that side-chain technology should be applied to this scenario. Unfortunately, for someone who is new to blockchain, it is almost impossible to understand how a side chain works. A side chain is not what the name literally suggests: it is not subordinate to the main chain. On the contrary, a side chain is a blockchain distributed system with the same functions and nodes as a main chain (say, the aelf blockchain). As mentioned above, we can put the data of different enterprises on different blockchains. This means we can build many blockchains and work magic (of course, not magic in the literal sense) to make these chains connect to the aelf main chain (in fact, we can call any of these blockchains a main chain and the rest side chains). Currently, the most popular method of connecting any two blockchains, which we also call cross-chain, is using a middle-man. When we want to use bitcoin to play a decentralized game on Ethereum, we send a transaction with some amount of bitcoin to a locking bitcoin address; the middle-man then exchanges the locked BTC for ETH at a certain exchange rate and allocates to us the equivalent amount of ETH on Ethereum, which we can use for playing games.
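As a rough sketch of that middle-man flow (everything here - the names, the rate, the in-memory balances - is hypothetical; a real bridge settles both legs on-chain):

```python
# Toy model of the lock-and-credit middle-man described above.
LOCKED_BTC = {}          # locking-address ledger: user -> BTC locked
ETH_BALANCES = {}        # balances credited on the other chain
BTC_TO_ETH_RATE = 15.0   # assumed exchange rate

def lock_and_credit(user: str, btc_amount: float) -> None:
    """User locks BTC; the middle-man observes the lock and credits
    the equivalent ETH on Ethereum."""
    LOCKED_BTC[user] = LOCKED_BTC.get(user, 0.0) + btc_amount
    ETH_BALANCES[user] = ETH_BALANCES.get(user, 0.0) + btc_amount * BTC_TO_ETH_RATE

lock_and_credit("alice", 0.5)
print(ETH_BALANCES["alice"])  # 7.5 ETH now spendable in the game
```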
But in aelf, we use a metadata indexing method, which is more straightforward. Unlike other projects built on the blockchains of already successful projects (such as Ethereum, or the HyperLedger Fabric framework for consortium blockchains), the aelf team has written all the code and built the infrastructure from scratch. From the beginning, the aelf team defined in C# what the data structures of a blockchain, a block, a transaction, etc. should look like. In aelf's blockchain data structure, there is an attribute called blockchain ID, which is a unique hash; and in the block data structure, there are attributes for the blockchain ID, the Merkle tree root, and a related side chain block list. One more important thing: all of aelf's data structures are serialized and stored in Redis (a popular key-value pair database system), and so is the side chain information. As a result, as the aelf main chain grows with block production by BPs, side chains can send transactions to cross-chain contracts, which then execute the related code to connect to the main chain's network port and request the main chain to index the side chain block, paying an indexing fee.
The core issue here is how to index a side chain: when a main chain (the block data structure on the main chain, or the data records with the main chain ID in Redis) receives a request from a side chain, it adds the side chain's block header data structure to its related side chain block list, which means we have logically indexed, or related, that side chain. We mentioned that there is also a blockchain ID in each block; this attribute allows a main chain to index blocks from different side chains. When a user on a main chain wants to access data on a side chain or vice versa, they just need to find the target block on the main chain and its related side chain block list, and then find the target block on the side chain via key indexing.
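A rough Python sketch of the structures just described (the field names are guesses based on this description, not aelf's actual C# definitions):

```python
from dataclasses import dataclass, field

@dataclass
class BlockHeader:
    blockchain_id: str   # unique hash identifying which chain this block belongs to
    height: int
    merkle_root: str     # Merkle tree root of the block's transactions
    side_chain_blocks: list = field(default_factory=list)  # related side chain block list

def index_side_chain_block(main_block: BlockHeader, side_header: BlockHeader) -> None:
    """The main chain indexes a side chain by recording the side chain's
    block header in its related side chain block list."""
    main_block.side_chain_blocks.append(side_header)

def find_side_chain_block(main_block: BlockHeader, blockchain_id: str):
    """Key indexing: locate the target side-chain block via its chain ID."""
    for header in main_block.side_chain_blocks:
        if header.blockchain_id == blockchain_id:
            return header
    return None

main = BlockHeader("aelf-main", height=100, merkle_root="ab12...")
banking = BlockHeader("banking-side", height=40, merkle_root="cd34...")
index_side_chain_block(main, banking)
print(find_side_chain_block(main, "banking-side").height)  # 40
```

Because a side chain keeps the same structure, it can hold a related list of its own, which is exactly what produces the hierarchical tree described below.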
As we will explain later, blockchains for different application scenarios generate blocks at different speeds. Under such circumstances, a chain with slower speed might index many blocks from a chain that produces blocks faster. This method can be applied to scenarios such as forking.
In practice, we can build any number of blockchains, and relate it via indexing to the aelf main chain, with a specific category of smart contracts running on each of them. For example, we can allow only banking-related smart contracts deployed on a specific blockchain, and e-commerce smart contracts on another. Our whitepaper summarizes it best:
One chain, one contract.
Moreover, the indexing method can arrange many blockchains into a hierarchical tree structure, the root being the so-called main chain. That's because a related blockchain can in turn index another blockchain as its side chain, and the process can keep going. Logically, this is in perfect accordance with hierarchical taxonomy: for example, the financial sector has many subcategories, such as banking, lending, investment and insurance; and under investment banking there are venture capital, investment banks, etc. Each subcategory is supported by an indexed blockchain.
So how do these blockchains collaborate in a distributed system? First we need to know that any node in a distributed system is just a software instance running on a computer, or a process. In TCP/IP, a node is allocated a port number, so we can run any number of such instances on one computer. However, each instance has its own port number: we can run several blockchain nodes, one IPFS node, and one BitTorrent node simultaneously. In aelf, you first start a main chain instance, and then you can build and run a side chain instance. Transactions broadcast on the side chain are collected by the BP nodes (block production nodes) on the main chain. When smart contracts deployed on the side chain are triggered, the BP and full nodes on the main chain will run them.

Aelf — a blockchain based operating system

To perfect the design of our software system, aelf made the system extensible, flexible and pluggable, just as there are thousands of Linux distributions built on a single Linux kernel. As Ethereum founder Vitalik Buterin has explained, Ethereum can be seen as a world computer because there are lots of smart contracts running on it, and the contract execution results are consistent across all the distributed nodes around the world. This idea is also embedded in aelf's system, and we call it a "blockchain infrastructure operating system", or a distributed operating system.
Just like any OS, aelf has a kernel and a shell. In fact, aelf's kernel is not something like a Linux kernel; it is just an analogy. There is a special concept in aelf's kernel called the minimum viable blockchain system, which defines the most fundamental aspects of a blockchain. If a developer wants to create a new blockchain system or a new blockchain project, he doesn't have to start from scratch; instead, he can directly extend and customize the aelf blockchain open-source code. The technologies described above are all included in the minimum viable blockchain system. With these, anyone can customize the following (a short sketch follows the list):
  • Block property: block data structure, block packaging speed, transaction data structure, etc.
  • Consensus type: AEDPoS is used by default, but you can also use incentive consensus like PoW or PoS, or consensus algorithms from traditional distributed systems, such as Practical Byzantine Fault Tolerance (PBFT). In fact, tolerating f faulty nodes out of 3f+1 total is the theoretical upper limit for any distributed system to reach consensus, a property known as Byzantine Fault Tolerance (BFT). Early BFT algorithms were largely impractical, but in 1999 a much more efficient algorithm came along: PBFT. In scenarios like private or consortium blockchains where there is no need for an incentive model, PBFT is a good option.
  • Smart contract collection: In aelf, there are many predefined smart contracts that can be used directly by other contracts, such as token contract, cross-chain contract (also called CCTP, or cross chain transfer protocol), consensus contract, organization voting contracts, etc. Of course, you can also create your own contract with a brand new implementation logic.
  • Others.
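To make that pluggability concrete, here is a minimal sketch (illustrative names only, not aelf's actual API) of a kernel that fixes block production while letting each chain swap in its own consensus:

```python
from abc import ABC, abstractmethod

class Consensus(ABC):
    """Pluggable consensus interface: each chain chooses an implementation."""
    @abstractmethod
    def select_producer(self, round_number: int, nodes: list) -> str: ...

class RoundRobinDPoS(Consensus):
    """Stand-in for AEDPoS: elected nodes take turns producing blocks."""
    def select_producer(self, round_number, nodes):
        return nodes[round_number % len(nodes)]

class MinimumViableChain:
    """The 'kernel': block production logic is fixed, consensus is swapped in."""
    def __init__(self, consensus: Consensus, nodes: list):
        self.consensus = consensus
        self.nodes = nodes

    def produce_block(self, round_number: int) -> str:
        producer = self.consensus.select_producer(round_number, self.nodes)
        return f"block {round_number} produced by {producer}"

chain = MinimumViableChain(RoundRobinDPoS(), ["bp1", "bp2", "bp3"])
print(chain.produce_block(7))  # block 7 produced by bp2
```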

Summary

So this is our breakdown of the aelf blockchain whitepaper. In previous articles, we first introduced two basic concepts which are often misinterpreted by other articles. After helping you get these two concepts straight, we then introduced aelf’s vast arsenal of powerful technology. If these articles helped you understand the aelf blockchain better, then I have reached my goal. But I must advise you to read the whitepaper for a more detailed explanation. With all this knowledge at your disposal, I believe you will be much more comfortable developing DApps on aelf.
Check Part 1 here: https://medium.com/aelfblockchain/a-breakdown-of-the-aelf-blockchain-whitepaper-part-1-a63fc2e3e2e7
submitted by Floris-Jan to aelfofficial [link] [comments]

DFINITY Research Report

Author: Gamals Ahmed, CoinEx Business Ambassador
ABSTRACT
The DFINITY blockchain computer provides a secure, performant and flexible consensus mechanism. At its core, DFINITY contains a decentralized randomness beacon, which acts as a verifiable random function (VRF) that produces a stream of outputs over time. The novel technique behind the beacon relies on the existence of a unique-deterministic, non-interactive, DKG-friendly threshold signatures scheme. The only known examples of such a scheme are pairing-based and derived from BLS.
The DFINITY blockchain is layered on top of the DFINITY beacon and uses the beacon as its source of randomness for leader selection and leader ranking. A "weight" is attributed to a chain based on the ranks of the leaders who propose the blocks in the chain, and that weight is used to select between competing chains. The blockchain is further hardened by a notarization process which dramatically improves the time to finality and eliminates the nothing-at-stake and selfish mining attacks.
DFINITY consensus algorithm is made to scale through continuous quorum selections driven by the random beacon. In practice, DFINITY achieves block times of a few seconds and transaction finality after only two confirmations. The system gracefully handles temporary losses of network synchrony including network splits, while it is provably secure under synchrony.

1.INTRODUCTION

DFINITY is building a new kind of public decentralized cloud computing resource: a blockchain-based platform with unlimited capacity, performance and algorithmic governance shared by the world, with the capability to power autonomous self-updating software systems, enabling organizations to design and deploy custom-tailored cloud computing projects and thereby reduce enterprise IT system costs by 90%.
DFINITY aims to explore new territory and prove that the blockchain opportunity is far broader and deeper than anyone has hitherto realized, unlocking the opportunity with powerful new crypto.
Although a standalone project, DFINITY is not maximalist minded and is a great supporter of Ethereum.
DFINITY’s consensus mechanism has four layers: notary (provides fast finality guarantees to clients and external observers), blockchain (builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon), random beacon (provides the source of randomness for all higher layers like smart contract applications), and identity (provides a registry of all clients).
DFINITY’s consensus mechanism has four layers

Figure1: DFINITY’s consensus mechanism layers
1. Identity layer:
Active participants in the DFINITY network are called clients. Clients are registered with permanent, pseudonymous identities. Moreover, DFINITY supports open membership by providing a protocol for registering new clients, who deposit a stake with an insurance period. This is the responsibility of the first layer.
2. Random Beacon layer:
Provides the source of randomness (VRF) for all higher layers including applications (smart contracts). The random beacon in the second layer is an unbiasable, verifiable random function (VRF) that is produced jointly by registered clients. Each random output of the VRF is unpredictable by anyone until just before it becomes available to everyone. This is a key technology of the DFINITY system, which relies on a threshold signature scheme with the properties of uniqueness and non-interactivity.

3. Blockchain layer:
The third layer deploys the "probabilistic slot protocol" (PSP). This protocol ranks the clients for each height of the chain, in an order that is derived deterministically from the unbiased output of the random beacon for that height. A weight is then assigned to block proposals based on the proposer's rank such that blocks from clients at the top of the list receive a higher weight. Forks are resolved by giving favor to the "heaviest" chain in terms of accumulated block weight - quite similar to how traditional proof-of-work consensus is based on the highest accumulated amount of work.
The first advantage of the PSP protocol is that the ranking is available instantaneously, which allows for a predictable, constant block time. The second advantage is that there is always a single highest-ranked client, which allows for a homogenous network bandwidth utilization. Instead, a race between clients would favor a usage in bursts.
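A toy sketch of those ranking-and-weight mechanics (the real protocol derives ranks from the random beacon's BLS output; a seeded hash stands in here, and all numbers are illustrative):

```python
import hashlib

def rank_clients(beacon_output: str, clients: list) -> list:
    """Derive a deterministic per-height ranking from the beacon output;
    every node computes the same order, so ranking is instantaneous."""
    return sorted(clients,
                  key=lambda c: hashlib.sha256((beacon_output + c).encode()).hexdigest())

def chain_weight(proposer_ranks: list, num_clients: int) -> int:
    """Blocks from higher-ranked (rank 0) proposers weigh more;
    forks resolve in favor of the heaviest chain."""
    return sum(num_clients - rank for rank in proposer_ranks)

clients = ["c1", "c2", "c3", "c4"]
print(rank_clients("beacon-round-42", clients))  # identical on every node
print(chain_weight([0, 0, 1], len(clients)))     # 11: mostly top-ranked proposers
print(chain_weight([2, 3, 0], len(clients)))     # 7: the lighter fork loses
```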
4. Notarization layer:
Provides fast finality guarantees to clients and external observers. DFINITY deploys the novel technique of block notarization in its fourth layer to speed up finality. A notarization is a threshold signature under a block created jointly by registered clients. Only notarized blocks can be included in a chain. RSA-based alternatives exist but suffer from the impracticality of setting up the threshold keys without a trusted dealer.
DFINITY achieves its high speed and short block times exactly because notarization is not full consensus.
DFINITY does not suffer from selfish mining attacks or the nothing-at-stake problem because the notarization step makes it impossible for an adversary to build and maintain a chain of linked, notarized blocks in secret.
DFINITY’s consensus is designed to operate on a network of millions of clients. To en- able scalability to this extent, the random beacon and notarization protocols are designed such as that they can be safely and efficiently delegated to a committee

1.1 OVERVIEW ABOUT DFINITY

DFINITY is a blockchain-based cloud-computing project that aims to develop an open, public network, referred to as the “internet computer,” to host the next generation of software and data. and it is a decentralized and non-proprietary network to run the next generation of mega-applications. It dubbed this public network “Cloud 3.0”.
DFINITY is a third generation virtual blockchain network that sets out to function as an "intelligent decentralised cloud," strongly focused on delivering a viable corporate cloud solution. The DFINITY project is overseen, supported and promoted by DFINITY Stiftung, a not-for-profit foundation based in Zug, Switzerland.
DFINITY is a decentralized network design whose protocols generate a reliable “virtual blockchain computer” running on top of a peer-to-peer network upon which software can be installed and can operate in the tamperproof mode of smart contracts.
DFINITY introduces algorithmic governance in the form of a “Blockchain Nervous System” that can protect users from attacks and help restart broken systems, dynamically optimize network security and efficiency, upgrade the protocol and mitigate misuse of the platform, for example by those wishing to run illegal or immoral systems.
DFINITY is an Ethereum-compatible smart contract platform that is implementing some revolutionary ideas to address blockchain performance, scaling, and governance. Although DFINITY could pose a credible existential threat to Ethereum, the project is pursuing a coevolutionary strategy by contributing funding and effort to Ethereum projects and freely offering its technology to Ethereum for adoption. DFINITY has labeled itself Ethereum's "crazy sister" to express its close genetic resemblance to Ethereum, differentiated by its obsession with performance and its neuron-inspired governance model.
Dfinity raised $61 million from Andreessen Horowitz and Polychain Capital in a February 2018 funding round. At the time, Dfinity said it wanted to create an "internet computer" to cut the costs of running cloud-based business applications. A further $102 million funding round in August 2018 brought the project's total funding to $195 million.
In May 2018, Dfinity announced plans to distribute around $35 million worth of Dfinity tokens in an airdrop. It was part of the company’s plan to create a “Cloud 3.0.” Because of regulatory concerns, none of the tokens went to US residents.
DFINITY would broaden and strengthen the EVM ecosystem by giving applications a choice of platforms with different characteristics. However, if DFINITY succeeds in delivering a fully EVM-compatible smart contract platform with higher transaction throughput, faster confirmation times, and governance mechanisms that can resolve public disputes without causing community splits, then it will represent a clearly superior choice for deploying new applications and, as its network effects grow, an attractive place to bring existing ones. Of course, the challenge for DFINITY will be to deliver on these promises while meeting the security demands of a public chain with significant value at risk.

1.1.1 DFINITY FUTURE

  • DFINITY aims to explore new blockchain territory related to the original goals of the Ethereum project and is sometimes considered “Ethereum’s crazy sister.”
  • DFINITY is developing blockchain-based infrastructure to support a new style of the internet (akin to Ethereum’s “World Computer”), one in which the internet itself will support software applications and data rather than various cloud hosting providers.
  • The project suggests this reinvented software platform can simplify the development of new software systems, reduce the human capital needed to maintain and secure data, and preserve user data privacy.
  • Dfinity aims to reduce the costs of cloud services by creating a decentralized “internet computer” which may launch in 2020
  • Dfinity claims transactions on its network are finalized in 3–5 seconds, compared to 1 hour for Bitcoin and 10 minutes for Ethereum.

1.1.2 DFINITY’S VISION

DFINITY’s vision is its new internet infrastructure can support a wide variety of end-user and enterprise applications. Social media, messaging, search, storage, and peer-to-peer Internet interactions are all examples of functionalities that DFINITY plans to host atop its public Web 3.0 cloud-like computing resource. In order to provide the transaction and data capacity necessary to support this ambitious vision, DFINITY features a unique consensus model (dubbed Threshold Relay) and algorithmic governance via its Blockchain Nervous System (BNS) — sometimes also referred to as the Network Nervous System or NNS.

1.2 DFINITY COMMUNITY

The DFINITY community brings people and organizations together to learn and collaborate on products that help steward the next-generation of internet software and services. The Internet Computer allows developers to take on the monopolization of the internet, and return the internet back to its free and open roots. We’re committed to connecting those who believe the same through our events, content, and discussions.


1.3 DFINITY ROADMAP (TIMELINE)

February 15, 2017
Ethereum based community seed round raises 4M Swiss francs (CHF)
The DFINITY Stiftung, a not-for-profit foundation entity based in Zug, Switzerland, raised the round. The foundation held $10M of assets as of April 2017.
February 8, 2018
Dfinity announces a $61M fundraising round led by Polychain Capital and Andreessen Horowitz
The $61M round was led by Polychain Capital and Andreessen Horowitz, along with a DFINITY Ecosystem Venture Fund which will be used to support projects developing on the DFINITY platform; together with an Ethereum-based raise in 2017, it brings the total funding for the project to over $100 million. This was the first cryptocurrency token that Andreessen Horowitz invested in, in a deal led by Chris Dixon.
August 2018
Dfinity raises a $102,000,000 venture round from Multicoin Capital, Village Global, Aspect Ventures, Andreessen Horowitz, Polychain Capital, Scalar Capital, Amino Capital and SV Angel.
January 23, 2020
Dfinity launches an open source platform aimed at the social networking giants

2.DFINITY TECHNOLOGY

Dfinity is building what it calls the internet computer, a decentralized technology spread across a network of independent data centers that allows software to run anywhere on the internet rather than in server farms that are increasingly controlled by large firms, such as Amazon Web Services or Google Cloud. This week Dfinity is releasing its software to third-party developers, who it hopes will start making the internet computer’s killer apps. It is planning a public release later this year.
At its core, the DFINITY consensus mechanism is a variation of the Proof of Stake (PoS) model, but offers an alternative to traditional Proof of Work (PoW) and delegated PoS (dPoS) networks. Threshold Relay intends to strike a balance between inefficiencies of decentralized PoW blockchains (generally characterized by slow block times) and the less robust game theory involved in vote delegation (as seen in dPoS blockchains). In DFINITY, a committee of “miners” is randomly selected to add a new block to the chain. An individual miner’s probability of being elected to the committee proposing and computing the next block (or blocks) is proportional to the number of dfinities the miner has staked on the network. Further, a “weight” is attributed to a DFINITY chain based on the ranks of the miners who propose blocks in the chain, and that weight is used to choose between competing chains (i.e. resolve chain forks).
A decentralized random beacon manages the random selection process of temporary block producers. This beacon is a verifiable random function (VRF), a pseudo-random function that provides publicly verifiable proofs of its outputs' correctness. A core component of the random beacon is the use of Boneh-Lynn-Shacham (BLS) signatures. By leveraging the BLS signature scheme, the DFINITY protocol ensures no actor in the network can determine the outcome of the next random assignment.
Dfinity is introducing a new standard, which it calls the internet computer protocol (ICP). These new rules let developers move software around the internet as well as data. All software needs computers to run on, but with ICP the computers could be anywhere. Instead of running on a dedicated server in Google Cloud, for example, the software would have no fixed physical address, moving between servers owned by independent data centers around the world. “Conceptually, it’s kind of running everywhere,” says Dfinity engineering manager Stanley Jones.
DFINITY also features a native programming language, called ActorScript (name may be subject to change), and a virtual machine for smart contract creation and execution. The new smart contract language is intended to simplify the management of application state for programmers via an orthogonal persistence environment (which means active programs are
not required to retrieve or save their state). All ActorScript contracts are eventually compiled down to WebAssembly instructions so the DFINITY virtual machine layer can execute the logic of applications running on the network. The advantage of using the WebAssembly standard is that all major browsers support it and a variety of programming languages can compile down to Wasm (not just ActorScript).
Dfinity is moving fast. Recently, Dfinity showed off a TikTok clone called CanCan. In January it demoed a LinkedIn-alike called LinkedUp. Neither app is being made public, but they make a convincing case that apps made for the internet computer can rival the real things.

2.1 DFINITY CORE APPLICATIONS

The DFINITY cloud has two core applications:
  1. Enabling the re-engineering of business: DFINITY ambitiously aims to facilitate the re-engineering of mass-market services (such as Web Search, Ridesharing Services, Messaging Services, Social Media, Supply Chain, etc) into open source businesses that leverage autonomous software and decentralised governance systems to operate and update themselves more efficiently.
  2. Enable the re-engineering of enterprise IT systems to reduce costs: DFINITY seeks to re-engineer enterprise IT systems to take advantage of the unique properties that blockchain computer networks provide.
At present, computation on blockchain-based computer networks is far more expensive than traditional, centralised solutions (Amazon Web Services, Microsoft Azure, Google Cloud Platform, etc). Despite increasing computational cost, DFINITY intends to lower net costs “by 90% or more” through reducing the human capital cost associated with sustaining and supporting these services.
Whilst conceptually similar to Ethereum, DFINITY employs original and new cryptography methods and protocols (crypto:3) at the network level, in concert with AI and network-fuelled systemic governance (Blockchain Nervous System — BNS) to facilitate Corporate adoption.
DFINITY recognises that different users value different properties and sees itself as more of a fully compatible extension of the Ethereum ecosystem rather than a competitor of the Ethereum network.
In the future, DFINITY hopes that much of their “new crypto might be used within the Ethereum network and are also working hard on shared technology components.”
As the DFINITY project develops over time, the DFINITY Stiftung foundation intends to steadily increase the BNS’ decision-making responsibilities over time, eventually resulting in the dissolution of its own involvement entirely, once the BNS is sufficiently sophisticated.
DFINITY consensus mechanism is a heavily optimized proof of stake (PoS) model. It places a strong emphasis on transaction finality through implementing a Threshold Relay technique in conjunction with the BLS signature scheme and a notarization method to address many of the problems associated with PoS consensus.

2.2 THRESHOLD RELAY

As a public cloud computing resource, DFINITY targets business applications by substantially reducing cloud computing costs for IT systems. They aim to achieve this with a highly scalable and powerful network with potentially unlimited capacity. The DFINITY platform is chock-full of innovative designs and features like their Blockchain Nervous System (BNS) for algorithmic governance.
One of the primary components of the platform is its novel Threshold Relay Consensus model from which randomness is produced, driving the other systems that the network depends on to operate effectively. The consensus system was first designed for a permissioned participation model but can be paired with any method of Sybil resistance for an open participation model.
“The Threshold Relay is the mechanism by which Dfinity randomly samples replicas into groups, sets the groups (committees) up for threshold operation, chooses the current committee, and relays from one committee to the next is called the threshold relay.”
Threshold Relay consists of four layers (As mentioned previously):
  1. Notary layer, which provides fast finality guarantees to clients and external observers and eliminates nothing-at-stake and selfish mining attacks, providing Sybil attack resistance.
  2. Blockchain layer that builds a blockchain from validated transactions via the Probabilistic Slot Protocol driven by the random beacon.
  3. Random beacon, which as previously covered, provides the source of randomness for all higher layers like the blockchain layer smart contract applications.
  4. Identity layer that provides a registry of all clients.

2.2.1 HOW DOES THRESHOLD RELAY WORK?

Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try and form into a "threshold group". The composition of each group is entirely random such that they can intersect and clients can be present in multiple groups. In DFINITY, each group is comprised of 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they then register the public key ("identity") created for their group on the global blockchain using a special transaction, such that it will become part of the set of active groups in a following "epoch". The network begins at "genesis" with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values - if they were not, then the group's signatures on messages would be predictable and the threshold signature system insecure - and each random value produced thus is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
In a cryptographic threshold signature system a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message
individually (here, the preceding group's threshold signature), creating individual "signature shares" that are then broadcast to other group members. The group threshold signature can be constructed upon combination of a sufficient threshold of signature shares. For example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares will be able to construct the group's signature on the message. Other group members can validate each signature share, and any client using the group's public key can validate the single group threshold signature produced by combining them. The magic of the BLS scheme is that it is "unique and deterministic", meaning that from whatever subset of group members the required number of signature shares are collected, the single threshold signature created is always the same and only a single correct value is possible.
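To see the property everything hinges on - any threshold-sized subset of shares reconstructs the same unique value - here is a minimal sketch using Shamir secret sharing over a prime field as a stand-in for BLS (BLS combines signature shares with the same Lagrange interpolation, performed "in the exponent"; a toy 3-of-5 replaces the 201-of-400 in the text):

```python
import random

P = 2**127 - 1  # prime field modulus

def make_shares(secret: int, t: int, n: int) -> list:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares: list) -> int:
    """Lagrange interpolation at x=0 recovers the value from any t shares."""
    total = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * -xm % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

shares = make_shares(123456789, t=3, n=5)
print(combine(shares[:3]))  # 123456789
print(combine(shares[2:]))  # 123456789 - a different subset, the same unique value
```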
Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and signatures generated by relaying between groups produce a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, in order for relaying to stall because a random number was not produced, the number of correct processes must be below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400 and the threshold is 201, 200 or more of the processes must become faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability this will occur is less than 10⁻¹⁷.
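That bound can be checked directly: the number of faulty members in a randomly sampled group follows a hypergeometric distribution, and stalling requires at least 200 of the 400 sampled members to be faulty. A quick sketch (exact integer arithmetic, no external libraries):

```python
# P(a random 400-member group contains >= 200 faulty processes),
# sampling from 10,000 processes of which 3,000 are faulty.
from math import comb

M, F, g = 10_000, 3_000, 400   # population, faulty processes, group size
numerator = sum(comb(F, k) * comb(M - F, g - k) for k in range(200, g + 1))
print(numerator / comb(M, g))  # astronomically small, consistent with the text
```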

2.3 DFINITY TOKEN

The DFINITY blockchain also supports a native token, called dfinities (DFN), which perform multiple roles within the network, including:
  1. Fuel for deploying and running smart contracts.
  2. Security deposits (i.e. staking) that enable participation in the BNS governance system.
  3. Security deposits that allow client software or private DFINITY cloud networks to connect to the public network.
Although dfinities will end up being assigned a value by the market, the DFINITY team does not intend for DFN to act as a currency. Instead, the project has envisioned PHI, a “next-generation” crypto-fiat scheme, to act as a stable medium of exchange within the DFINITY ecosystem.
Neuron operators can earn Dfinities by participating in network-wide votes, which could be concerning protocol upgrades, a new economic policy, etc. DFN rewards for participating in the governance system are proportional to the number of tokens staked inside a neuron.

2.4 SCALABILITY

DFINITY is being developed with a structure that separates consensus, validation, and storage into distinct layers. The storage layer is divided into multiple shards, each of which is responsible for processing the transactions that occur in its fragment of the state. The verification layer is responsible for combining hashes of all fragments in a Merkle-like structure, resulting in a global state root that is stored in blocks in the top-level chain.
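A minimal sketch of that verification-layer idea (the shard names and exact tree shape are hypothetical; the source describes the structure only as "Merkle-like"):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def global_state_root(shard_roots: dict) -> bytes:
    """Fold each shard's state root pairwise, Merkle-style, into the single
    global root that the top-level chain stores in its blocks."""
    level = [h(name.encode() + root) for name, root in sorted(shard_roots.items())]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd leaf out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

roots = {f"shard-{i}": h(f"state-{i}".encode()) for i in range(3)}
print(global_state_root(roots).hex())  # changes if any shard's state changes
```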

2.5 DFINITY CONSENSUS ALGORITHM

The single most important aspect of the user experience is certainly the time required before a transaction becomes final. This is not solved by a short block time alone - Dfinity's team also had to reduce the number of confirmations required to a small constant. DFINITY moreover had to provide a provably secure proof-of-stake algorithm that scales to millions of active participants without compromising on decentralization in any way.
Dfinity soon realized that the key to scalability lay in having an unmanipulable source of randomness available. Hence they built a scalable decentralized random beacon, based on what they call the Threshold Relay technique, right into the foundation of the protocol. This strong foundation drives a scalable and fast consensus layer: On top of the beacon runs a blockchain which utilizes notarization by threshold groups to achieve near-instant finality. Details can be found in the overview paper that we are releasing today.
The roots of the DFINITY consensus mechanism date back to 2014 when their Chief Scientist, Dominic Williams, started to look for more efficient ways to drive large consensus networks. Since then, much research has gone into the protocol and it took several iterations to reach its current design.
For any practical consensus system the difficulty lies in navigating the tight terrain that one is given between the boundaries imposed by theoretical impossibility-results and practical performance limitations.
The first key milestone was the novel Threshold Relay technique for decentralized, deterministic randomness, which is made possible by certain unique characteristics of the BLS signature system. The next breakthrough was the notarization technique, which allows DFINITY consensus to solve the traditional problems that come with proof-of-stake systems. Getting the security proofs sound was the final step before publication.
DFINITY consensus has made the proper trade-offs between the practical side (realistic threat models and security assumptions) and the theoretical side (provable security). Out came a flexible, tunable algorithm, which we expect will establish itself as the best performing proof-of-stake algorithm. In particular, having the built-in random beacon will prove to be indispensable when building out sharding and scalable validation techniques.

2.6 LINKEDUP

The startup has rather cheekily called this “an open version of LinkedIn,” the Microsoft-owned social network for professionals. Unlike LinkedIn, LinkedUp, which runs on any browser, is not owned or controlled by a corporate entity.
LinkedUp is built on Dfinity’s so-called Internet Computer, its name for the platform it is building to distribute the next generation of software and open internet services.
The software is hosted directly on the internet in a Switzerland-based independent data center, but in the concept of the Internet Computer, it could be hosted at your house or mine. The compute power to run the application - LinkedUp, in this case - comes not from Amazon AWS, Google Cloud or Microsoft Azure, but from the distributed architecture that Dfinity is building.
Specifically, Dfinity notes that when enterprises and developers run their web apps and enterprise systems on the Internet Computer, the content is decentralized across a minimum of four or a maximum of an unlimited number of nodes in Dfinity’s global network of independent data centers.
Dfinity has open-sourced LinkedUp so that developers can create other types of open internet services on the architecture it has built.
“Open Social Network for Professional Profiles” suggests that on Dfinity model one can create “Open WhatsApp”, “Open eBay”, “Open Salesforce” or “Open Facebook”.
The tools include a Canister Software Developer Kit and a simple programming language called Motoko that is optimized for Dfinity’s Internet Computer.
“The Internet Computer is conceived as an alternative to the $3.8 trillion legacy IT stack, and empowers the next generation of developers to build a new breed of tamper-proof enterprise software systems and open internet services. We are democratizing software development,” Williams said. “The Bronze release of the Internet Computer provides developers and enterprises a glimpse into the infinite possibilities of building on the Internet Computer — which also reflects the strength of the Dfinity team we have built so far.”
Dfinity says its “Internet Computer Protocol” allows for a new type of software called autonomous software, which can guarantee permanent APIs that cannot be revoked. When all these open internet services (e.g. open versions of WhatsApp, Facebook, eBay, Salesforce, etc.) are combined with other open software and services it creates “mutual network effects” where everyone benefits.
On 1 November, DFINITY released 13 new public versions of the SDK, leading to its second major milestone [at WEF Davos]: demoing a decentralized web app called LinkedUp on the Internet Computer. Subsequent milestones towards the public launch of the Internet Computer will involve:
  1. Onboarding a global network of independent data centers.
  2. Fully tested economic system.
  3. Fully tested Network Nervous Systems for configuration and upgrades.

2.7 WHAT IS MOTOKO?

Motoko is a new software language being developed by the DFINITY Foundation, with an accompanying SDK, that is designed to help the broadest possible audience of developers create reliable and maintainable websites, enterprise systems and internet services on the Internet Computer with ease. By developing the Motoko language, the DFINITY Foundation will ensure that a language that is highly optimized for the new environment is available. However, the Internet Computer can support any number of different software frameworks, and the DFINITY Foundation is also working on SDKs that support the Rust and C languages. Eventually, it is expected there will be many different SDKs that target the Internet Computer.
Full article
submitted by CoinEx_Institution to u/CoinEx_Institution [link] [comments]

Looking for Technical Information about Mining Pools

I'm doing research on how exactly bitcoins are mined, and I'm looking for detailed information about how mining pools work - i.e. what exactly is the pool server telling each participating miner to do.
It's so far my understanding that, when Bitcoins are mined, the following steps take place:
  1. Transactions from the mempool are selected for a new block; this may or may not be all the transactions in said mempool. A coinbase transaction - which consists of the miner's wallet address and other arbitrary data, and which creates the new bitcoin - will also be added to the new block.
  2. All of said transactions are hashed together into a Merkle Root. The hashing algorithm is Double SHA-256.
  3. A block header is formed for the new block. Said block header consists of a Version, the Block Hash of the Previous Block in the Blockchain, said Merkle Root from earlier, a timestamp in UTC, the target, and a nonce - which is 32 bits long and can be any value from 0x00000000 to 0xFFFFFFFF (4,294,967,296 possible nonce values in total).
  4. The nonce value is set to 0x00000000, and said block header is double hashed to get the Block Hash of the current block; if said Block Hash starts with a certain number of zeroes (depending on the difficulty), the miner sends the block to the Bitcoin Network, the block is successfully added to the blockchain, and the miner is awarded newly created bitcoin.
  5. But if said Block Hash does not start with the required number of zeroes, said block will not be accepted by the network, and the miner double hashes the block again with a different nonce value. If none of the 4,294,967,296 nonce values yields a Block Hash with the required number of zeroes, it will be impossible to add the block to the network - and in that case, the miner will either need to change the timestamp and try all 4,294,967,296 nonce values again, or compose a new block with a different set of transactions (a different coinbase transaction, a different set of transactions from the mempool, or both). A sketch of this loop appears after this list.
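To make steps 3-5 concrete, here is a minimal Python sketch of that nonce-grinding loop. It assumes the header fields are already assembled as bytes, and it is an illustration of the process described above, not anyone's actual miner:

    import hashlib
    import struct

    def double_sha256(data: bytes) -> bytes:
        # Bitcoin's hash: SHA-256 applied twice
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def grind_nonces(version, prev_hash, merkle_root, timestamp, bits, target):
        # 80-byte header = version + prev hash + merkle root + time + bits + nonce;
        # integer fields are serialized little-endian, the two hashes are 32-byte
        # values in internal (reversed) byte order
        base = (struct.pack("<L", version) + prev_hash + merkle_root
                + struct.pack("<LL", timestamp, bits))
        for nonce in range(0x100000000):          # all 4,294,967,296 values
            block_hash = double_sha256(base + struct.pack("<L", nonce))
            # the hash is compared to the target as a little-endian integer,
            # which is what "starts with enough zeroes" means in display order
            if int.from_bytes(block_hash, "little") <= target:
                return nonce
        return None   # exhausted: change the timestamp or the coinbase and retry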
Now, what I'm trying to figure out is what exactly each miner is doing differently in a mining pool, and if it is different depending on the pool.
One thing I've read is that a mining pool gives each participating miner a different set of transactions from the mempool.
I've also read that, because the most sophisticated miners can try all 4,294,967,296 nonce values in a fraction of a second, and since the timestamp can only be updated every second, the coinbase transaction is used as a "second nonce" (although, being part of a transaction, changing this "extra nonce" means all the transactions need to be double hashed into a new Merkle Root). I may also have read someplace that miners could be given the same set of transactions from the mempool, but each told to use a different range of "extra nonce" values for the coinbase transaction.
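That interaction between the extra nonce and the Merkle root can be sketched the same way. The helpers below are hypothetical, not any real pool protocol; they only illustrate why bumping the extra nonce in the coinbase changes the coinbase txid and therefore forces the Merkle root to be rebuilt:

    import hashlib
    import struct

    def dsha(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_root(txids: list) -> bytes:
        # pair-and-hash upward; an odd hash at any level is paired with itself
        level = list(txids)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def root_for_extra_nonce(coinbase_template: bytes, other_txids: list,
                             extra_nonce: int) -> bytes:
        # splicing a new extra nonce into the coinbase changes the coinbase txid...
        coinbase_txid = dsha(coinbase_template + struct.pack("<Q", extra_nonce))
        # ...so the Merkle root must be recomputed before grinding nonces again
        return merkle_root([coinbase_txid] + other_txids)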
Is there anything else that pools tell miners to do differently? Is each pool different in the instructions it gives to the participating miners? Did I get anything wrong?
I want to make sure I have a full technical understanding of what mining pools are doing to mine bitcoin.
submitted by sparky77734 to Bitcoin [link] [comments]

Proof Of Work Explained

Proof Of Work Explained
A proof-of-work (PoW) system (or protocol, or function) is a consensus mechanism first invented by Cynthia Dwork and Moni Naor, who presented it in a 1993 journal article. The name "proof of work" was coined in a 1999 paper by Markus Jakobsson and Ari Juels.
It was developed as a way to prevent denial-of-service attacks and other service abuse (such as spam on a network). It is now the most widely used consensus algorithm, adopted by many cryptocurrencies such as Bitcoin and Ethereum.
How does it work?
In this method, a group of users competes to find the solution to a complex mathematical puzzle. A user who finds the solution broadcasts the block to the network for verification; once other users have verified the solution, the block moves to the confirmed state.
The blockchain network consists of numerous decentralized nodes. These nodes act as administrators, or miners, responsible for adding new blocks to the blockchain. A miner repeatedly picks a random number (the nonce) and combines it with the data present in the block; to find a correct solution, the miner must hit a nonce for which the newly generated block is valid and can be added to the main chain. The network pays a reward to the miner node that finds the solution.
The block is then passed through a hash function to generate an output that matches the required criteria. Once a result is found, other nodes in the network verify and validate the outcome. Every new block holds the hash of the preceding block, forming a chain of blocks that together store information within the network. Changing a block would require a new block containing the same predecessor, and it is practically impossible to regenerate all successors and change their data. This protects the blockchain from tampering.
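As a toy illustration of the above (not any particular coin's actual rule set), note how finding the number is expensive while verifying it takes a single hash:

    import hashlib
    from itertools import count

    def proof_of_work(block_data: bytes, difficulty: int) -> int:
        # find a nonce so the digest starts with `difficulty` zero hex digits
        for nonce in count():
            digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce

    nonce = proof_of_work(b"block payload", 5)   # slow: on the order of a million attempts
    # verification is one hash: anyone can re-check the winner instantly
    assert hashlib.sha256(b"block payload" + str(nonce).encode()).hexdigest().startswith("00000")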
What is Hash Function?
A hash function is a function that maps data of any length to fixed-size values. The result of a hash function is known as a hash value, hash code, digest, or simply a hash.
The hash method is quite secure: any slight change in the input results in a completely different output, which in turn causes the data to be discarded by network participants. The hash function generates output of the same fixed length regardless of the size of the input. It is a one-way function, i.e. the function cannot be reversed to get the original data back; one can only hash candidate data and check that the output matches.
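Both properties are easy to demonstrate with SHA-256; the snippet below is purely illustrative:

    import hashlib

    a = hashlib.sha256(b"proof of work").hexdigest()
    b = hashlib.sha256(b"proof of work!").hexdigest()   # one character changed
    assert len(a) == len(b) == 64   # fixed 256-bit (64 hex char) output either way
    print(a)   # the two digests share no recognizable relationship
    print(b)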
Implementations
Nowadays, Proof-of-Work is used in a lot of cryptocurrencies. It was first implemented in Bitcoin, after which it became so popular that it was adopted by several other cryptocurrencies. Bitcoin uses the Hashcash puzzle, and the complexity of the puzzle is based upon the total power of the network; on average, block formation takes approximately 10 minutes. Litecoin, a Bitcoin-based cryptocurrency, has a similar system, and Ethereum also implemented this same protocol.
Types of PoW
Proof-of-work protocols can be categorized into two types:
· Challenge-response
This protocol creates a direct link between the requester (client) and the provider (server).
In this method, the requester needs to find the solution to a challenge that the server has given. The solution is then validated by the provider to authenticate the requester.
The provider chooses the challenge on the spot, so its difficulty can be adapted to the provider's current load. If the challenge has a known solution, or is known to lie within a bounded search space, then the work on the requester's side may be bounded.
· Solution–verification
These protocols do not require any prior link between the sender and the receiver. The client poses a problem to itself and solves it, then sends the solution to the server, which checks both the problem choice and the outcome. Like Hashcash, these schemes are based on unbounded probabilistic iterative procedures.
These two methods are generally based on the following three techniques:
CPU-bound
This technique depends upon the speed of the processor: the greater the processing power, the faster the computation.
Memory-bound
This technique is bound by main-memory accesses (either latency or bandwidth) rather than raw computation speed.
Network-bound
In this technique, the client must perform a few computations and wait to receive some tokens from remote servers.
List of proof-of-work functions
Here is a list of known proof-of-work functions:
o Integer square root modulo a large prime
o Weakened Fiat–Shamir signatures
o Ong–Schnorr–Shamir signature is broken by Pollard
o Partial hash inversion
o Hash sequences
o Puzzles
o Diffie–Hellman–based puzzle
o Moderate
o Mbound
o Hokkaido
o Cuckoo Cycle
o Merkle tree-based
o Guided tour puzzle protocol
A successful attack on a blockchain network requires a lot of computational power and a lot of time to do the calculations. Proof of Work makes hacks inefficient since the cost incurred would be greater than the potential rewards for attacking the network. Miners are also incentivized not to cheat.
It is still considered one of the most popular methods of reaching consensus in blockchains. It may not be the most efficient solution, given its extensive energy usage, but that very expenditure is what guarantees the security of the network.
Thanks to Proof of Work it is practically impossible to alter any aspect of the blockchain, since any such change would require re-mining all subsequent blocks. It is also difficult for any user to take control of the network's computing power, since doing so requires enormous energy, making these hash computations expensive.
submitted by RumaDas to u/RumaDas [link] [comments]

Bitcoin (BTC)A Peer-to-Peer Electronic Cash System.

Bitcoin (BTC)A Peer-to-Peer Electronic Cash System.
  • Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
  • Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin Protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
  • The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
  • With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.


1. What is Bitcoin (BTC)?

  • Bitcoin is a peer-to-peer cryptocurrency that aims to function as a means of exchange and is independent of any central authority. Bitcoins are transferred electronically in a secure, verifiable, and immutable way.
  • Network validators, who are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
  • The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
  • The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
  • Unlike public blockchain infrastructures supporting the development of decentralized applications (Ethereum), the Bitcoin protocol is primarily used for payments, and has only very limited support for smart contract-like functionalities (Bitcoin “Script” is mostly used to set conditions that must be met before bitcoins can be spent).

2. Bitcoin’s core features

For a more beginner’s introduction to Bitcoin, please visit Binance Academy’s guide to Bitcoin.

Unspent Transaction Output (UTXO) model

A UTXO transaction works like a cash payment between two parties: Alice gives money to Bob and receives change (i.e., the unspent amount). In comparison, blockchains like Ethereum rely on the account model.
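A minimal sketch of the idea, with hypothetical types and BTC amounts as floats for readability (real nodes use integer satoshis): spending consumes whole UTXOs as inputs and creates new outputs, one of which is change back to the sender.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class UTXO:
        txid: str
        index: int
        amount: float    # BTC for readability; real nodes use integer satoshis
        owner: str

    def pay(utxos, sender, recipient, amount, fee):
        # consume whole UTXOs as inputs until they cover amount + fee
        selected, total = [], 0.0
        for u in utxos:
            if u.owner != sender:
                continue
            selected.append(u)
            total += u.amount
            if total >= amount + fee:
                break
        if total < amount + fee:
            raise ValueError("insufficient funds")
        outputs = [(recipient, amount)]
        change = total - amount - fee
        if change > 0:
            outputs.append((sender, change))   # the "change" output back to the sender
        return selected, outputs               # inputs are destroyed, new UTXOs created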

Nakamoto consensus

In the Bitcoin network, anyone can join the network and become a bookkeeping service provider i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This feature is called Proof of Work (PoW).
The probability of any single validator finishing the task first is equal to that validator's percentage of the total network computation power, or hash power. For instance, a validator with 5% of the total network computation power will have a 5% chance of completing the task first, and therefore of becoming the next block producer.
Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done by personal computer CPUs.
As of today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuit (“ASIC”).
Proof of Work secures the network as block producers must have spent resources external to the network (i.e., money to pay electricity), and can provide proof to other participants that they did so.
With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network’s hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks indicates another feature of the Nakamoto consensus: the finality of transactions is only probabilistic.
Once a block is produced, it is then propagated by the block producer to all other validators to check on the validity of all transactions in that block. The block producer will receive rewards in the network’s native currency (i.e., bitcoin) as all validators approve the block and update their ledgers.

The blockchain

Block production

The Bitcoin protocol utilizes the Merkle tree data structure in order to organize hashes of numerous individual transactions into each block. This concept is named after Ralph Merkle, who patented it in 1979.
With the use of a Merkle tree, though each block might contain thousands of transactions, it will have the ability to combine all of their hashes and condense them into one, allowing efficient and secure verification of this group of transactions. This single hash is called the Merkle root, and it is stored in the Block Header of a block. The Block Header also stores other meta information of a block, such as a hash of the previous Block Header, which enables blocks to be associated in a chain-like structure (hence the name “blockchain”).
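As a hedged sketch (not Bitcoin Core's actual code) of how such a root can be computed, assuming transaction ids are supplied in their usual reversed-hex display form:

    import hashlib

    def dsha256(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_root(txids_display_hex: list) -> str:
        # display hex is byte-reversed relative to the bytes actually hashed
        level = [bytes.fromhex(t)[::-1] for t in txids_display_hex]
        while len(level) > 1:
            if len(level) % 2:             # odd count: the last hash is duplicated
                level.append(level[-1])
            level = [dsha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0][::-1].hex()        # reverse back to display order

For a block containing only the coinbase transaction, the loop never runs and the Merkle root is simply that transaction's own hash.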

Block time and mining difficulty

Block time is the period required to create the next block in a network. As mentioned above, the node who solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty.
Mining difficulty refers to how difficult it is for a node to solve the computationally intensive task. If the network sets a high difficulty while miners have low computational power (often referred to as “hashrate”), it will statistically take longer for the nodes to find an answer. If the difficulty is low but miners have rather strong computational power, statistically some nodes will be able to solve the task quickly.
Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.
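The adjustment amounts to simple proportional scaling; the sketch below assumes Bitcoin's 2016-block window and its per-step clamp as an example:

    TARGET_BLOCK_TIME = 10 * 60        # seconds
    RETARGET_WINDOW = 2016             # blocks per adjustment period (Bitcoin)

    def next_difficulty(current_difficulty: float, actual_window_seconds: float) -> float:
        expected = TARGET_BLOCK_TIME * RETARGET_WINDOW
        ratio = expected / actual_window_seconds   # blocks came too fast -> ratio > 1
        ratio = max(0.25, min(4.0, ratio))         # Bitcoin clamps each step to 4x
        return current_difficulty * ratio          # higher difficulty slows mining back down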

What are orphan blocks?

In a PoW blockchain network, if the block time is too low, it increases the likelihood of nodes producing orphan blocks, for which they receive no reward. Orphan blocks are produced by nodes who solved the task but whose results did not reach the whole network first due to network latency.
It takes time for a message to travel through a network, and it is entirely possible for 2 nodes to complete the task and start to broadcast their results to the network at roughly the same time, while one’s messages are received by all other nodes earlier as the node has low latency.
Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute but his message would take 1 minute to reach the rest of the nodes that are still working on the solution. While his message travels through the network, all the work done by all other nodes during that 1 minute, even if these nodes also complete the task, would go to waste. In this case, 50% of the computational power contributed to the network is wasted.
The percentage of wasted computational power would proportionally decrease if the mining difficulty were higher, as it would statistically take longer for miners to complete the task. In other words, if the mining difficulty (and therefore the target block time) is low, miners with powerful and often centralized mining facilities get a higher chance of becoming the block producer, while the participation of weaker miners becomes in vain. This introduces possible centralization and weakens the overall security of the network.
However, given the limited number of transactions that can be stored in a block, making the block time too long would decrease the number of transactions the network can process per second, negatively affecting network scalability.

3. Bitcoin’s additional features

Segregated Witness (SegWit)

Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017.
SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin.
SegWit also makes transactions cheaper. Since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become.
The legacy Bitcoin block has a block size limit of 1 megabyte, and any change on the block size would require a network hard-fork. On August 1st 2017, the first hard-fork occurred, leading to the creation of Bitcoin Cash (“BCH”), which introduced an 8 megabyte block size limit.
Conversely, Segregated Witness was a soft-fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, which contains solely witness signatures, to the 1 megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade.
Furthermore, the separation of witness signatures from transaction data solves the malleability issue with the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners. Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values.
For instance, if a witness signature states “6,” it has a mathematical value of 6, and would create a hash value of 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a completely different hash value of 67890.
Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid.
This can create many issues, as illustrated in the below example:
  1. Alice sends Bob 1 BTC, and Bob plans to send this 1 BTC to Merchant Carol for some goods.
  2. Bob sends Carol this 1 BTC while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC to her, and immediately ships goods to Bob.
  3. At this moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, therefore changing this transaction ID from 12345 to 67890.
  4. Now Carol will not receive her 1 BTC, as the network looks for transaction 12345 to ensure that Bob’s wallet balance is valid.
  5. As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will get his goods while still holding his BTC.
With the Segregated Witness upgrade, such instances cannot happen again. This is because the witness signatures are moved outside of the transaction block into an extended block, and altering a witness signature no longer affects the transaction ID.
Since the transaction malleability issue is fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.

Lightning Network

Lightning Network is a second-layer micropayment solution for scalability.
Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins.
Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies. The most prominent of them include Blockstream, Lightning Labs, and ACINQ.
A list of curated resources relevant to Lightning Network can be found here.
In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both party’s wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions.
Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved in transactions do not need to wait for network confirmation on transactions. Instead, transacting parties would pay transaction fees to Bitcoin miners only when they decide to close the channel.
One limitation of the Lightning Network is that it requires a person to be online to receive transactions addressed to him. Another limitation in user experience is that one needs to lock up funds every time he wishes to open a payment channel, and is only able to use those funds within the channel.
However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network. If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.

Schnorr Signature upgrade proposal

Elliptic Curve Digital Signature Algorithm (“ECDSA”) signatures are used to sign transactions on the Bitcoin blockchain.
However, many developers now advocate for replacing ECDSA with Schnorr Signature. Once Schnorr Signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys.
This would primarily be beneficial for network scalability. When multiple addresses were to conduct transactions to a single address, each transaction would require their own signature. With Schnorr Signature, all these signatures would be combined into one. As a result, the network would be able to store more transactions in a single block.
The reduced signature size implies a reduced cost in transaction fees: the group of senders can split the fee for that one group signature, instead of each paying for an individual signature.
Schnorr Signature also improves network privacy and token fungibility. A third-party observer will not be able to detect if a user is sending a multi-signature transaction, since the signature will be in the same format as a single-signature transaction.

4. Economics and supply distribution

The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial reward for a block was 50 BTC per block. Block mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, it implies that the block reward halving events will approximately take place every 4 years.
As of May 12th 2020, the block mining rewards are 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
submitted by D-platform to u/D-platform [link] [comments]

TKEYSPACE — blockchain in your mobile

TKEYSPACE — blockchain in your mobile

Some say that a blockchain in the phone is just marketing. That may be true for most applications, but not for Tkeycoin. Today we will talk about how the blockchain works in the TkeySpace app.
For those not yet in the topic, TkeySpace is a financial application for decentralized and efficient management of various cryptocurrencies, based on a distributed architecture without a client-server model.
In simple words, it is a blockchain in the user’s mobile device that protects against hacking and hacker attacks, with all data encrypted using modern cryptographic methods.

Blockchain

Let’s start with the most important thing: the blockchain works on the principles of P2P networks, where there is no central server and each device is both a server and a client. Such an organization allows the network to keep operating with any number and any combination of available nodes.
For example, there are 12 machines in the network, and anyone can contact anyone. As a client (resource consumer), each of these machines can send requests for the provision of some resources to other machines within this network and receive them. As a server, each machine must process requests from other machines in the network, send what was requested, and perform some auxiliary and administrative functions.
With traditional client-server systems, we can end up with a completely disabled social network, messenger, or other service: given that we rely on a centralized infrastructure, we have a very specific number of points of failure. If the main data center is damaged by an earthquake or any other event, access to information will be slowed down or cut off completely.
With a P2P solution, the failure of one network member does not affect the network operation in any way. P2P networks can easily switch to offline mode when the channel is broken — in which it will exist completely independently and without any interaction.
Instead of storing information in a single central point, as traditional recording methods do, multiple copies of the same data are stored in different locations and on different devices on the network, such as computers or mobile devices.

This means that even if one storage point is damaged or lost, multiple copies remain secure in other locations. Similarly, if one part of the information is changed without the consent of the rightful owners, there are many other copies where the information is correct, which makes the false record invalid.
The information recorded in the blockchain can take any form, whether it is a transfer of money, ownership, transaction, someone’s identity, an agreement between two parties, or even how much electricity a light bulb used.
However, this requires confirmation from multiple devices, such as nodes in the network. Once an agreement, otherwise known as consensus, is reached between these devices to store something on the blockchain — it can’t be challenged, deleted, or changed.
The technology also allows you to perform a truly huge amount of computing in a relatively short time, which even on supercomputers would require, depending on the complexity of the task, many years or even centuries of work. This performance is achieved because a certain global task is divided into a large number of blocks, which are simultaneously performed by hundreds of thousands of devices participating in the project.

P2P messaging and syncing in TkeySpace

TkeySpace is a node of the TKEY network and of the other supported networks. When you launch the app, your mobile node connects to an extensive network of supported blockchains and syncs with full nodes to validate transactions and incoming information; the nodes thus organize a graph of connections between them.
You can always check the node information in the TkeySpace app under Settings → Contact and peer info → App Status.

TkeySpace creates initial connections to servers registered in the blockchain protocol as the main (seed) servers; from these servers it gets the addresses of nodes it can join, and in turn the nodes it connects to share information about other nodes.

TkeySpace sends network messages to nodes from supported blockchains in the app to get up-to-date data from the network.
The protocol uses data structures for communication between nodes, for example when propagating blocks over the network. Before network messages are read, nodes check the “magic number”: they inspect the first bytes and determine the type of the data structure. In the blockchain, the “magic number” is the network ID used to filter messages and block traffic from other P2P networks.
Magic numbers are used in computer science, both for files and protocols. They identify the type of file/data structure. A program that receives such a file/data structure can check the magic number and immediately find out the intended type of this file/data structure.
The first message that your node sends is called a Version message. In response, the node waits for a Verack message to establish a connection with the other peer. The exchange of these messages is called a “handshake”.
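For illustration, Bitcoin's wire format frames every message with the magic number, a command name, a payload length, and a checksum; the sketch below parses that format (Tkeycoin's own constants are not documented here, so the Bitcoin mainnet magic is just an example):

    import hashlib
    import struct

    MAINNET_MAGIC = 0xD9B4BEF9   # Bitcoin mainnet network ID; each network differs

    def dsha256(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def read_message(raw: bytes):
        # 24-byte header: 4-byte magic, 12-byte command, 4-byte length, 4-byte checksum
        magic, command, length, checksum = struct.unpack("<L12sL4s", raw[:24])
        if magic != MAINNET_MAGIC:
            raise ValueError("wrong network: magic number mismatch")
        payload = raw[24:24 + length]
        if dsha256(payload)[:4] != checksum:
            raise ValueError("corrupt payload")
        return command.rstrip(b"\x00").decode(), payload   # e.g. ("version", ...)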

After the “handshake” is established, TkeySpace starts connecting to other nodes in the network to determine the last block at the tip of the required blockchain. At this point nodes request information about blocks they know using getblocks messages; in response, your node receives an inv (inventory) message from another node indicating that it holds the information the TkeySpace node requested.
In response to the received inv message, TkeySpace sends a getdata message containing a list of blocks starting immediately after the last known hash.


Loading and storing blocks

After exchanging messages, the block information is loaded and transactions are downloaded to your node. To avoid storing tons of information, and to optimize disk space and data-processing speed, we use an RDBMS (PostgreSQL) in full nodes (the local computer wallet).
In the TkeySpace mobile app we use SQLite, and validation takes place by downloading block headers and checking transactions against the Merkle tree, using a bloom filter; this optimizes the storage of your mobile device as much as possible.
The block header includes the hash of the previous block, the Merkle root of the transaction hashes, and additional service information; the block's own hash is computed over this header.
Block headers in the Tkeycoin network are 84 bytes, due to extended parameters supporting nChains, which will soon be launched in “combat” (production) mode. Block headers in Bitcoin, Dash, and Litecoin are 80 bytes.

And so, let’s continue: application nodes receive information from the blockchain by downloading block headers, and all data is synchronized via the Merkle tree; more precisely, your node receives and validates information against the Merkle root.
The hash tree was developed in 1979 by Ralph Merkle and named in his honor. The structure of the system has received this name also because it resembles a tree.
The Merkle tree is a complete binary tree whose leaf vertices contain hashes of data blocks, and whose inner vertices contain hashes of the concatenation of the values in their child vertices. The root node of the tree contains a hash over the entire data set; in other words, the hash tree is a one-way hash function. The Merkle tree is used for the efficient storage of transactions in a cryptocurrency blockchain. It allows you to get a “fingerprint” of all transactions in a block, as well as to verify transactions efficiently.

Hash trees have an advantage over hash chains or hash functions. When using hash trees, it is much less expensive to prove that a certain block of data belongs to a set. Since different blocks are often independent data, such as transactions or parts of files, we are interested in being able to check only one block without recalculating the hashes for the other nodes in the tree.
The Merkle tree scheme allows you to check whether the hash value of a particular transaction is included in the Merkle root without having all the other transactions in the block. By having the transaction, the block header, and the Merkle branch for that transaction (requested from a full node), a digital wallet can make sure that the transaction was confirmed in a specific block.

The Merkle tree used to prove that a transaction is included in a block also scales very well, because each new “layer” added to the tree doubles the total number of “leaves” it can represent. You don’t need a deep tree to compactly prove transaction inclusion, even among blocks with millions of transactions.
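A hedged sketch of this verification, assuming the full node supplies one sibling hash per tree level together with a left/right marker (so a block of a million transactions needs a branch of only about 20 hashes):

    import hashlib

    def dsha256(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def verify_inclusion(tx_hash: bytes, branch, merkle_root: bytes) -> bool:
        # branch: [(sibling_hash, "L" or "R")], one entry per tree level
        h = tx_hash
        for sibling, side in branch:
            h = dsha256(sibling + h) if side == "L" else dsha256(h + sibling)
        return h == merkle_root   # equality proves the tx is under this root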

Statistical constants and nChains

To support the Tkeycoin cryptocurrency, the TkeySpace application uses additional statistical constants to prevent serialization of Merkle tree hashes, which provides an additional layer of security.
Also, for Tkeycoin, support for multi-chains (nChains) is already included in the TkeySpace app, which will allow you to use the app in the future with most of the features of the TKEY Protocol, including instant transactions.

The Bloom Filter

An additional level of privacy is provided by the bloom filter — which is a probabilistic data structure that allows you to check whether an element belongs to a set.

The bloom filter checks whether a particular transaction is linked to Alice, not whether Alice holds a specific cryptocurrency. Transactions and received IDs are analyzed through the bloom filter this way: when “Alice wants to know about transaction X”, an ID is requested for transaction X and compared against the filled segments in her bloom filter. If the answer is “yes”, the node can fetch the information and verify the transaction.
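A minimal sketch of the underlying data structure (SPV wallets standardized on murmur3 hashing in BIP 37; salted SHA-256 is used here purely for brevity):

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits: int = 1024, n_hashes: int = 3):
            self.size, self.n, self.bits = size_bits, n_hashes, 0

        def _positions(self, item: bytes):
            # derive k bit positions from k salted hashes of the item
            for i in range(self.n):
                d = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(d[:8], "big") % self.size

        def add(self, item: bytes):
            for p in self._positions(item):
                self.bits |= 1 << p

        def might_contain(self, item: bytes) -> bool:
            # False means definitely absent; True means "maybe" (false positives possible)
            return all(self.bits >> p & 1 for p in self._positions(item))

The "maybe" answers are what give the wallet privacy: the full node cannot tell which of the matching transactions Alice actually cares about.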


HD support

The multi-currency wallet TkeySpace is based on HD (hierarchical deterministic) key generation, a privacy-oriented method for generating and managing addresses. Each wallet address is derived from an xPub (extended public) key. The app is completely anonymous: an individual address is generated for each transaction that receives a particular cryptocurrency. Reusing the same address is bad for the system even at a low technical level, not to mention for your privacy. We recommend that you always use a new address for transactions to ensure the necessary level of privacy and security.
The EXT_PUBLIC_KEY and EXT_SECRET_KEY values for DASH, Bitcoin, and Litecoin are completely identical. Tkeycoin uses its values, as well as other methods for storing transactions and blocks (RDBMS), and of course — nChains.

Secret key

Wallets in the blockchain have public and private keys.
Centralized applications usually store users’ private keys on their servers, which makes users’ funds vulnerable to hacker attacks or theft.
A private key is a special combination of characters that provides access to cryptocurrencies stored on the account. Only a person who knows the key can move and spend digital assets.
TkeySpace stores the key only on the user’s device, and only in encrypted form. The encrypted key is presented as a mnemonic phrase (backup phrase), which is very convenient for users: unlike complex cryptographic ciphers, the phrase is easy to save or write down. A backup phrase provides the maximum level of security.
A mnemonic phrase is 12 or 24 words generated using random-number entropy. If a phrase consists of 12 words, the number of possible combinations is 2048¹² = 2¹³², so the phrase has 132 bits of security. To restore the wallet, you must enter the mnemonic phrase in exactly the order in which it was generated.
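Assuming the standard 2048-word (BIP-39-style) wordlist, the arithmetic is easy to check:

    import math

    WORDLIST = 2048                            # 2^11 words, as in the BIP-39 standard list
    phrase_len = 12
    combinations = WORDLIST ** phrase_len      # 2048^12 possible phrases
    print(math.log2(combinations))             # 132.0 -> 132 bits of security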

Result

Now we understand that the TkeySpace application is a blockchain node that communicates with other nodes using P2P messages, stores block headers, validates information using the Merkle tree, verifies transactions, filters information using the bloom filter, and operates on a completely decentralized model. The application code contains all the necessary blockchain settings for communicating with the network, the so-called chain parameters.
TkeySpace is a new generation mobile app. A completely new level of security, easy user-friendly interfaces and all the necessary features that are required to work with cryptocurrency.
submitted by tkeycoin to Tkeycoin_Official [link] [comments]

Continuous Proof of Bitcoin Burn: trust minimized sidechains and bitcoin-pegs w/o oracles/federations today

Original design presented for discussion and criticism
originally posted here: https://bitcointalk.org/index.php?topic=5212814.0
TLDR: Proposing the following, which is possible to implement today for any existing or new altcoin:
_______________________________________

Disclaimer:

This is not an altcoin thread. I'm not making anything. The design discusses options for existing altcoins and new ways to build on top of Bitcoin while inheriting some of its security guarantees. It has 2 parts. First, the design allows any altcoin to switch to securing itself via Bitcoin instead of its own PoW or PoS, with significant benefits to both altcoins and Bitcoin (and the environment, lol). Second, I explain how to create Bitcoin-pegged assets to turn altcoins into a Bitcoin-sidechain equivalent. Let me know if this is of interest or if it already exists; feel free to use or do anything with this, and hopefully I can help.

Issue:

Solution to first few points:

PoW altcoin switching to CPoBB would trade:

PoS altcoin switching to CPoBB would trade:

We already have a permissionless, compact, public, high-cost-backed finality base layer to build on top of: Bitcoin! It will handle sorting, data availability and finality, and it has something of value, external to the sidechain, to use instead of capital or energy: the bitcoins themselves. The sunk costs of PoW can be simulated by burning Bitcoin, similar to the concept known as Proof of Burn, where bitcoins are sent to an unspendable address. Unlike ICOs, no contributors can take out the bitcoins and get rewards for free. Unlike PoS, entry into the supply lies outside the alt-chain and thus doesn't depend on the permission of alt-chain stake-coin holders. It's also hard to find a more bandwidth- or state-size-protective blockchain to build on than Bitcoin, so altcoins can be Bitcoin-aware at little marginal difficulty: 10 years of history fully validates in under a day.

What are typical issues with Proof of Burn?

Solution:

This should be required for any design for it to stay permissionless. Optionally, a constant fixed emission rate can be used for altcoins not trying to be money, if the goal is to maximize accessibility. Since the coin doesn't depend on brand-new PoW for security, it also doesn't have to hand out massive early rewards that give a disproportionate fraction of supply at the earliest stage. If 10 coins are created every block, then after n blocks, at a rate of 10 coins per block, the emission per block is (100/n)% of the existing supply, an always-decreasing number. The sidechain coin doesn't need to be scarce money, and could maximize distribution of control by encouraging further distribution. If no burners exist in a block, the altcoin block reward is simply added to the next block's reward, keeping emission predictable.
Sidechain block content should be committed in burn transaction via a root of the merkle tree of its transactions. Sidechain state will depend on Bitcoin for finality and block time between commitment broadcasts. However, the throughput can be of any size per block, unlimited number of such sidechains can exist with their own rules and validation costs are handled only by nodes that choose to be aware of a specific sidechain by running its consensus compatible software.
An important design decision is how the protocol determines the "true" side-block and how it distributes incentives. The simplest solution is to always:
  1. Agree on the valid sidechain block matching the merkle root commitment for the largest amount of Bitcoin burnt, earliest inclusion in the bitcoin block as the tie breaker
  2. Distribute block reward during the next side-block proportional to current amounts burnt
  3. Bitcoin fee market serves as deterrent for spam submissions of blocks to validate
For example, suppose the sidechain block reward is fixed at 10 altcoins per block, and a Bitcoin block contains the following content embedded in its transactions:
tx11: burns 0.01 BTC & OP_RETURN <root of valid sidechain block version 1>
tx56: burns 0.05 BTC & OP_RETURN <root of valid sidechain block version 1>
tx78: burns 1 BTC & OP_RETURN <root of valid sidechain block version 2>
tx124: burns 0.2 BTC & OP_RETURN <root of INVALID sidechain block version 3>
Validity is deterministic given the rules in the client-side node software (e.g. signature validation), so all nodes can independently see that version 3 is invalid, and thus the burner behind tx124 gets no reward. The largest valid burn is from tx78, so version 2 becomes the next block of the sidechain. The total valid burn is 1.06 BTC, so the 10 altcoins distributed in the next block are 0.094, 0.472, and 9.434 to the owners of the first three transactions, respectively.
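That split is just proportional arithmetic, e.g.:

    burns = {"tx11": 0.01, "tx56": 0.05, "tx78": 1.00}   # tx124 is invalid, so excluded
    BLOCK_REWARD = 10                                     # altcoins per side-block

    total = sum(burns.values())                           # 1.06 BTC of valid burns
    rewards = {tx: round(BLOCK_REWARD * amount / total, 3)
               for tx, amount in burns.items()}
    print(rewards)   # {'tx11': 0.094, 'tx56': 0.472, 'tx78': 9.434}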
A censorship attack would impose continuous Bitcoin costs on the attacker and can be waited out. Censorship would also be limited to specific on-sidechain transactions, since emission distribution to other CPoB contributors couldn't be affected: sidechain blocks without the matching coin distributions wouldn't be valid. Additionally, sidechains can allow a limited number of sidechain transactions to happen by embedding transaction data inside Bitcoin transactions (e.g. OP_RETURN), using Bitcoin as a data-availability layer in case sidechain transactions are being censored on their own network. Since all sidechain nodes are Bitcoin-aware, this would be trivial to include.
Sidechain blocks cannot be reverted without reverting Bitcoin blocks or hard forking the protocol used to derive sidechain state. If protocol is forked, the value of sidechain coins on each fork of sidechain state becomes important but Proof of Burn natively guarantees trust minimized and permissionless distribution of the coins, something inferior methods like obscure early distributions, trusted pre-mines, and trusted ICO's cannot do.
More bitcoins being burnt is parallel to more hash rate entering PoW, with each miner or burner getting smaller amount of altcoins on average making it unprofitable to burn or mine and forcing some to exit. At equilibrium costs of equipment and electricity approaches value gained from selling coins just as at equilibrium costs of burnt coins approaches value of altcoins rewarded. In both cases it incentivizes further distribution to markets to cover the costs making burners and miners dependent on users via markets. In both cases it's also possible to mine without permission and mine at a loss temporarily to gain some altcoins without permission if you want to.
Altcoins benefit by inheriting many of bitcoin security guarantees, bitcoin parties have to do nothing if they don't want to, but will see their coins grow more scarce through burning. The contributions to the fee market will contribute to higher Bitcoin miner rewards even after block reward is gone.

Sidechain Bitcoin-pegs:

What is the ideal goal of the sidechains? Ideally to have a token that has the bi-directionally pegged value to Bitcoin and tradeable ~1:1 for Bitcoin that gives Bitcoin users an option of a different rule set without compromising the base chain nor forcing base chain participants to do anything different.
Issues with value pegs:
Let's get rid of the idea of needing Bitcoin collateral to back pegged coins 1:1, as that's never secure, independent, or scalable at the same security level. As the drive-chain design suggested, the peg doesn't have to be fast; it can take months. It just needs to exist, so that other methods can be used to speed it up, like atomic swaps by volunteers taking on the risk for a fee.
In continuous proof of burn we have another source of bitcoins: the burnt bitcoins. Sidechain protocols can require some minor percentage (e.g. 20%) of each burn transaction's value to go, via another output, toward reimbursing those withdrawing side-Bitcoins to the Bitcoin chain, until their requests are filled. If the withdrawal queue is empty, that percentage is burnt instead. Selection of who receives reimbursement is deterministic per burner. The percentage must be kept small, as it's assumed one can get up to that much discount on altcoin emissions.
Let's use a really simple example where each burner pays 20% of the burn amount to cover withdrawals, in the exact order requested, with no attempt at other matching, and capped at half the requested amount per payout. Example:
Withdrawal queue before the block:
request1: 0.2 sBTC
request2: 1.0 sBTC
request3: 0.5 sBTC
Burners in the same block:
tx burns 0.8 BTC; 0.1 BTC is sent to request1 and 0.1 BTC to request2
tx burns 0.4 BTC; 0.1 BTC is sent to request1
tx burns 0.08 BTC; 0.02 BTC is sent to request1
tx burns 1.2 BTC; 0.1 BTC is sent to request1 and 0.2 BTC to request2
Withdrawal queue after the block:
request1: filled with 0.32 BTC against 0.2 sBTC, removed from queue
request2: partially filled with 0.3 BTC out of 1.0 sBTC; 0.7 sBTC remains for the next queue
request3: still 0.5 sBTC
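A sketch of this simple FIFO payout rule as stated (the worked numbers above round generously in the requesters' favor, so treat this as idealized):

    def pay_withdrawals(queue, burn_amount, share=0.2):
        # queue entries: {"id": ..., "requested": ..., "remaining": ...}, FIFO order
        budget = burn_amount * share
        payouts = []
        for req in queue:
            if budget <= 0:
                break
            # cap each payout at half the originally requested amount
            pay = min(budget, req["requested"] / 2, req["remaining"])
            if pay > 0:
                req["remaining"] -= pay
                budget -= pay
                payouts.append((req["id"], pay))
        return payouts   # any leftover budget is burnt rather than refunded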
Withdrawal requests can either take a long time to be filled, due to the cap per burn, or get overfilled, as seen in the "request1" example; it's hard to predict. Overfilling is not a big deal, since we're not dealing with a finite source. The risk a user takes on by choosing the sidechain pegged coin is based on the rate at which they can expect to get paid, which is set by the value of altcoin emission and generally matches the Bitcoin burn rate. If the sidechain loses interest and nobody is burning enough bitcoin, the funds might be lost, so the scale of the risk has to be measured. If 0.5 BTC is burnt per day in total and you hope to deposit or withdraw 5000 BTC, the withdrawal might take a very long time or never happen. But for amounts comparable to or under the 0.5 BTC/day average burn, with 5 side-BTC outstanding on the sidechain in total, the risks are more reasonable.
Deposits onto the sidechain are far easier: burn Bitcoin to a separate, known unspendable deposit address for that sidechain, and the sidechain protocol issues a matching amount of side-Bitcoin. Withdrawal payouts are treated as burnt bitcoins for the sake of dividing block rewards, as long as they followed the deterministic rules for a valid burn and the percentage used for withdrawals is kept small, to avoid approaching free altcoin emission by paying for your own withdrawals and to ensure significant unforgeable losses.
Ideally more matching is used, so that large withdrawals don't completely block everyone else and small withdrawals don't completely block large withdrawals. Better methods would deterministically randomize assigned withdrawals via the previous Bitcoin block hash, prioritize by request time (earliest arrivals get paid first), and weigh the amount of peg outstanding against the burn amount (smaller burns should prioritize smaller outstanding balances). The fee market on Bitcoin discourages withdrawals in amounts that are too small and encourages batching by burners.
The second method is less reliable but already known: use over-collateralized loans to create an oracle-pegged token that can be pegged to the bitcoin value. This was already used by its inventors in 2014 on BitShares (e.g. bitCNY, bitUSD, bitBTC) and similarly by MakerDAO in 2018. The upside is that a trust-minimized distribution of CPoB coins can be used to distribute trust over the selection of price-feed oracles far better than the pre-mined, single-trusted-party distributions used in MakerDAO (100% pre-mined) and, to a lesser degree, on BitShares (~50% mined, ~50% pre-mined before DPoS). The downside is two-fold. First, the supply of the BTC-pegged coin would depend on people opening the equivalent of a leveraged long position on the altcoin/BTC pair, which is hard to convince people to do, as seen in the very poor liquidity of bitBTC in the past. Second, oracles can still collude to manipulate price feeds; while their influence might be limited via capped price changes per unit time, and collusion might compromise their continuous revenue stream from fees, the benefits of leverage might outweigh those losses. Using continuous proof of burn to peg withdrawals is the superior method, as it is simply a minor byproduct of "mining" for altcoins and doesn't depend on traders' positions. At the moment I'm not aware of any market-pegged coins on trust-minimized platforms or implemented in a trust-minimized way (e.g. pre-mined MKR on pre-mined ETH = 2 sets of trusted third parties, each with full control over the design).
_______________________________________

Brief issues with current altchains options:

  1. PoW: New PoW altcoins suffer a high risk of attacks. Additional PoW chains require high energy and capital costs to create permissionless entry and trust-minimized miners that are forever dependent on markets to hold them accountable. Using the same algorithm or equipment as another chain, or merge-mining, puts you at a disadvantage by allowing some miners to attack while still covering sunk costs on the other chain. Using a different algorithm/equipment requires building up the value of sunk costs to protect against attacks, with significant energy and capital costs. Drive-chains also require miners to opt in by being sidechain-aware, which imposes additional costs on them and on validating nodes if the sidechain rewards are of value and importance.
  2. PoS: PoS is permissioned (it requires permission from an internal party to use the network or contribute to consensus at a permitted scale), allows perpetual control without accountability to others, and incentivizes centralization of control over time. Without a continuous source of sunk costs there's no reason to give up control. With consensus entirely dependent on internal network state, PoS, unlike PoW but like private databases, cannot guarantee independent permissionless entry and thus cannot claim trust minimization. It has no built-in distribution method, so it depends on a safe start (a snapshot of a trust-minimized distribution, or a PoW period) followed by losing that property on the switch to PoS, or it starts off dependent on a single trusted party, as in all significant pre-mines and ICOs.
  3. Proof of Capacity: PoC just shifts costs further toward capital, relative to PoW, to achieve the same guarantees.
  4. PoW/PoS: Still requires creating an additional PoW chain. Strong dependence on the PoS side can render the PoW irrelevant, so the hybrid inherits the worst properties of both protocols.
  5. Tokens inherit all trust dependencies of parent blockchain and thus depend on the above.
  6. Embedded consensus (Counterparty, VeriBlock?, Omni): Lacks a mechanism for distribution, and requires all transaction data to live inside scarce Bitcoin block space, at high cost to users instead of compensated miners. If you want to build a very expressive scripting language, it may be very hard and expensive to fit into Bitcoin transactions, versus CPoBB's external content of unlimited size behind a committed hash. Like CPoBB it is Bitcoin-aware, so it can respond to Bitcoin being sent, but without a source of bitcoins, such as burning, there is no way to implement trust-minimized Bitcoin-pegs it fully controls.

Few extra notes from my talks with people:

Main questions to you:

open to working on this further with others
submitted by awasi868 to CryptoTechnology [link] [comments]


