Robinhood vs. The Paywall

Paywalls are, technologically speaking, quite fragile. In fact, as of today, if you are quick enough at the keyboard, you can easily copy the full text of a New York Times article before the Javascript kicks in and trims it.
I do this sometimes and I have a fast machine and a fast internet connection, which should make it harder. Other sites are more clever, but for the most part, paywalls are still a bit of a joke.
However, they're getting a lot better and more prevalent. I can imagine that right now an engineer at NYT is working on a better paywall with no practical way of cheating it.
All that aside, an article is just a piece of ordered text and some formatting, and I don't see that changing any time soon. Once you're past the paywall, the text just sits there in your browser, or in your email, or whatever. It can be viewed, copied, pasted, or read by a 3rd party extension.
What would it take, practically speaking, to "Robinhood" that text and make it freely available to everyone whether or not they've paid for it? There are numerous ways to access paywalled content today, which I won't share but aren't hard to find. But I'm interested in whether or not there is a solution that is so robust that it backs publishers into a corner where they need to find another way to make money. And when I say "robust" I mostly mean "legal", because I am assuming that any illegal method would ultimately lose out in a game of legal whack-a-mole (think torrent trackers or darknet markets).
Anyways, some initial considerations...
  1. You'd have to have at least one participant who has access to the paywalled content, but ideally many more than that who can all participate in tossing the content back over the paywall.
  2. You would need to have an immutable and accessible place to put the paywalled content so that other people could point their browsers to that location and see the same content that they would if they were looking at the source.
  3. As noted, you'd want to eliminate as much legal risk as possible. That goes for both the content "suppliers" and the content "consumers" (or, Robinhood and those he gives to).
I am not sure exactly what would happen if I just started copying and pasting paywalled content on, say, Reddit, but I am pretty sure it would catch up with me eventually because I am explicitly re-publishing. This solution would need to be so foolproof that it would put those who would otherwise enforce against it in an untenable position.
So, bear with me, here's what I want to know: how flawed, immoral, antisocial, and generally lacking is the following idea? My suspicion is that it is a pretty bad idea and is also pretty naive, but it's still been fun to think about and maybe some of you would like to discuss it. I am interested in any implications that come to mind.
~
The idea:
If you want to participate in this scheme, you install a browser extension. If you have access to any paywalled content, then every time you visit a page and view that content, the browser extension grabs the text and compresses it to its smallest possible representation.
Next, the browser extension makes the smallest possible arbitrary transaction on the blockchain (looks to be about $0.06 currently), and stores as much of the article as it can fit in the OP_RETURN field, which is basically just a blank field for arbitrary text and currently has a size limit of 256 bytes. (Note: there are tons of similar ways to accomplish the same thing, and many better blockchains for this use case. I just don't really keep up with the smaller blockchains and think that we can use the Bitcoin blockchain as a simple way to demonstrate the idea.)
It may take a few transactions to store an entire article, but once it's part of the blockchain, it's there forever, and anyone who would want to subsequently view that article would only need to have access to the indices of the transactions and software that can de-compress the OP_RETURN values and reconstruct the article. I imagine this would also happen in the browser extension.
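To make the compress-and-chunk step concrete, here's a minimal Python sketch of what the extension's storage side might look like. This is purely illustrative: it assumes zlib compression and the 256-byte OP_RETURN payload limit used in this post, the input filename is made up, and building and broadcasting the actual transactions is left out entirely.

    import zlib

    OP_RETURN_LIMIT = 256  # the per-output payload limit this post assumes

    def article_to_chunks(text, limit=OP_RETURN_LIMIT):
        # Compress the article and split it into OP_RETURN-sized pieces,
        # one piece per transaction.
        compressed = zlib.compress(text.encode("utf-8"), 9)
        return [compressed[i:i + limit] for i in range(0, len(compressed), limit)]

    def chunks_to_article(chunks):
        # Reassemble the pieces (in transaction order) and decompress.
        return zlib.decompress(b"".join(chunks)).decode("utf-8")

    article = open("article.txt").read()   # hypothetical input file
    chunks = article_to_chunks(article)
    print(len(chunks), "transactions needed")
    assert chunks_to_article(chunks) == article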
In this way, it's a lot like private torrent trackers. Everybody shares what they have access to, and the pieces of data that comprise the underlying media fly around the network freely. The software client is responsible for piecing them together and making the data cohesive for a given end user.
Today, a torrent client is completely legal, but having pirated media on your computer is not. I'm pretty sure that opening your media collection to peers is illegal too, but I'm not certain.
Using the blockchain as the storage mechanism changes the calculus a little bit. You're not storing any pirated data on your machine, rather, you are stashing bits and pieces of it in a decentralized ledger, which nobody owns, meaning that nobody is really accountable for it. It's also impossible to take down.
The question of legality here is something like "are you allowed to include copyrighted works in transaction text on the blockchain?". And if not, how many chunks would the article need to be broken apart into to make it no longer "The Article", but rather just pieces of arbitrary data which, if put together in the right order, would happen to reproduce "The Article"? Someone who is more knowledgeable than I am would need to chime in here.
~
I wanted to get a sense of whether this is even practical, so I grabbed the text from a NYT article called "Opinion | No, the Democrats Haven’t Gone Over the Edge" by David Brooks.
After running the text through 1000 rounds of compression I got it down to 2702 bytes. The current OP_RETURN size limit for a BTC transaction is 256 bytes, so you would need to make around 11 transactions to store this single article.
And each transaction has a fee that goes to miners, which appears to be around 128 satoshis/byte according to https://privacypros.io/tools/bitcoin-fee-estimato
The BTC sent in a given transaction is recoverable, because it could be sent to a wallet that is owned by the sender, but the fees are unavoidable. Given the current rate, storing a NYT Opinion article on the Bitcoin blockchain, forever, would cost about 2702 * 128 satoshis, or roughly $37.
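As a sanity check, here is the back-of-the-envelope arithmetic. The BTC price of roughly $10,700 is an assumption on my part (it is what makes the figure land near $37), and this ignores the non-OP_RETURN bytes of each transaction, which would add to the real fee.

    fee_rate = 128                 # satoshis per byte, from the estimator above
    payload = 2702                 # compressed article size in bytes, from above
    btc_usd = 10_700               # assumed BTC price

    fee_sats = payload * fee_rate               # 345,856 satoshis
    fee_usd = fee_sats / 100_000_000 * btc_usd  # about 37 USD
    print(fee_sats, round(fee_usd, 2))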
So my immediate thought is wow that's expensive. I also know that it's frowned upon by the Bitcoin community and would be perceived as antagonistic by the miners. But my guess is that there's a better way to accomplish the same thing (again, off-chain transactions or using a totally different blockchain such as Ethereum, or BSV).
In fact, in "The unfuckening of OP_RETURN", Shadders shows that one can practically store up to 100kb of text in a given BSV transaction (BSV is a fork of bitcoin, which aims to align more with Satoshi's "original" vision).
The result of Shadders' experiment? Well, here's the complete prequel to "Alice in Wonderland" in a single transaction, on the blockchain, forever: https://whatsonchain.com/tx/ef21e71d00b9fce174222e679640b09e29ac8a55f321c93e64b16cc3109959f8
Good thing Alice in Wonderland is in the public domain, right? Or... should it even matter what's "public" and what's "paywalled"?
What do you think?
submitted by mrctte to TheMotte [link] [comments]

Technical: Taproot: Why Activate?

This is a follow-up on https://old.reddit.com/Bitcoin/comments/hqzp14/technical_the_path_to_taproot_activation/
Taproot! Everybody wants it!! But... you might ask yourself: sure, everybody else wants it, but why would I, sovereign Bitcoin HODLer, want it? Surely I can be better than everybody else because I swapped XXX fiat for Bitcoin unlike all those nocoiners?
And it is important for you to know the reasons why you, o sovereign Bitcoiner, would want Taproot activated. After all, your nodes (or the nodes your wallets use, which, if you are using SPV, you hopefully can pester your wallet vendor/implementor about) need to be upgraded in order for Taproot activation to actually succeed instead of becoming a hot sticky mess.
First, let's consider some principles of Bitcoin.
I'm sure most of us here would agree that the above are very important principles of Bitcoin and that these are principles we would not be willing to remove. If anything, we would want those principles strengthened (especially the last one, financial privacy, which current Bitcoin is only sporadically strong with: you can get privacy, it just requires effort to do so).
So, how does Taproot affect those principles?

Taproot and Your Coins

Most HODLers probably HODL their coins in singlesig addresses. Sadly, switching to Taproot would do very little for you (it gives a mild discount at spend time, at the cost of a mild increase in fee at receive time (paid by whoever sends to you, so if it's a self-send from a P2PKH or bech32 address, you pay for this); mostly a wash).
(technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash, so the Taproot output spends 12 bytes more; spending from a P2WPKH requires revealing a 32-byte public key later, which is not needed with Taproot, and Taproot signatures are about 9 bytes smaller than P2WPKH signatures, but the 32 bytes plus 9 bytes is divided by 4 because of the witness discount, so it saves about 11 bytes; mostly a wash, it increases blockweight by about 1 virtual byte, 4 weight for each Taproot-output-input, compared to P2WPKH-output-input).
However, as your HODLings grow in value, you might start wondering if multisignature k-of-n setups might be better for the security of your savings. And it is in multisignature that Taproot starts to give benefits!
Taproot switches to using the Schnorr signing scheme. Schnorr makes key aggregation -- constructing a single public key from multiple public keys -- almost as trivial as adding numbers together. "Almost" because it involves some fairly advanced math instead of simple boring number adding, but hey, when was the last time you added up your grocery list prices by hand, huh?
With current P2SH and P2WSH multisignature schemes, if you have a 2-of-3 setup, then to spend, you need to provide two different signatures from two different public keys. With Taproot, you can create, using special moon math, a single public key that represents your 2-of-3 setup. Then you just put two of your devices together, have them communicate to each other (this can be done airgapped, in theory, by sending QR codes: the software to do this is not even being built yet, but that's because Taproot hasn't activated yet!), and they will make a single signature to authorize any spend from your 2-of-3 address. That's 73 witness bytes -- 18.25 virtual bytes -- of signatures you save!
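To see why aggregation really is "just adding numbers", here is a toy sketch. It uses small modular arithmetic in place of the real secp256k1 curve, all numbers are made up, and it skips the MuSig-style key and nonce tweaking a real deployment needs to prevent rogue-key attacks -- it only illustrates the linearity.

    import hashlib

    # Toy stand-ins for the curve: a multiplicative group mod p with generator g.
    p = 2**127 - 1
    g = 3

    def pub(x):                    # "public key" = g^x mod p (analogue of x*G)
        return pow(g, x, p)

    def challenge(R, X, msg):      # toy Fiat-Shamir challenge
        return int.from_bytes(hashlib.sha256(f"{R}:{X}:{msg}".encode()).digest(), "big")

    # Two signers
    x1, x2 = 123456789, 987654321
    X1, X2 = pub(x1), pub(x2)

    # Key aggregation: the joint key corresponds to the private key x1 + x2.
    X = (X1 * X2) % p

    # Jointly signing one message
    msg = "spend from our 2-of-2"
    k1, k2 = 31337, 42424242       # per-signer nonces (must be fresh and random in practice)
    R = (pub(k1) * pub(k2)) % p    # combined nonce commitment
    e = challenge(R, X, msg)
    s = (k1 + k2) + e * (x1 + x2)  # each signer contributes k_i + e * x_i; they simply add

    # The verifier sees ONE key X and ONE signature (R, s), the same size as a single-signer one.
    assert pow(g, s, p) == (R * pow(X, e, p)) % p
    print("aggregate signature verifies")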
And if you decide that your current setup with 1-of-1 P2PKH / P2WPKH addresses is just fine as-is: well, that's the whole point of a softfork: backwards-compatibility; you can receive from Taproot users just fine, and once your wallet is updated for Taproot-sending support, you can send to Taproot users just fine as well!
(P2WPKH and P2WSH -- SegWit v0 -- addresses start with bc1q; Taproot -- SegWit v1 --- addresses start with bc1p, in case you wanted to know the difference; in bech32 q is 0, p is 1)
Now how about HODLers who keep all, or some, of their coins on custodial services? Well, any custodial service worth its salt would be doing at least 2-of-3, or probably something even bigger, like 11-of-15. So your custodial service, if it switched to using Taproot internally, could save a lot more (imagine an 11-of-15 getting reduced from 11 signatures to just 1!), which --- we can only hope! --- should translate to lower fees and better customer service from your custodial service!
So I think we can say, very accurately, that the Bitcoin principle --- that YOU are in control of your money --- can only be helped by Taproot (if you are doing multisignature), and, because P2PKH and P2WPKH remain validly-usable addresses in a Taproot future, will not be harmed by Taproot. Its benefit to this principle might be small (it mostly only benefits multisignature users) but since it has no drawbacks with this (i.e. singlesig users can continue to use P2WPKH and P2PKH still) this is still a nice, tidy win!
(even singlesig users get a minor benefit, in that multisig users will now reduce their blockchain space footprint, so that fees can be kept low for everybody; so for example even if you have your single set of private keys engraved on titanium plates sealed in an airtight box stored in a safe buried in a desert protected by angry nomads riding giant sandworms because you're the frickin' Kwisatz Haderach, you still gain some benefit from Taproot)
And here's the important part: if P2PKH/P2WPKH is working perfectly fine with you and you decide to never use Taproot yourself, Taproot will not affect you detrimentally. First do no harm!

Taproot and Your Contracts

No one is an island, no one lives alone. Give and you shall receive. You know: by trading with other people, you can gain expertise in some obscure little necessity of the world (and greatly increase your productivity in that little field), and then trade the products of your expertise for necessities other people have created, all of you thereby gaining gains from trade.
So, contracts, which are basically enforceable agreements that facilitate trading with people who you do not personally know and therefore might not trust.
Let's start with a simple example. You want to buy some gewgaws from somebody. But you don't know them personally. The seller wants the money, you want their gewgaws, but because of the lack of trust (you don't know them!! what if they're scammers??) neither of you can benefit from gains from trade.
However, suppose both of you know of some entity that both of you trust. That entity can act as a trusted escrow. The entity provides you security: this enables the trade, allowing both of you to get gains from trade.
In Bitcoin-land, this can be implemented as a 2-of-3 multisignature. The three signatories in the multisignature would be you, the gewgaw seller, and the escrow. You put the payment for the gewgaws into this 2-of-3 multisignature address.
Now, suppose it turns out neither of you are scammers (whaaaat!). You receive the gewgaws just fine and you're willing to pay up for them. Then you and the gewgaw seller just sign a transaction --- you and the gewgaw seller are 2, sufficient to trigger the 2-of-3 --- that spends from the 2-of-3 address to a singlesig the gewgaw seller wants (or whatever address the gewgaw seller wants).
But suppose some problem arises. The seller gave you gawgews instead of gewgaws. Or you decided to keep the gewgaws but not sign the transaction to release the funds to the seller. In either case, the escrow is notified, and it can sign with you to refund the funds back to you (if the seller was a scammer) or sign with the seller to forward the funds to the seller (if you were a scammer).
Taproot helps with this: as mentioned above, it allows multisignature setups to produce only one signature, reducing blockchain space usage, and thus making contracts --- which by definition require multiple people; you don't make contracts with yourself --- cheaper (which we hope enables more of these setups to happen, for more gains from trade for everyone; also, moon and lambos).
(technology-wise, it's easier to make an n-of-n than a k-of-n: making a k-of-n would require a complex setup involving a long ritual with many communication rounds between the n participants, but an n-of-n can be done trivially with some moon math. You can, however, make what is effectively a 2-of-3 by using a three-branch SCRIPT: either 2-of-2 of you and seller, OR 2-of-2 of you and escrow, OR 2-of-2 of escrow and seller. Fortunately, Taproot adds a facility to embed a SCRIPT inside a public key, so you can have a 2-of-2 Taprooted address (between you and seller) with a SCRIPT branch that can instead be spent with 2-of-2 (you + escrow) OR 2-of-2 (seller + escrow), which implements the three-branched SCRIPT above. If neither of you are scammers (hopefully the common case) then you both sign using your keys and never have to contact the escrow, since you are just using the escrow public key without coordinating with them (because n-of-n is trivial but k-of-n requires setup with communication rounds), so in the "best case" where both of you are honest traders, you also get a privacy boost, in that the escrow never learns you have been trading in gewgaws, I mean ewww, gawgews are much better than gewgaws and therefore I now judge you for being a gewgaw enthusiast, you filthy gewgawer).
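A rough way to picture that construction, as a conceptual sketch only: real Taproot commits to the script branches with tagged hashes and a Merkle tree, which is glossed over here, and every name below is made up.

    from dataclasses import dataclass

    @dataclass
    class TaprootOutputSketch:
        internal_key: str        # the cooperative "key path", e.g. an aggregated 2-of-2 key
        script_branches: list    # fallback scripts hidden behind that key

    escrow_2of3 = TaprootOutputSketch(
        internal_key="musig(you, seller)",      # spent with one aggregate Schnorr signature
        script_branches=[
            "2-of-2 checksig(you, escrow)",     # revealed only if the escrow must step in
            "2-of-2 checksig(seller, escrow)",
        ],
    )
    print(escrow_2of3)

In the cooperative case only the internal key path ever appears onchain; the script branches stay hidden.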

Taproot and Your Contracts, Part 2: Cryptographic Boogaloo

Now suppose you want to buy some data instead of things. For example, maybe you have some closed-source software in trial mode installed, and want to pay the developer for the full version. You want to pay for an activation code.
This can be done, today, by using an HTLC. The developer tells you the hash of the activation code. You pay to an HTLC, paying out to the developer if it reveals the preimage (the activation code), or refunding the money back to you after a pre-agreed timeout. If the developer claims the funds, it has to reveal the preimage, which is the activation code, and you can now activate your software. If the developer does not claim the funds by the timeout, you get refunded.
And you can do that, with HTLCs, today.
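For readers who want the two spending branches spelled out, here is a minimal Python model of the HTLC logic described above. It is illustrative only -- onchain this would be a Bitcoin Script using OP_SHA256 and OP_CHECKLOCKTIMEVERIFY, and every name and value below is made up.

    import hashlib, time

    def can_spend_htlc(path, preimage=None, now=0, payment_hash=b"", timeout=0,
                       developer_signed=False, buyer_signed=False):
        # Branch 1: the developer claims the funds by revealing the preimage and signing.
        if path == "claim":
            return developer_signed and hashlib.sha256(preimage).digest() == payment_hash
        # Branch 2: the buyer takes the money back after the agreed timeout.
        if path == "refund":
            return buyer_signed and now >= timeout
        return False

    activation_code = b"XYZZY-1234"                  # hypothetical preimage
    H = hashlib.sha256(activation_code).digest()     # the hash the developer tells you up front

    # Claiming necessarily reveals the activation code to the buyer.
    assert can_spend_htlc("claim", preimage=activation_code,
                          payment_hash=H, developer_signed=True)
    # The refund path stays closed until the timeout passes.
    assert not can_spend_htlc("refund", now=time.time(),
                              timeout=time.time() + 3600, buyer_signed=True)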
Of course, HTLCs do have problems.
Fortunately, with Schnorr (which is enabled by Taproot), we can now use the Scriptless Script construction by Andrew Poelstra. This Scriptless Script allows a new construction, the PTLC or Pointlocked Timelocked Contract. Instead of hashes and preimages, just replace "hash" with "point" and "preimage" with "scalar".
Or as you might know them: "point" is really "public key" and "scalar" is really a "private key". What a PTLC does is that, given a particular public key, the pointlocked branch can be spent only if the spender reveals the private key of the given public key to you.
Another nice thing with PTLCs is that they are deniable. What appears onchain is just a single 2-of-2 signature between you and the developer/manufacturer. It's like a magic trick. This signature has no special watermarks; it's a perfectly normal signature (the pledge). However, from this signature, plus some data given to you by the developer/manufacturer (known as the adaptor signature), you can derive the private key of a particular public key you both agree on (the turn). Anyone scraping the blockchain will just see signatures that look just like every other signature, and as long as nobody manages to hack you and get a copy of the adaptor signature or the private key, they cannot get the private key behind the public key (point) that the pointlocked branch needs (the prestige).
(Just to be clear, the public key you are getting the private key from is distinct from the public key that the developer/manufacturer will use for its funds. The activation key is different from the developer's onchain Bitcoin key, and it is the activation key whose private key you will be learning, not the developer's/manufacturer's onchain Bitcoin key.)
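The pledge/turn/prestige mechanics boil down to the linearity of Schnorr signatures. Here is a toy sketch with plain integers -- not real secp256k1 arithmetic, and a deliberate simplification of how adaptor signatures actually tweak the nonce; the numbers are made up.

    # Toy integers only; real adaptor signatures live on secp256k1 and also tweak the nonce.
    t = 55555                  # the activation secret being sold (the "scalar")
    s_adaptor = 9876543210     # adaptor signature the developer hands you off-chain (the pledge)
    s_final = s_adaptor + t    # the completed signature the developer must publish to get paid

    # Holding both values, the buyer recovers the secret (the prestige);
    # anyone holding only s_final sees what looks like an ordinary signature.
    assert s_final - s_adaptor == t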
So:
Taproot lets PTLCs exist onchain because it enables Schnorr, which is a requirement of PTLCs / Scriptless Script.
(technology-wise, take note that Scriptless Script works only for the "pointlocked" branch of the contract; you need normal Script, or a pre-signed nLockTimed transaction, for the "timelocked" branch. Since Taproot can embed a script, you can have the Taproot pubkey be a 2-of-2 to implement the Scriptless Script "pointlocked" branch, then have a hidden script that lets you recover the funds with an OP_CHECKLOCKTIMEVERIFY after the timeout if the seller does not claim the funds.)

Quantum Quibbles!

Now if you were really paying attention, you might have noticed this parenthetical:
(technical details: a Taproot output is 1 version byte + 32 byte public key, while a P2WPKH (bech32 singlesig) output is 1 version byte + 20 byte public key hash...)
So wait, Taproot uses raw 32-byte public keys, and not public key hashes? Isn't that more quantum-vulnerable??
Well, in theory yes. In practice, they probably are not.
It's not that hashes can be broken by quantum computers --- they still can't be. Instead, you have to look at how you spend from a P2WPKH/P2PKH pay-to-public-key-hash.
When you spend from a P2PKH / P2WPKH, you have to reveal the public key. Then Bitcoin hashes it and checks if this matches with the public-key-hash, and only then actually validates the signature for that public key.
So an unconfirmed transaction, floating in the mempools of nodes globally, will show, in plain sight for everyone to see, your public key.
(public keys should be public, that's why they're called public keys, LOL)
And if quantum computers are fast enough to be of concern, then they are probably fast enough that, in the several minutes to several hours from broadcast to confirmation, they have already cracked the public key that is openly broadcast with your transaction. The owner of the quantum computer can now replace your unconfirmed transaction with one that pays the funds to itself. Even if you did not opt in to RBF, miners are still incentivized to support RBF on RBF-disabled transactions.
So the extra hash is not as significant a protection against quantum computers as you might think. Instead, the extra hash-and-compare needed is just extra validation effort.
Further, if you have ever, in the past, spent from the address, then there exists already a transaction indelibly stored on the blockchain, openly displaying the public key from which quantum computers can derive the private key. So those are still vulnerable to quantum computers.
For the most part, the cryptographers behind Taproot (and Bitcoin Core) are of the opinion that quantum computers capable of cracking Bitcoin pubkeys are unlikely to appear within a decade or two.
So:
For now, the homomorphic and linear properties of elliptic curve cryptography provide a lot of benefits --- particularly the linearity property is what enables Scriptless Script and simple multisignature (i.e. multisignatures that are just 1 signature onchain). So it might be a good idea to take advantage of them now while we are still fairly safe against quantum computers. It seems likely that quantum-safe signature schemes are nonlinear (thus losing these advantages).

Summary

I Wanna Be The Taprooter!

So, do you want to help activate Taproot? Here's what you, mister sovereign Bitcoin HODLer, can do!

But I Hate Taproot!!

That's fine!

Discussions About Taproot Activation

submitted by almkglor to Bitcoin [link] [comments]

The BCH blockchain is 165GB! How good can we compress it? I had a closer look

Someone posted their results for compressing the blockchain in the Telegram group.
Note: Bitcoin is, by its nature, poorly compressible, as it contains a lot of incompressible data, such as public keys, addresses, and signatures. However, there's also a lot of redundant information in there, e.g. the transaction version, and it's usually the same opcodes, locktime, sequence number etc. over and over again.
I was curious and thought: how much could we actually compress the blockchain? This is actually very relevant: as I established in my previous post about the costs of a 1GB full node, the storage and bandwidth costs seem to be one of the biggest bottlenecks, while CPU computation costs are actually the cheapest part, as we're almost able to get away with ten-year-old CPUs.
Let's have a quick look at the transaction format and see what we can do. I'll have a TL;DR at the end if you don't care about how I came up with those numbers.
Before we jump in, don't forget that I'll be streaming again today, building an SPV node, as I've already posted about here. Last time we made some big progress, I think! Check it out here: https://dlive.tv/TobiOnTheRoad. It'll start at around 15:00 UTC!

Version (32 bits)

There are currently two transaction versions. Unless we add new ones, we can compress this field to 1 bit (0 = version 1, 1 = version 2).

Input/output count (8 to 72 bits)

This is the number of inputs the transaction has (see section 9 of the whitepaper). If the number of inputs is below 253, it will take 1 byte, and otherwise 3 to 9 bytes. This nice chart shows that, currently, 90% of Bitcoin transactions only have 2 inputs, sometimes 3.
A byte can represent 256 different numbers. Having this as the lowest granularity for input count seems quite wasteful! Also, 0 inputs is never allowed in Bitcoin Cash. If we represent one input with 00₂, two inputs with 01₂, three inputs with 10₂, and everything else with 11₂ followed by the current format, we get away with only 2 bits, more than 90% of the time.
Output counts are slightly higher, 3 or fewer 90% of the time, but the same encoding works fine.
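A quick sketch of that 2-bit encoding in Python (illustrative only; a real serializer would pack bits rather than build strings, and would fall back to the full variable-length integer format after the escape code instead of the single byte shown here):

    def encode_count(n):
        # Counts 1-3 get a dedicated 2-bit code; everything else gets the escape
        # code 11 followed by the existing variable-length integer
        # (1 byte shown here for simplicity).
        if n in (1, 2, 3):
            return format(n - 1, "02b")    # 1 -> "00", 2 -> "01", 3 -> "10"
        return "11" + format(n, "08b")

    assert encode_count(2) == "01"
    assert encode_count(7) == "11" + "00000111"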

Input (>320 bits)

There can be multiple of those. It has the following format:

Output (≥72 bits)

There can be multiple of those. They have the following format:

Lock time (32 bits)

This is almost always the same default value; only occasionally will transactions be time-locked, and the lock time only changes the meaning of a transaction if the sequence number of an input is not FF FF FF FF. We can do the same trick as with the sequence number, such that most of the time this will be just 1 bit.

Total

So, in summary, we have:
Nice table:
No. of inputs | No. of outputs | Uncompressed size     | Compressed size       | Ratio
1             | 1              | 191 bytes (1528 bits) | 128 bytes (1023 bits) | 67.0%
1             | 2              | 226 bytes (1808 bits) | 151 bytes (1202 bits) | 66.5%
2             | 1              | 339 bytes (2712 bits) | 233 bytes (1861 bits) | 68.6%
2             | 2              | 374 bytes (2992 bits) | 255 bytes (2040 bits) | 68.2%
2             | 3              | 408 bytes (3264 bits) | 278 bytes (2219 bits) | 68.0%
3             | 2              | 520 bytes (4160 bits) | 360 bytes (2878 bits) | 69.2%
3             | 3              | 553 bytes (4424 bits) | 383 bytes (3057 bits) | 69.1%
Interestingly, if we take a compression ratio of 69% and compress the 165 GB blockchain, we'd get 113.8 GB. Which is (almost) exactly the amount which 7zip was able to give us with ultra compression!
I think there's not a lot we can do to compress the transaction further; even if we only transmit public keys, signatures and addresses, we'd at minimum have 930 bits, which would still only be a 61% compression ratio (and that's missing the outpoint and value). 7zip is probably also able to take advantage of the re-use of addresses/public keys if someone sends to/from the same address multiple times, but it's generally discouraged to send to the same address multiple times anyway, so I didn't explore that. We'd still have signatures clocking in at 512 bits.
Note that the compression scheme I outlined here operates on a per transaction or per block basis (if we compress transacted satoshis per block), unlike 7zip, which compresses per blockchain.
I hope this was an interesting read. I expected the compression ratio to be higher, but still, if it takes 3 weeks to sync uncompressed, it'll take just 2 weeks compressed. Which can mean a lot for a business, actually.

I'll be streaming again today!

As I've already posted about here, I will stream about building an SPV node in Python again. It'll start at 15:00 UTC. Last time we made some big progress, I think! We were able to connect to my Bitcoin ABC node and send/receive our first version message. I'll do a nice recap of what we've done so far, as there weren't many people present last time. And then we'll receive our first headers and then transactions! Check it out here: https://dlive.tv/TobiOnTheRoad.
submitted by eyeofpython to btc [link] [comments]

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor 03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but something I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah, so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing, particularly not so much at the unit test level but at the more system test level - so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because, you know, it was quite a rush to release the first version, so we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we so we recently shared the details of Testnet One and Two with the with the other BCH developer groups. The Gigablock test network we've shared up with one group so far but yeah we're building it as Steve pointed out to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there'll only be, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about - a lot of the objection to either removing the block size [limit] entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” and is it something that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is increased when the software can handle it and you don’t run into the limit when the software improves and can handle it. Obviously we’re from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128 you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack, and let’s not call it the infinite block attack, let's just call it the large block attack - it takes a lot of time to validate, but we've gotten around that by having parallel pipelines for blocks to come in, so you've got a block that's coming in, it's got an unknown stuck on it for two hours or whatever, downloading and validating it. At some point another block is going to get mined by someone else and as long as those two blocks aren't stuck in a serial pipeline then you know the problem kind of goes away.
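(For illustration, the size-based mitigation Steve describes -- refuse messages that claim to be bigger than the block size limit, and drop a peer that sends more bytes than its header declared -- might look roughly like the following sketch. It is not actual Bitcoin SV code; names and buffer sizes are made up.)

    MAX_BLOCK_SIZE = 128 * 1000 * 1000   # the 128 MB default discussed here

    def receive_message(sock, declared_size):
        # Refuse messages that claim to be larger than anything we would accept.
        if declared_size > MAX_BLOCK_SIZE:
            raise ConnectionError("peer declared a message larger than the block size limit")
        received = bytearray()
        while len(received) < declared_size:
            chunk = sock.recv(65536)
            if not chunk:
                raise ConnectionError("peer hung up mid-message")
            received.extend(chunk)
            # Drop the peer as soon as it sends more than its own header promised.
            if len(received) > declared_size:
                raise ConnectionError("peer sent more bytes than its header declared")
        return bytes(received)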
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters - send it over multiple links and reassemble it on the other side - so we can sort of transit the Great Firewall without too much trouble. But getting back to the core of your question - yes, there is a theoretical limit to block size from propagation time, and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue though with the speed of the internet that we have nowadays.
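A minimal sketch of the "TX vacuum" bridge idea Steve mentions, assuming a hypothetical incoming queue and send_batch callable: transactions are collected on one side of a high-latency link and flushed as a single chunk every second or two, trading a little latency for far fewer round trips. This illustrates the concept only, not the actual tool being built.

    import queue, time

    def tx_vacuum(incoming, send_batch, flush_interval=1.5):
        # Collect transactions and ship them across the link as one chunk per interval.
        while True:
            batch = []
            deadline = time.monotonic() + flush_interval
            while time.monotonic() < deadline:
                try:
                    batch.append(incoming.get(timeout=max(0.0, deadline - time.monotonic())))
                except queue.Empty:
                    break
            if batch:
                send_batch(b"".join(batch))   # one big write instead of many small, chatty ones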
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it’s going from 201 to 500 [opcodes]. A few of the questions we got were: #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and #2, specifically, how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's an interesting decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism levelled against us was that it's dangerous to increase that limit to unlimited. We did that because we're being conservative. We did some research into these n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we want to do - and this is something we've got an engineer actively working on right now - is once that script validation code path is properly parallelized (parts of it already are), we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - miners could always put a time limit on script execution if they wanted to, and that would be something up to individual miners. Bitcoin SV's job, I think, is to provide the tools for the miners and the miners can then choose how to make use of them - if they want to set time limits on script execution then that's a choice for them.
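A toy sketch of the "lanes of the highway" idea, with placeholder is_standard_template and validate stubs standing in for the real checks: well-known transaction templates get their own worker threads and queue, so a hostile non-standard script can only clog the non-standard lane. This illustrates the scheduling idea, not the Bitcoin SV code.

    import queue, threading

    def is_standard_template(tx) -> bool:
        return tx.get("template") == "p2pkh"      # placeholder for a real template check

    def validate(tx):
        pass                                      # placeholder for real script validation

    standard_lane, nonstandard_lane = queue.Queue(), queue.Queue()

    def worker(lane):
        while True:
            validate(lane.get())                  # a nasty script only ever stalls its own lane
            lane.task_done()

    def submit(tx):
        # Route by template so slow scripts never sit in front of ordinary payments.
        (standard_lane if is_standard_template(tx) else nonstandard_lane).put(tx)

    # e.g. three threads reserved for well-known templates, one for arbitrary scripts
    for lane, count in ((standard_lane, 3), (nonstandard_lane, 1)):
        for _ in range(count):
            threading.Thread(target=worker, args=(lane,), daemon=True).start()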
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that when a node receives a transaction through the peer-to-peer network, it doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can't do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities that could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, one of the most significant things is that, other than two which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why we're putting OP_MUL back in if we're planning on changing them to big number operations instead of the 32-bit limit that's currently imposed on them. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add, divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way that it was meant to be.
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. Daniel - I think I saw you answer this on Reddit a little while ago - the new opcodes use logical shifts while Satoshi’s version used arithmetic shifts. The general question that a lot of people keep bringing up, maybe in a rhetorical way, is why not restore it back to exactly the way Satoshi had it - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to what many other systems use - what I'd consider normal, big-endian. And if you start shifting that as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement a bitwise shift with arithmetic. So we chose to make them bitwise operators - that's what we proposed.
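A small sketch of the relationship Daniel describes, written in Python rather than script: a numeric (arithmetic) left shift of a little-endian byte string can be built out of byte-level bitwise shifts with carry propagation, which is the direction he says works. This only illustrates the byte-ordering point and is not Daniel's actual script.

    def arith_shift_left_le(data: bytes, n: int = 1) -> bytes:
        # Numeric left shift of a little-endian encoded integer, done with
        # byte-level bitwise shifts; byte 0 is the least significant byte.
        out, carry = bytearray(len(data)), 0
        for i, b in enumerate(data):
            shifted = (b << n) | carry
            out[i] = shifted & 0xFF
            carry = shifted >> 8      # spill-over bits move to the next (more significant) byte
        return bytes(out)             # overflow past the last byte is dropped, as in a fixed-width value

    value = (42).to_bytes(2, "little")
    assert int.from_bytes(arith_shift_left_le(value), "little") == 84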
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was made in May - or rather a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision that we made unilaterally; it was a decision that was made collectively with all of the BCH developers - well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it. So we scrapped those and took them back out, because we wanted to keep the number of operators to a minimum.
Steve: 0:32:17.59,0:33:47.20
There was an appetite for keeping the operators minimal. The idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) to treat the data as big-endian data streams (well, sorry, big-endian isn't really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with. It's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones it gets really complicated with the number implementations, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is that you don't know what scripts are out there, because of P2SH - there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky, so another option might be - yeah, I don't know what the options are; it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - we'd actually really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Along a similar vein, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts,” as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that. So what are your thoughts about non-standard scripts and the entirety of an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words well-known script template, there's already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
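For readers following the IsStandard discussion, here is a minimal sketch of what a well-known-template check looks like for one template - the ordinary pay-to-public-key-hash locking script. The byte layout is the standard P2PKH pattern; the function name and framing are just illustrative.

    # P2PKH locking script: OP_DUP OP_HASH160 <20-byte pubkey hash> OP_EQUALVERIFY OP_CHECKSIG
    OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xA9, 0x88, 0xAC

    def is_p2pkh(script: bytes) -> bool:
        # A "well-known template" check: exact length and fixed opcode positions.
        return (len(script) == 25
                and script[0] == OP_DUP
                and script[1] == OP_HASH160
                and script[2] == 0x14              # push of 20 bytes
                and script[23] == OP_EQUALVERIFY
                and script[24] == OP_CHECKSIG)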
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe with a six-month delay it might not cause a split. So first off, what do you think about the idea of a potential split, and what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in they're in. They can't be reversed - I mean CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction, or even a non-P2SH transaction, into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, it's a bit like Segwit. Once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not going to make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with no changes, right, but the November upgrade is there, so we should use it while we can and add these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been floated around, because of Ryan Charles, is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is less. Do you share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this? There are different ways to look at it, right. I'm an engineer - my specialization is software - so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones who say it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And DSV - that was my main consideration at the beginning: you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code from a microprocessor - why would you do that if you can implement it already in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) level the philosophy is let's just add a new opcode for every primitive function. Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument of why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?), and not bother executing a script at all. But once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction - so a gigabyte script - then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
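To make the OP_SUBSTR example concrete, here is a toy stack-machine sketch showing how a substring can be carved out with only split, swap, and drop primitives, mirroring the combination Steve describes. The tiny interpreter written as plain Python is just for illustration and is not Bitcoin script.

    def substr_via_primitives(data: bytes, start: int, length: int) -> bytes:
        stack = [data]
        # OP_SPLIT: cut the string at 'start' -> [prefix, rest], rest on top
        stack[-1:] = [stack[-1][:start], stack[-1][start:]]
        # OP_SWAP then OP_DROP: discard the unwanted prefix
        stack[-2], stack[-1] = stack[-1], stack[-2]
        stack.pop()
        # OP_SPLIT again: cut the remainder at 'length' -> [wanted, tail]
        stack[-1:] = [stack[-1][:length], stack[-1][length:]]
        # OP_DROP: discard the tail, leaving just the substring
        stack.pop()
        return stack[-1]

    assert substr_via_primitives(b"datasigverify", 4, 3) == b"sig"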
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. It didn't have to be – instead of having a transaction with script you could have had accounts and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things that I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a couple of people in our research team who have been working on the Rabin signature stuff this morning, actually, and I wasn't sure where they were up to with it, but they're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script. It will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm (as DSV). So I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was going to turn that into: with the plan that Bitcoin SV is on, do you guys see a potential final release, where there are going to be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that), or are you more of the idea of being open-ended, where new opcodes can be introduced under appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion the cryptographic primitive functions - for example, CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, with the full scripting language, some solution is implemented and we discover that it's really useful, and over a period measured in years not days we find a lot of transactions are using this feature - then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful - that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner’s point of view, the question is whether it makes sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? So ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff who question and challenge all of the changes - study them and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc

What is ProgPoW? Why Ethereum needs it moving forward.

Update: ASIC manufacturer says they can make a ProgPoW ASIC

Disclosure: I'm an avid GPU miner with some 90 Nvidia GPUs running out of my garage. I've been in and out of the mining scene since 2011, 2014, and recently 2017. I hold BTC, ETH, RVN. I directly benefit from Ethereum moving to ProgPoW, but not without good reason. Every time I've gotten into home GPU mining, ASICs have come out (BTC, LTC) and I've had to give up. I refuse to see it happen to another excellent coin.

I've been a proponent of Ethereum following its ASIC-resistance stance outlined in the original white paper. Now that ProgPoW has been given the "green light" by Hudson Jameson to move forward, I really think it's time to discuss the algorithm: what it is, who created it, why Ethereum needs it - and to dismiss crazy theories such as Nvidia funding its development.

Before we start, I highly suggest everyone watch BitsBeTrippin's video where she breaks down ProgPoW at Devcon4.

A Quick breakdown of What is ProgPOW?
ProgPoW is a proof-of-work algorithm designed to close the efficiency gap available to specialized ASICs. It utilizes almost all parts of commodity hardware (GPUs), and comes pre-tuned for the most common hardware utilized in the Ethereum network.

From reading the white paper listed on GitHub, the main idea behind ProgPoW is NOT to achieve total ASIC resistance. The idea is to kill the 50-1000x efficiency gains from specialized ASIC hardware, such as what we saw recently with Equihash 200/9 coins, where 50x was achieved over GPUs. The ProgPoW algorithm uses most of the GPU minus a few parts. It takes the original Ethash algorithm and adds more features.
The main elements of the algorithm are:
ProgPoW will inherit Ethash's current DAG size, meaning 2GB and 3GB cards will still not be able to mine. Additionally, no advantage is given to either Nvidia or AMD GPUs:
ProgPoW has been designed to be a vendor-neutral proof-of-work, or more specifically, proof-of-GPU. ProgPoW has intentionally avoided using features that only one core architecture has, such as LOP3 on NVIDIA, or indexed register files on AMD.
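To give a feel for the "programmatic" part of the design, here is a heavily simplified, purely illustrative sketch - not the ProgPoW spec, and with made-up operation choices, period, and constants - of deriving a pseudo-random sequence of ALU operations from the block period, so that fixed-function hardware cannot pre-bake one circuit for the whole chain.

    import random

    OPS = [
        ("mul",  lambda a, b: (a * b) & 0xFFFFFFFF),
        ("xor",  lambda a, b: a ^ b),
        ("add",  lambda a, b: (a + b) & 0xFFFFFFFF),
        ("rotl", lambda a, b: ((a << (b % 32)) | (a >> (32 - b % 32))) & 0xFFFFFFFF),
    ]

    def random_program(block_number: int, period: int = 50, length: int = 16):
        # The op sequence only changes every 'period' blocks, so GPU miners can tune for it,
        # but an ASIC cannot hard-wire a single fixed datapath for all time.
        rng = random.Random(block_number // period)
        return [rng.choice(OPS) for _ in range(length)]

    def mix(block_number: int, a: int, b: int) -> int:
        for _name, op in random_program(block_number):
            a = op(a, b)
        return a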

According to Kristy, she has had direct contact with AMD and Nvidia on testing ProgPOW.
As part of its review process, ProgPoW was submitted to (and reviewed by) both AMD and NVIDIA engineers. The group known as IfDefElse — of which I am a part of — has been actively working with both companies to ensure this effectively closes the efficiency gap that we speak publicly of in our papers and articles
This does not mean one side is favored over the other. She's giving and getting input from the major GPU manufacturers in order to support crypto mining. Additionally she says, "AMD is actively working with us to optimize ProgPoW for their architectures." Using ProgPoW optimized for GPUs rids us of bowing to Bitmain, Innosilicon, Halong and their scandalous ways of supplying hardware.

ProgPoW IS NOT the "God-sent savior of all GPUs." Even Kristy understands that complete ASIC resistance is a fallacy; it will never be achieved. However, by working with GPU manufacturers and crypto devs we can make a coin where GPUs run alongside ASICs, but the efficiency gains are diluted - meaning the time and money invested into a ProgPoW ASIC machine does not make economic sense. Rather, just buy the actual GPU.

Quote sources from Kristy's Medium article.

Why does Ethereum need ProgPOW?

I suggest reading Siacoin's good medium article on the subject of ASICs.
It's too much to cover here, but in short, here's why we need ProgPoW against current and future ASICs.
At this point in time we actually don't need ProgPoW. However, we do need it as time goes on. Early Bitcoin ASICs didn't dominate BTC, but as time went on they became better and more efficient than GPUs and started dominating BTC's network. The same fate happens to any "ASIC-resistant coin" that decides it's not a big deal (looking at you, ZEN). Without a set date on PoS, Ethereum would have suffered the same fate. As a Siacoin dev states:
We also had loose designs for ethash (Ethereum’s algorithm). Admittedly, ethash was not as easily amenable to ASICs as equihash, but as we’ve seen from products on the market today, you can still do well enough to obsolete GPUs.
What makes ASICs bad? Isn't a better hash/watt ratio good? It saves tons of electricity - one of PoW's biggest faults. I think there is nothing bad about the ASIC hardware itself. Equihash ASICs achieved the hashrate of twenty 1080 Tis at 1/20 of the power. That's impressive. The problem with ASIC hardware is who it comes from, where it comes from, and their shady business practices.

  1. "It’s estimated that Monero’s secret ASICs made up more than 50% of the hashrate for almost a full year before discovery, and during that time, nobody noticed." How much of ETH hashrate could be ASICs? We won't know till the fork.
  2. I've heard a lot that ASICs aren't all that big of a deal - focus on PoS. Take into account Siacoin's own network hashrate, which allowed Bitmain/Innosilicon ASICs on the network till they forked in favor of their own ASICs after just a year (Siacoin's network hashrate dropped 96%).
  3. "In the case of Halong’s Decred miner, we saw them “sell out” of an unknown batch size of $10,000 miners. After that, it was observed that more than 50% of the mining rewards were collecting into a single address that was known to be associated with Halong, meaning that they did keep the majority of the hashrate and profits to themselves." GPU manufactures would not and cannot be do the same.
ASICs destroy networks and centralize the pools and hardware, leading them to be controlled by large entities - in this case Chinese companies. Anyone who thinks otherwise is a fool. Of course this doesn't happen overnight, hence my original statement that we don't need ProgPoW now. In a year's time that may totally change, and by then it will be far too late.

GPUs allow anyone to support the network. Think of the crypto run-up: Fry's Electronics, Microcenter, and online e-tailers were SOLD OUT of GPUs. Think of that! People were buying GPUs to support the network for token rewards (worth money). How many new miners and people got interested in crypto because of this? How about friends who saw the rigs, and word of mouth spreading that you could go out, buy a graphics card, build a rig, and earn money? Obviously we know the effects, because it wasn't remotely sustainable. However, it's a testament that GPU-mineable coins make mining accessible to everyone.

For Ethereum to successfully go PoS it cannot hand its network over to ASIC mining companies in the meantime. PoS is on an unknown release date/timeframe. I understand Vitalik does not like PoW, however that's what is currently securing the network. Because of this, Ethereum must maintain as much decentralization as possible with GPU mining. This is what ProgPoW does. It gives AMD and Nvidia GPUs the advantage they need over ASICs created by Bitmain or others. It allows me to continue to secure the Ethereum network with my 90 GPUs until the full PoS switch.

Conclusion
Did it have to be ProgPoW? No - as UBIQ has shown, a project can create its own unique ASIC-resistant algorithm. ProgPoW was given to us by the IfDefElse team already completed. This required no work from the ETH devs at all. It's open source and has been reviewed by the Ethereum dev team. If they haven't found any issues with it yet, I don't see why we cannot implement it.

An argument can be made that if we do switch we risk security, because we'll lose network hashrate and decrease the cost to attack the network. I have two things to say to that. One, since ProgPoW is new, Nicehash has not added it to its network for rent yet. I don't know how long Nicehash would take to add it, but it gives us a short while to get people onto the new ETH PoW network. Two, to attack the network they would need massive coordination from GPU mining farms. Such a thing has never been recorded.

The 51% attacks that have happened recently (BCD/BTG/ZEN), and as of 1/8/18 ETC, were all on ASIC-mineable coins. In the case of the Equihash coins, an ASIC that achieved 50x more efficiency had just come to market. It's not proven, but it leads me to believe a bad actor with early access to ASICs was able to attack those coins. All except ZEN have switched to the Zhash algorithm. Even ZCASH/Zelcash has funded ProgPoW development. While I disagree that they should do this - because that's entirely the problem, too many coins using the same algorithm - in the end it's up to the devs.

TL;DR: ASIC resistance is futile and a fallacy. PoS or other solutions are needed, but to get there we need to keep PoW as decentralized as possible - this is what ProgPoW does.


submitted by Xazax310 to EtherMining


Transcript of Open Developer Meeting in Discord - 7/19/2019

[Dev-Happy] BlondfrogsLast Friday at 3:58 PM
Hey everyone. The channel is now open for the dev meeting.
LSJI07 - MBITLast Friday at 3:58 PM
Hi
TronLast Friday at 3:59 PM
Hi all!
JerozLast Friday at 3:59 PM
:wave:
TronLast Friday at 3:59 PM
Topics: Algo stuff - x22rc, Ownership token for Restricted Assets and Assets.
JerozLast Friday at 4:00 PM
@Milo is also here from coinrequest.
MiloLast Friday at 4:00 PM
Hi :thumbsup:
Pho3nix Monk3yLast Friday at 4:00 PM
welcome, @Milo
TronLast Friday at 4:00 PM
Great.
@Milo Was there PRs for Android and iOS?
MiloLast Friday at 4:01 PM
Yes, I've made a video. Give me a second I'll share it asap.
JerozLast Friday at 4:02 PM
I missed the iOS one.
MiloLast Friday at 4:02 PM
Well its 1 video, but meant for all.
JerozLast Friday at 4:02 PM
Ah, there's an issue but no pull request (yet?)
https://github.com/RavenProject/ravenwallet-ios/issues/115
[Dev-Happy] BlondfrogsLast Friday at 4:03 PM
nice @Milo
MiloLast Friday at 4:04 PM
Can it be that I have no video post rights?
JerozLast Friday at 4:05 PM
In discord?
MiloLast Friday at 4:05 PM
yes?
[Dev-Happy] BlondfrogsLast Friday at 4:05 PM
just a link?
JerozLast Friday at 4:05 PM
Standard version has a file limit afaik
Pho3nix Monk3yLast Friday at 4:05 PM
try now
gave permissions
MiloLast Friday at 4:05 PM
it's not published yet on Youtube, since I didn't knew when it would be published in the wallets
file too big. Hold on i'll put it on youtube and set it on private
LSJI07 - MBITLast Friday at 4:06 PM
no worries ipfs it...:yum:
Pho3nix Monk3yLast Friday at 4:06 PM
ok, just send link when you can
[Dev-Happy] BlondfrogsLast Friday at 4:07 PM
So guys. We released Ravencoin v2.4.0!
JerozLast Friday at 4:08 PM
If you like the code. Go update them nodes! :smiley:
[Dev-Happy] BlondfrogsLast Friday at 4:08 PM
We are recommending that you are upgrading to it. It fixes a couple bugs in the code base inherited from bitcoin!
MiloLast Friday at 4:08 PM
https://www.youtube.com/watch?v=t_g7NpFXm6g&feature=youtu.be
sorry for the hold up
LSJI07 - MBITLast Friday at 4:09 PM
thanks short and sweet!!
KAwARLast Friday at 4:10 PM
Is coin request live on the android wallet?
TronLast Friday at 4:10 PM
Nice video.
It isn't in the Play Store yet.
Pho3nix Monk3yLast Friday at 4:10 PM
Well, this is the first time in a while where we have this many devs online. What questions do y'all have?
LSJI07 - MBITLast Friday at 4:11 PM
Algo questions?
Pho3nix Monk3yLast Friday at 4:11 PM
sure
KAwARLast Friday at 4:11 PM
KK
LSJI07 - MBITLast Friday at 4:12 PM
what are the proposed 22 algos in x22r? i could only find the original 16 plus 5 on x21.
TronLast Friday at 4:12 PM
Likely the 5 from x21 and find one more.
We need to make sure they're all similar in time profile.
liqdmetalLast Friday at 4:14 PM
should we bother fixing a asic-problem that we dont know exists for sure or not?
TronLast Friday at 4:14 PM
That's the 170 million dollar question.
[Dev-Happy] BlondfrogsLast Friday at 4:14 PM
I would prefer to be proactive not reactive.
imo
JerozLast Friday at 4:14 PM
same
LSJI07 - MBITLast Friday at 4:15 PM
RIPEMD160 is a golden oldie but not sure on hash speed compared to the others.
liqdmetalLast Friday at 4:15 PM
in my mind we should focus on the restricted messaging etc
Sevvy (y rvn pmp?)Last Friday at 4:15 PM
probably won't know if the action was needed until after you take the action
liqdmetalLast Friday at 4:15 PM
we are at risk of being interventionistas
acting under opacity
TronLast Friday at 4:15 PM
Needs to spit out at least 256 bit. Preferably 512 bit.
LSJI07 - MBITLast Friday at 4:15 PM
ok
TronLast Friday at 4:15 PM
If it isn't 512 bit, it'll cause some extra headache for the GPU mining software.
liqdmetalLast Friday at 4:16 PM
i seek to avoid iatrogenics
TronLast Friday at 4:16 PM
Similar to the early problems when all the algos except the first one were built for 64-bytes (512-bit) inputs.
Had to look that one up. TIL iatrogenics
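Going back to Tron's 64-byte point a few messages up, here is a toy sketch of x16r-style selection: nibbles of the previous block hash pick which algorithm runs at each step, and each step's output feeds the next, so every function in the chain must emit the 64-byte (512-bit) digest the next one expects. The hashlib functions below are stand-ins, not the real x16r algorithm list (which uses blake, bmw, groestl, skein, and so on).

    import hashlib

    # Stand-in algorithms; all must produce 64-byte digests for the chain to work.
    ALGOS = [hashlib.sha512, hashlib.blake2b, hashlib.sha3_512, hashlib.shake_256]

    def chained_hash(prev_block_hash: bytes, header: bytes) -> bytes:
        data = header
        for c in prev_block_hash.hex()[:16]:          # 16 nibbles of the previous block hash
            algo = ALGOS[int(c, 16) % len(ALGOS)]
            h = algo(data)
            # each stage hands the next stage a 64-byte (512-bit) input
            data = h.digest(64) if algo is hashlib.shake_256 else h.digest()
        return data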
JerozLast Friday at 4:17 PM
I have to google most of @liqdmetal's vocabulary :smile:
liqdmetalLast Friday at 4:17 PM
@Tron tldr: basically the unseen, unintended negative side effects of the asic "cure"
Sevvy (y rvn pmp?)Last Friday at 4:18 PM
10 dolla word
liqdmetalLast Friday at 4:19 PM
we need a really strong case to intervene in what has been created.
TronLast Friday at 4:19 PM
I agree. I'm less concerned with the technical risk than I am the potential split risk experienced multiple times by Monero.
Sevvy (y rvn pmp?)Last Friday at 4:20 PM
tron do you agree that forking the ravencoin chain presents unique risks compared to other chains that aren't hosting assets?
JerozLast Friday at 4:21 PM
Yes, if you fork, you need to figure out for each asset which one you want to support.
Sevvy (y rvn pmp?)Last Friday at 4:21 PM
yeah. and the asset issuer could have a chain preference
TronLast Friday at 4:22 PM
@Sevvy (y rvn pmp?) Sure. Although, I'd expect that the asset issuers will be honor the assets on the dominant chain. Bigger concern is the branding confusion of multiple forks. See Bitcoin, Bitcoin Cash, Bitcoin SV for an example. We know they're different, but do non-crypto folks?
Hans_SchmidtLast Friday at 4:22 PM
I thought that the take-away from the recently published analyses and discussions was that ASICs for RVN may be active, but if so then they are being not much more effective than GPUs.
Sevvy (y rvn pmp?)Last Friday at 4:22 PM
agreed on all accounts there tron
TronLast Friday at 4:23 PM
I'm not yet convinced ASICs are on the network.
KAwARLast Friday at 4:23 PM
It would be better to damage an asic builder by forking after they made major expenses. Creating for them the type of deficit that could be negated by just buying instead of mining. Asic existence should be 100 percent confirmed before fork.
liqdmetalLast Friday at 4:23 PM
170million dollar question is right.lol
TronLast Friday at 4:24 PM
I've had someone offer to connect me to the folks at Fusion Silicon.
Sevvy (y rvn pmp?)Last Friday at 4:25 PM
yes. and if they are active on the network they are not particularly good ASICs
which makes it a moot point probably
TronLast Friday at 4:26 PM
The difficult part of this problem is that by the time everyone agrees that ASICs are problematic on the network, then voting the option in is likely no longer an option.
Sevvy (y rvn pmp?)Last Friday at 4:26 PM
yes. part of me wonders if we would say "okay, the clock on the asic countdown is reset by this new algo. but now the race is on"
[Dev-Happy] BlondfrogsLast Friday at 4:26 PM
There are always risks when making a change that will fork the network. We want wait to long though, as tron said. It wont be a voting change. it will be a mandatory change at a block number.
Sevvy (y rvn pmp?)Last Friday at 4:26 PM
acknowledge the inevitable
MiloLast Friday at 4:27 PM
I had just a small question from my side. When do you think the android version would be published, and do you maybe have a time-frame for the others?
TronLast Friday at 4:27 PM
Quick poll. How would everyone here feel about a BIP9 option - separate from the new features that can be voted in?
KAwARLast Friday at 4:27 PM
Maybe voting should not be a strictly blockchain vote. A republic and a democratic voice?
[Dev-Happy] BlondfrogsLast Friday at 4:27 PM
@Milo We can try and get a beta out next week, and publish soon after that.
MiloLast Friday at 4:28 PM
@[Dev-Happy] Blondfrogs :thumbsup::slight_smile:
[Dev-Happy] BlondfrogsLast Friday at 4:28 PM
BIP9 preemptive vote. I like it.
TronLast Friday at 4:30 PM
The advantage to a BIP9 vote is that it puts the miners and mining pools at a clear majority before activation.
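As a rough illustration of the BIP9 mechanism being discussed (the bit position, window size, and threshold below are assumptions, not Ravencoin's actual deployment parameters): miners set a version bit in the blocks they mine, and the change only locks in once enough blocks in a full signalling window carry that bit.

    SIGNAL_BIT = 1 << 28        # assumed bit position for this deployment
    WINDOW = 2016               # assumed blocks per signalling window
    THRESHOLD = 0.95            # assumed activation threshold

    def window_locks_in(block_versions):
        # block_versions: the nVersion field of every block in one full window
        signalling = sum(1 for v in block_versions if v & SIGNAL_BIT)
        return len(block_versions) == WINDOW and signalling / WINDOW >= THRESHOLD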
LSJI07 - MBITLast Friday at 4:30 PM
Centralisation is inevitable unless we decide to resist it. ASIC's are market based and know the risks and rewards possible. A key step in resisting is sending a message. An algo change to increase asic resistance is imho a strong message. A BIP9 vote now would also be an indicator of bad actors early....
TronLast Friday at 4:30 PM
The disadvantage is that it may not pass if the will isn't there.
LSJI07 - MBITLast Friday at 4:30 PM
Before assets are on main net and cause additional issues.
KAwARLast Friday at 4:31 PM
I am not schooled in coding to have an educated voice. I only understand social problems and how it affects the economy.
SpyderDevLast Friday at 4:31 PM
All are equal on RVN
TronLast Friday at 4:31 PM
It is primarily a social problem. The tech change is less risky and is easier than the social.
LSJI07 - MBITLast Friday at 4:32 PM
All can have a share....people who want more of a share however pay for the privilege and associated risks.
KAwARLast Friday at 4:33 PM
Assets and exchange listings need to be consistent and secure.
brutoidLast Friday at 4:36 PM
I'm still not entirely clear on what the overall goal to the algo change is? Is it just to brick the supposed ASICs (unknown 45%) which could still be FPGAs as seen from the recent block analysis posted in the nest. Is the goal to never let ASICs on? Is it to brick FPGAs ultimately. Are we making Raven strictly GPU only? I'm still unclear
LSJI07 - MBITLast Friday at 4:37 PM
What about the future issue of ASICs returning after a BIP9 fork "soon"? Are all following the WP as a community? i.e asic resistant or are we prepared to change that to asic resistant for early coin emission. Ideally we should plan for the future. Could the community make a statement that no future algo changes will be required to incentivise future public asic manufacturers?
Lol. Same question @brutoid
brutoidLast Friday at 4:37 PM
Haha it is
You mind-beamed me!
[Dev-Happy] BlondfrogsLast Friday at 4:38 PM
The is up to the community.
Currently, the feel seems like the community is anti asic forever.
The main issue is getting people to upgrade.
KAwARLast Friday at 4:38 PM
Clarity is important. Otherwise we are attacking windmills like Don Quixote.
brutoidLast Friday at 4:39 PM
I'm not getting the feeling of community ASIC hate if the last few weeks of discussion are anything to go by?
Hans_SchmidtLast Friday at 4:39 PM
A unilateral non-BIP9 change at a chosen block height is a serious thing, but anti-ASIC has been part of the RVN philosophy since the whitepaper and is therefore appropriate for that purpose.
[Dev-Happy] BlondfrogsLast Friday at 4:39 PM
We can use the latest release as an example. It was a non forking release, announced for 2 weeks. and only ~30% of the network has upgraded.
TronLast Friday at 4:39 PM
@Hans_Schmidt Well said.
liqdmetalLast Friday at 4:40 PM
I'm not concerned about a "asic hardware problem" so much as I believe it more likely what we are seeing is several big fish miners (perhaps a single really big fish). For now I recommend standing pat on x16r. In the future I can see an algo upgrade fork to keep the algo up to date. If we start fighting against dedicated x16r hashing machines designed and built to secure our network we are more likely to go down in flames. The custom SHA256 computers that make the bitcoin the most secure network in existence are a big part of that security. If some party has made an asic that performs up to par or better than FPGA or GPU on x16r, that is a positive for this network, a step towards SHA256 security levels. It is too bad the community is in the dark regarding their developments. Therefore I think the community has to clarify its stance towards algorithm changes. I prefer a policy that will encourage the development of mining software, bitstreams and hardware by as many parties as possible. The imminent threat of ALGO fork screws the incentive up for developers.
JerozLast Friday at 4:40 PM
@brutoid the vocal ones are lenient towards asics, but the outcome of the 600+ votes seemed pretty clear.
brutoidLast Friday at 4:40 PM
This is my confusion
TronLast Friday at 4:41 PM
More hashes are only better if the cost goes up proportionally. Machines that do more hashes for less $ doesn't secure the network more, and trends towards centralization.
JerozLast Friday at 4:41 PM
I would argue for polling ever so often as it certainly will evolve dynamically with the state of crypto over time.
TronLast Friday at 4:41 PM
Measure security in two dimensions. Distribution, and $/hash.
liqdmetalLast Friday at 4:41 PM
and volume of hash
traysiLast Friday at 4:42 PM
45% of the hashrate going to one party is unhealthy, and standing pat on x16r just keeps that 45% where it is.
TronLast Friday at 4:42 PM
Volume doesn't matter if the cost goes down. For example, lets say software shows up that does 1000x better than the software from yesterday, and everyone moves to it. That does not add security. Even if the "difficulty" and embedded hashes took 1000x more attempts to find.
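A quick worked example of Tron's point, with made-up numbers: if everyone's hashes per dollar improve by the same factor, the raw hash volume and the difficulty go up, but the dollar cost of matching the honest network does not.

    def attack_cost(network_hashrate, dollars_per_hash):
        # Dollar cost of fielding roughly as much hash power as the rest of the network.
        return network_hashrate * dollars_per_hash

    speedup = 1000   # hypothetical software that does 1000x more hashes per dollar
    before = attack_cost(5e12, 2e-10)                         # made-up figures
    after = attack_cost(5e12 * speedup, 2e-10 / speedup)
    print(before, after)   # same dollar figure: more hashes alone add no security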
brutoidLast Friday at 4:42 PM
My issue is defintely centralization of hash and not so much what machine is doing it. I mine with both GPU and FPGA. Of course, the FPGAs are not on raven
TJayLast Friday at 4:44 PM
easy solution is just to replace a few of 16 current hash functions, without messing with x21r or whatever new shit
TronLast Friday at 4:44 PM
How do folks here feel about allowing CPUs back in the game?
traysiLast Friday at 4:44 PM
Botnets is my concern with CPUs
brutoidLast Friday at 4:44 PM
Botnets is my concern
SpyderDevLast Friday at 4:44 PM
Yes please.
LSJI07 - MBITLast Friday at 4:44 PM
the poll votes seem not very security conscious. More of day miners chasing profits. I love them bless! Imho the future is bright for raven, however these issues if not sorted out now will bite hard long term when assets are on the chain and gpu miners are long gone.....
ZaabLast Friday at 4:45 PM
How has the testing of restricted assets been on the test net?
liqdmetalLast Friday at 4:45 PM
Agreed. I don't think x16r is obsolete like that yet, however.
[Dev-Happy] BlondfrogsLast Friday at 4:45 PM
@Zaab not enough testing at the moment.
HedgerLast Friday at 4:45 PM
Yes, how is the Testing going?
justinjjaLast Friday at 4:45 PM
Like RandomX, or how are CPUs going to be back in the game?
TronLast Friday at 4:45 PM
@Zaab Just getting started at testing at the surface level (RPC calls), and fixing as we go.
ZaabLast Friday at 4:45 PM
And or any updates on the review of dividend code created by the community
Lokar -=Kai=-Last Friday at 4:45 PM
If the amount of hash the unknown pool has is fixed, as standarderror indicated, then waiting for the community of FPGAers to get onto Raven might be advantageous, if the fork doesn't hurt FPGAs.
ZaabLast Friday at 4:45 PM
Can't remember who was on it
SpyderDevLast Friday at 4:45 PM
@Zaab But we are working on it...
Lokar -=Kai=-Last Friday at 4:46 PM
more hash for votes
JerozLast Friday at 4:46 PM
@Maldon is, @Zaab
TronLast Friday at 4:46 PM
@Zaab There are unit tests and functional tests already, but we'd like more.
[Dev-Happy] BlondfrogsLast Friday at 4:46 PM
@Zaab We are currently adding test cases to the dividend code for better security. Should have more of an update on that next meeting.
KAwARLast Friday at 4:46 PM
Absolute democracy seems to resemble anarchy, or at least civil war. In EVE Online they have a type of community voice that gets voted in by the community.
ZaabLast Friday at 4:46 PM
No worries was just curious if it was going as planned or significant issues were being found
Obviously some hiccups are expected
More testing is always better!
TronLast Friday at 4:47 PM
Who in here is up for a good civil war? :wink:
ZaabLast Friday at 4:47 PM
Tron v Bruce. Celebrity fight night with proceeds to go to the RVN dev fund
SpyderDevLast Friday at 4:48 PM
Cagefight or mudpit?
JerozLast Friday at 4:48 PM
talking about dev funds..... :wink:
Pho3nix Monk3yLast Friday at 4:49 PM
and there goes the conversation....
KAwARLast Friday at 4:49 PM
I am trying to be serious...
ZaabLast Friday at 4:49 PM
Sorry back to the ascii topic!
traysiLast Friday at 4:49 PM
@Tron What do we need in order to make progress toward a decision on the algo? Is there a plan or a roadmap of sorts to get us some certainty about what we're going to do?
LSJI07 - MBITLast Friday at 4:50 PM
Could we have 3 non-BIP9 votes? No. 1: friendly to ASICs, retain the status quo. No. 2: change to x17r, minimal changes etc., with no additional future PoW/algo upgrades. No. 3: full ASIC resistance with x22r, and see what happens...
:thonk~1:
Sounds messy....
TronLast Friday at 4:51 PM
Right now we're in research mode. We're building CNv4 so we can run some metrics. If that goes well, we can put together x22rc and see how it performs. It will likely gore everyone's ox. CPUs can play, GPUs work, but aren't dominant. ASICs VERY difficult, and FPGAs will have a tough time.
ZaabLast Friday at 4:51 PM
Yeah i feel like the results would be unreliable
TronLast Friday at 4:51 PM
Is this good, or do we lose everyone's vote?
PlayHardLast Friday at 4:52 PM
Fpga will be dead
Lokar -=Kai=-Last Friday at 4:52 PM
why isn't a simple XOR or something on the table?
ZaabLast Friday at 4:52 PM
The multiple bip9 that is
Lokar -=Kai=-Last Friday at 4:52 PM
Something ASIC-breaking that doesn't greatly complicate ongoing efforts for FPGAs, being my point.
justinjjaLast Friday at 4:52 PM
How are you going to vote on x22rc?
Because if it's by hashrate, that wouldn't pass.
traysiLast Friday at 4:52 PM
Personally I like the idea of x22rc but I'd want to investigate the botnet threat if CPUs are allowed back in.
TronLast Friday at 4:52 PM
XOR is on the table, and was listed in my Medium post. But, the social risk of chain split remains, for very little gain.
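For context, a minimal sketch of what a "simple XOR" tweak could look like. This is purely illustrative: double-SHA256 stands in for the real x16r proof-of-work function, and the TWEAK constant and tweaked_pow_hash helper are hypothetical, not taken from Tron's Medium post or any Ravencoin code.

```python
import hashlib

# Purely illustrative: an arbitrary 80-byte pad, the same length as a block header.
TWEAK = bytes([0x5A] * 80)

def tweaked_pow_hash(header80: bytes) -> bytes:
    """XOR the header with a fixed pad before hashing, so hardware that hard-wires
    the old header layout stops producing valid proofs. Trivial for GPU/FPGA
    software to adopt; double-SHA256 here stands in for the real PoW function."""
    assert len(header80) == 80
    tweaked = bytes(a ^ b for a, b in zip(header80, TWEAK))
    return hashlib.sha256(hashlib.sha256(tweaked).digest()).digest()
```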
traysiLast Friday at 4:53 PM
@Lokar -=Kai=- A small change means that whoever has 45% can probably quickly adapt.
LSJI07 - MBITLast Friday at 4:53 PM
Research sounds good. x22rc could be reduced to x22r for simplicity...
TronLast Friday at 4:53 PM
x22r is a viable option. No CNv4.
LSJI07 - MBITLast Friday at 4:53 PM
Don't know how much time we have to play with though...
Lokar -=Kai=-Last Friday at 4:53 PM
If they have FPGAs, yes; if they have ASICs, then not so much. But I guess that gets to the point: what exactly are we trying to remove from the network?
PlayHardLast Friday at 4:54 PM
Guys, my name is Arsen and we designed an x16r FPGA on BCUs. Just about to release it to the public. I am buzzdave's partner.
Cryptonight
Will kill us
But agreed
ASIC is possible on x16r
And you don't need 256 cores
traysiLast Friday at 4:55 PM
Hi Arsen. Are you saying CN will kill "us" meaning RVN, or meaning FPGA?
JerozLast Friday at 4:55 PM
This is what I'm afraid of ^: an algo change killing FPGAs, as I have the feeling there is a big FPGA community working on this.
PlayHardLast Friday at 4:55 PM
Fpgas ))
whitefire990Last Friday at 4:55 PM
I am also about to release X16R for CVP13 + BCU1525 FPGAs. I'm open to algo changes, but I really don't believe in CPU mining because of botnets. Any CNv4 shifts 100% to CPU mining, even if it is only 1 of the 22 functions.
Lokar -=Kai=-Last Friday at 4:55 PM
namely FPGAs that aren't memory-equipped
like fast mem
not DDR
PlayHardLast Friday at 4:55 PM
HBM, non-HBM
Cryptonight
whitefire990Last Friday at 4:56 PM
Right now, with both Buzzdave/Altered Silicon and myself (Zetheron) about to release X16R for FPGAs, the 45% miner's share will decrease to 39% or less.
PlayHardLast Friday at 4:56 PM
Will be dead for fpga
LSJI07 - MBITLast Friday at 4:56 PM
Sounds like x22r is FPGA "friendly"... more so than ASIC anyway...
PlayHardLast Friday at 4:56 PM
But a change must be planned
There is no way x16r can avoid ASICs
TJayLast Friday at 4:56 PM
@LSJI07 - MBIT I would say less friendly...
whitefire990Last Friday at 4:57 PM
As I mentioned in the #thenest discussion, ASIC resistance increases with the square of the number of functions, so x21r is more ASIC-resistant than x16r, but both are pretty resistant.
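Whether the "square of the number of functions" estimate is exactly right is whitefire990's call, but the raw count of possible per-block orderings does grow steeply with the number of functions; a quick back-of-the-envelope:

```python
# Number of possible per-block function orderings (selection with repetition),
# which is the usual intuition for why more functions are harder to bake into silicon.
orderings_x16r = 16 ** 16    # about 1.8e19
orderings_x21r = 21 ** 21    # about 5.8e27
print(orderings_x21r / orderings_x16r)   # roughly 3e8 times as many orderings
```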
PlayHardLast Friday at 4:58 PM
Yeah more algos make it heavier on ASIC
DirkDiggler (Citadel Architect)Last Friday at 4:58 PM
My interpretation of the whitepaper was that we used x16r as it was brand new (thus ASIC resistant), and that was to ensure a fair launch... We've launched... I don't like the idea of constantly forking to avoid the inevitable ASICs.
x16r was a great "experiment" before we had any exchange listings... that ship has sailed though... not sure about all these x22rs lmnop changes
KAwARLast Friday at 5:00 PM
I believe that it is easier to change the direction of a bicycle than an oil tanker. We feel more like a train. We should lay out new tracks and test on them and find benefits that are acceptable to everyone except train robbers. Then open the new train station with no contentious feelings except a silently disgruntled minority group. ???
Hans_SchmidtLast Friday at 5:01 PM
The most productive action the community can do now re ASICs is to voice support for the devs to make a non-BIP9 change at a chosen block height if/when the need is clear. That removes the pressure to act rashly to avoid voting problems.
LSJI07 - MBITLast Friday at 5:01 PM
That's why I'm proposing to fork at least once to a more ASIC-resistant algo (but FPGA "friendly/possible"), ideally with the proviso that no more PoW algo forks are required, to give future ASICs some opportunity to innovate with silicon and efficiency.
TJayLast Friday at 5:01 PM
Folks should take into account that high-end FPGAs like the BCU1525 can't beat even previous-gen GPUs (Pascal) on x16r in terms of hash cost, so they aren't a threat to the miner community.
PlayHardLast Friday at 5:02 PM
A proper change
Requires proper research
eyz (Silence)Last Friday at 5:02 PM
Just so I'm clear here, we are trying to boot ASICs, don't want CPUs because of botnets, and are GPU and FPGA friendly, right?
PlayHardLast Friday at 5:02 PM
It is not a quick one day process
eyz (Silence)Last Friday at 5:02 PM
If there is a BIP9 vote there needs to be a clear explanation, as I feel most in the community don't understand exactly what we are trying to fix.
TronLast Friday at 5:03 PM
@Hans_Schmidt I like that route. It has some game theoretics. It gives time for miners to adapt. It is only used if needed. It reduces the likelihood of ASICs dominating the network, or even being built.
[Dev-Happy] BlondfrogsLast Friday at 5:03 PM
Hey guys, great convo. We are of course looking to do the best thing for the community and miners. We are going to be signing off here though.
justinjjaLast Friday at 5:03 PM
TJay, that comes down to power cost.
If you're paying 4c/kWh, GPUs all the way.
But if you're a home miner in Europe, an FPGA is your only chance.
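A quick, entirely hypothetical back-of-the-envelope of justinjja's point: the prices, wattages and hashrates below are made up, but they show how the electricity price flips which hardware wins once you count both amortized hardware cost and power per unit of hashrate.

```python
# Every number below is hypothetical, just to show how the comparison flips with power price.
def daily_cost_per_mh(price_usd, hashrate_mh, watts, usd_per_kwh, amortize_days=730):
    """Hardware cost amortized per day plus electricity, both per MH/s of hashrate."""
    capex = price_usd / (hashrate_mh * amortize_days)
    power = (watts * 24 / 1000) * usd_per_kwh / hashrate_mh
    return capex + power

for kwh in (0.04, 0.30):
    gpu  = daily_cost_per_mh(price_usd=400,  hashrate_mh=20,  watts=180, usd_per_kwh=kwh)
    fpga = daily_cost_per_mh(price_usd=3500, hashrate_mh=100, watts=150, usd_per_kwh=kwh)
    print(f"at ${kwh}/kWh: GPU ${gpu:.4f}/MH/day vs FPGA ${fpga:.4f}/MH/day")
```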
LSJI07 - MBITLast Friday at 5:03 PM
@Hans_Schmidt How do we decide the block height, and when sufficient evidence is available? I would say we have had plenty of compelling information to date...
[Dev-Happy] BlondfrogsLast Friday at 5:03 PM
Thanks for participating. and keep up the good work :smiley:
Have a good weekend.
CAWWWW
TronLast Friday at 5:03 PM
I haven't seen any compelling evidence of ASICs - yet.
Pho3nix Monk3yLast Friday at 5:03 PM
:v:
JerozLast Friday at 5:04 PM
I suggest to continue discussion in #development and #thenest :smiley:
thanks all!
TronLast Friday at 5:04 PM
Cheers everyone!
KAwARLast Friday at 5:04 PM
Agree with Hans.
DirkDiggler (Citadel Architect)Last Friday at 5:04 PM
thanks Tron
Pho3nix Monk3yLast Friday at 5:04 PM
Ending here. Continue in the Nest if wanted.
DirkDiggler (Citadel Architect)Last Friday at 5:04 PM
I am waiting for compelling evidence myself.
submitted by mrderrik to Ravencoin

Bits. A bit (b) is a measurement unit used in the binary system to store or transmit data, like internet connection speed or the quality scale of an audio or a video recording. A bit is usually represented with a 0 or a 1. 8 bits make 1 byte. A bit can also be represented by other values like yes/no, true/false, plus/minus, and so on.

SHA-256 produces 256 bits, which is 32 bytes, not characters; each byte has 256 possible values. There are 256 bits and each bit has 2 values (0 or 1), thus 2^256. There are 32 bytes and each byte has 256 values, thus 256^32. Note: 2^256 == 256^32 ~= 10^77. The 32 bytes can be encoded many ways; in hexadecimal it would be 64 characters, in ...

How many hashes? If a hashing algorithm is supposed to produce unique hashes for every possible input, just how many possible hashes are there? A bit has two possible values: 0 and 1. The possible number of unique hashes can be expressed as the number of possible values raised to the number of bits. For SHA-256 there are 2^256 possible ...

In Bitcoin, someone with the private key that corresponds to funds on the block chain can spend the funds. In Bitcoin, a private key is a single unsigned 256-bit integer (32 bytes). Public key: a number that corresponds to a private key, but does not need to be kept secret. A public key can be calculated from a private key, but not vice versa. A public key can be used to determine if a ...
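A minimal sketch that checks those size claims with Python's standard hashlib (nothing here is specific to any particular site or calculator):

```python
import hashlib

digest = hashlib.sha256(b"hello").digest()
assert len(digest) == 32            # 256 bits = 32 bytes
assert len(digest.hex()) == 64      # hex encoding uses 64 characters

# Both ways of counting the output space agree, and it is roughly 10^77.
assert 2 ** 256 == 256 ** 32
print(len(str(2 ** 256)) - 1)       # 77
```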
