Quick discussion about Bitcoin and the alternative issues that are before us.
There's this thing called SegWit, which is short for Segregated Witness.
And there's this other alternative called Bitcoin Unlimited.
Both of these are responses to the blockchain slowing down due to the success of Bitcoin. Blockchain processing is slowing down, and the discussion is around the idea that the block size is basically too small.
And so two alternatives have appeared in the Bitcoin community.
One of them is called SegWit, Segregated Witness. If you want to think about it in a particular way, it is basically a secondary blockchain: all of the functions of the public ledger are maintained, and a second blockchain runs alongside which references the block.
And it's going to be expensive because of the way they propose to implement it.
The other is the Bitcoin Unlimited approach.
Its idea is to scale up the size of the block to accommodate more information.
The theory is that if it takes the same amount of time to process a block, a bigger block means you've processed that much more information. So you would gain scalability and speed by ever increasing the block size and cramming more into it, up to the limits of your current computing power.
Well, both of these would work.
There's drawbacks to both.
The drawbacks to Segregated Witness are the cost and, actually, no guarantee of speed. And then there are some security issues, in my way of thinking.
But beyond that, with Bitcoin Unlimited, the problem is the alteration: a hard fork, an unrecoverable fork, because the block format changes thereafter.
And so you wouldn't be able to decide to undo it later and merge them.
It's not going to happen easily.
And it necessarily duplicates everything; it doubles everything.
So, you know, it's a solution.
I'm not arguing for it.
It does work.
I'm not particularly of the opinion that we need to rush into any solution.
And I'm going to offer a third one.
My third solution, or potential solution (I'm not going to do the work for this, but it's easily achieved, and the programmers will understand what I'm saying here), is that there are 80 free bytes in the current block structure. Those 80 free bytes could easily be used to chain a number of other blocks of exactly the same structure, each with the same 80 free bytes.
Now at the moment, people are just shoving cruft and crud in there.
I mean, there's all kinds of weird spam and people are storing records in there and doing all sorts of things.
They've got photos and strange images and GIFs and this sort of thing just because they can.
But that space could be allocated to chaining blocks.
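As a rough illustration of the chaining idea, here is a minimal sketch in Python. The field names, and the use of the spare bytes to carry a reference to the slave block, are hypothetical assumptions for illustration; this is not real Bitcoin node code.

```python
# Hypothetical sketch: a block with an 80-byte spare region used to
# reference a chained "slave" block of identical structure.
from dataclasses import dataclass
from typing import Optional

EXT_BYTES = 80  # the spare bytes described above (assumed layout)

@dataclass
class Block:
    height: int
    payload: bytes                        # ordinary transaction data, unchanged
    ext: bytes = b"\x00" * EXT_BYTES      # spare region, normally unused
    next_slave: Optional["Block"] = None  # in-memory handle to the chained block

def chain_slave(master: Block, slave: Block) -> None:
    """Link a slave block of identical structure off the master.

    A real design would store the slave's hash in the spare bytes;
    here we store its height as a stand-in reference."""
    master.next_slave = slave
    master.ext = slave.height.to_bytes(8, "big").ljust(EXT_BYTES, b"\x00")
```

The point of the sketch is that the block structure itself never changes size; only the interpretation of the already-existing spare bytes changes.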
And so my idea, instead of Segregated Witness or altering the block, would be iterative processing.
So every time the miner software hits a block, it would know to jump right to the particular point of the 80 free bytes, pick up on the ancillary blocks, process through the primary block, and then process through any and all slave blocks.
Now also bear in mind what this means: as long as you keep the block size and structure the same, adding a little to your code rather than altering anything, the same processing used for the master block would work for any of the slave blocks.
You would just need to have an iterative accumulator or counter to know where you are in the chain.
And technically it would be infinite.
You know, you'd have infinite scalability laterally, so to speak, off each and every block.
As you're processing through the blocks, you'd just stop on each block, process laterally to the end of its chain, come back, and then go to the next one.
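The traversal just described (stop at each primary block, walk its lateral chain to the end, then move on) can be sketched like this, with hypothetical dict-based blocks standing in for real ones:

```python
# Sketch of the iterative traversal with an accumulator/counter.
# "next" is an assumed field naming the chained slave block.
def process_chain(primaries):
    """Visit each primary block, then laterally walk its slave chain
    to the end before returning to the next primary block."""
    order = []
    for block in primaries:
        order.append(block["id"])
        count = 0                      # accumulator: position in the lateral chain
        slave = block.get("next")
        while slave is not None:       # walk laterally to the end of the chain
            count += 1
            order.append(slave["id"])
            slave = slave.get("next")
        # come back and continue with the next primary block
    return order

blocks = [
    {"id": 1, "next": {"id": "1a", "next": {"id": "1b"}}},
    {"id": 2},
]
print(process_chain(blocks))  # [1, '1a', '1b', 2]
```

This is the same depth-first, stack-light pattern a crawler uses: keep a counter for where you are laterally, finish the side chain, then resume the main sequence.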
I do this with my spiders for gathering data all the time.
And it works very well.
You need to know where you are in your accumulator, and you've got to watch your stack as you're doing this kind of stuff.
But that shouldn't be any problem with today's modern hardware.
So there's a third possibility for you: an expando block, if you will, without the block ever expanding.
It replicates itself: replicating blocks, cloning blocks, whatever you want to call it. They're all chained with smart numbers, so the continuity of the blockchain is maintained and the records on each and every one of them are maintained. You would have to devote some of your 80 bytes to a checksum, to give you a smart-number check on the number of ancillary blocks you had to process, but it's no big deal.
And there may be technical reasons why people wouldn't want to implement that, but it is another solution.