It's around probably 12:30 or so, out in the mobile recording studio.
A quick discussion about Bitcoin and the alternative proposals that are before us.
There's this thing called SegWit, which is Segregated Witness, and there's this other alternative called Bitcoin Unlimited.
Both of these are in response to the blockchain slowing down due to the success of Bitcoin; blockchain processing is slowing down.
The discussion is around the idea that the block size is basically too small.
And so two alternatives have appeared in the Bitcoin community.
One of them is called SegWit, Segregated Witness, and basically, if you want to think about it a particular way, it's a secondary blockchain, so that all of the functions of the public ledger are maintained in a second blockchain that runs alongside and references the main one as well.
And it's going to be expensive because of the way that they've proposed implementing it.
The other is the Bitcoin Unlimited approach.
Its idea is to scale up the size of the block to accommodate more information.
The understanding, or theory, is that if it takes the same amount of time to process a block, but you've packed that much more information into it, then you gain scalability and you gain speed, with the ability to keep increasing the block size and cram more stuff in there to match your current level of computing power.
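As a rough, back-of-the-envelope illustration of that scaling argument (the transaction size and block interval below are assumed numbers for the example, not figures from any proposal):

```python
# Hypothetical throughput estimate for larger blocks.
# Both constants are illustrative assumptions, not real network parameters.
AVG_TX_BYTES = 500          # assumed average transaction size in bytes
BLOCK_INTERVAL_SEC = 600    # roughly ten minutes between blocks

def tx_per_second(block_size_mb: float) -> float:
    """Transactions per second if the block is filled with average-size transactions."""
    txs_per_block = (block_size_mb * 1_000_000) / AVG_TX_BYTES
    return txs_per_block / BLOCK_INTERVAL_SEC

for size_mb in (1, 2, 8):
    print(f"{size_mb} MB block: about {tx_per_second(size_mb):.1f} tx/s")
```

Under those assumptions, doubling the block size roughly doubles throughput, which is the whole pitch of the bigger-block approach.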
Well, both of these would work.
There are drawbacks to both.
The drawbacks to Segregated Witness are the cost and, actually, no guarantee of speed.
And then there are some security issues, in my way of thinking. But beyond that, on the Bitcoin Unlimited side, the problem is the alteration: a hard fork, an unrecoverable fork, because the block changes thereafter, and so you wouldn't be able to decide to undo it later and merge the chains.
That's not going to happen easily.
And it necessarily duplicates everything; it doubles everything.
So, you know, it's a solution; I'm not arguing for it.
It does work.
I'm not particularly of the opinion that we need to rush into any solution.
And I'm going to offer a third one.
My third solution, or potential solution (I'm not going to do the work for this, but it's easily achieved), is this: the programmers will understand what I'm saying here, but there are 83 bytes available in the current block structure.
These 83 bytes could easily be used to chain a number of other blocks of exactly the same structure, each with exactly the same free 80 bytes.
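As a rough sketch of what I mean (the layout, field sizes, and names below are my own assumptions for illustration, not any existing Bitcoin structure), the free bytes could carry a reference to the next ancillary block in a lateral chain:

```python
# Illustrative layout for the ~80 free bytes: a pointer (hash) to the next
# "slave" block plus its position in the lateral chain. All sizes assumed.
import struct

FREE_BYTES = 80

def pack_link(next_block_hash: bytes, index: int) -> bytes:
    """Pack a 32-byte hash of the next ancillary block and a chain index,
    padding the rest of the free space with zeros."""
    assert len(next_block_hash) == 32
    payload = struct.pack("<32sI", next_block_hash, index)  # 36 bytes used
    return payload.ljust(FREE_BYTES, b"\x00")                # pad out to 80

def unpack_link(free_bytes: bytes) -> tuple[bytes, int]:
    """Recover the next-block hash and chain index from the free bytes."""
    next_hash, index = struct.unpack_from("<32sI", free_bytes)
    return next_hash, index
```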
Now, at the moment people are just shoving crud in there.
I mean, there's all kinds of weird spam, and people are storing records in there and doing all sorts of things.
You've got photos and strange images and GIFs and that sort of thing, just because they can.
But that space could be allocated to chaining blocks, and so my idea, instead of Segregated Witness or altering the block, would be iterative processing.
So every time the miner software hit a block, it would know to jump right to that particular point, the 80 free bytes, pick up the references to the ancillary blocks, process through the primary block, and then process through any and all slave blocks.
Now, also bear in mind this means that as long as you're keeping the block size the same and the structure the same, just adding a little bit to your code rather than altering anything, then the same processing used for the master block is the processing for any of the slave blocks.
You would just need an iterative accumulator, or counter, to know where you are in the chain, and technically it would be infinite.
You'd have infinite scalability laterally, so to speak, off each and every block; as you're processing through the blocks, you'd just stop on each block, process laterally to the end of its chain, come back, and then go on to the next one.
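Here's a minimal sketch of that iterative, lateral traversal, assuming a hypothetical Block type whose free bytes point at the next slave block; none of these names (Block, get_block, process) come from actual Bitcoin node software:

```python
# Sketch of the proposed traversal: process each primary block, then walk
# its lateral chain of slave blocks to the end before moving on.
# Block, get_block(), and process() are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    data: bytes
    next_primary: Optional[str]  # hash of the next block on the main chain
    next_slave: Optional[str]    # hash parked in the ~80 free bytes, or None

def get_block(block_hash: str) -> Block:
    """Fetch a block by hash (stand-in for storage or network lookup)."""
    raise NotImplementedError

def process(block: Block) -> None:
    """Validate and apply a block; identical for master and slave blocks."""
    ...

def walk_chain(genesis_hash: str) -> None:
    primary_hash: Optional[str] = genesis_hash
    while primary_hash is not None:
        block = get_block(primary_hash)
        process(block)                       # process the primary block first
        counter = 0                          # iterative accumulator for the lateral chain
        slave_hash = block.next_slave
        while slave_hash is not None:        # walk laterally to the end of the chain
            slave = get_block(slave_hash)
            process(slave)                   # same processing as the master block
            counter += 1
            slave_hash = slave.next_slave
        primary_hash = block.next_primary    # come back, then go on to the next block
```

Iterating with a counter instead of recursing is the point of the accumulator: no matter how long a lateral chain gets, you're not piling anything onto the stack.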
I do this with my spiders for gathering data all the time.
And it works very well.
You need to know where you are in your accumulator, and you've got to watch your stack as you're doing this kind of stuff.
But that shouldn't be any problem with today's modern hardware.
So there's a third possibility for you.
An expando block, if you will, without the block ever expanding; it replicates itself. Replicating blocks, cloning blocks, whatever you want to call them.
And then they're all chained with smart numbers so that the continuity of the blockchain is maintained.
The records on each and every one of them are maintained.
You would have to devote some of your 80 bytes to a checksum, to give you a smart-number analysis of the number of ancillary blocks you had to process, but it's no big deal.
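For instance, a handful of those free bytes could carry a count of the slave blocks plus a simple checksum over their hashes, so a processor can confirm it walked the whole lateral chain. Again, the layout and field sizes here are my own assumptions for illustration:

```python
# Illustrative: reserve 8 of the 80 free bytes for a slave-block count and a
# short checksum over their hashes. Field sizes are assumptions for the example.
import hashlib
import struct

def pack_chain_summary(slave_hashes: list[bytes]) -> bytes:
    """Pack the number of slave blocks and a 4-byte checksum of their hashes."""
    count = len(slave_hashes)
    digest = hashlib.sha256(b"".join(slave_hashes)).digest()[:4]
    return struct.pack("<I4s", count, digest)   # 8 bytes of the free space

def verify_chain_summary(summary: bytes, slave_hashes: list[bytes]) -> bool:
    """Check that the blocks actually processed match the recorded summary."""
    count, digest = struct.unpack("<I4s", summary)
    expected = hashlib.sha256(b"".join(slave_hashes)).digest()[:4]
    return count == len(slave_hashes) and digest == expected
```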
So there may be technical issues, reasons people wouldn't want to implement that, but it is another solution.