Encapsulation in Software Design - Coiner Blog

Why does Osana take so long? (A programmer's point of view on the current situation)

I decided to write a comment somewhere about «Why Osana takes so long?» and what can be done to shorten this time. It turned into a long essay. Here's the TL;DR:
The cost of never paying down this technical debt is clear; eventually the cost to deliver functionality will become so slow that it is easy for a well-designed competitive software product to overtake the badly-designed software in terms of features. In my experience, badly designed software can also lead to a more stressed engineering workforce, in turn leading to higher staff churn (which in turn affects costs and productivity when delivering features). Additionally, due to the complexity in a given codebase, the ability to accurately estimate work will also disappear.
Junade Ali, Mastering PHP Design Patterns (2016)
Longer version: I am not sure if people here wanted an explanation from a real developer who works with C and with relatively large projects, but I am going to give one nonetheless. I am not much interested in Yandere Simulator or in this genre in general, but this particular development story has a lot to teach any fellow programmers and software engineers, so that they never end up in Alex's situation, especially considering that he is definitely not the first one to get himself knee-deep in development hell (do you remember Star Citizen?) and he is definitely not the last one.
On the one hand, people see that Alex works incredibly slowly, the equivalent of, like, one hour per day, comparing it with, say, Papers, Please, a game developed in nine months from start to finish by one guy. On the other hand, Alex himself most likely feels that he works to complete exhaustion each day. In fact, I highly suspect that both of those statements are correct! Because of mistakes made during the early development stages, which are highly unlikely to be fixed given the pressure on the developer right now and his overall approach to coding, the cost of adding any relatively large feature (e.g. Osana) can be pretty much comparable to the cost of creating a fan game from start to finish. Trust me, I've seen his leaked source code (don't tell anybody about that) and I know what I am talking about. The largest problem in Yandere Simulator right now is its super slow development. So, without further ado, let's talk about how «implementing the low-hanging fruit» crippled the development and, more importantly, what would have been an ideal course of action, from my point of view, to get out of this. I'll try to explain things in the simplest terms possible.
  1. else if's and the lack of any refactoring in general
The most «memey» one. I won't talk about performance, though (a switch statement is not better in terms of performance; that is a myth. If the compiler detects code that can be turned into a jump table, for example, it will do it, no matter whether it is a chain of if's or a switch statement. Compilers nowadays are way smarter than one might think). Just take a look here. I know that it's his older JavaScript code, but, believe it or not, this piece is still present in the C# version relatively untouched.
I refactored this code for you using the C language (mixed with C++, since there is no this pointer in pure C). Note that the else if's are still there; else if's are not the problem by themselves.
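Since I can't reproduce the leaked snippet here, below is a minimal sketch of the flag-based idea, with every name and reaction invented for illustration: each suspicious condition sets one bit, and the reaction is looked up in a table indexed by the combined mask, so a combination like «Trespassing and Blood» needs no new branch.

```cpp
// A minimal sketch of the flag-based refactor (hypothetical names, not the
// leaked code). Each condition sets one bit; the reaction comes from a
// table indexed by the combined mask.
#include <cstdio>

enum Suspicion : unsigned {
    None     = 0,
    Weapon   = 1u << 0,
    Blood    = 1u << 1,
    Trespass = 1u << 2,
};

// One entry per combination of the three flags (2^3 = 8).
static const char* reactions[8] = {
    "ignore",         "scream: weapon", "scream: blood", "scream: weapon+blood",
    "warn: trespass", "panic",          "panic",         "panic",
};

unsigned witness_flags(bool weapon, bool bloody, bool trespassing) {
    unsigned f = None;
    if (weapon)      f |= Weapon;
    if (bloody)      f |= Blood;
    if (trespassing) f |= Trespass;
    return f;
}

int main() {
    // The «Trespassing and Blood» case works with no input from the
    // developer: Blood|Trespass == 6, so the table already covers it.
    std::printf("%s\n", reactions[witness_flags(false, true, true)]);
}
```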
The refactored code is just objectively better for one simple reason: it is shorter while not being obscure, and now it should be able to handle, say, the Trespassing-and-Blood case without any input from the developer, thanks to the use of flags. Basically, the shorter your code, the more of it you can see on screen without spreading your attention too thin. As a rule of thumb, the fewer lines there are, the easier it is to work with the code. Just don't overdo it, unless you are going to participate in the International Obfuscated C Code Contest. Let me reiterate:
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Antoine de Saint-Exupéry
This is why refactoring (rewriting your old code so it does the same thing, but quicker, more generically, in fewer lines, or more simply) is so powerful. In my experience, you can only keep a module/class/whatever entirely in your head if it does not exceed ~1000 lines, maybe ~1500. Splitting a 17,000-line class into smaller classes probably won't improve performance at all, but it will make working with parts of this class way easier.
Is it too late now to start refactoring? Of course NO: better late than never.
  2. Comments
If you think that since you wrote this code you'll always easily remember it, I have some bad news for you: you won't. In my experience, one week and that's it. That's why comments are so crucial. It is not necessary to put a ton of comments everywhere; just a general idea will help you out in the future, even if you think that It Just Works™ and you'll never ever need to fix it. In large-scale projects, the time spent writing and debugging one line of code almost always exceeds the time needed to write one comment. Moreover, the best code is self-evident code. In the example above, what the hell does (float) 6 mean? Why not wrap it in a constant with a good, self-descriptive name? Again, it won't affect performance, since the C# compiler is smart enough to silently remove the constant and place its value into the method invocation directly. Such constants are there for you.
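Something like this minimal sketch (the constant's name, value and meaning are assumed; the author's actual snippet isn't reproduced here):

```cpp
// Hypothetical reconstruction of the idea: the bare "(float) 6" becomes a
// named constant, and a short comment outlines the intent.
#include <cstdio>

// Students notice the player within this radius, in meters.
// Tune this one number to rebalance every detection check at once.
constexpr float NoticeRadiusMeters = 6.0f;

bool student_notices_player(float distance) {
    return distance < NoticeRadiusMeters;
}

int main() {
    std::printf("%d\n", student_notices_player(4.5f)); // prints 1
}
```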
I rewrote my code above a little bit to illustrate this idea. With comments like those, you don't have to remember your code at all, since its purpose is outlined in two tiny lines of comments above it. Moreover, even a person with zero knowledge of programming can figure out what this code is for. It took me less than half a minute to write those comments, but they'll probably save me quite a lot of time figuring out «what was I thinking back then» one day.
Is it too late now to start adding comments? Again, of course NO. Don't be lazy: redirect all the typing that goes into the «debunk» page (which pretty much does the opposite of debunking, but who am I to judge?) into some useful comments.
  3. Unit testing
This is often neglected, but consider the following. You wrote some code, you ran your game, you saw a new bug. Was it introduced just now? Is it a problem in older code that only shows up because you had never actually exercised that code until now? Where should you search for it? You have no idea, and you have one painful debugging session ahead. Just imagine how much easier it would be if you had some routines that automatically execute after each build and check that the environment is still sane and nothing is broken on a fundamental level. This is called unit testing, and yes, unit tests won't catch all your bugs, but even catching 20% of bugs at an earlier stage is a huge boon to development speed.
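To make that concrete, here is a minimal sketch of such a routine for the hypothetical flag function from earlier; plain assert() is enough to start, no framework required:

```cpp
// A minimal unit test for the hypothetical witness_flags() routine above.
#include <cassert>
#include <cstdio>

unsigned witness_flags(bool weapon, bool bloody, bool trespassing) {
    return (weapon ? 1u : 0u) | (bloody ? 2u : 0u) | (trespassing ? 4u : 0u);
}

int main() {
    // Each assert pins down one behavior; if a refactor breaks it, the
    // build fails here instead of during a painful play-testing session.
    assert(witness_flags(false, false, false) == 0u);
    assert(witness_flags(true,  false, false) == 1u);
    assert(witness_flags(true,  true,  false) == 3u);
    assert(witness_flags(false, true,  true)  == 6u);
    std::puts("witness_flags: all tests passed");
}
```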
Is it too late now to start adding unit tests? Kinda YES and NO at the same time. Unit testing works best if it covers the majority of a project's code. On the other hand, a journey of a thousand miles begins with a single step. If you decide to start refactoring your code, writing a unit test before refactoring will let you prove to yourself that you haven't broken anything, without needing to run the game at all.
  4. Static code analysis
This one is pretty much self-explanatory. You set the thing up once, and you forget about it. A static code analyzer is another piece of «free real estate» for speeding up development: it finds tiny errors, mostly silly typos (think you're good at spotting them? Well, good luck catching x << 4; in place of x <<= 4; buried deep in C code by eye!). Again, this is not a silver bullet; it is another tool that helps a little with debugging, alongside the debugger, unit tests and everything else. You need every little bit of help here.
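For instance, here is that exact class of typo: perfectly legal code that does nothing, which a compiler happily accepts but almost any static analyzer flags immediately:

```cpp
#include <cstdio>

int main() {
    int x = 3;
    x << 4;   // legal but useless: the result is discarded
              // (analyzer: "expression result unused")
    x <<= 4;  // the intended shift-and-assign
    std::printf("%d\n", x); // 48
}
```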
Is it too late now to hook up static code analyzer? Obviously NO.
  5. Code architecture
Say you want to build Osana, but then you decide to implement some other feature, e.g. Snap Mode. By doing this you have maybe made your game a little bit better, but what you have essentially done is complicate your life, because now you must also write Osana code for Snap Mode. The way the game's architecture is done right now, easter-egg code is deeply interleaved with game logic, which leads to code «spaghettification», which in turn slows down the addition of new features, because one has to consider how each new feature would work alongside every old feature and easter egg. Even if it is just glancing over one line per easter egg, it adds to the mess, slowly but surely.
A lot of people mention that the developer should have been doing it in an object-oriented way. However, there is no silver bullet in programming. It does not matter that much whether you do it the object-oriented way or the usual procedural way; you could theoretically write, say, the AI routines in a functional language (e.g. LISP) or even a logic language if you are brave enough (e.g. Prolog). You could even invent your own tiny programming language! The only thing that matters is code quality and avoiding the so-called shotgun surgery situation, which plagues Yandere Simulator from top to bottom right now. Is there a way to add a new feature without interfering with your older code (e.g. by creating a child class that encapsulates all the things you need)? Go for it; this feature is basically «free» for you. Otherwise you'd better think twice, because you are entering «technical debt» territory, borrowing time from the future by saying «I'll maybe optimize it later» and «a thousand more lines probably won't slow me down that much, right?». Technical debt accrues interest of its own that you'll have to pay. Basically, the entire situation around Osana right now is a tale of how the mere «interest» on technical debt can come to control an entire project, like the tail wagging the dog.
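Here is a minimal sketch of that «free feature» idea, with hypothetical names: the new behavior lives in a child class, so no existing code path has to be touched.

```cpp
// Hypothetical sketch: Snap Mode behavior is encapsulated in a subclass,
// so existing Student code and its call sites stay untouched.
#include <iostream>
#include <memory>

class Student {
public:
    virtual ~Student() = default;
    virtual void react_to_murder() { std::cout << "scream and flee\n"; }
};

class SnapModeStudent : public Student {
public:
    // Only the new feature's differences live here.
    void react_to_murder() override { std::cout << "snap-mode reaction\n"; }
};

int main() {
    std::unique_ptr<Student> s = std::make_unique<SnapModeStudent>();
    s->react_to_murder(); // old call sites keep working unchanged
}
```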
I won't elaborate further here, since it would take an even larger post to fully describe what's wrong with Yandere Simulator's code architecture.
Is it too late to rebuild the code architecture? Sadly, YES, although it should be possible to split the Student class into descendants by using hooks for individual students. However, the architecture can still be improved by a vast margin if you start removing the easter eggs and features like Snap Mode that currently bloat Yandere Simulator. I know it is going to be painful, but it is the only way to improve code quality here and now. This will simplify the code, and that will make it easier to add the «real» features, like Osana or whatever else you'd like to accomplish. If you ever want the removed features back, you can track them down in the Git history and re-implement them one by one, hopefully without performing shotgun surgery this time.
  6. Loading times
Again, I won't talk about performance here, since you can debug your game at 20 FPS just as well as at 60 FPS; that is a very different story. Yandere Simulator is huge. Once you've fixed a bug, you want to test it, right? And your workflow right now probably looks like this:
  1. Fix the code (unavoidable time loss)
  2. Rebuild the project (can take a loooong time)
  3. Load your game (can take a loooong time)
  4. Test it (unavoidable time loss, unless another bug has popped up via unit testing, code analyzer etc.)
And you can fix this. For instance, I know that Yandere Simulator takes all the students' photos during loading. Why should that be done there? Why not move it to the project-building stage by adding a build hook, so Unity does it for you during a full project rebuild? Or, even better, why not disable it completely, or replace the photos with «PLACEHOLDER» text in debug builds? Each second spent watching the loading screen will be rightfully interpreted as «son is not coding» by the community.
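A tiny sketch of that last suggestion (illustrative only; a real Unity project would use a build hook or a scripting define rather than a C-style macro):

```cpp
#include <cstdio>

void render_student_portraits()  { std::puts("rendering all portraits... (slow)"); }
void use_placeholder_portraits() { std::puts("using PLACEHOLDER portraits (instant)"); }

int main() {
#ifdef DEBUG_BUILD                // e.g. compile with -DDEBUG_BUILD while iterating
    use_placeholder_portraits();  // skip the slow step in debug builds
#else
    render_student_portraits();   // only release builds pay the full cost
#endif
}
```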
Is it too late to reduce loading times? Hell NO.
  7. Jenkins
Or any other continuous integration tool. «Rebuild the project» can take a long time too, and what can we do about that? Let me give you an idea. Buy a new PC. Get a 32-core Threadripper, 32 GB of the fastest RAM you can afford and a motherboard that supports all of that (of course, a Ryzen/i5/Celeron/i386/Raspberry Pi is fine too, but the faster, the better). The rest does not matter much; e.g. a barely functional second-hand video card burned out by bitcoin mining is fine. You set this second PC up in your room. You connect it to your network. You set up a ramdisk to speed things up even more. You properly set up Jenkins on this PC. From now on, Jenkins takes care of the rest: tracking your Git repository, the (re)building process, large and time-consuming unit tests, invoking the static code analyzer, profiling, generating reports and whatever else you can and want to hook up. More importantly, you can be fixing the next bug while Jenkins is rebuilding the project for the previous one, et cetera.
In general, continuous integration is a great technology for quickly tracking down errors introduced in previous versions, helping avoid those kinds of bug-hunting sessions. I am unsure whether continuous integration is needed for projects of 10,000-20,000 source lines, but things change as soon as we step into 100k+ territory, and Yandere Simulator by now has approximately 150k+ source lines of code. Continuous integration is probably well worth it for Yandere Simulator.
Is it too late to add continuous integration? NO, although it is going to take some time and skill to set up.
  8. Stop caring about the criticism
Stop comparing Alex to Scott Cawthon. IMO Alex is very similar to the person known as SgtMarkIV, the developer of Brutal Doom, who is also a notorious edgelord and who, for example, also once told somebody to kill himself, just like… However, horrible person or not, SgtMarkIV does his job. He simply does not care much about public opinion. That's the difference.
  9. Go outside
Enough said. Your brain works slower if you only ever think about games and if you can't supply it with enough oxygen. I know that this one is probably the hardest to implement, but…
That's all, folks.
Bonus: just think how short this list would have been if someone had simply listened to Mike Zaimont instead of breaking down in tears.
submitted by Dezhitse to r/Osana

08-10 07:44 - 'Who is forking Filecoin?' (self.Bitcoin) by /u/paulcheung1990 removed from /r/Bitcoin within 6-16min

Is forking Filecoin a $500 million to $1 billion business?
On July 17, cryptocurrency analyst Bitfool commented on Weibo: “Recently there have been undercurrents of teams in the market planning to fork Filecoin; as far as I know, there are 4-5 of them. From a strategic point of view, winning two of the three parties (the project team, investors, and miners) is enough to fork successfully, and winning one of the three can capture 5-10% of the market value. Therefore, forking Filecoin is a $500 million to $1 billion business."
"Even more impressive are those with the courage to build a team and fork Filecoin. Done well, it makes you famous worldwide; done poorly, it earns you the scorn of thousands."
Sun Ming, a partner at Fenbushi Capital, mentioned in an interview: "Miners who have invested a lot in hardware resources are promoting the fork of Filecoin."
Hu Feng, operating partner of the FILPool mining pool, said: "Currently, big miners have ideas, but it will only become possible after the mainnet is online."
Filecoin's economic model is not friendly to miners
When the Filecoin economic model was first established, a pledge (collateral) mechanism with rewards and punishments was proposed, and it has undergone many adjustments since. The last three adjustments have made the mechanism increasingly stringent.
In April of this year, the Filecoin project team laid out their thinking on the economic model and refined the reward-and-punishment mechanism. Miners who complete file storage get the corresponding block rewards, and those who fail to store files for the promised period are punished. The fine is levied on the Filecoin collateral pool (locked funds) provided by each storage miner. Locked funds include a small amount of early FIL tokens and the token rewards earned by miners.
Miners need to pledge a certain amount of tokens up front. If the required pledge is too large, it will cause a shortage of FIL tokens in the early stage. The improvement made to the economic model was to shift some of the early-stage cost onto future block rewards.
The severe punishment mechanism left some miners dissatisfied; some commented that the mechanism was too "crude".
In May, Filecoin made major adjustments to its economic model. This adjustment raised the cost for miners to exit. Filecoin continued to strengthen the miner pledge mechanism, with part of the rewards mined by miners now locked up. The penalty mechanism was changed accordingly: only when the file-hosting task is completed can the mining reward be unlocked. If miners want to profit, they need strong computing power and the ability to provide stable storage services over a long period.
If that was still acceptable to miners, the recent "pre-mortgage" mechanism has left them at a loss.
"Pre-mortgage" was proposed in the latest Calibration version of Filecoin. It means that every sector encapsulated (sealed) requires a certain amount of FIL to be pledged in advance, and the pledged tokens are locked for 180 days and then released over the following 180 days.
The consequence of "pre-mortgage" is that the FIL token has much worse liquidity in the early stage.
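As a rough illustration of that schedule, assuming a hard 180-day lock followed by a linear release over the next 180 days (the real Filecoin parameters may differ):

```cpp
#include <cstdio>
#include <initializer_list>

// Assumed schedule: fully locked for 180 days, then linear release over
// the next 180 days. Illustrative only; not Filecoin's actual parameters.
double unlocked_fraction(int day) {
    if (day < 180) return 0.0;            // hard lock period
    if (day >= 360) return 1.0;           // fully released
    return (day - 180) / 180.0;           // linear release window
}

int main() {
    for (int day : {0, 90, 180, 270, 360})
        std::printf("day %3d: %3.0f%% of the pledge unlocked\n",
                    day, 100.0 * unlocked_fraction(day));
}
```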
A large amount of FIL must be pledged in the early stage, which forces miners to buy coins from the official team, and the long lock-up period leads most miners to sell coins rather than encapsulate sectors. "Miners have already put their money into hardware; where would they find more to go out and buy coins?"
Without enough funds to buy coins for collateral, a miner loses the qualification to mine at all. And even if the collateral is scraped together, it is almost impossible to earn it back from the small amount of coin released in the early period.
Sun Ming said: "The mining output is too small, making it difficult for early miners to sustain their operations."
The adjustments to the economic model keep compressing the income of early miners, and the calls from miners to fork Filecoin keep getting louder.
Sun Ming believes: "On the one hand, it is the miners' protest against Protocol Labs (demanding that it modify the economic model); on the other hand, it is also the desperate fight of miners who have been left with no other options."
Li Bai posted to his WeChat Moments to express his attitude, as shown below:

[link]1
Another very important point: under the current reward mechanism, Filecoin competition in China is tantamount to the "college entrance examination".
Take the Filecoin big-miner test competition as an example: miners are only rewarded if they rank in the top 50 in their region or the top 100 among all miners. Looking at the situation of Chinese miners, 9 of the top 10 nodes in the world are from China, and according to people familiar with the matter, about 80% of Filecoin miners are concentrated in China. The fierceness of the competition can be imagined.
Wang Qingshui expressed his concern that more than 90% of miners may not make money. Many miners, seeing that they couldn't make money, got the idea of opening up "other tracks" for Filecoin. That is why the call for a Filecoin fork is loudest in China.
Unaffordable mining costs and thresholds
In addition to Filecoin's economic model, the other thing miners complain about is Filecoin's entry threshold and cost.
The capital cost of Filecoin mining and the technical threshold of operating it are beyond the reach of many miners and mining farms.
Filecoin has a severe punishment mechanism. It protects the storage clients, but at the same time it demands a high degree of professionalism and operations-and-maintenance stability from miners.
To guarantee uninterrupted power and connectivity, machines must be hosted in a high-grade IDC computer room. To guarantee mining efficiency, the network, computing power, and storage hardware must not be poor. Miners therefore need large sums of money to purchase high-end hardware.
Timely window-PoSt verification and submission requires strong algorithmic and error-recovery capability, which in turn requires a professional algorithm and operations team.
In addition, the entry threshold for Filecoin mining may be 10 TB of storage or even higher.
Entering mining thus carries both a storage threshold and a technical-maintenance threshold, and a lot of money is needed to purchase hardware.
Earlier, a blogger did a cost calculation: with a cluster of 30 mining machines, the expenditure on the machines alone was as high as 6 million yuan. Adding the costs of computer-room construction, operation and maintenance, Filecoin mining may cost more than 10 million yuan.
Wang Qingshui also pointed out the flaw: "Many ordinary miners, even ones with servers, cannot participate, which is contrary to the original intention of the project."
Some people in the community expressed their concern: "I have invested so much. What if something goes wrong after Filecoin goes online? Wouldn't it all be a loss?"
So some miners are asking: can the threshold of mining be lowered while safety is preserved?
Some miners pointed out that not all mining machines need to be hosted in an IDC computer room, which is costly and prone to wasting resources. If machines could be hosted in different computer rooms according to the performance of each machine type, safety could be maintained while costs are reduced.
Judging from the interviews, many industry insiders are taking a wait-and-see attitude toward a Filecoin fork.
Li Bai said: "There are many people who have ideas, but few people can put them into action."
Wang Qingshui believes that any popular big project gets forked. Haven't BTC and ETH been forked plenty of times? But how many forks have managed to surpass the original?
Some miners think the fork is just talk: "People will follow the official version." "Who would write the code for you after the fork? Would you dare use code you wrote yourself?"
The Filecoin fork remains an "undercurrent". As the Filecoin mainnet launch approaches, miners' actions will become more frequent, and we will continue to report.
What do you think of the Filecoin fork? Please let us know in the comments section.
Author: paulcheung1990
1: ****ie*.redd***/4l6*p**nn4g51.jpg*width=676&*forma**pjpg&am**auto=*e*p&***c*16a*61e2*0d1a*4*3f9f*9c8*fdfcebfdb*d3
Unknown links are censored to prevent spreading illicit content.
submitted by removalbot to r/removalbot

Review and Prospect of Crypto Economy-Development and Evolution of Consensus Mechanism (1)


Foreword
The consensus mechanism is one of the most important elements of a blockchain and the core rule governing the normal operation of the distributed ledger. It is mainly used to solve the trust problem between people and to determine who is responsible for generating new blocks and maintaining the effective unity of the blockchain system. It has thus become a perennial research hotspot in blockchain.
This article starts with the concept and role of the consensus mechanism. First, it gives the reader a preliminary overall understanding of the consensus mechanism; then, starting with the two armies problem and the Byzantine generals problem, it introduces the evolution of consensus mechanisms in the order in which they were proposed; next, it briefly introduces the current mainstream consensus mechanisms in terms of concept, working principle and representative projects, and compares their advantages and disadvantages; finally, it gives suggestions on how to choose a consensus mechanism for a blockchain project and points out possible directions for the future development of consensus mechanisms.
Contents
First, concept and function of the consensus mechanism
1.1 Concept: The core rules for the normal operation of distributed ledgers
1.2 Role: Solve the trust problem and decide the generation and maintenance of new blocks
1.2.1 Used to solve the trust problem between people
1.2.2 Used to decide who is responsible for generating new blocks and maintaining effective unity in the blockchain system
1.3 Mainstream model of consensus algorithm
Second, the origin of the consensus mechanism
2.1 The two armies and the Byzantine generals
2.1.1 The two armies problem
2.1.2 The Byzantine generals problem
2.2 Development history of consensus mechanism
2.2.1 Classification of consensus mechanism
2.2.2 Development frontier of consensus mechanism
Third, Common Consensus System
Fourth, Selection of consensus mechanism and summary of current situation
4.1 How to choose a consensus mechanism that suits you
4.1.1 Determine whether the final result is important
4.1.2 Determine how fast the application process needs to be
4.1.3 Determine the degree of decentralization the application requires
4.1.4 Determine whether the system can be terminated
4.1.5 Select a suitable consensus algorithm after weighing the advantages and disadvantages
4.2 Future development of consensus mechanism
Chapter 1 Concept and Function of Consensus Mechanism
1.1 Concept: The core rules for the normal operation of distributed ledgers
Since most cryptocurrencies use a decentralized blockchain design, with nodes scattered and running in parallel everywhere, a system must be designed to maintain the order and fairness of the network's operation, unify the version of the blockchain, reward the users who maintain the blockchain, and punish malicious actors. Such a system must rely on some way to prove who has obtained the packaging rights (or accounting rights) for a block and may claim the reward for packaging it, or who intends to do harm and will receive a certain penalty. That system is the consensus mechanism.
1.2 Role: Solve the trust problem and decide the generation and maintenance of new blocks
1.2.1 Used to solve the trust problem between people
The reason the consensus mechanism can sit at the core of blockchain technology is that it formulates a set of rules based on cryptographic techniques such as asymmetric encryption and timestamping. All participants must comply with these rules, and the rules are transparent and cannot be modified arbitrarily. Therefore, without the endorsement of a third-party authority, it can still mobilize nodes across the network to jointly monitor and record all transactions and publish them in the form of code, effectively achieving valuable information transfer and solving, or more precisely greatly alleviating, the trust problem between two unrelated strangers who do not trust each other. After all, trusting objective technology is less risky than trusting a subjective individual.
1.2.2 Used to decide who is responsible for generating new blocks and maintaining effective unity in the blockchain system
On the other hand, in a blockchain system, due to the high network latency of the peer-to-peer network, the order of transactions observed by each node differs. The consensus mechanism is therefore used to reach agreement on the order of transactions within a short period of time, to decide who is responsible for generating new blocks, and to maintain the effective unity of the blockchain.
1.3 The mainstream model of consensus algorithm
The blockchain system is built on a P2P network, and the set of all nodes can be denoted P. Nodes are generally divided into ordinary nodes, which produce data or transactions, and "miner" nodes (denoted M), which are responsible for mining operations such as verifying, packaging, and updating the data or transactions generated by ordinary nodes. The functions of the two types of nodes may overlap. Miner nodes usually participate in the consensus competition, and in specific algorithms certain representative nodes are selected to participate in the consensus process and compete for accounting rights on their behalf; the set of these representative nodes is denoted D. The accounting nodes selected through the consensus process are denoted A. The consensus process repeats round by round, and each round generally reselects the accounting node for that round. The core of the consensus process consists of two parts, "leader election" and "accounting"; in operation, each round can be divided into four stages: leader election, block generation, data validation, and chain updating (namely accounting). As shown in Figure 1, the input of the consensus process is the transactions or data generated and verified by the data nodes, and the output is the encapsulated data block and the updated blockchain. The four stages execute repeatedly, and each round generates a new block.
Stage 1: Leader election
Leader election is the core of the consensus process, that is, the process of selecting the accounting node set A from the set of all miner nodes M. We can write the election process as f(M) → A, where the function f represents the specific implementation of the consensus algorithm. Generally speaking, |A| = 1, that is, a single miner node is ultimately selected to keep accounts.
Stage 2: Block generation
The accounting node selected in the first stage packages the transactions or data generated by all nodes P in the current time period into a block according to a specific strategy, and broadcasts the new block to all miner nodes M or their representative nodes D. These transactions or data are usually ordered by factors such as block capacity, transaction fees, and transaction waiting time, and then packaged into the new block in sequence. The block generation strategy is a key factor in blockchain system performance, and it is also where strategic miner behaviors such as greedy transaction packaging and selfish mining show up.
Stage 3: Verification
After receiving the broadcast new block, the miner nodes M or the representative nodes D verify the correctness and rationality of the transactions or data encapsulated in the block. If the new block is approved by most verification/representative nodes, it is appended to the blockchain as the next block.
Stage 4: On-Chain
The accounting node adds the new block to the main chain, forming a complete, longer chain from the genesis block to the latest block. If the main chain has multiple forks, the consensus algorithm's criteria are used to choose one appropriate fork as the main chain.
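For illustration, here is a schematic sketch of one such four-stage round; all types and functions are hypothetical stand-ins, not any particular chain's API:

```cpp
#include <cstdio>
#include <vector>

struct Transaction { int id; };
struct Block { std::vector<Transaction> txs; };
struct Node { int id; };

// Stage 1: f(M) -> A, pick the accounting node (trivial stub: first miner).
Node& elect_leader(std::vector<Node>& miners) { return miners.front(); }

// Stage 2: the leader packages pending transactions into a candidate block.
Block generate_block(const std::vector<Transaction>& pending) {
    return Block{pending};
}

// Stage 3: validators check the broadcast block (stub: non-empty is valid).
bool validate(const Block& b) { return !b.txs.empty(); }

// Stage 4: the approved block is appended to the chain.
void update_chain(std::vector<Block>& chain, const Block& b) {
    chain.push_back(b);
}

int main() {
    std::vector<Node> miners{{1}, {2}, {3}};
    std::vector<Transaction> pending{{100}, {101}};
    std::vector<Block> chain;

    Node& leader = elect_leader(miners);
    Block b = generate_block(pending);
    if (validate(b)) update_chain(chain, b);

    std::printf("leader=%d, chain height=%zu\n", leader.id, chain.size());
}
```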
Chapter 2 The Origin of Consensus Mechanism
2.1 The two armies problem and the Byzantine generals problem
2.1.1 The two armies problem


Figure 2 Schematic diagram of the two armies problem
Selected from Yuan Yong, Ni Xiaochun, Zeng Shuai, Wang Feiyue, "Development Status and Prospect of Blockchain Consensus Algorithm", Journal of Automation, 2018, 44(11): 2011-2022
As shown in the figure, the 1st and 2nd units of the Blue Army are stationed on two sides of the slope and cannot communicate with each other remotely, while the White Army is stationed right between the two Blue Army units. Suppose the White Army is stronger than either Blue Army unit alone, but weaker than the two combined. If the two Blue Army units want to jointly attack the White Army at the same time, they need to communicate, but the White Army sits between them: neither unit can confirm that its messenger has delivered the attack signal to the other, let alone rule out tampering with the messages. Because the two sides can never fully confirm with each other, no effective consensus can be reached between the two Blue Army units. This is the "paradox of the two armies".
2.1.2 The Byzantine generals problem


Figure 3 Diagram of the Byzantine generals' problem
Because of the vast territory of the Byzantine Empire, troops were scattered around the empire for better defense; the armies were far apart, and only messengers could deliver messages. During a war, all generals had to reach an agreement, deciding whether to attack the enemy based on the majority principle. However, since everything depended entirely on people, if a general rebelled or a messenger delivered the wrong message, how could the loyal generals reach agreement without being influenced by the rebels? This is the problem known as the Byzantine generals problem.
The two armies problem and the Byzantine generals problem both describe the same issue: when information exchange is unreliable, it is very difficult to reach consensus and coordinate action. The Byzantine generals problem can be seen as a generalization of the "paradox of the two armies".
From the perspective of computer networking, the two armies problem and the Byzantine generals problem are standard material in networking courses: direct communication between two nodes on the network may fail, so the TCP protocol cannot completely guarantee consistency between the two endpoints. The consensus mechanism, however, can use economic incentives and other methods to reduce this uncertainty to a level acceptable to most participants.
It is precisely because of the two armies problem and the Byzantine problem that the consensus mechanism has begun to show its value.
2.2 Development history of consensus mechanism
2.2.1 Classification of consensus mechanism
Because different types of blockchain projects have different requirements for recording information and generating blocks, and because consensus mechanisms keep improving as blockchain technology develops, there are currently more than 30 consensus mechanisms. They can be divided into two categories according to their Byzantine fault tolerance: Byzantine fault tolerant systems and non-Byzantine fault tolerant systems.

Table 1 Classification of consensus mechanism
Source: Yuan Yong, Ni Xiaochun, Zeng Shuai, Wang Feiyue, "Development Status and Prospect of Blockchain Consensus Algorithm"
2.2.2 Development frontier of consensus mechanism
- Development of consensus algorithms
According to the time at which each consensus algorithm was proposed, we can trace the development of consensus algorithms relatively clearly.
Figure 4 Development frontier of consensus algorithm
Source: Network data

Figure 5 Historical evolution of blockchain consensus algorithm
Source: Yuan Yong, Ni Xiaochun, Zeng Shuai, Wang Feiyue, "Development Status and Prospect of Blockchain Consensus Algorithm"
Consensus algorithms laid the foundation for blockchain consensus mechanisms. Initially, research on consensus algorithms was conducted mainly by computer scientists and professors to fight spam or for academic discussion.
For example, in 1993, the computer scientist Cynthia Dwork (together with Moni Naor) first proposed the idea of proof of work in order to combat spam; in 1997, the British cryptographer Adam Back independently proposed hashcash, which uses proof of work against spam, and formally published it in 2002; in 1999, Markus Jakobsson (with Ari Juels) formally coined the term "proof of work", laying the foundation for Satoshi Nakamoto's later design of the Bitcoin consensus mechanism.
Next lecture: Chapter 3 Detailed Explanation of Consensus Mechanism Technology
CelesOS
As the first DPOW financial blockchain operating system, CelesOS adopts consensus mechanism 3.0 to break through the "impossible triangle", providing both high TPS and decentralization. It is committed to creating a financial blockchain operating system that embraces regulation, providing services for financial institutions and for application development on the regulated chain, and developing role- and consensus-level protocols for regulators.
The CelesOS team is committed to building a bridge between blockchain and regulatory agencies / finance industry. We believe that only blockchain technology that cooperates with regulators will have a bright future and strive to achieve this goal.
📷 Website
https://www.celesos.com/
📷 Telegram
https://t.me/celeschain
📷 Twitter
https://twitter.com/CelesChain
📷 Reddit
https://www.reddit.com/user/CelesOS
📷 Medium
https://medium.com/@celesos
📷 Facebook
https://www.facebook.com/CelesOS1
📷 Youtube
https://www.youtube.com/channel/UC1Xsd8wU957D-R8RQVZPfGA
submitted by CelesOS to u/CelesOS

Bitcoin Unlimited - Bitcoin Cash edition 1.6.0.0 has just been released

Download the latest Bitcoin Cash compatible release of Bitcoin Unlimited (1.6.0.0, April 24th, 2019) from:
 
https://www.bitcoinunlimited.info/download
 
This is a major release of Bitcoin Unlimited which is compatible with the upcoming May 2019 BCH protocol upgrade; this release is also compatible with all previously activated Bitcoin Cash network upgrades, namely:
List of notable changes and fixes contained in BUcash 1.6.0.0:
 
Release notes: https://github.com/BitcoinUnlimited/BitcoinUnlimited/blob/dev/doc/release-notes/release-notes-bucash1.6.0.md
 
P.S. The Ubuntu PPA repository is currently being updated to serve BUcash 1.6.0.0.
submitted by s1ckpig to r/btc

I Created a Custom Lightning Payment Jackpot Website from Scratch, This Is What I Learnt

TL;DR: I wanted to learn how the Lightning Network operates, so I came up with an idea for a jackpot site that uses the Lightning Network to handle micro-payments. Operating a Lightning node is complicated and challenging for a beginner. Custodial wallets like Wallet of Satoshi, BlueWallet or Breez are easy to use, but then they are not your keys. Please come by and help me test my new Lightning-integrated website. I'm happy to help anyone who's new to Lightning set up a wallet and play a game. It all helps with learning and adoption; that's why we're all here! Long Bitcoin, Short the Bankers!

Introduction: Welcome to a brand new concept in random number seeding. Generating a truly random number is quite hard. You could use the current time, divided by the RPM of your hard disk, squared by the temperature of your CPU, and so on. Other extreme methods include measuring quantum fluctuations in a vacuum; see the ANU Quantum Random Number project. All these methods are fine, but none of them is really verifiable by a third party. Whoever runs the system can change the outcome. I'm not saying they do, simply stating that if the payoff was great enough to alter the 'reported' outcome, they could. So what's different here? We're using the Bitcoin blockchain itself as the arbitrator. Every outcome is not only provably fair but verifiably fair and immutable. Trying to cheat this system is impossible.

So that's the pitch: make a website where whoever's guess is closest wins the jackpot, using Lightning to handle all the incoming and outgoing payments. I started looking around at other fully functional websites offering Lightning as a payment method. It turns out most use a third party like OpenNode or CoinGate. To me, this defeats the whole purpose of Bitcoin. Why build a website, offer a service, or offer Lightning as a payment method if you don't even own or control your funds? A payment processor could simply turn off withdrawals and it's over. Not your keys, not your coins!

It's been quite a learning experience for me. The most frustrating thing to figure out and attempt to solve was channel capacity. For example, with a fresh wallet set up on Bitcoin Lightning for Android (blue bolt logo), you can open a channel to anyone just fine, but trying to receive money won't work. For beginner adoption, I think this is the greatest hurdle to understand and overcome.
You need to spend money so the other side has some collateral to send back. One explanation I read was that opening a Lightning channel is like a full glass of water: I need to tip some of my water into your empty glass so my glass has room to be filled back up; it can't overflow. Another is beads on a string: the number of beads is up to you, but if all the beads are on your side, the other party can't push any beads your way, because you have them all. There are ways to fix this: either spend into the channel or buy incoming channel capacity. On the spend side, you can use websites like lightningconductor.net, which let you send money from your new channel to their Lightning node, and they'll send the coins to your on-chain Bitcoin wallet. This is a simple way to empty your glass, or push those beads to the other side, while retaining all your money minus LN and on-chain fees. For incoming capacity, you can use LNBig and get 400k satoshis of incoming capacity for free, or lightningto.me, or you can pay lightningpowerusers.com or bitrefill.com to open larger-capacity channels to you for a small fee.
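A toy model of that intuition (not any wallet's actual code): a channel's total capacity is fixed, and paying across it moves balance to the other side, which is exactly what creates your inbound capacity.

```cpp
#include <cstdio>

struct Channel {
    long local_sat;   // your side ("your glass")
    long remote_sat;  // their side
    bool pay(long amount) {                   // you -> them
        if (amount > local_sat) return false; // can't pour what you don't have
        local_sat -= amount;
        remote_sat += amount;
        return true;
    }
    long inbound_capacity() const { return remote_sat; } // what you can receive
};

int main() {
    Channel c{100000, 0};       // fresh channel: all beads on your side
    std::printf("inbound before: %ld\n", c.inbound_capacity()); // 0
    c.pay(40000);               // spend into the channel
    std::printf("inbound after:  %ld\n", c.inbound_capacity()); // 40000
}
```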

For a beginner or someone new to Bitcoin/Lightning, using a custodial wallet like BlueWallet, Wallet of Satoshi or Breez is far easier than trying to set up channels and buy or massage incoming capacity. You can simply install the application, and using lightningconductor.net's BTC-to-LN conversion you can send some Bitcoin and they'll forward it on to your Lightning wallet, for a fee. These custodial wallets accept incoming transactions of 1 million satoshis or more. So now you've got a working wallet with a few thousand satoshis. Keep reading!

How to play: Two things are verifiable on the blockchain: the time between blocks and the transactions included in a block. First, choose a block number; by default it will be the next one coming up. Then choose a public alias. Others will be able to see your bets, but they won't know whether you've paid; only you can see that. Next, guess the time it will take to mine the next Bitcoin block or the number of transactions in that block. You can make multiple guesses. If you want to place a number of spread bets, I suggest opening a spreadsheet and getting it to generate the times or transaction counts for you. For example, put in 2300, then 2350, 2375, 2400, then drag down to generate as many in the sequence as you want. You can bet a maximum of 25 per invoice; this should help ensure the small transactions succeed. Once you've generated an invoice, pay it from the QR code or the Lightning bolt11 string.
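When the block confirms, the closest guess takes the jackpot. Here is a minimal sketch of that rule as I understand it (hypothetical code, not the site's; tie-breaking is left unspecified). Since the actual block time and transaction count are public on the blockchain, anyone can re-run this and verify the winner:

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Bet { const char* alias; long guess; };

// Winner = the paid bet with the smallest absolute distance to the
// actual on-chain value (block time in seconds, or transaction count).
const Bet* pick_winner(const std::vector<Bet>& bets, long actual) {
    const Bet* best = nullptr;
    for (const auto& b : bets)
        if (!best || std::labs(b.guess - actual) < std::labs(best->guess - actual))
            best = &b;
    return best;
}

int main() {
    std::vector<Bet> bets{{"alice", 2350}, {"bob", 2750}, {"carol", 2749}};
    long actual_tx_count = 2731;  // read from the confirmed block
    std::printf("winner: %s\n", pick_winner(bets, actual_tx_count)->alias);
}
```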
Now you're ready to go. Wait until the next block (or the block you've bet on) goes active, and you'll see your bets and everyone else's and, most importantly, what the final jackpot is. Unpaid invoices are discarded. If the block rolls over while you're making up your mind, the page will refresh and you could lose your input, so please plan your bets in notepad or a spreadsheet. I know this is annoying, but I never claimed to be a UX coder/designer! It was a struggle getting all the CSS, Ajax and JavaScript working, ahhhrrrrggg!! Next comes the interesting part, as this game can become competitive.

Game theory: As others make bets, you can encapsulate theirs. For example, if they guess 2750 transactions, you can bet 2749 and 2751. While at first this seems unfair, what it doesn't show is which bets have been paid for and which have not. Only you can see which of your own bets are paid; to everyone else they all look like paid bets. Only when the next block/jackpot starts can you see what was actually paid, as unpaid bets are discarded. By placing dummy (unpaid) bets, you can sucker someone in and greatly increase the jackpot payout at no cost to yourself. You can also use the same alias in two different browsers, one for real bets and one for fake bets. This is why there's a 25-bet limit; I don't want people going too crazy with this. You can check your bets in the footer bar under 'previous bets'. Also, IMPORTANT: please keep track of your account number at the top. If your session or browser has a problem, you can lose access to your bets and jackpot winnings. If this happens and you receive a new account number, simply use 'claim jackpot' in the footer to claim your winning jackpot. If you don't have your account number, I can't help you if something goes wrong. Rather than a login/password system, you have a unique account id. Don't lose it! Now back to the blockchain.

Wait a minute… I thought it took 10 minutes to confirm a block? Not always; in fact that happens very rarely. Averaged over time, blocks come in around every ten minutes. A block is confirmed when a miner takes transactions from the memory pool, up to ~1.2 MB worth. Next, and this is the hard part, they need to generate a hash for that block that starts with X leading zeros. To achieve this, they use a random number called a nonce to seed/salt the hash and hope the resulting block hash has the required number of leading zeros. If not, discard and keep trying. The winning block contains the miner's local time, which can sometimes be wrong; this is why you occasionally get negative block times. See block #180966: the next block, #180967, has a timestamp earlier than it! Who cares, as long as the later block references the previous block to keep the chain intact. You can't guess negative numbers, but you can guess 0 seconds, which I suppose is like betting on the green zero in roulette.
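Here is a toy sketch of that nonce search (illustrative only: real Bitcoin mining uses double SHA-256 against a full difficulty target, not this stand-in hash; the loop structure is the point):

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>
#include <string>

// Stand-in for SHA-256d, just to make the sketch runnable.
uint64_t toy_hash(const std::string& header, uint64_t nonce) {
    return std::hash<std::string>{}(header + std::to_string(nonce));
}

// Keep trying nonces until the hash starts with the required zero bits.
uint64_t mine(const std::string& header, int leading_zero_bits) {
    for (uint64_t nonce = 0;; ++nonce)    // discard and keep trying
        if (toy_hash(header, nonce) >> (64 - leading_zero_bits) == 0)
            return nonce;                 // hash meets the "target"
}

int main() {
    uint64_t nonce = mine("prev_hash|merkle_root|timestamp", 20);
    std::printf("found nonce: %llu\n", (unsigned long long)nonce);
}
```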

Ready to play?
Each bet is worth 5,000 satoshis. I wanted it to be expensive enough to prevent spam, and for the jackpots to be large enough that the game is worth playing. I hope you have fun.
Website is https://blockwisdom.com
My Twitter handle is @nixdice
If you have any questions or issues, please contact me here or on Twitter and I'll try my best to sort them out quickly.
submitted by nixdice to r/Bitcoin

Searching for the Unicorn Cryptocurrency

For someone first starting out as a cryptocurrency investor, there is no trustworthy manual for screening a cryptocurrency's merits, as we are still in the early, Wild West days of the cryptocurrency market. One would need to become deeply familiar with the inner workings of blockchain just to perform the bare minimum of due diligence.
One might come to believe, over time, that finding the perfect cryptocurrency is nothing short of futile. If a cryptocurrency promises infinite scalability, then it is probably either lightweight with limited features or highly centralized among a limited number of consensus nodes, especially under Proof of Stake or Delegated Proof of Stake. Similarly, a cryptocurrency promising comprehensive privacy may face technical obstacles if it aims to expand into applications such as smart contracts. The bottom line is that it is extremely difficult for one cryptocurrency to pack in every important feature.
The cryptocurrency space is stuck in the era of "dial-up internet", in a manner of speaking. Blockchains currently can't scale without certain tradeoffs, and certain intractable issues remain unresolved, such as user-unfriendly long addresses and blockchain sizes that grow forever, to name two.
In other words, we haven’t found the ultimate cryptocurrency. That is, we haven’t found the mystical unicorn cryptocurrency that ushers the era of decentralization while eschewing all the limitations of traditional blockchain systems.
“But wait – what about Ethereum once it implements sharding?”
“Wouldn’t IOTA be able to scale infinitely with smart contracts through its Qubic offering?”
“Isn’t Dash capable of having privacy, smart contracts, and instantaneous transactions?”
Those thoughts and comments may come from cryptocurrency investors who have done their research. It is natural for informed investors to invest in projects believed to bring cutting-edge technological transformation to blockchain. Sooner or later, though, the sinking realization hits that any variation of current blockchain technology will likely have certain limitations.
Let us pretend that there indeed exists a unicorn cryptocurrency somewhere that may or may not be here yet. What would it look like, exactly? Let us set the 5 criteria of the unicorn cryptocurrency:
Unicorn Criteria
(1) Perfectly solves the blockchain trilemma:
o Infinite scalability
o Full security
o Full decentralization
(2) Zero or minimal transaction fee
(3) Full privacy
(4) Full smart contract capabilities
(5) Fair distribution and fair governance
For each of the above 5 criteria, there is no middle ground. For example, a cryptocurrency with just an in-protocol mixer would not be considered as having full privacy. As another example, an Initial Coin Offering (ICO) may violate criterion (5), since with an ICO the distribution and governance are often heavily skewed toward an oligarchy, which would defy the spirit of decentralization that Bitcoin was founded on.
There is currently no cryptocurrency that fits the above profile of the unicorn cryptocurrency. Let us examine an arbitrary list of highly hyped cryptocurrencies that at least partially meet the criteria. The following list is by no means comprehensive, but it may be a sufficient sampling of various blockchain implementations:
Bitcoin (BTC)
Bitcoin is the very first and the best-known cryptocurrency, the one that started it all. While Bitcoin is generally considered extremely secure, it suffers from a degree of mining centralization. Bitcoin is not anonymous, lacks smart contracts, and, most worrisomely, can only do about 7 transactions per second (TPS). Bitcoin is not the unicorn, notwithstanding all the Bitcoin maximalists.
Ethereum (ETH)
Ethereum is widely considered the gold standard of smart contracts aside from its scalability problem. Sharding as part of Casper’s release is generally considered to be the solution to Ethereum’s scalability problem.
The goal of sharding is to split validating responsibilities among various groups or shards. Ethereum's sharding comes down to duplicating the existing blockchain architecture and sharing a token between the copies. This does not solve the core issue and simply kicks the can down the road; after all, full nodes still need to exist one way or another.
Ethereum’s blockchain size problem is also an issue as will be explained more later in this article.
As a result, Ethereum is not the unicorn due to its incomplete approach to scalability and, to a degree, security.
Dash
Dash's masternodes are widely considered centralized due to their high funding requirements, and there are accounts of a premine in the beginning. Dash is not the unicorn due to its questionable decentralization.
Nano
Nano rightfully boasts of its instant, free transactions, but it lacks smart contracts and privacy and may be exposed to well-orchestrated DDoS attacks. It therefore goes without saying that Nano is not the unicorn.
EOS
While EOS claims to execute millions of transactions per second, a quick glance reveals centralized parameters, with 21 block-producing nodes and a questionable governance system. EOS therefore fails to achieve unicorn status.
Monero (XMR)
One of the best-known and most respected privacy coins, Monero lacks smart contracts and may fall short of infinite scalability due to CryptoNote's design. The unicorn rank is out of Monero's reach.
IOTA
IOTA's scalability depends on the number of transactions the network processes, so its supposedly infinite scalability fluctuates with the whims of the underlying transaction volume. While IOTA's scalability approach is innovative and may work in the long term, remember that the unicorn cryptocurrency allows no middle ground: it would be expected to scale infinitely and consistently from the beginning.
In addition, IOTA’s Masked Authenticated Messaging (MAM) feature does not bring privacy to the masses in a highly convenient manner. Consequently, the unicorn is not found with IOTA.

PascalCoin as a Candidate for the Unicorn Cryptocurrency
Please allow me to present a candidate for the cryptocurrency unicorn: PascalCoin.
According to the website, PascalCoin claims the following:
“PascalCoin is an instant, zero-fee, infinitely scalable, and decentralized cryptocurrency with advanced privacy and smart contract capabilities. Enabled by the SafeBox technology to become the world’s first blockchain independent of historical operations, PascalCoin possesses unlimited potential.”
The above summary is a mouthful to be sure, but let’s take a deep dive on how PascalCoin innovates with the SafeBox and more. Before we do this, I encourage you to first become acquainted with PascalCoin by watching the following video introduction:
https://www.youtube.com/watch?time_continue=4&v=F25UU-0W9Dk
The rest of this section will be split into 10 parts in order to illustrate most of the notable features of PascalCoin. Naturally, let’s start off with the SafeBox.
Part #1: The SafeBox
Unlike traditional UTXO-based cryptocurrencies, in which the blockchain records the specifics of each transaction (sender address, receiver address, amount of funds transferred, etc.), the blockchain in PascalCoin is only used to mutate the SafeBox. The SafeBox is a separate but equivalent cryptographic data structure that snapshots account balances. PascalCoin's blockchain is comparable to a machine that feeds the most important data, namely the state of each account, into the SafeBox. Any node can still independently compute and verify the cumulative Proof-of-Work required to construct the SafeBox.
The PascalCoin whitepaper elegantly highlights the unique historical independence that the SafeBox possesses:
“While there are approaches that cryptocurrencies could use such as pruning, warp-sync, "finality checkpoints", UTXO-snapshotting, etc, there is a fundamental difference with PascalCoin. Their new nodes can only prove they are on most-work-chain using the infinite history whereas in PascalCoin, new nodes can prove they are on the most-work chain without the infinite history.”
Some cryptocurrency old-timers might instinctively balk at the idea of full nodes eschewing the entire history for security, but such a reaction would showcase a lack of understanding of what the SafeBox really does.
A concrete example would go a long way to best illustrate what the SafeBox does. Let’s say I input the following operations in my calculator:
5 * 5 – 10 / 2 + 5
It does not take a genius to calculate the answer, 25. Now, the expression “5 * 5 – 10 / 2 + 5” would be forever imbued on a traditional blockchain's history. But the SafeBox begs to differ. It says that the expression “5 * 5 – 10 / 2 + 5” should instead be simply “25”, so as to preserve simplicity, time, and space. In other words, the SafeBox simply preserves the account balance.
But some might still be unsatisfied and claim that if one cannot trace the series of operations (transactions) that led to the final number (balance) of 25, the blockchain is inherently insecure.
Here are four important security aspects of the SafeBox that some people fail to realize:
(1) SafeBox Follows the Longest Chain of Proof-of-Work
The SafeBox mutates itself every 100 blocks. Each new SafeBox mutation must reference both the previous SafeBox mutation and the preceding 100 blocks in order to be valid; the hash of the newly mutated SafeBox must then be referenced by each subsequent new block, and the process repeats forever.
The fact that each new SafeBox mutation must reference the previous SafeBox mutation is comparable to relying on the entire history: the previous SafeBox mutation encapsulates the cumulative result of the entire history except for the latest 100 blocks, which is why each new mutation requires both the previous mutation and those 100 blocks.
So in a sense, there is a single interconnected chain of inflows and outflows, supported by Byzantine Proof-of-Work consensus, instead of the entire history of transactions.
More concretely, the SafeBox follows the path of the longest chain of Proof-of-Work simply by design, and is thus cryptographically equivalent to the entire history even without tracing specific operations in the past. If the chain is rolled back with a 51% attack, only the attacker’s own account(s) in the SafeBox can be manipulated as is explained in the next part.
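For intuition, here is a minimal sketch of such snapshot chaining. The hashing layout is an assumption made for illustration, not PascalCoin's actual serialization:

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def next_safebox_hash(prev_safebox_hash: bytes, last_100_block_hashes) -> bytes:
        # Illustrative layout: commit to the previous snapshot plus the new blocks,
        # so every snapshot transitively commits to all history without storing it.
        h = hashlib.sha256()
        h.update(prev_safebox_hash)
        for block_hash in last_100_block_hashes:
            h.update(block_hash)
        return h.digest()

    genesis = sha256(b"genesis safebox")
    epoch_1 = next_safebox_hash(genesis, [sha256(b"block %d" % i) for i in range(100)])
    epoch_2 = next_safebox_hash(epoch_1, [sha256(b"block %d" % i) for i in range(100, 200)])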
(2) A 51% Attack on PascalCoin Functions the Same as Others
A 51% attack on PascalCoin would work in a similar way as with other Proof-of-Work cryptocurrencies. An attacker cannot modify a transaction in the past without affecting the current SafeBox hash which is accepted by all honest nodes.
Someone might claim that if you roll back all the current blocks plus the 100 blocks prior to the SafeBox’s mutation, one could create a forged SafeBox with different balances for all accounts. This would be incorrect as one would be able to manipulate only his or her own account(s) in the SafeBox with a 51% attack – just as is the case with other UTXO cryptocurrencies. The SafeBox stores the balances of all accounts which are in turn irreversibly linked only to their respective owners’ private keys.
(3) One Could Preserve the Entire History of the PascalCoin Blockchain
The SafeBox does not require any PascalCoin blockchain data to be deleted. Since the SafeBox is cryptographically equivalent to a full node with the entire history, as explained above, PascalCoin full nodes are not expected to retain that infinite history; but for whatever reason(s) one may have, one could still keep the entire blockchain history alongside the SafeBox, redundant as that would be.
Even without storing the entire history of the PascalCoin blockchain, you can still trace the specific operations of the most recent 100 blocks – those the SafeBox has not yet absorbed into a net result (a single balance per account). If you are interested in tracing operations over a longer period in the past – redundant as that may be – you have the option of storing the entire history of the PascalCoin blockchain.
(4) The SafeBox is Equivalent to the Entire Blockchain History
Some skeptics may ask: "What if the SafeBox is forever lost? How would you be able to verify your accounts?" Asking this is tantamount to asking what would happen to Bitcoin if its entire history were erased. The result would be chaos, of course, but the SafeBox is still in line with the general security model of a traditional blockchain with respect to black swans.
Now that we know the security of the SafeBox is not compromised, what are the implications of this new blockchain paradigm? Even a colorful illustration wouldn't do justice to the subtle revolution the SafeBox ushers in, but here goes: the automobiles we see on the street are the bread-and-butter representation of traditional blockchain systems; the SafeBox supercharges those traditional cars into the Transformers from Michael Bay's films.
The SafeBox is an entirely different blockchain architecture that is impressive in its simplicity and ingenuity, and its design is only the opening act for PascalCoin's vast arsenal. If the above were all that PascalCoin offered, it still wouldn't come close to achieving unicorn status – but luckily, we have just scratched the surface. Keep reading if you want to learn how PascalCoin intends to shake the cryptocurrency industry to its core, and buckle up, because this is a long read as we explore the SafeBox's implications further.
Part #2: 0-Confirmation Transactions
To begin, 0-confirmation transactions are secure in PascalCoin thanks to the SafeBox.
The following paraphrases an explanation of PascalCoin’s 0-confirmations from the whitepaper:
“Since PascalCoin is not a UTXO-based currency but rather a State-based currency thanks to the SafeBox, the security guarantee of 0-confirmation transactions are much stronger than in UTXO-based currencies. For example, in Bitcoin if a merchant accepts a 0-confirmation transaction for a coffee, the buyer can simply roll that transaction back after receiving the coffee but before the transaction is confirmed in a block. The way the buyer does this is by re-spending those UTXOs to himself in a new transaction (with a higher fee) thus invalidating them for the merchant. In PascalCoin, this is virtually impossible since the buyer's transaction to the merchant is simply a delta-operation to debit/credit a quantity from/to accounts respectively. The buyer is unable to erase or pre-empt this two-sided, debit/credit-based transaction from the network’s pending pool until it either enters a block for confirmation or is discarded with respect to both sender and receiver ends. If the buyer tries to double-spend the coffee funds after receiving the coffee but before they clear, the double-spend transaction will not propagate the network since nodes cannot propagate a double-spending transaction thanks to the debit/credit nature of the transaction. A UTXO-based transaction is initially one-sided before confirmation and therefore is more exposed to one-sided malicious schemes of double spending.”
Phew, that explanation was technical but it had to be done. In summary, PascalCoin possesses the only secure 0-confirmation transactions in the cryptocurrency industry, and it goes without saying that this means PascalCoin is extremely fast. In fact, PascalCoin is capable of 72,000 TPS even prior to any additional extensive optimizations down the road. In other words, PascalCoin is as instant as it gets and gives Nano a run for its money.
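The double-spend resistance described in the excerpt can be sketched with a toy account-state pending pool (names and logic are assumed for illustration):

    # Toy pending pool for an account/state model: a conflicting re-spend that
    # would overdraw the pending balance is never accepted or propagated.
    balances = {"buyer": 10.0, "merchant": 0.0, "accomplice": 0.0}
    pending_debits = {}   # account -> amount already committed in the pool

    def accept_transaction(sender, receiver, amount):
        already_spent = pending_debits.get(sender, 0.0)
        if balances[sender] - already_spent < amount:
            return False  # nodes refuse to relay the double spend
        pending_debits[sender] = already_spent + amount
        return True

    print(accept_transaction("buyer", "merchant", 8.0))    # True: coffee is paid for
    print(accept_transaction("buyer", "accomplice", 8.0))  # False: re-spend rejected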
Part #3: Zero Fee
Let's circle back to our discussion of PascalCoin's 0-confirmation capability. Here's a fun twist to the 0-confirmation magic: 0-confirmation transactions are zero-fee. You don't pay a single cent in fees for each one! There is just a tiny downside: if you create a second transaction within a 5-minute block window, you'd need to pay a minimal fee. Imagine using Nano, but with significantly stronger anti-DDoS protection against spam! There shouldn't be any complaints, though, as this fee amounts to 0.0001 Pascal, or $0.00002 at the Pascal price at the time of this writing.
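The fee rule can be sketched as follows (the 0.0001 Pascal figure is from this post; the bookkeeping logic is an assumption for illustration):

    MIN_FEE = 0.0001  # Pascal, per the figure quoted above

    ops_this_window = {}  # account -> operations sent in the current ~5-minute block window

    def required_fee(account):
        # First operation in the block window is free; later ones pay the minimal fee.
        return 0.0 if ops_this_window.get(account, 0) == 0 else MIN_FEE

    def send(account):
        fee = required_fee(account)
        ops_this_window[account] = ops_this_window.get(account, 0) + 1
        return fee

    print(send("alice"))  # 0.0
    print(send("alice"))  # 0.0001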
So, how come the fee for blazingly fast transactions is nonexistent? This is where the magic of the SafeBox arises in three ways:
(1) PascalCoin possesses the secure 0-confirmation feature as discussed above that enables this speed.
(2) There is no fee bidding competition of transaction priority typical in UTXO cryptocurrencies since, once again, PascalCoin operates on secure 0-confirmations.
(3) There is no fee incentive needed to run full nodes on behalf of the network’s security beyond the consensus rewards.
Part #4: Blockchain Size
Let's expand on the third point above, using Ethereum as an example. Since Ethereum's launch in 2015, its full blockchain has grown to around 2 TB, give or take, but let's call it 100 GB for now to avoid offending the Ethereum elitists who insist there are lighter kinds of full nodes. Whoever runs Ethereum's full nodes would expect storage fees on top of the typical consensus fees, as it takes significant resources to shoulder Ethereum's full blockchain size and in turn secure the network. What if I told you that PascalCoin's full blockchain size will never exceed a few GB, even after thousands of years? That is exactly what the SafeBox enables PascalCoin to do. It is estimated that by 2072, PascalCoin's full nodes will require only 6 GB, which is low enough not to warrant any fee incentives for hosting full nodes. Remember, the SafeBox is an ultra-light cryptographic data structure that is cryptographically equivalent to a blockchain with the entire transaction history. In other words, the SafeBox is a compact spreadsheet of all account balances that functions as PascalCoin's full node!
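As a rough back-of-envelope (every number below is an assumption, not an official figure), the key point is that a balances-only snapshot scales with the number of accounts rather than with the transaction history:

    # Back-of-envelope only; the per-account record size is assumed.
    bytes_per_account = 200           # assumed: public key, balance, name, metadata
    accounts = 30_000_000             # assumed adoption level

    safebox_bytes = accounts * bytes_per_account
    print(safebox_bytes / 1e9, "GB")  # 6.0 GB -- independent of how many payments ever happened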
Not only does the SafeBox's tiny footprint help reduce transaction fees by phasing out storage fees, it also paves the way for true decentralization. It would be trivial for every PascalCoin user to run a full node in the form of a wallet. This is extreme decentralization at its finest, since the majority of users of other cryptocurrencies forgo full nodes due to their burdensome sizes. It is naïve to believe that storage costs will fall far enough to make hosting full nodes trivial. Take a look at the following chart outlining the trend of storage cost.

[Chart: hard drive cost per gigabyte over time. Source: https://www.backblaze.com/blog/hard-drive-cost-per-gigabyte/]
As we can see, storage costs continue to fall, but the descent is slowing down, as is the norm with technological improvements. Meanwhile, the blockchain sizes of other cryptocurrencies are growing linearly or, in the case of smart contract engines like Ethereum, parabolically. Imagine a smart contract engine like Ethereum garnering worldwide adoption; what do you think Ethereum's size would look like in the far future, based on the following chart?


[Chart: Ethereum full blockchain size over time. Source: https://i.redd.it/k57nimdjmo621.png]

Ethereum's future blockchain size is not looking pretty in terms of sustainable security. Sharding is not a fix for this issue, since full nodes are still needed – but that is a different topic for another time.
It is astonishing that the cryptocurrency community as a whole has passively accepted this forever-expanding-blockchain-size problem as an inescapable fate.
PascalCoin is the only cryptocurrency that has fully escaped the death vortex of forever expanding blockchain size. Its blockchain size wouldn’t exceed 10 GB even after many hundreds of years of worldwide adoption. Ethereum’s blockchain size after hundreds of years of worldwide adoption would make fine comedy.
Part #5: Simple, Short, and Ordinal Addresses
Remember how the SafeBox works by snapshotting all account balances? As it turns out, the account address system is almost as cool as the SafeBox itself.
Imagine yourself in this situation: on a very hot and sunny day, you're wandering down the street across from your house and run into a lemonade stand – the old-fashioned kind, without any QR code or credit card terminal. The kid across from you is selling a cup of lemonade for 1 Pascal, with a poster listing the payment address as 5471-55. You whip out your phone and click "Send" with 1 Pascal to the address 5471-55; voilà, exactly one second later you're drinking your lemonade without having paid a cent in transaction fees!
The last thing anyone wants is to figure out how to copy/paste an address like 1BoatSLRHtKNngkdXEeobR76b53LETtpyT on the spot, wouldn't you agree? Gone are the obnoxiously long addresses that plague all cryptocurrencies. The days of those unreadable addresses have to end if blockchain is to reinvent itself for the general public. EOS has a similar feature for readable addresses, but in a far more limited manner, and nicknames attached to addresses in GUIs don't count, since they aren't compatible blockchain-wide.
Not only does PascalCoin have the neat feature of addresses (called PASAs) that run to just 6 or 7 digits, it can also incorporate in-protocol address naming, as opposed to GUI address nicknames. Suppose I want to order something from Amazon using Pascal; I simply search the word "Amazon" and the corresponding account number shows up. Pretty neat, right?
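The general shape of such short checksummed addresses can be sketched as follows; the checksum rule below is an illustrative assumption, not necessarily the formula PascalCoin's whitepaper defines:

    def checksum(account_number: int) -> int:
        # Illustrative rule only -- PascalCoin specifies its own checksum.
        return (account_number * 101) % 89 + 10

    def format_pasa(account_number: int) -> str:
        return f"{account_number}-{checksum(account_number)}"

    def is_valid(pasa: str) -> bool:
        number, _, check = pasa.partition("-")
        return int(check) == checksum(int(number))

    print(format_pasa(5471))            # "5471-69" under this illustrative rule
    print(is_valid(format_pasa(5471)))  # True; a typo in either part fails the check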
The astute reader may gather that PascalCoin’s address system makes it necessary to commoditize addresses, and he/she would be correct. Some view this as a weakness; part #10 later in this segment addresses this incorrect perception.
Part #6: Privacy
As if the above weren't enough, here's another secret PascalCoin holds: it is a full-blown privacy coin. It uses two separate foundations to achieve comprehensive anonymity: an in-protocol mixer for transfer amounts, and zk-SNARKs for private balances. The former has been implemented; the latter is on the roadmap. Together with 0-confirmation transactions and negligible fees, the pending zk-SNARKs implementation would make PascalCoin the most scalable privacy coin of any cryptocurrency.
Part #7: Smart Contracts
Next, PascalCoin will take smart contracts to the next level with a layer-2 overlay consensus system that pioneers sidechains and other smart contract implementations.
In formal terms, this layer-2 architecture will facilitate the transfer of data between PASAs, which in turn allows clean enveloping of layer-2 protocols inside layer-1, much in the same way that HTTP lives inside TCP.
To summarize:
· The layer-2 consensus method is separate from the layer-1 Proof-of-Work. This layer-2 consensus method is independent and flexible: a sidechain – based on a single encompassing PASA – could apply Proof-of-Stake (POS), Delegated Proof-of-Stake (DPOS), or a Directed Acyclic Graph (DAG) as the consensus system of its choice.
· Such a layer-2 smart contract platform can be written in any language.
· Layer-2 sidechains will also provide very strong anonymity since funds are all pooled and keys are not used to unlock them.
· This layer-2 architecture is ingenious in that computation is separated from layer-2 consensus, in effect removing any bottleneck.
· Horizontal scaling exists in this paradigm as there is no interdependence between smart contracts and states are not managed by slow sidechains.
· Speed and scalability are fully independent of PascalCoin.
One would be able to run the entire global financial system on PascalCoin's infinitely scalable smart contract platform. In fact, this layer-2 architecture would be exponentially faster than Ethereum, even after Ethereum's sharding is implemented.
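The enveloping idea can be sketched like this (all field names are assumptions; this is not PascalCoin's actual operation format). Layer 1 carries an opaque payload between PASAs, and only layer-2 nodes interpret it, the way TCP carries HTTP without understanding it:

    import json

    def make_layer1_op(sender_pasa, receiver_pasa, payload: bytes) -> dict:
        # Layer 1 moves the payload between accounts but never interprets it.
        return {"from": sender_pasa, "to": receiver_pasa, "payload": payload.hex()}

    def layer2_message(op: dict) -> dict:
        # A layer-2 node decodes and acts on the enveloped protocol message.
        return json.loads(bytes.fromhex(op["payload"]))

    msg = json.dumps({"protocol": "toy-sidechain", "action": "vote", "ballot": 7}).encode()
    op = make_layer1_op("1234-56", "7890-12", msg)
    print(layer2_message(op))  # {'protocol': 'toy-sidechain', 'action': 'vote', 'ballot': 7}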
All this is the main focus of PascalCoin’s upcoming version 5 in 2019. A whitepaper add-on for this major upgrade will be released in early 2019.
Part #8: RandomHash Algorithm
Surely, you might be asking yourself, there must be some tradeoffs to PascalCoin's impressive capabilities. One might bring up the fact that PascalCoin's layer 1 is based on Proof-of-Work and is thus susceptible to mining centralization. This would be a fallacy: PascalCoin has pioneered the first truly ASIC-, GPU-, and dual-mining-resistant algorithm, known as RandomHash, which obliterates anything that is not CPU-based and gives all the power back to solo miners.
Here is the official description of RandomHash:
“RandomHash is a high-level cryptographic hash algorithm that combines other well-known hash primitives in a highly serial manner. The distinguishing feature is that calculations for a nonce are dependent on partial calculations of other nonces, selected at random. This allows a serial hasher (CPU) to re-use these partial calculations in subsequent mining, saving 50% or more of the workload. Parallel hashers (GPU) cannot benefit from this optimization since the optimal nonce-set cannot be pre-calculated as it is determined on-the-fly. As a result, parallel hashers (GPU) are required to perform the full workload for every nonce. The algorithm also results in 10x memory bloat for a parallel implementation. In addition to its serial nature, it is branch-heavy and recursive, making it optimal for CPU-only mining.”
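Here is a toy model, under heavy simplifying assumptions, of the serial-reuse property that description mentions. It is not the real RandomHash, only an illustration of why a cache helps a serial miner and not a parallel one:

    import hashlib, random

    # Toy model only: evaluating one nonce requires partial results for other,
    # randomly chosen nonces. A serial (CPU) miner caches those partials and
    # reuses them on later nonces; a parallel (GPU) miner cannot pre-compute
    # the neighbour set, so it redoes the work.
    partial_cache = {}

    def partial(header: bytes, nonce: int) -> bytes:
        if nonce not in partial_cache:
            partial_cache[nonce] = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        return partial_cache[nonce]

    def randomhash_toy(header: bytes, nonce: int, rounds: int = 4) -> bytes:
        digest = partial(header, nonce)
        rng = random.Random(digest)            # neighbour set is only known on the fly
        for _ in range(rounds):
            neighbour = rng.randrange(0, 16)   # tiny neighbour space, for illustration
            digest = hashlib.sha256(digest + partial(header, neighbour)).digest()
        return digest

    header = b"block header"
    randomhash_toy(header, 1)
    before = len(partial_cache)
    randomhash_toy(header, 2)                  # reuses cached partials from nonce 1's run
    print(before, len(partial_cache))          # the cache barely grows on the second nonce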
One might be understandably skeptical of any Proof-of-Work algorithm that claims to solve ASIC and GPU centralization once and for all, because countless proposals for various algorithms have been thrown around since the dawn of Bitcoin. Is RandomHash truly the ASIC & GPU killer it claims to be?
Herman Schoenfeld, the inventor of RandomHash, described his algorithm as follows:
“RandomHash offers endless ASIC-design breaking surface due to its use of recursion, hash algo selection, memory hardness and random number generation.
For example, changing how round hash selection is made and/or random number generator algo and/or checksum algo and/or their sequencing will totally break an ASIC design. Conceptually if you can significantly change the structure of the output assembly whilst keeping the high-level algorithm as invariant as possible, the ASIC design will necessarily require proportional restructuring. This results from the fact that ASIC designs mirror the ASM of the algorithm rather than the algorithm itself.”
Polyminer1 (pseudonym), one of the members of the PascalCoin core team who developed RHMiner (official software for mining RandomHash), claimed as follows:
“The design of RandomHash is, to my experience, a genuine innovation. I’ve been 30 years in the field. I’ve rarely been surprised by anything. RandomHash was one of my rare surprises. It’s elegant, simple, and achieves resistance in all fronts.”
PascalCoin may have been the first to win the race toward what could be described as the “God algorithm” for Proof-of-Work cryptocurrencies. Look no further than Howard Chu, one of Monero's core developers since 2015. In September 2018, Howard declared that he had found a solution, called RandomJS, to permanently keep ASICs off the network without repetitive algorithm changes. This solution closely mirrors RandomHash's approach. Discussing his algorithm, Howard asserted that “RandomJS is coming at the problem from a direction that nobody else is.”
Link to Howard Chu’s article on RandomJS:
https://www.coindesk.com/one-musicians-creative-solution-to-drive-asics-off-monero
Yet when Herman was asked about Howard’s approach, he responded:
“Yes, looks like it may work although using Javascript was a bit much. They should’ve just used an assembly subset and generated random ASM programs. In a way, RandomHash does this with its repeated use of random mem-transforms during expansion phase.”
In the end, PascalCoin may have successfully implemented the most revolutionary Proof-of-Work algorithm to date – one that eclipses Howard's burgeoning vision – and almost nobody knows about it. To learn more about RandomHash, refer to the following resources:
RandomHash whitepaper:
https://www.pascalcoin.org/storage/whitepapers/RandomHash_Whitepaper.pdf
Technical proposal for RandomHash:
https://github.com/PascalCoin/PascalCoin/blob/master/PIP/PIP-0009.md
Someone might claim that PascalCoin still suffers from mining centralization after RandomHash, and this is somewhat misleading as will be explained in part #10.
Part #9: Fair Distribution and Governance
Not only does PascalCoin rest on superior technology, it is also rooted in the right philosophy of decentralized distribution and governance. There was no ICO or pre-mine, and the developer fund exists as a percentage of mining rewards, as voted by the community. This developer fund is 100% governed by a decentralized autonomous organization – currently facilitated by the PascalCoin Foundation – that will eventually be transformed into an autonomous smart contract platform. Not only is the developer fund voted upon by the community, but PascalCoin's development roadmap is also voted upon by the community via Protocol Improvement Proposals (PIPs).
This decentralized governance also provides an important benefit: it is a powerful deterrent to the unseemly fork wars that befall many cryptocurrencies.
Part #10: Common Misconceptions of PascalCoin
“The branding is terrible”
PascalCoin is currently working very hard on its image and is preparing several branding and marketing initiatives in the short term. For example, two of PascalCoin's core developers recently interviewed with the Fox Business Network; a YouTube replay of this interview will be heavily promoted.
Some people object to the name PascalCoin. First, it's worth noting that PascalCoin is the name of the project, while Pascal is the name of the underlying currency. Second, Google and YouTube also received excessive criticism for their name choices in the beginning. Look at where those companies are today – a similar trajectory surely awaits PascalCoin as the name's familiarity percolates into the public.
“The wallet GUI is terrible”
As the team consists of a small yet extremely dedicated group of developers, juggling multiple priorities can be challenging. The lack of funding from an ICO or a pre-mine also makes it hard to accelerate development. The top priority of the core developers is to keep developing, full-time, the groundbreaking technology PascalCoin offers. In the meantime, an updated and user-friendly wallet GUI has been in the works for some time and will be released in due course. Rome wasn't built in a day.
“One would need to purchase a PASA in the first place”
This is a complicated topic. PASAs need to be commoditized by the SafeBox's design, meaning they cannot be obtained at no charge without inviting systematic abuse. This raises two seemingly valid concerns:
· As a chicken and egg problem, how would one purchase a PASA using Pascal in the first place if one cannot obtain Pascal without a PASA?
· How would the price of PASAs stay low and affordable in the face of significant demand?
With regard to the chicken-and-egg problem, there are many ways – some finished and some unfinished – to obtain your first PASA, as explained on the “Get Started” page of the PascalCoin website:
https://www.pascalcoin.org/get_started
More important, however, is the fact that there are a few methods of getting your first PASA for free. The team will soon release another method by which you could obtain your first PASA for free via a single SMS message – probably by far the simplest and easiest option – and more free methods will follow down the road.
What about ensuring the PASA market at large remains inexpensive and affordable after your first (probably free) PASA acquisition? This will be achieved in two ways:
· Decentralized governance of the PASA economics per the explanation in the FAQ section on the bottom of the PascalCoin website (https://www.pascalcoin.org/)
· Unlimited and free pseudo-PASAs based on layer-2 in the next version release.
“PascalCoin is still centralized after the release of RandomHash”
Did the implementation of RandomHash from version 4 live up to its promise?
The official goals of RandomHash were as follows:
(1) Implement a GPU & ASIC resistant hash algorithm
(2) Eliminate dual mining
The two goals above were achieved by every possible measure.
Yet a mining pool, Nanopool, was able to regain its hash majority after a significant but temporary dip.
The official conclusion is that, from a probabilistic viewpoint, solo miners are more profitable than pool miners. Pool mining nonetheless remains enticing for miners who 1) have limited hardware, since it ensures a steady income instead of the lumpier but more profitable returns of solo mining, or 2) prefer convenient software and/or a GUI.
What is the next step, then? While the barrier to entry for solo miners has successfully been torn down, additional work needs to be done. The PascalCoin team and community are earnestly investigating further steps to improve mining decentralization with respect to pool mining specifically, on top of RandomHash's successful elimination of GPU, ASIC, and dual-mining dominance.
It is likely that the PascalCoin community will promote the following two initiatives in the near future:
(1) Establish a community-driven, nonprofit mining pool with attractive incentives.
(2) Optimize RHMiner, PascalCoin’s official solo mining software, for performance upgrades.
A single pool's dominance is likely to be short-lived once more options emerge for individual CPU miners who wish to avoid solo mining for whatever reason(s).
Take Bitcoin as an example: its mining is dominated by ASICs and mining pools, yet no single pool is – at the time of this writing – anywhere close to obtaining the hash majority. With CPU solo mining a feasible option, and ASIC and GPU mining eradicated by RandomHash, PascalCoin's future hash rate distribution promises to be far healthier than Bitcoin's.
PascalCoin is the Unicorn Cryptocurrency
If you’ve read this far, let’s cut straight to the point: PascalCoin IS the unicorn cryptocurrency.
It is worth noting that PascalCoin is still a young cryptocurrency, launched at the end of 2016. This means many features are still works in progress – zk-SNARKs, smart contracts, and pool decentralization, to name a few. However, all of the unicorn criteria appear to be within PascalCoin's reach once its technical roadmap is mostly complete.
Based on this exposition of PascalCoin's technology, there is every reason to believe that PascalCoin is the unicorn cryptocurrency. It also solves two fundamental blockchain problems beyond the unicorn criteria that were previously considered unsolvable: blockchain size and a simple address system. The SafeBox pushes PascalCoin to the forefront of the cryptocurrency zeitgeist, as it is a superior solution to UTXO, Directed Acyclic Graph (DAG), Block Lattice, Tangle, and every other blockchain innovation.


THE UNICORN

Author: Tyler Swob
submitted by Kosass to CryptoCurrency

How To Use The Blockchain To Protect The Trillion-Dollar Intelligent Import And Export Logistics Business

Original Chinese article https://www.jinse.com/bitcoin/284405.html published 4th December 2018. The article has been translated via Google Translate. Prof. Wei Songjie's credentials are listed at the bottom of this post.


On November 22nd, the 2018 Global Smart Container Industry Alliance Annual Meeting and Smart Container Standards Publicity and Training Conference was held in Shenzhen. Waltonchain CTO Wei Songjie delivered a speech in which he traced the origin and development of blockchain and, for the first time, proposed a solution applying blockchain technology to intelligent import and export logistics. He said that compared with traditional shipping practices, applying blockchain in the intelligent logistics industry can improve time efficiency by more than 50% and reduce management costs by more than 30%.

https://preview.redd.it/fftck08ux0421.jpg?width=600&format=pjpg&auto=webp&s=827c930fc221610a98127588e3fa81d36aa3b72b
The following is the full text of the speech:
Good afternoon everyone, I am Wei Songjie. The topic I am presenting today is "Blockchain: Data Container, Pass-Through Transport Line, Trust Notary." Since today's conference theme is smart containers, I have borrowed the phrase "data container." In our information security industry, we speak of a data packet, or a package of data. The two are actually quite similar in nature: data is goods too. For us, data is something of value.
My speech today covers three parts: blockchain + digital certificate capability, blockchain + port cargo application scenarios, and blockchain + intelligent import and export logistics solutions. Some of this content comes from exchanges with experts in the logistics industry, so parts of it may not be fully mature or accurate; I am a layman in this field, and I welcome your corrections.
The blockchain has been a buzzword in recent years. In my opinion, the biggest use of the blockchain is not speculation – "buy a coin, sell a coin, ride the hype" – those are superficial applications. The biggest feature of this technology is the digital pass, the token.
Dr. Zhou's speech just asked: what is the core of the container-based goods circulation industry? He said documents are the core – the circulation of goods needs documents as proof. In the blockchain field, we call this core a pass, or token.
Let me quickly explain: what is the blockchain?
This year happens to be the tenth anniversary of the blockchain. It originated as the underlying technology of Bitcoin, and its data structure is a chain. What is it used for? Bookkeeping for Bitcoin: who transferred to whom, how much, and so on. It is a distributed, public ledger – distributed meaning there is no central bank and no single individual has the final say. It has a wide range of applications, but most current ones still revolve around its financial transaction attributes.
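For readers new to the idea, that chain structure can be sketched in a few lines (illustrative only, not from the speech):

    import hashlib, json

    # Each block stores who-paid-whom entries plus the hash of the previous
    # block; the linked hashes are what make the ledger a "chain".
    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain = [{"prev": None, "entries": [["genesis", "alice", 50]]}]

    def add_block(entries):
        chain.append({"prev": block_hash(chain[-1]), "entries": entries})

    add_block([["alice", "bob", 10]])
    add_block([["bob", "carol", 4]])

    # Tampering with an early block would change its hash and break every later link.
    assert chain[2]["prev"] == block_hash(chain[1])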
In computer science we have used the term blockchain for less than a decade, but we have used the underlying technology for decades. What did we use it for? We used to call it a distributed database: instead of a centralized server storing the data, the data is distributed across many different places – hence "distributed".
Of course, where you have a database you also have software and a whole system, and we have more precise terms – I myself have studied distributed systems for more than ten years. We have also long used blockchain-like techniques to measure and circulate value; this is not new – QQ has Q coins, and many games have points or coins. Our research asks how the value of commodities and services across the whole process can be measured and quantified.
In the end, what is the main feature of the blockchain, and what is it for? It is an endorsement of trust. We often hear people say that data on the blockchain cannot be changed, cannot be faked or forged, cannot be lost, and so on. At its core it is a matter of belief: you either trust it or you don't.
Long before the blockchain, we could already do trust-based verification of data. What did we call it then? Cryptography. So, as I often tell my students, the blockchain is not a mere gimmick – we rely on it to do research, write papers, and run projects – but it is more of an application innovation: combining existing technologies in a new way and applying them to newer, broader scenarios.
Which technologies does it combine? Distributed systems, peer-to-peer networks, and cryptography form the core. When people say the blockchain is important or useful, these are the elements they are summing up. And what effects do these elements achieve? Interconnection, interoperability, mutual trust, mutual benefit, and mutual integration.

https://preview.redd.it/q56zn8kvy0421.jpg?width=600&format=pjpg&auto=webp&s=f00723c563bea43e476ad252e090a447d6f825c6
Interconnection is easy to understand. Our current information systems and devices rarely run in isolation; most are networked, including your mobile phone and computer. If it cannot reach the network, a phone is a brick – useless. Nobody can do without the network now. The blockchain is the same: its underlying core is that it can form a network without depending on any specific network, specific server, or specific service provider. This is what we call P2P, a peer-to-peer network, and it is nothing strange: years ago, the eDonkey network I used to download movies and songs was P2P. What it achieves is interconnection – you are not an isolated individual; P2P technology is how you connect with everyone else.
The second is interoperability, and the reason for it is that everyone wants to communicate. For example, everyone here is Chinese. If I spoke a foreign language up here, most of you would understand English, and some perhaps Japanese, but Burmese or Vietnamese you probably would not. What I said would still be human speech, and you are all human, yet nobody would understand – why? Because there is no effective common specification; no rule says that in this room only Chinese, or only a language we all understand, may be spoken. That is what interoperability is for. The blockchain defines a set of interworking rules and norms, just like the (national container) standard we are setting here today. Why must there be a standard? Because if your container is ten meters tall and mine is two or three, how does the truck pull it, how does the warehouse store it, how are the goods loaded? That is what standards are for.
The third is mutual trust. As I just said, data is useful only when it is real, and it is real only when it can be verified or proven. The blockchain uses cryptography to achieve this mutual trust. Think about the information systems and computers we use now: the most valuable thing in them is the data. If I lose my phone today, I do not feel bad at all – a few thousand yuan buys another. What hurts is losing the address book, the chat history, and the photos inside (perhaps even sensitive photos). That kind of data is what is most valuable.
The fourth is mutual benefit. The blockchain enables the circulation and sharing of value. Of course, value in such a system is ultimately numbers: with mobile payments we rarely touch cash any more, and money has become a symbol – a number whose backing is real value. The blockchain can realize exactly that. Where there are benefits, some may lose out while others gain; the best outcome is what economics calls a Pareto improvement – I gain without you losing, a win-win situation. The blockchain can do this.
The fifth is mutual integration. Since everyone lives together in one ecosystem – coexisting, agreeing, working together – there must be a way to reach consensus. For example, whom should we listen to today? The organizer and the host, of course, because I recognize you as the host. But a spoiler who does not recognize the host has not joined this consensus, and that is trouble. The blockchain therefore contains a series of algorithms and methods for reaching consensus. The simplest consensus to understand is voting: whoever gets the most votes is the moderator – though the simplest (and fairest) scheme is often the hardest to achieve. These are the core elements of the blockchain: what effects the elements achieve, and what those effects are used for – that is its definition. Our country is now drafting standards for the blockchain, but before they appear, neither industry nor academia nor Internet and blockchain enthusiasts have had a standard definition. Some call it a distributed system, some a chain, some a mesh structure. One aside: although it is called a blockchain, and a chain is one-dimensional, that is only its form – it can really have two-dimensional and even multi-dimensional structure; two-dimensional is a mesh. We call that a complex mesh system. So any definition of it is really only a sentence or two of description.
Since I am not here to solicit investment, this is not a sales pitch. Rather than listing only its benefits, let us be honest and talk about some of the interesting wrinkles in this technology.
The first: many people say the blockchain is wonderful because it is decentralized – but is it really? This deserves scrutiny. Absolute centralization is certainly bad: after all, I am not the "center," and whoever is the center naturally gets the final say, so everyone hopes to be equal, to disperse power, and to participate in decisions without someone else's centralization. But the blockchain is not truly decentralized; it is not without centers. It has merely turned one center into many.
Who has the final say? Everyone does. And how? In many ways. The simplest is voting: one person, one vote. Very popular now is computing power: whoever computes fastest has the final say. Another way is by stake: one-person-one-vote is just the special case where everyone's shares are equal, while POS (a consensus mechanism) looks at holdings – whoever owns more has more say. There are many other schemes besides. So the blockchain is really multi-centered, and true decentralization has problems – above all, efficiency. Take Bitcoin: people say you can make money just by trading coins, but when you actually launch a transaction on the Bitcoin network – when I transfer money to you – you may wait a long time to receive it. Not a few seconds or minutes, but possibly tens of minutes, hours, even days. Decentralization brings efficiency problems.
Many of the so-called public chains and blockchain systems we see now have this efficiency problem. The process may be sound and the algorithms and technical route correct, but once there are too many users – and in China, users and scale matter most of all – efficiency breaks down. So our current research direction, and most of our application scenarios, are multi-centered: not a single center, which would just be the old system, but many centers rather than one true center.
The second wrinkle is the virtual versus the real in trust. If data is placed on the blockchain, can you really believe it? Is whatever is on the chain true? Of course not. If I put a pile of garbage in a safe, it is still garbage – worthless. Authenticity depends on the data's entire ecology and life cycle, especially the stage at which the data is sensed or acquired.
In our company's projects we use a combination of software and hardware to ensure that the data we obtain is first-hand – no noise, no errors, no interference, no forgery – and is put on the blockchain immediately, so that the rest of that data's life cycle is real and verifiable. That is why many people believe blockchain data is true. In reality, if you put truth on the chain it stays true, and if you put falsehood on it, it stays false. The chain guarantees the data has not been altered and can be verified; it does not guarantee original authenticity.
The third is the truth and falsehood of consensus. Is the consensus reached by a blockchain algorithm the correct consensus? The "correct" consensus is that the American president is Trump – but did he really get 50% of the votes plus one? No; everyone knows Hillary's popular vote was higher. Trump was elected because of the rules of the electoral college. What does this show? The consensus mechanism determines whether the final consensus is a general consensus, a relative consensus, or a professional consensus, and it depends on the scenario. So first think about how you will use the blockchain, and then design the consensus mechanism. There is no universal technology that fits every case.
The last wrinkle is the right and wrong of the data. We say data placed on the blockchain cannot be tampered with, forged, changed, or lost – but is that really achieved? In this industry we often hear news of what is called a fork: the chain, which grew as a single strand, at some point splits in two. Why? Because there is no consensus – some people think it should grow this way, others that way, each side has supporters, and so it forks. This shows that the right and wrong of the data depends on the users and their consensus. Everything here is relative; nothing is absolute – including cryptography. Is it absolutely safe? Certainly not: if an attacker can live long enough and keep trying, one day the code will be broken. The only absolute in information technology is quantum cryptography, which is provably secure – but that has its own problems.
In less than a decade, the blockchain has evolved through three phases: 1.0, 2.0, and 3.0. Reciting the specific technologies would be too boring, so here is what matters. 1.0 solved a simple problem: recording accounts – serving as a ledger. 2.0 can fulfill contracts: we programmers like writing conditional judgments and loops, and in 2.0 those conditions can be written into the chain (we will see what that is good for later). The direction now developing is 3.0, which is about doing real things – making the technology land in practice. We are currently between 2.0 and 3.0, roughly at 2.4 or 2.5.
Next, an analogy between the blockchain and data containers. Our blockchain really does have blocks; our data really comes piece by piece, and each piece undergoes what we call data encapsulation. This is rather like putting goods into a container and locking it: a box of goods. For us it is a piece of data, and we lock that data too – not with an electronic lock but with what we call a digital lock, in fact a string of numbers used for verification, a signature. And the structure is not only one-dimensional and linear: containers come in many boxes, ordered and organized, just as your boxes are numbered and stacked neatly so that any of them can be found and checked when needed. That is the analogy – a slightly playful one.
Many institutions and countries already use the blockchain in the logistics industry, including import and export – the United States, South Korea, and the Netherlands among them. Headed by IBM, there is the Hyperledger alliance, which offers a range of solutions anyone can use. Because that technology aims to be general-purpose, there is in fact a lot of data inconsistency in this area.
So what can it do in the container field – what good is it? It has to solve real problems. One is inefficiency: as Dr. Zhou said, there are too many links and too many participants and roles. The other is risk: goods not shipped, payments lost or delayed, taxes, customs clearance – all of it risky.
If we want to try the blockchain in this field, how? I think I can develop the three phrases in my title: the first concerns data, the second value, and the third trust.
With the blockchain, first, we can carry the data and ensure its integrity. Second, we can quantify the data, especially the quantity and value of goods, making them measurable. Third, we can establish trust – authenticity for all those documents and itemized services. Expecting people to accept blockchain technology wholesale is unrealistic, but as Dr. Zhou said, the core of this industry is documents, and documents can be made electronic; the technology is ready-made, the question is only how to use it. This too is application innovation: we can use a distributed architecture to issue such notes and documents electronically. But why not purely distributed, purely decentralized? Efficiency. Centralization is efficient, so a data center remains in place, while the index of the data, its summary, its keywords, and its hash go on the chain. People can then find the relevant entries quickly and efficiently, and fetch the original data from the data center.
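That hybrid layout can be sketched as follows (names assumed): originals stay in the data center, digests and keywords go on the chain, and anyone can verify a fetched document against its on-chain digest:

    import hashlib

    data_center = {}   # digest -> original document bytes (off-chain)
    chain_index = []   # on-chain entries: (digest, keywords)

    def register_document(doc: bytes, keywords):
        digest = hashlib.sha256(doc).hexdigest()
        data_center[digest] = doc
        chain_index.append((digest, tuple(keywords)))
        return digest

    def fetch_and_verify(digest: str) -> bytes:
        doc = data_center[digest]                         # fetch the original off-chain
        assert hashlib.sha256(doc).hexdigest() == digest  # verify against the chain
        return doc

    d = register_document(b"bill of lading #42", ["container", "shenzhen"])
    print(fetch_and_verify(d))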

https://preview.redd.it/zaoeu3fyy0421.jpg?width=600&format=pjpg&auto=webp&s=37e1886e14a730d6793bc8b4dfdb31d7a9e5c73b
At the same time, we can use the blockchain to implement digitized process sequencing. The earlier diagram had many small arrows; it was describing a sequence – which step may follow which – what we call timing. The blockchain can record and string these steps together, telling you where the whole business or logistics flow currently stands, where it is stuck, and what the next step is, via conditional judgment. How does the blockchain judge conditions? Blockchain 2.0 supports contracts, and a contract is a program: I can write what must be done next, and the blockchain executes it. Much of the time what we care about is relative order rather than absolute time – not the date on which something happened, but what must come before what for the logic of a transaction to make sense. You normally work first and then get paid; getting paid first and working after is abnormal. This order is very important.
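As a toy illustration of that order enforcement (step names assumed), a contract can simply refuse any step that arrives out of sequence:

    # Toy ordered-workflow "contract": a step is valid only after its predecessor,
    # mirroring the work-first-then-get-paid example above.
    STEPS = ["booked", "loaded", "shipped", "customs_cleared", "delivered", "paid"]

    class ShipmentContract:
        def __init__(self):
            self.completed = []

        def advance(self, step: str) -> bool:
            expected = STEPS[len(self.completed)]
            if step != expected:
                return False          # out-of-order step is rejected
            self.completed.append(step)
            return True

    c = ShipmentContract()
    print(c.advance("booked"))   # True
    print(c.advance("shipped"))  # False: "loaded" must come first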
In addition, we can implement the submission and inspection of documents on the blockchain. Since every piece of data has a home, you can find it whenever you need it, and the blockchain does this very quickly. By quickly we mean that look-up time does not grow exponentially, or even linearly, with the size of the data; it is essentially constant, so however much data there is, the required entry can be found, checked, and verified efficiently. A document has an ID and an electronic signature, and the information can be checked – the blockchain provides all of this ready-made.
Finally, we can use the blockchain to implement supervision, management, and service for multiple roles: the owner of the goods, a buyer, a seller, a transit broker, a carrier, a customs officer – any role. How can one system host so many roles? Because in our system these roles are users, and different users have different ID addresses, ready-made – just like wallet addresses in cryptocurrencies. How do we ensure different users have different permissions? With certificates – what we call an e-Cert. Passwords are the familiar mechanism, and nowadays multi-factor authentication adds verification codes and the like, but here we use certificates, which let us set and control the privileges of each role.
Putting all these pieces together, we can transform the original process into a blockchain-based one. The diagram I showed borrows from a document and may be just a generic, typical existing process, but all of the roles can work with different blocks – different data packages – at different stages, so that around the physical entity (a box or a bag) we build virtual, electronic data management, query, and verification across the entire process. That whole is what we call a typical system solution.
This system is actually in use now, though not for container management or import and export. What do we use it for? Traceability of goods – typically clothing or food: where it was produced, through which links it passed, who the wholesaler and retailer were, where it is now, whether it was sold, returned, or sent back for repair. Our system does this. The difference hardly matters, because to us it is all data, and data itself does not know what it means – it is characters and binary. So the systems we run underneath are, so to speak, laymen with respect to any particular product.
So today I am grateful for the invitation, and I will take the liberty of carrying our toolkit into a new scene – which is what application innovation means. The purpose of 3.0 is exactly this: to use the technology in more scenarios and obtain impressive, or at least acceptable, results in effect and performance.

https://preview.redd.it/5alntal0z0421.jpg?width=600&format=pjpg&auto=webp&s=58a685723da47dcb63019df15f987e086654d9a1
In the end, back to my title: blockchain as data container, pass-through transport line, trusted notary – each phrase is meaningful. The data container, realized for a specific scenario such as the clothing production-and-sales scene just mentioned, is the electronic standardization and intelligence of data. This involves many popular existing technologies: with more data, how do you analyze it? Data analysis, data mining, even data modeling – what you often hear called machine learning or deep learning – that is the intelligence part; standardization you are all experts in. The pass-through transport line realizes automatic persistence and metering of that value. Finally, the trusted notary builds, around the authenticity, anti-counterfeiting, and traceability of data, a system that is not merely reliable but usable – and not merely usable but easy to use.
Finally, thank you all for spending these twenty-odd minutes listening to a layman from outside your industry. To build the entire smart container industry chain, ecological chain, and value chain, I believe we cannot possibly do without information technology. A while ago our country strongly advocated what we call Internet+; more accurately it should be called information technology+, for otherwise today's Internet+ becomes tomorrow's artificial intelligence+ and then big data+ – in our business these are all called information technology, that is, Information Technology. We are therefore very eager for opportunities to combine our knowledge of information technology, in our modest capacity, with your industry and its concrete, typical application scenarios, and to truly realize a transformational upgrade of the industry – and with it the industrialization of our country, call it Industry 2.0 or the information age. OK, thank you all.


Profile of Prof. Wei Songjie:
Doctor of Engineering (University of Delaware), Associate Professor at Nanjing University of Science and Technology, core member and master supervisor of the Cyberspace Security Engineering Research Institute, and blockchain technology expert in the fields of computer network protocols and applications and network and information security. He has published more than 20 papers and applied for 7 invention patents. He previously worked at Google, Qualcomm, Bloomberg, and other US high-tech companies as an R&D engineer and technical expert, and has extensive experience in computer system design, product development, and project management.
submitted by Yayowam to CryptoCurrency

We are building a secure mobile wallet system called AirGap

AirGap.it is a wallet solution that allows the secure storage of secrets on a mobile phone using a two-app approach. Depending on the security needed, these apps can be installed on separate devices or on the same device.
For the highest security, the AirGap Vault application is installed on a dedicated or old smartphone that will never be connected to any network again. With the enhanced entropy concept – which adds video, audio, accelerometer, and touch data to the entropy seed alongside the device's pre-shipped secure random generator – it is possible to generate a cryptographically secure seed for secret generation on that very device. This secret never leaves the device it was generated on. The private key is saved in the secure enclave of the mobile device, and multi-step biometric authentication is required every time it is accessed to perform cryptographic primitives.
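The general shape of that entropy mixing can be sketched as follows; this illustrates the concept only and is not AirGap's actual implementation:

    import hashlib, os

    # Sensor streams (camera, microphone, accelerometer, touch) are folded into
    # the OS secure random output, so the seed is at least as strong as either source.
    def generate_seed(sensor_samples):
        h = hashlib.sha256()
        h.update(os.urandom(32))       # device's built-in secure generator
        for sample in sensor_samples:  # extra entropy from the sensors
            h.update(sample)
        return h.digest()

    seed = generate_seed([b"camera-frame...", b"audio-chunk...", b"accel: 0.12 9.78 0.03"])
    print(seed.hex())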
AirGap Wallet, on the other hand, is installed on the user's everyday phone. With this app, users can manage their portfolio of wallets and their valuations. AirGap Wallet deals only with publicly available information, as opposed to AirGap Vault, which handles the private key.
How does a transaction work? A detailed step-by-step guide:
  1. Users can create a new transaction with an address, amount and a fee in AirGap Wallet.
  2. A QR code with this transaction is generated.
  3. This QR code is scanned with AirGap Vault, ensuring one-way communication only with QR codes.
  4. To sign the transaction the secure enclave is accessed with biometric authentication.
  5. The signed transaction is displayed in a QR code.
  6. The QR code is scanned by AirGap Wallet and broadcasted to the blockchain.
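The round trip above can be sketched as follows. It is illustrative only: a real wallet signs with the coin's elliptic-curve scheme, while HMAC stands in here so the sketch stays self-contained:

    import hashlib, hmac, json

    def wallet_build_unsigned_tx(to, amount, fee) -> str:
        return json.dumps({"to": to, "amount": amount, "fee": fee})  # shown as QR #1

    def vault_sign(unsigned_tx: str, private_key: bytes) -> str:
        # Happens on the offline device; only the signed blob goes back out as QR #2.
        sig = hmac.new(private_key, unsigned_tx.encode(), hashlib.sha256).hexdigest()
        return json.dumps({"tx": unsigned_tx, "sig": sig})

    def wallet_broadcast(signed_tx: str):
        print("broadcasting:", signed_tx)  # the online app relays it to the network

    key = b"secret that never leaves the air-gapped vault"
    unsigned = wallet_build_unsigned_tx("0xabc...", 1.5, 0.001)
    wallet_broadcast(vault_sign(unsigned, key))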
What if I want to manage smaller amounts?
AirGap Vault and AirGap Wallet can also be installed on the same device. In this case, communication between the two apps works via app switching through a URL scheme. This keeps the two apps entirely encapsulated, which is crucial: AirGap Vault, for example, has no network permissions and thus cannot send information out over the network, as guaranteed by the operating system's sandboxing.
Which coins and tokens do you support?
Currently we support the Aeternity (AE) ERC20 token, Ethereum, and Bitcoin. We plan to extend this list in the future. All of these are managed by the same private key/mnemonic secret.
We would be more than happy to get your feedback, comments and suggestions. You can also find more information on AirGap.it or on our Telegram channel.
Test our first versions of AirGap Vault Android and AirGap Wallet Android. The iOS versions are currently in review; reach out to test them over TestFlight.
submitted by Gurkee to ethereum [link] [comments]

The Meaning is the Ruse

I just wanted to take the time to extend u/rrockwe1's interpretation of the game, as it was succinct and similar to where I instinctively landed. Actually, this turned into more of a provocation: I'm asking whether this is even the right direction rather than whether I've nailed all the details. Sloppy, I know.
https://www.reddit.com/r/NeverBeGameOver/comments/bfzd79/the_ruse_is_the_lack_of_ruse/
Conspiracy is the "heart of the MGS series," but I would add: as flavored by the technological mediums we partake in. The games are not traditional games but loosely connected simulations for discovering a conspiracy and the power dynamics of those at the top of it. Not unlike our own world.
Setup:
To start, I would highly recommend everyone on this sub check out episode 1 of Dan Carlin's Supernova in the East. The "incidents" he talks about each play out like a Metal Gear game. We can't forget that Kojima, born in '63, is reacting to events that happened in the generation immediately prior, as we all do. Think of Marshall McLuhan's idea of the rear-view mirror while listening. For this interpretation, the relevant takeaway from the podcast is that, thanks to the changes in communication technologies since the industrial revolution, the equilibrium for conflict settled into subterfuge. It was when Japan became sloppy with this new normal (like a winning poker player staying at the table for too long) that it accidentally ushered the country towards its most important identity-altering period (WW2) since the Meiji Restoration. For Kojima, I'm imagining/empathizing, if you don't understand this phase shift you don't understand your reality: an American-reconstructed, post-nuclear Mc-Meiji mess.
Now how does this relate to us internetted folk with our new communication mediums? I'd also love to introduce this sub to Jordan Greenhall. In the linked post, he analyzes the QAnon phenomenon and abstracts some properties that are very reminiscent of what I experienced through NBGO.
Q is the most recent and most important example of a widely distributed self-organizing collective intelligence. We’ve actually seen many precursors. Cicada 3301 is a famous example. Even the I Love Bees ARG for Halo 2. Perhaps Bitcoin is the most important precursor to Q.
These “self-organizing collective intelligences” (SOCI), are a new kind of socio-cultural phenomenon that is beginning to emerge in the niche created by the Internet. They involve attractive generator functions dropped into the hive mind that gather attention, use that attention to build more capacity and then grow into something progressively real and self-sustaining.
The Q SOCI is, for the most part, about sensemaking. It is combing through the billions of threads of “what might be real” and “what might be true” that have been gathered into the Internet and it is slowly trying to weave them into a consistent, coherent and congruent fabric.
I, like u/rrockwe1, think this ruse was designed. Meta-game design is not unheard of in videogames: another favorite game of mine, FEZ, pulled a similar move in a simpler form.
There's also a Gamasutra article on the design style:
"There's a fourth really big influence that I haven't been honest about," Fish continued. "Myst. There's a lot of Myst in Fez, in fact I'd call it a 'Mystroidvania.' It's a huge open nonlinear world, with lots of super obtuse metapuzzles everywhere. The world has its own alphabet and numeric system."
"I don't know if that is still going to fly today, that's a school of design that's really very old school. There's a high barrier of entry for that second part to the game, and I hope there will be things that will take internet forums weeks to decipher. I want people to talk about that weird thing that they don't think they were supposed to find in Fez."

To begin:
Now we all know Kojima has had mixed feelings towards his own series as he's intended every MGS from 2 onward to be the last in the series (most likely returning due to corporate pressure).
This quote from 2014 about MGSV is telling:
"So in a way I guess I'm taking advantage of that to try new things, because every time I work on any game, be it Metal Gear or something else, I try to make new things. So for me, my challenge right now working on Metal Gear is, while preserving the elements that make it Metal Gear, to do all the new things I really want to do."
He's kept things new by innovatively making the marketing of Metal Gear part of the game experience, bringing an added self-awareness to gaming, fandom, and technology. While the FEZ meta-game was designed around solving newly enabled internet-forum-oriented puzzles, the Metal Gear meta-games have been much more difficult to interpret, as they are often part of the artistic statement. However, Kojima has deliberately expanded "the game" to encapsulate before, during, and now after release. Thus, as this relates to MGSV, I believe NBGO and similar sensemaking forums will go down as part of "the game," just as we can easily say that Metal Gear marketing before release has been part of every other game in the series. What is MGS2 if you didn't pay attention to the codec-briefing marketing of the game?
For me, though, MGSV is an artistic statement whose meta-game attempts to implode the series's own need for more canon while simultaneously commenting brilliantly on the reality around him.

Imploding Canon - Kojima on Fandom
Can the quest for further interpretation fuel and mask different needs (like a need to simulate violence), and can that devolve into self-defeating patterns or, as Skull Face put it, "an endless loop of action and reaction"?
Do we play metal gear to gain an understanding of Kojima's critical worldview (anti-war, anti-nuclear, anti-authority themes) or to get gratifying rushes of pixelated violence that glorify what he condemns?
These questions are what have plagued the metal gear universe, with the "action and reaction" being fan adulation of a new installment in the series -> leading to corporate pressure -> leading to another unplanned sequel with an intended aesthetic to make us realize the absurdity of the universe he's constructed -> only for our interpretations to fall partially flat on release -> leading to fan adulation...
However, this exact dilemma we fans face when playing a Metal Gear game is nearly identical to the one the characters face in MGSV. The need for more canon becomes self-defeating for the likes of Kaz, Venom, and Skull Face trying to enact revenge. Kaz and Venom need the canon to continue so they can manifest new objects for their revenge - which is the intent of the drawn-out nature of the second act. We, as players, have a need as deeply unmet as the characters' need for revenge in MGSV. This juxtaposition is an attempt to make you self-aware of this - to reflect the tragically flawed state the characters inhabit back onto you and draw the connection to your fandom.
This is a game not lacking in canon but anti-canon: canon that consumes the need for further canon.
The only constructive answer he gives is "to exit," which is embodied in Quiet's character. Quiet - written, displayed, and expressed in purposeful alignment with nearly every stereotypical videogame trope about women - is a prisoner of the medium she's embedded within. She exits her role within the canon when sexually assaulted by a soldier (a proxy for videogamers at large) or, to put it another way, when her role is taken to its extreme (shock therapy). We need to do the same as Metal Gear fans, who are cast within the game as Venom.
With his anti-canon deployed, Kojima can "exit" in peace, with the series put to rest psychically for fans instead of them clamoring for more.

How "to exit" - or how Kojima prepared me for the 21st century
MGSV's narrative is bare bones on the surface but speaks volumes to hardcore Metal Gear fans.
You, as part of the most knowledgeable/obsessive sect of Metal Gear fans, know something is up with this game. Something hasn't sat right in your gut, and it continually doesn't sit right. Thus, you naturally sought solace and understanding in a place like NBGO. We came together and exhausted every possible interpretation we could think of as to what this game is. We started doing the homework, building our best ideas, and challenging simplistic notions. This collaboration - learning to definitively seek truth as a group - is the point of the Ruse Cruise, and it serves as the antithesis of MGS2's outlook. Now, if you were Kojima, what would build this muscle best within your fans?
  1. Create a conspiracy, reveal its truth, and reward fans for their hardcore vigilance
  2. Fabricate all the necessary parts for the facade of a conspiracy yet deliberately include none, forcing fans through the emotional wringer of figuring out what is true. A gaslighting job.
For me, I settled on it being the latter, which truly pushed me on what was healthy, valid, and useful to believe in. The meaning and wisdom of this game were transmitted/felt for me once I "exited" and realized the insanity I had let myself believe. In this interpretation, we serve as an art form for others - showing what happens when a fanbase loses its collective mind and pieces itself back together again.
Now, why would anyone want to encourage this group behavior? Marshall McLuhan has another nifty quote that points us in the right direction with this:
"World War III is a guerrilla information war with no division between military and civilian participation."
There's a running joke within the ChapoTrapHouse fan community (not a fan, I just observe where Metal Gear gets talked about) that life is beginning to resemble a Metal Gear game. Genetically altered dystopia, check. Insider leaks of grand governmental conspiracies, check. Zany political characters seizing power, check.
Metal Gear Solid V's Ruse Cruise was a simulation for a reality that's to come/is already here, just as MGS2 was a simulation of MGS1. We know we are lied to by authorities, and we know the internet is a mess as an alternative. Thus, chaos. For Kojima, my guess is that he wanted to kickstart a version of the grassroots behavior that will be important in combating this "truth recession," because telling us via MGS1-4 doesn't seem to change people's behavior; we have to actually crash and burn doing it ourselves. WW3 is here, and we got a slice of training.

Thanks For the Read :)
Also,
(my edgiest take is that I think he's embodying his own vision from MGSV. Kojima is Ahab/Venom for Death Stranding's development).
submitted by badco37 to NeverBeGameOver [link] [comments]

Bitcoin dev IRC meeting in layman's terms (2016-01-14)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last summarisation
Disclaimer
Please bear in mind I'm not a developer, so some things might be incorrect or plain wrong. There are no decisions made in these meetings, but since a fair number of devs are present it's a good representation. Copyright: Public domain

Logs

Main topics

Versionbits

background

BIP 9 Currently softforks have been deployed with the isSuperMajority mechanism: when 95% of the last 1000 blocks have a version number higher than X, the fork activates. A new way of doing this is being worked on that uses the individual bits of the version number, appropriately called versionbits. So instead of a fork happening when the version is larger than (for example) 00000000011 (3), a fork happens when (for example) the 3rd bit is set (as in 00100000011). This way softforks can be deployed simultaneously and independently of each other.
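A minimal sketch of the signalling check (constant names follow Bitcoin Core's versionbits code; under BIP 9 the top three version bits must be 001 for the remaining bits to count as deployment signals):
```cpp
#include <cstdint>
#include <iostream>

// A block signals for a deployment when its version uses the versionbits
// layout (top three bits 001) and the deployment's assigned bit is set.
constexpr uint32_t VERSIONBITS_TOP_MASK = 0xE0000000;
constexpr uint32_t VERSIONBITS_TOP_BITS = 0x20000000;

bool signalsDeployment(uint32_t nVersion, int bit) {
    return (nVersion & VERSIONBITS_TOP_MASK) == VERSIONBITS_TOP_BITS
        && (nVersion & (uint32_t{1} << bit)) != 0;
}

int main() {
    // One block can signal several independent softforks at once, e.g. bits 1 and 4:
    uint32_t v = VERSIONBITS_TOP_BITS | (1u << 1) | (1u << 4);
    std::cout << signalsDeployment(v, 1) << signalsDeployment(v, 4)
              << signalsDeployment(v, 2) << "\n"; // prints 110
}
```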

meeting comments

Morcos is volunteering to take over championing this proposal, as CodeShark and Rusty are busy with other things. He'll review both implementations and then decide which one to base his work on. He notes that if non-core implementations are trying to do something else (using nVersion for their signaling) while segregated witness is being deployed, avoiding conflicts will be important so users of other versions can also support segregated witness. If there's agreement on this approach, versionbits needs to be ready before the segregated witness deployment. jtimon has some suggestions to make the implementation less complicated and more flexible.

meeting conclusion

Morcos will champion the new reference implementation for BIP9: Versionbits.

Status of segregated witness

background

Segregated witness changes the structure of transactions so that the signatures can be separated from the rest of the transaction. This allows bandwidth savings for relay, pruning of old signatures, softforking of all future script changes by introducing script versions, and it solves all unintentional forms of malleability. At the last Scaling Bitcoin conference Pieter Wuille presented a way of doing this via a softfork, and proposed increasing the maximum number of transactions in a block by discounting signature data against the total block size. Segregated witness is part of the capacity increase roadmap for bitcoin-core. More detailed explanations: - By Pieter Wuille at the San Francisco bitcoin developer meetup (more technical) - By Andreas Antonopoulos in the let's talk bitcoin podcast (less technical)
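A sketch of how that discount works out arithmetically - the weight formula (non-witness bytes count four times, witness bytes once) is the accounting BIP 141 later specified, so take the exact numbers as illustrative:
```cpp
#include <cstdint>
#include <iostream>

// BIP 141-style accounting: non-witness bytes are counted four times,
// witness (signature) bytes only once.
uint64_t txWeight(uint64_t baseSize, uint64_t totalSize) {
    return baseSize * 3 + totalSize; // totalSize includes the witness bytes
}

int main() {
    // A 250-byte transaction whose 100 signature bytes move into the witness:
    uint64_t weight = txWeight(/*baseSize=*/150, /*totalSize=*/250); // 700
    uint64_t vsize  = (weight + 3) / 4;                              // 175 "virtual" bytes
    std::cout << weight << " " << vsize << "\n"; // vs. 250 bytes with no discount
}
```
So the same transaction occupies fewer of the scarce block-size units once its signatures are segregated, which is where the capacity increase comes from.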

meeting comments

Segnet, the testnet for segregated witness transactions, will be going to its 3rd version soon. Luke-Jr has assigned all the segregated witness BIPs to the 14x range. Currently there are 4 BIPs: 141, 142, 143 and 144.

Status of 0.12 bitcoin-core release

background

Bitcoin Core 0.12 is scheduled for release around February and introduces a lot of fixes and improvements. (release notes) There's a release candidate 0.12rc1 available at https://bitcoin.org/bin/bitcoin-core-0.12.0/test/

meeting comments

Luke-Jr feels PRs #7149, #7339 and #7340 should have been in 0.12, but they are now really late and possibly impractical to get in. For gitian builders: 0.12rc1's OSX sig attach descriptor fails due to a missing package (that's not actually needed). Rather than using the in-tree descriptor, use the one from #7342; this is fixed for rc2. "fundrawtransaction" and "setban" should be added to the release notes. At some point it makes more sense to document these commands elsewhere and link to them from the release notes, as they've become very lengthy. Wumpus thinks the release notes contain too much detail; they're not meant to be a substitute for documentation.

meeting conclusion

Close PR #7142 as it's now part of #7148. Everyone is free to improve the release notes, just submit a PR.

consensus code encapsulation (libconsensus)

background

Satoshi wasn't the best programmer out there, which left us pretty messy code. Ideally you'd have the part of the code that influences network consensus kept separate, but in bitcoin it's all intertwined. Libconsensus is what should eventually become that separate part. This way people can more easily make changes in the non-consensus parts without fear of causing a network fork. It is, however, a slow and dangerous project of moving lots of code around.

meeting comments

jtimon has 4 libconsensus related PRs open, namely #7091 #7287 #7311 and #7310. He thinks any "big picture branch" will be highly unreadable without merging something like #7310 first. The longest "big picture branch" he currently has is https://github.com/jtimon/bitcoin/commits/libconsensus-f2 He'll document the plan and "big picture" in stages:
  1. Have something to call libconsensus: expose verifyScript. (Done)
  2. Put the rest of the consensus-critical code, excluding storage, in the same build package (see #7091).
  3. Discuss a complete C API for libconsensus.
  4. Separate it into a sub-repository.
Wumpus notes he'd like to start with 3 as soon as possible, as an API would be good to guide this.
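The piece already exposed in stage 1 is script verification, reachable through the C API in Bitcoin Core's bitcoinconsensus.h. A minimal sketch of a call (the buffers here are dummies, so this particular call simply fails with a deserialization error; a real caller passes the scriptPubKey of the output being spent and the serialized spending transaction):
```cpp
#include <bitcoinconsensus.h> // ships with Bitcoin Core; a plain C API hiding the internals

#include <iostream>
#include <vector>

int main() {
    // Dummy placeholders -- real callers fill these with actual serialized data.
    std::vector<unsigned char> scriptPubKey{0x51};  // OP_TRUE, for illustration
    std::vector<unsigned char> txTo{0x00};          // not a valid transaction

    bitcoinconsensus_error err = bitcoinconsensus_ERR_OK;
    int ok = bitcoinconsensus_verify_script(
        scriptPubKey.data(), (unsigned int)scriptPubKey.size(),
        txTo.data(), (unsigned int)txTo.size(),
        /*nIn=*/0, bitcoinconsensus_SCRIPT_FLAGS_VERIFY_P2SH, &err);
    std::cout << "verified: " << ok << ", error code: " << err << "\n";
}
```
Because the consensus rules hide behind this boundary, a wallet or alternative implementation linking against the library can validate scripts without re-implementing (and possibly forking from) the consensus logic.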

meeting conclusion

review #7091 #7287 #7311 and #7310

Locktime PRs

background

BIP 68 Consensus-enforced transaction replacement signaled via sequence numbers. BIP 112 CHECKSEQUENCEVERIFY. BIP 113 Median time-past as endpoint for lock-time calculations. In short: BIP 68 changes the meaning of the sequence number field to a relative locktime. BIP 112 makes that field accessible to the bitcoin scripting system. BIP 113 uses GetMedianTimePast (the median timestamp of the previous 11 blocks) from the prior block as the reference point for lock-time calculations.
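A sketch of the two mechanisms (the field layout and constant names follow BIP 68 and Bitcoin Core: bit 31 disables the relative locktime, bit 22 selects time in 512-second units instead of blocks, and the low 16 bits hold the value; the median helper illustrates BIP 113):
```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// BIP 68: pack a relative locktime into a transaction input's nSequence field.
constexpr uint32_t SEQUENCE_LOCKTIME_DISABLE_FLAG = uint32_t{1} << 31; // bit 31: no relative lock
constexpr uint32_t SEQUENCE_LOCKTIME_TYPE_FLAG    = uint32_t{1} << 22; // bit 22: time, not blocks
constexpr uint32_t SEQUENCE_LOCKTIME_MASK         = 0x0000FFFF;        // low 16 bits: the value

bool relativeLockEnabled(uint32_t nSequence) {
    return (nSequence & SEQUENCE_LOCKTIME_DISABLE_FLAG) == 0;
}
uint32_t lockForBlocks(uint32_t blocks) {
    return blocks & SEQUENCE_LOCKTIME_MASK;
}
uint32_t lockForSeconds(uint32_t seconds) {
    return SEQUENCE_LOCKTIME_TYPE_FLAG | ((seconds >> 9) & SEQUENCE_LOCKTIME_MASK); // 512 s units
}

// BIP 113: lock-times are compared against the median of the last 11 block times.
int64_t medianTimePast(std::vector<int64_t> lastEleven) {
    std::sort(lastEleven.begin(), lastEleven.end());
    return lastEleven[lastEleven.size() / 2];
}

int main() {
    std::cout << std::hex
              << lockForBlocks(144) << " "      // roughly one day, counted in blocks
              << lockForSeconds(86400) << " "   // roughly one day, counted in 512 s units
              << relativeLockEnabled(lockForBlocks(144)) << "\n";
}
```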

meeting comments

A choice needs to be made between 2 implementations, namely #6312 and #7184. PR #7184 is a result of the CreateNewBlock optimisations not being compatible with #6312. jtimon thinks it could be merged relatively soon, as #7184 is based on #6312, which has had plenty of testing and review.

meeting conclusion

Close #6312 in favor of #7184. Morcos will fix the open nits on #7184. btcdrak will update the BIP text.

Participants

wumpus (Wladimir J. van der Laan), btcdrak (btcdrak), morcos (Alex Morcos), jtimon (Jorge Timón), Luke-Jr (Luke Dashjr), MarcoFalke (Marco Falke), jonasshnelli (Jonas Schnelli), cfields (Cory Fields), sipa (Pieter Wuille), kanzure (Bryan Bishop), droark (Douglas Roark), sdaftuar (Suhas Daftuar), Diablo-D3 (Patrick McFarland)

Comic relief

19:54 wumpus #meetingstop
19:54 wumpus #stopmeeting
19:54 btcdrak haha
19:54 MarcoFalke #closemeeting
19:54 wumpus #endmeeting
19:54 lightningbot` Meeting ended Thu Jan 14 19:54:26 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
submitted by G1lius to Bitcoin [link] [comments]

Bitcoin Unlimited - Bitcoin Cash edition 1.6.0.0 has just been released

Download the latest Bitcoin Cash compatible release of Bitcoin Unlimited (1.6.0.0, April 24th, 2019) from:
 
https://www.bitcoinunlimited.info/download
 
This is a major release of Bitcoin Unlimited which is compatible with the upcoming May 2019 BCH protocol upgrade; this release is also compatible with all the already activated Bitcoin Cash network upgrades, namely:
List of notable changes and fixes contained in BUcash 1.6.0.0:
 
Release notes: https://github.com/BitcoinUnlimited/BitcoinUnlimited/blob/dev/doc/release-notes/release-notes-bucash1.6.0.md  
PS Ubuntu PPA repository is currently being updated to serve for BUcash 1.6.0.0.
submitted by s1ckpig to bitcoin_unlimited [link] [comments]
