Credit rating score on crypto markets
The rationale behind the creation of an ongoing credit rating process & tail risk hedging
August 14th 2020

Main objective: morph the concept of debt towards “forward commitment”

How the credit rating score fits in…

The current risk factors and the longer term approach

A simple mathematical approach
This document develops the rationale behind the creation of an ongoing, community-based credit rating process.
1. Main objective: morph the concept of debt towards “forward commitment”
Credit creation has historically been industrialized out of the need of governments, warlords and kings to sponsor the well-being of their country, first by issuing debt towards their people, then by issuing currency as “tokens” of their good faith. In order to keep the system working in a socially stable manner, governments relied more and more on a few players which obtained a banking licence, i.e. the right to store the currency (legal tender that only the government could mint or print) on behalf of others, in exchange for securing the financing needs of the government in question. This has led to what is known today as “centralized finance”. The problem currently is that the economic stakes are too big both for banks and their supporting governments: they cannot afford to bail out the whole economic value that is at stake. Thus they can only use central banks to print as much money as needed to keep their banking system solvent. They rely on individual initiatives to generate more wealth, which makes the problem of being outsized worse by the day as the “recipe” succeeds. This preference for bailing out their banking system at the expense of the people’s economic interests induces more and more inequality: the richest have more and more excess liquidity and wealth, while the poorest cannot access good credit conditions. The crypto markets offer new opportunities.
The idea is to reverse the “burden of proof” in economic terms in full. So far, only governments could prove that a national economy was sustainable, by proving that they were solvent, which they achieved by keeping their centralized credit generation system solvent. The internal cost for a government of maintaining this organization has grown to such a level of sophistication that SMEs or the man on the street face a higher and higher bar to actually obtain liquidity (or credit) in appropriate conditions. In simple words, people do not get credit unless they can commit a lot of assets or pay a very high interest rate, ultimately to the government. Basically the problem is that everyone willing to borrow has to prove their ability to beat the internal running cost of very expensive governments, or else of very expensive financial systems. The problem is quite clear through the QE policies of central banks and the latest bailouts: they are all paid for by the people in different forms (unemployment, stagnation of median income despite the rise of average income, indirect taxes and so on).
Now the idea, through the crypto markets, is to empower people so that they can “prove” with data that they are solvent, by enabling them first to “commit” that they will pay back, in a foreseeable future, the money that they can get instantly with no more guarantee than their “recorded word”. Since the record is there, if they fail they become accountable towards the group they belong to, their “community” in other words. Thus some may, and will, fail to pay back, but since the traceability is complete there is actually very little risk in the long run that “forgiving under conditions” such a failure generates undue currency and induces inflation or capital loss for the other members of the community. Indeed, whenever the community bails out one individual for the sake of the community interest, it preserves its future common well-being and mostly avoids a future massive generic bailout. In that regard it prevents much of the moral hazard risk that has regularly plagued traditional, centralized credit systems. Indeed, if one community appears to be too complacent about debt forgiveness, it will fail alone while the others remain insulated, thereby precluding many of the recent systemic financial failures.
So the whole point lies around this idea that we do not generate “loans” or “debt”, but instead “future commitments” all along. The “currency” is still generated and allows people to transact, cooperate and secure their common future. Debt is no longer a shame but an ongoing commitment to pay later what has not yet been paid in due time. It becomes of second-order importance on an individual basis. What really matters is the running amount of “future commitments”. Freeing currency issuance from centralized debt control frees local businesses and empowers them to generate their own ring of currency creation (see the WIR example in Switzerland), and this actually provides an opportunity to bring community jobs to those who cannot be qualified or productive enough in the mainstream economy.
This issue will spread like fire in the coming years as AI progress fills 99% of current human jobs within the next 30 years or so. As a result, the community of human beings will have achieved ever higher levels of self-subsistence, but mostly because machines will simply be more productive. Paradoxically, people will be jobless and technically unproductive by our current standards, but all this progress will make sense IF and only IF there are human beings to benefit from it. This brings in the concept of “universal minimum revenue”. But it also suggests that things will go further: the revenue of most people will be based on a universal scale where everyone’s participation is estimated from the person’s “data”. This is where the crypto markets will play a central role: they will make it possible to manage the pool of currency according to everyone’s merit in a self-sufficient economy. In other words, the subsistence of all of us, in terms of food, education, healthcare and housing, will be secured by the principle of “minimum” welfare.
Yet, depending on their contributions, people will not receive the same benefits, and this allocation will have to be done fairly.
2. How the credit rating score fits in…
Here are the fundamental principles as initially shown in the three slides, with some further developments.
Rating grid on cryptos: from AAA down to CCC, to collectively govern the deployment of crypto markets.

Base ground: crypto markets constitute a much cheaper financial hub than the usual “trusted 3rd party” payment system

The goal amounts to benchmarking the counterparty risk through a rating scale

The principle of the economics is « shared governance »

Base ground: crypto markets constitute a much cheaper financial hub than the usual “trusted 3rd party” payment system

Nakamoto (2008): a trusted third party makes transactions reversible, but at a cost that local economies cannot afford. Cryptos make any transaction irreversible, BUT this exposes users to fraud

Typical financial transaction cost: $5 for a $1 billion transfer. [Note: I need to put true figures here. Any idea where I can get them?]

Crypto transaction cost for a $1 bln equivalent: $0.2 on Bitcoin and $0.05 on Ethereum. [Note: I need to put true figures here. Any idea where I can get them?]

Cost slashed down to $0.002 on exchanges; can even become a gain on DeFi platforms through staking. [Note: I need to put true figures here. Any idea where I can get them?]
2. The goal amounts to benchmarking the counterparty risk through a rating scale

Many risks (defaults, mark-to-market, gas) appear in the ultimate cost. [Note: I will need to put a price on all of them, in Value-at-Risk-like terms.]

Over a high number of transactions, one always faces the risk of losing the whole amount involved on at least one trade. That risk is never fully erased, but the average risk can be very low and largely balanced by the overall reduced execution cost. [Note: here I want to use a list of incidents to offer some statistics, to quantify that while the headline news is really bad, the loss for the community overall is very low, actually much lower than in financial crises. Moreover, I want to emphasize that from one shock to the next, the intermediate value generation is much higher. Do you have any source to provide here?]

Aggregating all these risks together and building a distribution allows one to « tranche out » the highest probability and reach an AAA-like risk, a BBB-like risk, a “first loss”-like risk. Whatever your operational risk is, the more transactions you make, the more predictable your average cost is going to be: you can always get reliable estimates, over a decent time horizon, of what your “expected loss” can be. This is typically what already happens on subprime car loans in the US these days: the “business” serves as a much better “collateral” than the “actual asset” that backs the loan. Conclusion: you need to choose one preferred path in the crypto market in the first place, depending on your business. Thus a DeFi platform or an exchange will opt to face immediate and frequent operational defaults, and thereby secure their more risk-averse clients. Insurers might offer “tail” protection to these exchanges, provided they have managed to build, say, a 2-year proven track record of properly absorbing execution risks on behalf of their customers. Likewise, some high-frequency entities might offer “mezzanine” insurance to the exchanges in return for preferential execution fees. Such contracts then become part of a mutual governance pattern involving “customers”, “insurers”, “traders” and the exchanges themselves, through a shared token staking “data and respective commitments”. The idea here is to sketch a fine balance between the “credit generation” that spontaneously comes with “commitments” and the ultimate “counterparty risk” that is then redistributed through this “risk tranching” process. This is the backbone of “governance”. Here I have in mind what is customary in managing SPVs that house assets and issue securities. Thus “governance” would allow “tranching” and next “leveraging up”, since all the counterparty risk that has accumulated by issuing more cryptos “synthetically” is then recycled through the tranches for, say, “real life investors” in the crypto universe.
One can thus reuse a given exchange or a given DeFi protocol as a silo for the sake of a given community of interests. Maybe I misunderstood the technology here but, even for a given exchange, it is always possible to replicate the exchange on a smaller, insulated scale, provided the participants form a “willing community to commit”.
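The tranching logic above can be sketched numerically. A minimal illustration on a purely hypothetical loss profile (a small exponential execution cost plus rare total-loss events); the attachment quantiles are illustrative choices, not calibrated figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-transaction losses, as a fraction of notional:
# small routine execution costs plus rare total-loss events.
n = 100_000
losses = rng.exponential(0.0005, n)      # fees, slippage, gas
blowups = rng.random(n) < 0.001          # ~0.1% chance of losing everything
losses[blowups] = 1.0

# Average loss over batches of 100 transactions: the "pooled" exposure.
batch = losses.reshape(-1, 100).mean(axis=1)

# "Tranche out" the distribution by quantile: attachment points for a
# first-loss layer, a BBB-like layer and an AAA-like layer.
first_loss = np.quantile(batch, 0.50)
bbb_attach = np.quantile(batch, 0.95)
aaa_attach = np.quantile(batch, 0.999)

print(f"median batch loss      : {first_loss:.5f}")
print(f"BBB attachment (95%)   : {bbb_attach:.5f}")
print(f"AAA attachment (99.9%) : {aaa_attach:.5f}")
```

Pooling 100 transactions makes the batch average far more predictable than any single trade, which is exactly the “expected loss” stabilization described in the paragraph above.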
3. The principle of the economics is « shared governance »

The basic milestone is exhibited in this chart: no matter how high the probability of a standalone default is, today’s level of sophistication and layering allows one to control the autocorrelation risk of these defaults. Thus a reckless investor may very well lose everything if he solely looks to minimize his transaction costs, but everyone else can drastically reduce costs, way below those in application in the traditional financial markets. The whole idea is to process many smaller trades with a cross-default correlation that is <100% and as close to zero as possible. The connected idea is to participate in the development of the whole governance and extract some additional return, which in itself is a return on a much more strategic investment.
The economics of such strategy can be framed as follows:

Over many small transactions, you may well lose once with near certainty, but how often would you lose 10 times in a row over, say, 100 transactions?

Even with a rather high loss probability, the series of losses can be addressed with careful selection (though at a cost) as events flow in

As per an ongoing consensus building, some ultra-safe paths return less, but cost less and offer predictability. The key variable is the business purpose, its intrinsic return and its own path to success in capital terms over time. Think of space conquest, for example, or the fight against global warming: these projects cannot be addressed by secured traditional financial processes, but they can find support through crypto tokens. Indeed, if we just look at former space programs, some of their intermediate breakthroughs gave way to immediate consumption products and popular applications, but it has always been difficult to “value” the long-term popular commitment around the ultimate goal of space endeavors. Crypto markets allow prior community-based consensus to be built and generate governance-driven incentives around that matter. It used to be the sole decision of rich governments, with a permanent public debate as to whether this public money would not be better used for hospitals or education. Yet it turned out that some findings in space research had critical, though subsequent and initially unforeseen, applications in health and education. No government would be entitled to stake its public endorsement on such issues, while some communities would actually volunteer, in the fashion NGOs already do. The same problem occurs today with global warming and remains insufficiently addressed for quite similar reasons.

As per consensus too, some less costly but riskier paths offer better gross immediate returns, which may well finance long-term tail exposures. Simply knowing in advance the riskiest and most immediately rewarding paths helps design finer investment strategies. This alone will allow us to increase the leverage on crypto markets without needing to forcefully improve the basic computing capability of the networks. This view simply leverages the “proof of stake” principle.
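The streak question above ("how often would you lose 10 times in a row over 100 transactions?") can be estimated by simulation. A sketch assuming an illustrative 5% per-trade loss probability, an arbitrary figure chosen for the example:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def prob_run_of_losses(p: float, n: int, k: int,
                       trials: int = 100_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(at least one run of k consecutive losses
    among n independent trades, each losing with probability p)."""
    rng = np.random.default_rng(seed)
    losses = rng.random((trials, n)) < p           # True = losing trade
    windows = sliding_window_view(losses, k, axis=1)
    return windows.all(axis=2).any(axis=1).mean()  # any all-loss window?

# With p = 5%, losing at least once over 100 trades is near certain,
# while a run of 10 consecutive losses is astronomically unlikely.
print("P(>=1 loss)       :", prob_run_of_losses(0.05, 100, 1))
print("P(10-loss streak) :", prob_run_of_losses(0.05, 100, 10))
```

This is the distinction the bullet list relies on: the single loss is essentially certain, but the correlated streak, absent contagion, is negligible.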
3. The current risk factors and the longer term approach
Now that the stakes behind this simple distribution curve have been identified, let us see which components contribute both to inducing such default risks of many kinds and to giving some leeway in picking the actual correlation risks that come along with these outright standalone idiosyncratic defaults. The goal here is to apply the well-charted joint default distribution analysis that is commonly used in traditional finance.
Here is a short list of the operational and financial risk factors that should enter into the buildup of such a distribution. One can already list 5 classes of risk that would all act as “triggers” for idiosyncratic and/or systemic risks:

Financial risks

Technical risks

Operational risks

Human risks

“Killer” new technology

1. Financial

Supply and demand (driven by the hype, BTC price movement)

Interest rate algorithm driven by supply and demand i.e. compound or fixed

Compound-style: when demand is high the interest rate is high; when demand is low the interest rate is low


Ethereum price fluctuations > margin calls and liquidations
Built on top of Ethereum (DeFi-native tech updates)

Congestion in layer0 protocol can mean information inefficiencies and liquidity risks

Dependent on a successful Ethereum 2.0 update
Gas fees aren’t sustainable
Lack of incentive mechanism: covering fees, holdings, payouts don’t make sense (stability fees)
Inflation
Overcollateralization to address liquidity and credit risk: the risks that a lender will not be able to exit a position or be paid back in full
The underlying assets become worthless
The protocol’s tokenomics failure, including rewards, fees, redemption
Liquidity within the protocol (Synthetix unlimited liquidity, avatar)
Sustainability of the economic returns of the yield token > i.e. is there a fixed supply that will eventually run out
Tokenomics > token supply, market cap etc.
Collateral risk

Overcollateralization and collateral make-up risk, due to volatility differences between individual tokens

Can we use a VaR model here?
Liquidation policies of the protocol
Is there insurance available against the products or the contracts? (i.e. Opyn only allowing Dai and Ether)
Dumpamentals > manipulation vulnerabilities; mostly comes down to token distribution (i.e. are founders holding the majority?) and the exchanges they are listed on (i.e. are they yet to be listed on a CEX like Binance or Coinbase, or are they still only on Balancer or Uniswap? > shift from DEX to CEX)
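One of the bullets above asks whether a VaR model can be used for collateral risk. A minimal historical-VaR sketch; the fat-tailed return series is simulated rather than real token data, and the 99% level and sizing idea are arbitrary assumptions:

```python
import numpy as np

def historical_var(returns: np.ndarray, level: float = 0.99) -> float:
    """Historical Value-at-Risk: the loss exceeded with probability
    (1 - level), read off the empirical return distribution."""
    return -np.quantile(returns, 1.0 - level)

rng = np.random.default_rng(1)
# Hypothetical daily returns of a collateral token: Student-t for fat tails.
daily_returns = 0.05 * rng.standard_t(df=3, size=2_000)

var_99 = historical_var(daily_returns, 0.99)
# A protocol could size its overcollateralization ratio off this figure,
# e.g. requiring collateral worth several multiples of the 1-day VaR.
print(f"1-day 99% VaR: {var_99:.3%} of position value")
```

Historical VaR is only one candidate; a parametric or Monte Carlo VaR could equally be plugged into the same scoring slot.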
2. Technical

Smart contract code bugs (10/10)

Cyber security (DDoS)

Exploitation of code by hackers (Flash loans on Aave expose it to more possibility of exploitation) (10/10)

No redundancy > smart contracts are self-executing and non-reversible (10/10)

Oracle risk in liquidating the positions (9/10)

Can the oracles run in highly congested networks?

Security of the protocol (10/10)

Buffer overflows, dangling pointers, and stack exhaustion (9/10)

Improper API use (7/10)

Insufficient testing (8/10)

Hardware incompatibility (2/10)

3. Operational

Built on top of Ethereum >

Adoption and usage: it is extremely difficult to use and to enter the space

Ethereum [Non] Scalability

Stablecoin failure (risk of DAI collapse and other crypto token collapses / defaults)

Risk of yield calculations

Exposure of the protocol to other SC, DeFi protocols etc.

Governance structure

Centralised or decentralised

Transitioning or fixed

Fork likelihoods with community splits
Barrier to entry within the ecosystem

How adaptable or interoperable is the current model

Scalability

Some protocols are not developed in a decentralised way, and the community doesn’t know

Validation of information (Are oracles used or is it centralised sources?)

Audited or not? Has the code been validated

Time the protocol has been operational

Privacy: keeping your holdings secret

Team behind the project > transparent or opaque

Smart Contract Openness

Regulatory jurisdiction and licensing

Market share and dominance

Poor or fragmented documentation on important aspects: usage, risks

An oracle could provide malicious data, an administrator could change a system parameter, or governance procedures could be co-opted

No insurance

4. Human

Loss of private keys by the holders

Business and geopolitics

Unable to meet the guaranteed return > what is the financial situation of the actual staker? > can they meet the collateral margin calls?

Usability risk > complicated to understand, and only designed for crypto-native users

UX developer priority
5. New data points and methods to improve the utility of the score:

Address additional risk factors including centralization (governance) risks, oracle risks, and market liquidity risk via liquidation policies

Break out score subcomponents into individual scores

Decentralized methods for validating market metadata

Adapt the model for DeFi products beyond lending

DAOify the management of this scoring algorithm
Example how to evaluate a project/protocol:
https://medium.com/nexusmutual/understandingrisksindefi1uniswape5e790692635
4. A simple mathematical approach
So we have the ultimate goal, with this ultimate distribution curve that sets the ultimate credit grades. We have the ingredients of the risks, and we now have to set the correlation factors and include the fact that we are always transitioning. It is thus also a question of designing a sort of “flight simulator” by projecting the most likely series of successive transitions.
Yet, this credit scoring makes sense only if this induces an ongoing governance pattern. Here are the proposed rules to set the economics of this initiative:

Further the mathematical modeling of the rating/risk distribution, to help people navigate the current crypto world, bearing in mind that current risk factors might be addressed by upcoming protocol releases (people are working on them, but confidentially so far)

Keep digging on this “reverse governance” principle, where participants are the stakeholders and are accountable. Drill into the concept of “shared collateral”, as opposed to buying insurance and launching one individual business to get rich quick

Flag the remaining “systemic risks” that remain in place structurally (see the gas explosion on Ethereum right after Yam got hacked). Participants would not care about the governance rules, since they are not incentivized to promote a well-crafted governance model: each of these is a standalone business proposition, similar to old-style finance

Define a sort of “issuance policy” valuation framework around the sustainability of the number of tokens versus debt forgiveness, value generation, and offer/demand imbalance. This is an extrapolation of central bank monetary policy. It is also a question of growing the aggregate “GDP” or “market cap” of the token. Any new token has to either “beat” or “lose” against BTC or ETH to bring some interest: if the new token moves as a steady spread in price vs BTC or ETH, it creates no new value, since trading it involves additional costs while generating no additional dynamics. At the end of the day, one has to deploy the token so that its price moves as a completely independent random variable. This is a very strong selling point for finance people: the crypto market has an obvious governance rule, which is to sponsor “independent” tokens rather than copycats.
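The “independent token vs copycat” criterion above can be made operational with a simple correlation test. A sketch on simulated return series; the series themselves and the near-1 / near-0 interpretation thresholds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
btc = rng.normal(0.0, 0.04, 500)       # stand-in for BTC daily returns

# A "copycat" that trades as a steady spread on BTC, vs a token whose
# price moves as its own independent random variable.
copycat = 0.95 * btc + rng.normal(0.0, 0.005, 500)
independent = rng.normal(0.0, 0.05, 500)

def dependence_on(benchmark: np.ndarray, token: np.ndarray) -> float:
    """|correlation| of returns: near 1 = copycat, near 0 = independent."""
    return float(abs(np.corrcoef(benchmark, token)[0, 1]))

print("copycat     :", round(dependence_on(btc, copycat), 2))
print("independent :", round(dependence_on(btc, independent), 2))
```

An issuance-policy rule could then favor tokens whose measured dependence on BTC and ETH stays low over a rolling window.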

Ability to generate the best arbitrage between rewards and predictability of rewards, by partially tokenizing real-life assets so that they maximize their return while being able to issue or withdraw, thereby managing their risk/return “live”. The ultimate goal is to provide capital guarantees and minimal mark-to-market risk by issuing the equivalent of 6-month rolling AAA tranches. The idea is to source some valuation insurance through the crypto market upon real-life assets that are risky. That is, assets that are risky in mark-to-market terms, like stocks, are staked in the crypto world in order to source insurance and possibly extra return. You do not want to trade the cryptos; you just want to invest on a DeFi platform. You get real-life funding from the AAA short-term tranches.

How can one look at the financial risks in every token that is considered?

Anchor the model on Vitalik Buterin’s trinity comment.
General plan for the mathematical model, in two parts:

Correlation matrix, loss per default matrix, dynamics

Economics: how does leverage creation contribute to creating more value for the system?
Correlation matrix, loss per default matrix, dynamics
The idea is to say that every transaction runs through a series of risks of losses: defaults, lockups, fees, slippage, etc. They happen in succession and may accumulate. Moreover, there may be several transactions in a row that actually suffer losses until the slaughter is over. Third, each risk conveys its own part of randomness. Lastly, the crypto markets offer different routes to execute a transaction, and these many possibilities do not all convey the same risk/reward profile; which is best depends on the underlying business. So there are AAA-rated routes, BBB routes and quite risky routes. The whole idea behind the initiative is to build an aggregate database of losses so that the estimates are known as well as possible.
Inventory of the many mathematical tools involved:

A vector of average losses per identified risk: $\mu = (\mu_1, \mu_2, \dots, \mu_{n-1}, \mu_n)$. To be sure here, $\mu_i$ is measured as the average loss observed on identified risk $i$, either through historical record or simulation. Along the way, the variances and covariances appearing between the risk factors are also measured (observed and/or modeled for the sake of database completion)

A correlation matrix between these risks, since they are random in essence. There are many ways to model the distribution, but the most portable and intuitive one is the joint Gaussian distribution, since it works outside of tail events. A more dedicated model for tail events must be deployed, but that is not the purpose of the current model, which solely targets business-as-usual conditions. The correlation matrix appears as follows:

$$\Sigma = \begin{pmatrix} \rho_{11}=1 & \cdots & \rho_{1n} \\ \vdots & \ddots & \vdots \\ \rho_{n1} & \cdots & \rho_{nn}=1 \end{pmatrix}, \qquad \rho_{ij} = \rho_{ji} = \frac{\operatorname{cov}(\text{risk}_i, \text{risk}_j)}{\sigma_i\,\sigma_j}$$

The idea next is rather simple: use the fact that this correlation matrix is real and symmetric. This allows us to invoke the spectral theorem, which tells us that the matrix is diagonal in a well-chosen orthogonal basis of eigenvectors. Next we can use the singular value decomposition to quickly find the diagonal elements (since the singular values here coincide with the eigenvalues). To that aim one can also use a Cholesky decomposition of the correlation matrix to find a square-root matrix, or one can find the main eigenvectors of the correlation matrix via a Lagrange optimization under constraint, as is typical in machine learning algorithms.
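The decomposition machinery just described is a few lines of numpy. A sketch on simulated loss series standing in for the empirical database:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated loss time series for n risk factors (stand-ins for real data).
n_risks, n_obs = 5, 1_000
raw = rng.normal(size=(n_obs, n_risks))
raw[:, 1] += 0.8 * raw[:, 0]                 # inject some cross-correlation

corr = np.corrcoef(raw, rowvar=False)        # empirical correlation matrix

# Spectral theorem: corr is real symmetric, so eigh returns real
# eigenvalues and an orthonormal basis of eigenvectors (columns of A).
eigvals, A = np.linalg.eigh(corr)
D = A.T @ corr @ A                           # should be diagonal
assert np.allclose(D, np.diag(eigvals), atol=1e-10)

# Alternative square root: Cholesky, valid when corr is positive definite.
L = np.linalg.cholesky(corr)
assert np.allclose(L @ L.T, corr)

print("eigenvalues:", np.round(eigvals, 3))
```

Either square root (eigen or Cholesky) can drive the correlated simulations described below; Cholesky is cheaper, while the eigenbasis exposes the dominant risk combinations directly.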

At this stage we start measuring some aggregate loss estimates by incorporating several random variables: the fact that paths through exchanges or DeFi protocols can be random, the fact that the number of transactions and their standalone amounts may vary a lot too, and the fact that some autocorrelation in losses is random as well (aside from the observed correlation). From the two steps before, we have secured the idiosyncratic average loss per default event, and their one-off correlations arranged through “root vectors” that allow us to simulate what may happen. One way to get an intuition is as follows: the square-root eigenvectors allow you to simulate most of the past observed one-off variance. Thus, this is now all about running the simulations properly. So we need to map the many different possible transaction channels. Here again this can be achieved by designing the “graph” of the possible paths, by means of code similar to a smart contract, with “gates” and “loops” throughout a virtual and simplified crypto world. And once we have that capability, we just have to run Monte Carlo simulations through this grid using the additional risk factors mentioned right above, namely that paths are not 100% controlled anyway, and that the amounts at stake in every payment can also be quite random. This “one size fits all possible paths and profiles” simulation will provide an average risk, of course, but also the whole distribution of ultimate outcomes.
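A toy version of that Monte Carlo, with a hypothetical two-route “graph”, random transaction amounts, and correlated loss shocks generated through a Cholesky square root. Every number here (average losses, correlation matrix, route structure) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical average losses per unit notional for three risk factors.
mu = np.array([0.0010, 0.0005, 0.0020])      # default, slippage, gas
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
L = np.linalg.cholesky(corr)                 # "root vectors" for simulation

def simulate(n_paths: int = 50_000) -> np.ndarray:
    """Simulate total loss per transaction across random routes."""
    # Random route: route 0 hits all three risks, route 1 avoids gas risk.
    route = rng.integers(0, 2, n_paths)
    exposure = np.where(route[:, None] == 0,
                        [1.0, 1.0, 1.0],
                        [1.0, 1.0, 0.0])
    # Correlated, unit-mean lognormal shocks around the average losses.
    z = rng.normal(size=(n_paths, 3)) @ L.T
    shocks = np.exp(0.5 * z - 0.125)         # E[exp(0.5 z - 0.125)] = 1
    # Random transaction amounts as well.
    amounts = rng.lognormal(0.0, 1.0, n_paths)
    return (exposure * mu * shocks).sum(axis=1) * amounts

total = simulate()
print("average loss :", total.mean())
print("99% quantile :", np.quantile(total, 0.99))
```

The output is exactly the “whole distribution of ultimate outcomes” the paragraph calls for: its quantiles are what the tranching step reads off.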

Now a reverse-engineering process is underway: through the simulation, some path segments can be attributed certain ultimate losses, and some full paths turn out to be quite “safe” or, at the opposite, “risky”. The aggregate distribution will allow us to “tranche” or “discriminate” the many paths, and therefore identify stages in every path that are either AAA, BBB or perilous.

What about the contagion risk then? The idea is to say that all these figures will actually fluctuate over time, and in good times the autocorrelation risk will of course drift lower, which will induce lower average running losses and lower correlations between the different elements. The mechanism is typical: when people get less exposed they find new routes, succeed, and thus “observe” some diversification because the correlations of losses are indeed lower. This pushes the business to higher levels, spurring more investments and volumes, which in aggregate support even lower losses and correlations… until we come across a contagious problem that may just be a temporary crisis. How can one measure and model that phenomenon, from the framework set above, free of any profit-making influence? The answer is quite simple at the root: “recombine the average event”. Imagine you have just one risk: you observe it, measure its distribution, and you wonder whether one day it may snowball, almost in a second, while so far it has been idiosyncratic and isolated in time. How many times could it happen? You do not know. What matters most: what people infer from it, or what they really suffer in the current case? Here you know: it is only what people think that matters anyway, because this is the only variable that will set the future cost of transacting in your market. So now imagine you have this loss event for which you know its average loss $\mu_1$. And you wonder whether, instead of happening once and being “done”, it suddenly happens twice, thrice, $n$ times. How likely is it, in reasonable terms, i.e. in a fashion that is consistent over time? Here you say that the second loss can only occur if a first one occurred in the first place, and that the probability of this first loss is actually equal to $\mu_1$, if you took the initial care to price all your losses as a proportion of your transaction amount. This is something you would normally do anyway.
You would then write that your probability of loss would “probably be” $\mu_1(1+\mu_1)$. In summary, since this probability is not nil, you opt to say it may compound over itself. There is here a voluntary confusion between the “loss as a percentage” and the “probability” of such a loss on the amount at risk. So, to be clear, the probability of one loss is $\mu_1$, and the probability of two losses is $\mu_1(1+\mu_1)$. Now what happens if you have a third loss in a row? What is the consistent way to pursue this line of reasoning? You can either say this becomes $\mu_1(1+\mu_1(1+\mu_1))$ or, because your three successive losses then happen “at once”, you cannot discriminate which one really was “first”. So you might as well claim that you cannot discriminate which one is “second” and which one is “third” (you know there must be a “first” to start this speculation, but is that so clear when you start envisioning many that look “simultaneous” after the facts?). So you might say that, after the first one, you have two equally likely possibilities, and you had rather say that your probability in the current circumstances is $\mu_1\left(1+\frac{\mu_1}{2}\left(1+\frac{\mu_1}{3}\right)\right)$. Equally, you would proceed by extrapolating that you may have $n$ such “simultaneous” losses. This is clearly NOT what you have observed and expect. So you would write the aggregate risk of “contagion” as the $n$-fold compounding of the same event in a tail scenario:
$$\mu_1\left(1+\frac{\mu_1}{2}\left(1+\frac{\mu_1}{3}\left(\cdots\left(1+\frac{\mu_1}{n}\right)\cdots\right)\right)\right)$$
You realize that, rather than $\mu_1$, in the limit you have instead $e^{\mu_1}-1$. But why would you stop here? You can next say that there might be a “level 2” crisis, in which case your probability is going to be $e^{e^{\mu_1}-1}-1$, and so on. This compounding surely brings you to 100% after a couple of compounds. Thus, when one considers just one risk and observes its fluctuations, one can also measure the “contagion level” that matches the day-to-day or even longer-term changes
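The nested compounding above is just the truncated series of $e^{\mu_1}-1$, which is easy to verify numerically (the value of $\mu_1$ below is arbitrary):

```python
import math

def contagion(mu: float, n: int) -> float:
    """n-fold nested compounding mu*(1 + mu/2*(1 + mu/3*(...(1 + mu/n))))."""
    acc = 1.0
    for k in range(n, 1, -1):    # unwind from the innermost bracket outward
        acc = 1.0 + (mu / k) * acc
    return mu * acc

mu = 0.02                        # an arbitrary average loss level
for n in (1, 2, 5, 20):
    print(f"n={n:2d}  contagion={contagion(mu, n):.12f}")
print(f"limit e^mu - 1 = {math.expm1(mu):.12f}")

# A "level 2" crisis compounds the compounded risk once more:
print("level-2 contagion:", math.expm1(math.expm1(mu)))
```

Expanding the nested product gives $\sum_{j=1}^{n}\mu^j/j!$, so the convergence to $e^{\mu}-1$ is immediate; the "level 2" line shows how repeated compounding pushes the figure toward certainty.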

How can we use that with the many different risks and the correlation matrix? Well, if one assumes that, day to day, the observations are independent of one another, if only because we consistently and blindly ignore such tail risk for the sake of keeping ourselves busy (otherwise we would go and hide in a cave), then we can say that we would certainly act randomly, and we would add up the many losses through the variance of the observed losses rather than through the losses themselves. We would remain blind, looking for immediate survival, rather than adding things up. The starting point is that, thanks to the spectral theorem, we can use the eigenvector basis found with the correlation matrix to infer an equivalent diagonal matrix of “eigen-losses”, the latter being a given linear combination of the outright losses times the corresponding eigenvalue. Let us now put down some formulas. Here, first, is the decomposition that allows us to extract a diagonal matrix out of the correlation matrix:

M is the n×n correlation matrix, with 1s on the diagonal and the pairwise correlations ρ_ij off the diagonal:

M = ( 1     ρ_12  …  ρ_1n
      ρ_21  1     …  ρ_2n
      …           ⋱  …
      ρ_n1  ρ_n2  …  1    )

The spectral theorem tells us that there exists a set of eigenvectors e_k = (e_k,1, e_k,2, …, e_k,n) such that ⟨e_k, e_l⟩ = 0 for all l ≠ k, ⟨e_k, e_k⟩ = 1 for all k ∈ {1, 2, …, n}, and M·e_k = λ_k·e_k, where λ_k is the kth eigenvalue. We can then form a matrix A such that the eigenvector e_k is the kth column of A. One can then write that Aᵀ·A = Id (the identity matrix) and that:

Aᵀ·M·A = diag(λ_1, λ_2, …, λ_n) = D
We can alternatively write the reverse relationship:
A·D·Aᵀ = M, since Aᵀ·A = Id = A·Aᵀ.
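A minimal numpy sketch of this decomposition, with an illustrative 3×3 correlation matrix (the figures are assumptions, not market data):

```python
import numpy as np

# Illustrative correlation matrix M: symmetric, 1s on the diagonal,
# pairwise correlations rho_ij off the diagonal (assumed figures).
M = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Spectral theorem: eigh returns the eigenvalues lambda_k and an
# orthogonal matrix A whose k-th column is the eigenvector e_k.
lam, A = np.linalg.eigh(M)

# A^T A = Id and A^T M A = D = diag(lambda_1, ..., lambda_n)
assert np.allclose(A.T @ A, np.eye(3))
assert np.allclose(A.T @ M @ A, np.diag(lam))

# Reverse relationship: A D A^T = M
assert np.allclose(A @ np.diag(lam) @ A.T, M)
```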
If we now introduce a new set of vectors using a purely algebraic expression for the square root, setting μ_k = √λ_k, and observing that there are cases where λ_k < 0 (which implies that μ_k is then a pure imaginary number), we can simplify the context of our calculations. We thus consider the new set of vectors f_k = √λ_k·e_k = (√λ_k·e_k,1, √λ_k·e_k,2, …, √λ_k·e_k,n). We then observe that for any vector of risks expressed not in the original basis, in which we have recorded the many risks, their time series, their variances and their correlation factors, but in this new basis of vectors f_k, we arrive at a very useful result.
Let's indeed consider any combination of real, observable risks, written as u = Σ_{i=1..n} u_i·risk_i·δ_i, where the vector δ_i has all its coordinates set to 0 except the ith, which is 1. This expresses the vector of risks conveyed by one transaction across the whole set defined by the many risks we have empirically flagged. Since the matrix that morphs δ_i into e_i is A, the matrix that inversely morphs e_i into δ_i is Aᵀ. Thus A·δ_i = e_i and Aᵀ·e_i = δ_i.
Thus the original risk vector u can be expressed equivalently, and in a unique way, in the eigenvector basis:

u = Σ_{i=1..n} u_i·risk_i·δ_i = Σ_{i=1..n} u_i·risk_i·Aᵀ·e_i

u = Aᵀ·( Σ_{i=1..n} u_i·risk_i·e_i )
So the risk vectors can be simulated for statistical purposes equivalently from the original canonical basis of vectors δ_i, or from the basis of eigenvectors. But do we measure the same aggregate risk in the end?
Another element to consider is how we ultimately compute the final loss, given our starting methodology. We started by observing some losses, some variances and some correlations. At the end of the day we care about the aggregate standard deviation that combines the many idiosyncratic standard deviations (the risk_i) and their possible pairwise correlations (expressed with the correlation matrix). Generically, in the canonical basis, we look for σᵀ·M·σ, where σ is the vector of the risk_i; this is just the particular case of u where all the u_i = 1. Expressing this last quantity using the eigenvectors, we first note that:
e_iᵀ·M·e_i = λ_i
If we now care about the variance that a given vector u = Σ_{i=1..n} u_i·risk_i·δ_i = Σ_{i=1..n} u'_i·δ_i is going to generate (writing u'_i = u_i·risk_i), we can write Var(u) = uᵀ·M·u. Recalling now that A·D·Aᵀ = M, we see a spontaneous relation with the eigenvectors pop up:

Var(u) = uᵀ·M·u = uᵀ·A·D·Aᵀ·u

Looking more closely at Aᵀ·u, its coordinates are exactly the coordinates of u in the eigenvector basis: writing the unique decomposition u = Σ_{i=1..n} v_i·e_i over the orthonormal eigenvectors, with v_i = ⟨e_i, u⟩, we have:

Aᵀ·u = Aᵀ·( Σ_{i=1..n} v_i·e_i ) = Σ_{i=1..n} v_i·δ_i

And since the left-hand factor uᵀ·A is just the transpose of the expression above, we arrive at the following key result:

Var(u) = uᵀ·M·u = ( Σ_{i=1..n} v_i·δ_i )ᵀ·D·( Σ_{i=1..n} v_i·δ_i )

Since D is a diagonal matrix, we thus get a fundamental result that will help a lot:

Var(u) = Σ_{i=1..n} v_i²·λ_i

Therefore the ultimate risk factor that we care about for a given risk vector u is the standard deviation √Var(u) = √( Σ_{i=1..n} v_i²·λ_i ), where the v_i are the coordinates of u in the eigenvector basis.
We here conclude that any correlated set of n×n interactions can be reduced to a set of just n random risks that are pairwise independent. Therefore we can run the same measure of contagion risk in parallel over the n different risk dimensions.
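The variance identity can be checked numerically. A sketch with assumed correlation and risk figures (none of them market data):

```python
import numpy as np

M = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])          # illustrative correlation matrix
risks = np.array([0.02, 0.05, 0.01])     # assumed idiosyncratic risk sizes
u = np.array([1.0, 1.0, 1.0]) * risks    # u'_i = u_i * risk_i, here u_i = 1

lam, A = np.linalg.eigh(M)               # eigenvalues and eigenvector columns

var_canonical = u @ M @ u                # u^T M u in the canonical basis
v = A.T @ u                              # coordinates of u in the eigenbasis
var_eigen = np.sum(v**2 * lam)           # sum_i v_i^2 * lambda_i

# Both routes measure the same aggregate variance.
assert np.isclose(var_canonical, var_eigen)
```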

However, although correlations tend to be steady and to evolve in clusters, they do change, and we must also include the fact that the observed correlation matrix itself tends to have its off-diagonal elements spiking towards 100% when a severe crisis arises, exactly when the many defaults autocorrelate. So we need to take into account the fact that the eigenvectors are independent only "locally". We therefore also need to recombine the series

eigenrisk₁ + (1/2!)·eigenrisk₁² + … + (1/N₁!)·eigenrisk₁^N₁   (N₁-times compounding of the same event in a tail scenario)

with the series

eigenrisk₂ + (1/2!)·eigenrisk₂² + … + (1/N₂!)·eigenrisk₂^N₂   (N₂-times compounding of the same event in a tail scenario)

and

eigenrisk₃ + (1/2!)·eigenrisk₃² + … + (1/N₃!)·eigenrisk₃^N₃   (N₃-times compounding of the same event in a tail scenario)

and so on across the "n" different eigenvectors, over the "N" potential successions that may differ from one eigenvector to the next. There is no simple way to address that. But there is still a rather simple way to express the fact that the markets would rapidly "seize" if just one risk factor went through a particularly high number N.

The point is that all these competing risks aggregate through a concave function that measures the distance within the sphere of radius 1 in dimension n. This is not a linear function, and the contagion/tail risk comes from the fact that, past a certain threshold, market players lose confidence and generate what they fear the most. The geometric intuition is that there is a strong magnetic field around the point O, the centre of the sphere, and there are attractive magnetic fields acting like storms from the surface of the sphere. Whenever one risk tends towards the surface of the sphere, it drags all the others with it, which speeds up even more the extraction from the central magnetic field. The centre and the sphere thus act as the magnetic poles of a condenser. One quite simple way to represent that pattern is to make all the "N"s equal. Next, it is more a question of studying whether the magnetic field is a good representation. In this case the strength of the magnetic force is an inverse function of the radius; thus at "O" it is theoretically infinite, i.e. you do nothing and no wrong will happen. But if you start building risks, they rock you away from "O"; the restoring force becomes weaker while the "fear factor" makes you attracted towards the sphere of radius 1 (equivalent to a total loss). At that moment in time, you sit at a certain inner radius between 0 and 1 and you are subject to opposing magnetic forces. The intuition we can draw from this approach is that when the inner radius is half that of the sphere, i.e. 0.5, the market is ready to break, i.e. to reverse back fast. This can be refined much further, since the force might be in the inverse of R² instead of R, or, on the contrary, in a power law of R that is positive but lower than 1. It is interesting to measure such an attracting mechanism because it may uncover intermediate magnetic poles.
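One hedged way to make the condenser analogy concrete: assume a restoring force in 1/R from the centre and an attracting force in 1/(1−R) from the sphere; with equal pole strengths they cancel exactly at R = 0.5, the break point described above. All functional forms and parameter names here are modelling assumptions:

```python
def net_pull(r, k_center=1.0, k_sphere=1.0, p=1.0):
    """Net force on a risk position at radius r in (0, 1).

    Positive = pulled back towards the safe centre O; negative = dragged
    towards the unit sphere (total loss). Both force laws, and the use of
    the same exponent p for each pole, are modelling assumptions; p = 2
    would give the inverse-of-R-squared variant mentioned in the text.
    """
    restoring = k_center / r**p          # diverges at O: do nothing, no wrong
    attracting = k_sphere / (1 - r)**p   # fear factor diverging at the surface
    return restoring - attracting

# With equal pole strengths the break-even radius is 0.5: inside it the
# centre dominates, beyond it the market is "ready to break".
assert net_pull(0.49) > 0 and net_pull(0.51) < 0
assert abs(net_pull(0.5)) < 1e-12
```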
Economics: how does leverage creation contribute to creating more value for the system?
Now that we have profiled the model for this rating scale, we need to better define the "magnetic forces" so that this index genuinely serves to facilitate the whole mechanism of value creation, beyond the sole legal-tender currency accounts. The whole point of generating this credit rating scale is to address one fundamental issue of traditional finance, often called the "inconsistent trinity". As the chart below depicts, you have to choose between 3 big anchors if you want, as a "country", to optimize your wealth looking forward. Indeed you need to consider 3 factors: the value of your currency versus your trading partners, the incentive for them to invest in your "country" and not others, and your autonomy in printing money internally aside from the 2 former factors. Fleming and Mundell coined this problem the "unholy trinity" in 1963, finding some inspiration in the IS-LM model applied to "open economies". Milton Friedman would use this principle extensively to become the spokesman of the liberal government policies that have been front and centre since the 1980s. Confronted with this problem, Friedman advocated a policy of "laisser faire", as opposed to some Keynesian government-led subsidy policy, on the grounds that: "There is likely to be a lag between the need for action and government recognition of the need; a further lag between recognition of the need for action and the taking of action; and a still further lag between the action and its effects." In that, Friedman distanced himself from another key figure in economics, Jan Tinbergen. Tinbergen supported the use of econometrics to actually improve the efficiency of the economy through government policies. His premise was that the government was to serve the social interests first, and would do so efficiently.
That was in 1969… And by 1986 he had gone into oblivion, superseded by the Chicago school of economics, associated with Ayn Rand, Friedman, Greenspan and the successive chairmen of the Federal Reserve Bank of the USA. The problem that had shown up in the course of the seventies, in many countries, was that centralized economies generated both capital (read: mutual and spontaneous trust) and inflation (read: ongoing destruction of savings and therefore of immediate liquidity).
Thus, if we try to summarize the big policy swings since the early 20th century, when all this fractional capital economy started, we find 4 stages. From roughly 1890 till 1935 reigned the gold standard, whereby governments opted to sacrifice their monetary autonomy first, in the hope that they would thereby sponsor free capital mobility (see chart below) and gain some leeway on a currency that would be seen "as good as gold". Next, between 1935 and 1985, governments instead set the focus on their monetary autonomy, something which sponsored further "free capital mobility" at the expense of the management of their currency value. This is why, for example, President Nixon depegged the USD for good. This is also why we came across oil embargoes, inflation spikes in western countries, "star wars" with Reaganomics, globalization and terrorism. Thus, observing first that enslaving monetary policy to gold was not optimal, and observing next that targeting maximum monetary autonomy had indeed devastating "lags", the idea was to target maximum economic prosperity through liberalism, i.e. the least government intervention one way or another. Nowadays we are on the new paradigm related to this unholy trinity, whereby the social interests are deemed best served by the government through a liberal policy: let individuals and private entities run the economy on the short run.
Today, social wellbeing is dominantly perceived as short-term driven (see the influence of earnings seasons on stock markets), materialistic (consumption), and centred on the "individual" (see the impact of "reputation" media like Facebook or Twitter). It matters much less these days to have the "right technology" inland, or a strong middle class that consumes; it matters much more to design and distribute the right product to a greater number of people worldwide, from wherever the product can be processed and shipped.
The old-fashioned way of developing such an economic model was to use "trading", "derivatives", "securitizations", "off-balance-sheet" vehicles and "tax credits". Yet, due to the centralized nature of our financial system, the liberal policies always ended up forcing governments to make drastic choices because of counterparty and contagion risks. The crypto markets have blossomed since 2008 in the shadow of the last financial crisis, which uncovered the inefficiency of the current regulations. In short, governments cannot even guarantee their banking system or their depositors any more… This means that they are not capable of reversing a fraudulent transaction by law if it is too large. It is not the standalone transaction itself but its subsequent "butterfly effect" impact that is at stake here.
Thus this credit rating score is a tool of self-governance that matters a lot. There must be some consensual governance through which short-term and short-sighted interests are balanced with longer-term objectives. We could quote at least 2 of them that are closely related and will end in a deadlock if managed as they are today. The first one is climate change: countries will always be too slow to adapt, for they just cannot afford to pay for it by issuing massive additional debt loads. The second one is the space conquest that we must face as we all "want more and more and more". In this quite consensual desire we consume more and more natural resources that we transform and do NOT regenerate; only space conquest can tangibly further this desire of ours. Yet no government today can address this efficiently, especially being aware of this unholy trinity. The common denominator is that, unless we solve these two issues, our demographic growth will be put to a halt, and this is again going to raise dramatic social problems between the "haves" and the "have nots". Again, no government, no military might can provide an acceptable long-term solution. This is where the crypto markets will play a critical role through their further development: they sort of "solve" the problem of "dimension" that governments fail more and more to address, individually and collectively. The "lag" effect is crucial and central. This is also the fitting piece of the credit scale that we propose here: we have to manage this scoring system not as a fixed pattern but as a moving target. Otherwise we might grant AAAs all along, no matter the price, no matter the recent value creation, no matter the otherwise speculative part in it. What do we know?
What we know is what history has taught us over and over: we do not need crypto markets at all to generate enough financial leverage, through credit line openings, to make the whole system blow. We also learnt that neither the gold standard, nor the stringent ruling of interest rates, nor the closer monitoring of banks or big companies prevents the tail risk. Rather, one might argue that the more prohibitive a system is, the more prone it is to incentivize smuggling and all sorts of traffic. The core argument here is that if a regulation prohibiting some consumption goes counter to a spontaneous demand, it simply makes the economic incentive for traffickers even bigger. We also know, technically, that securitizing future cashflows is the common technique for manufacturing extra financial leverage. The underlying "asset" is the pledge of some collateral in the form of "assets" of many kinds. One recently prevailing kind of "asset" is the series of intangible ones: goodwill, reputation, reliability, quality, design, ease of use, data and so on. Intangible does not mean "unpalatable", quite the opposite actually! We are more and more entering a purely hedonistic economy whereby, once satisfaction is granted, we all look for something else as a result. Usually the core trigger for a leverage to take hold is precisely when its effect is palatable, or expected to be so, even before it actually produces tangible effects. This is usually what BOTH makes new endeavours become reality AND spurs undue speculation, where we regularly hear of a "new world", a "new paradigm", a "new…" whatever. Thus the past securitization bubble, and the other past bubbles alike, all exhibited the same pattern. First, a new form of leverage becomes better and better accepted, since it turns out to work in front of an audience that was otherwise wary. Next, the initial straight effect on the "real economy" looks so promising that the audience swings progressively from a "fear of the devil" into a fear of "missing out".
On the way, the real generation of wealth shows its new face and the speculation mounts. At last, when the audience becomes aware of a tangible growing acceptance, itself supported by bold new wealth generation, the solvency of the new leverage mechanism is pushed to its limits way before it can prove its real value.
Thus the credit scoring process here must capture the prospective gain that is extracted from a leveraging operation or from the ongoing development of the crypto markets, and it must allow for the progressive development of an autonomous governance protocol. Typically, so far, businesses would rather go for the next mile, taking more risks through more leverage to make more money, rather than pause. Doing so, they generate the "lag effects" that Milton Friedman singled out for governments, and that sooner or later induce a new crisis. The problem is here anyway: it induces processes that obey so-called "time-delay differential equations", which most of the time produce "singularities" that one can model rather simply as Hopf bifurcations. How to navigate through that and keep the whole boat afloat?
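As an illustration of such "lag"-driven singularities, here is a minimal Euler-scheme sketch of a time-delay differential equation: the delayed logistic (Hutchinson) equation, used purely as a stand-in for the leverage feedback loop, with illustrative parameters. Below the Hopf threshold (r·τ < π/2) the system settles; above it, a persistent boom/bust cycle appears:

```python
def simulate(r, tau, t_end=200.0, dt=0.01):
    """Euler integration of x'(t) = r * x(t) * (1 - x(t - tau)).

    x is leverage relative to its sustainable level; the delay tau is the
    "lag" between taking on leverage and feeling its effects.
    """
    lag = int(tau / dt)
    x = [0.5] * (lag + 1)          # constant history before t = 0
    for _ in range(int(t_end / dt)):
        x.append(x[-1] + dt * r * x[-1] * (1.0 - x[-1 - lag]))
    return x

# Below the Hopf threshold (r * tau < pi/2) the path settles near 1...
calm = simulate(r=0.8, tau=1.0)
assert abs(calm[-1] - 1.0) < 0.01
# ...above it (r * tau > pi/2) the oscillation never dies out.
wild = simulate(r=2.0, tau=1.0)
tail = wild[-5000:]                # last 50 time units
assert max(tail) - min(tail) > 0.5
```

The design point is that nothing exotic is needed: the same growth rule, with only a longer lag or a faster pace, flips from smooth convergence to self-sustained boom/bust.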
When one generates leverage, which most of the time can be expressed through a securitization model, one is creating liquidity/credit for oneself and one's partners. This leverage induces additional purchasing power that makes prices rise accordingly. This in turn induces a wealth effect rather quickly, which empowers businesses to leverage themselves up even more on the follow. This is how crazy things can happen, as well as things that were not possible before. How to estimate whether this purely technical enrichment has a chance to match subsequent genuine value? One can measure that through the profit generation as much as through the defaults that may occur across the crypto markets. It is quite likely, for example, that if the machine is roaring too fast, more technical defaults of transactions will happen, more insurance contracts will be either subscribed or activated, and more fraudulent businesses will show up. Thus the "N" that we mentioned before will grow, and some early contagion features might show up as well. From that standpoint, the businesses using the most dangerous route to settle their transactions will offer very useful information, and as such should find some appropriate reward directly from the markets themselves. Thus the many possible businesses, ranging from "high leverage/low operational risk" to "low leverage/high operational risk", should be able to contribute to the same database of records over time.
This rating scale would thus serve businesses' purpose of mapping their own interest into this universe, and would also serve the interest of insurers, who will look at the bigger picture to set their own prices. Let's consider one business holding whatever assets in real life. The latter are to be "committed" through a crypto pool in order for the business to retrieve, say, 80% of the recognized value of these assets. The business remains in charge of the assets and would lose control of them if it failed to keep their value above 80% of their original value. In return, the business keeps more or less 70% of the gross return provided by these assets and rewards its partners with the remainder. The business can thus replicate the same operation 4 times instead of just once, and can therefore potentially multiply its gains by 4, gross of the leveraging costs. It has indeed to surrender 30% of the gross return, so in fact the business multiplies its gains by 4×0.7 = 2.8. This business then relies more and more on its ability to manage the assets that it pledged into the crypto pool. Why would the others do that? Answer: they run for a lower risk/reward profile and look to source as many different assets as they can to optimize their own strategy. Note that they can next do exactly what the first business does, but with different figures and leverage ratios. This leveraging phenomenon is quite quick and spontaneous in the crypto markets, since only a decentralized trust setup governs everything. In good times, all the profiles make money, and this induces a rise in prices, and of course a bigger risk of a downside move in prices next. This is what is called "mark to market" in traditional finance. One can understand that this whole chain reverses track if suddenly the price to purchase new assets goes down. Then the past deals are suboptimal and they are dumped by investors.
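The 80%/70% arithmetic can be laid out explicitly. A small sketch with the figures above; the geometric-series variant, in which each round redeploys 80% of the previous pledge, is our assumption and comes out slightly more conservative than the flat "4 times" approximation:

```python
def gain_multiplier(retrieval=0.8, retention=0.7, rounds=4):
    """Gross exposure after `rounds` pledges, each round redeploying
    `retrieval` of the previous round's assets, times the `retention`
    share of gross return that the business keeps."""
    gross = sum(retrieval**k for k in range(rounds))  # 1 + 0.8 + 0.64 + ...
    return gross * retention

# The flat approximation used above: 4 equal rounds, keeping 70%.
assert abs(4 * 0.7 - 2.8) < 1e-9
# Geometric redeployment: gross exposure 2.952, net multiplier ~2.07.
assert round(gain_multiplier(), 3) == 2.066
```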
It is this consideration that recently pushed all the financial players to move to private-to-private markets: they thereby spared themselves the disruption of mark-to-market risks and could run this leveraging contest for the longer run. But in the meantime they have built a very opaque and massive market.
Thus, in the crypto markets, we should push for a democratic election of the triggers that grant a AAA rating, a BBB rating or else a "junk" rating. Some players will always prefer the wild side and others the "safe" side. The idea is to let the ratings float so that we avoid both undue and blind speculation and provide timely support when needed. The whole issue now is to measure the value generation. Indeed, as much as it is pretty straightforward to measure the instant wealth effect of a given batch of new deals, it is much less obvious to estimate the contemporaneous actual wealth generation, the famous "lag" on the new deals coming in. But this is doable over time thanks to contributions coming from the community.
Now we have a couple of benchmarks to measure at least the pace of value creation:

How would users of the credit scoring value the savings in terms of risks and cost of investigations?

What is the value of the data that is contributed through anecdotal evidence, propositions and open source developments?

What is the economic incentive that participants evaluate through their contribution to the shared governance?

How is this ongoing mapping and monitoring actually sponsoring the emergence of new cryptos, new businesses, and new models between the "scarcity" ones and the "flooding" ones? The point here is that each new crypto should exhibit a clear diversification from other cryptos in risk terms, something which supports higher diversification and therefore higher financial leverage looking forward.

How do the crypto markets' facilities help improve the current profitability/predictability ratio? It is possible to stake a floating part of real-life assets into the crypto world to get higher accruing yields. This idea is in line with the fractionalization of assets, but here it is only partial. How does it improve the return on assets? The anchoring of real assets into the crypto world through the floating part allows for 6-month AAA tranches that permit infinite leverage in theory.

How can one look at the financial risks in every token that is considered, and compare them through this framework? The better we can, the less reserve capital we need.

Anchor the model on Vitalik Buterin's "trinity" comment: how does the governance as designed help solve the original unholy trinity mentioned above? We can anchor our analysis of value generation by recalling that, transposed into the crypto world, this unholy trinity becomes:

Irreversible fraudulent losses to avoid (equivalent to capital mobility)

Ultimate average loss minimized while the number of deals is maximized (equivalent to exchange rate management)

Processing time to be higher with higher flows or higher fraud rates (equivalent to monetary autonomy)

One last point to make: this unholy trinity is not proved. It is more an "after the facts" convenient framework that justified liberalism and, on the follow, crypto markets. The strongest asset of the crypto market lies in its floating commitment to sponsor the growth of value: while we will always want "more", the value chain might actually morph quite quickly, something which a usual constitution does not allow.